IMPLEMENTING AN EFFECTIVE TEST-MANAGEMENT PROCESS

A MERCURY INTERACTIVE WHITE PAPER


Table of Contents

What is Test Management?
Do it Right or Do it Over — the Reasons for Test Management
Requirements Management
Planning Tests
Running Tests
Managing Issues
Mercury Interactive TestDirector
Global Web-based Test Management
Requirements Management
Planning Tests
Running Tests
Managing Issues
Summary

ABSTRACT

Recent years have brought a major breakthrough in the field of application testing. The growing complexity of today's applications, combined with increased competitive pressures and the skyrocketing costs of application failure and downtime, has catapulted the need for testing to new highs.

While the pressure to deliver high-quality applications continues to mount, shrinking development and deployment schedules, geographically distributed organizations, limited resources and high turnover rates for skilled employees make application testing the ultimate challenge.

Faced with the reality of having to do more with less, juggle multiple projects and manage diverse, distributed project teams, many organizations are adopting test-management methodologies and turning to automated test-management tools to help centralize, organize, prioritize and document their testing efforts.

This paper explores the challenges and rewards of test management and provides practical ways to help put in place an organized and structured testing process. In addition, it discusses the benefits of Mercury Interactive's TestDirector — the industry's first tool to address the need for global test management.

Quite often in the world of application development, testing is not seriously considered until programming is almost complete. Clearly, this approach to testing is inadequate in light of the increasingly high demands for software quality and short release cycles. As a result, the place of testing in the application lifecycle is beginning to change.

Fueled by competitive pressures and the high cost of downtime, application testing is evolving into a multi-stage process with its own methodology, structure, organization and documentation. More and more organizations are adopting a process in which testing takes place in parallel with application development, starting as soon as the project has commenced. Like much of the development process, the testing process needs a methodical building-block approach that includes requirements definition, planning, design, execution and analysis — to ensure coverage, consistency and reusability of testing assets.

As soon as the project requirements have been defined, test planning can get underway. The advantage of planning in parallel with development is that the knowledge that went into the application design is not lost and can be applied to designing the testing strategy. Based on the project objectives, test managers can build a master test plan, which will communicate the testing strategy, priorities and objectives to the rest of the team. Based on the master plan, testers can define test requirements, or specific testing goals. Requirements define what needs to be tested and what objectives — such as performance goals — testers have in front of them. Once the requirements have been defined, testers can start developing the testing procedures designed to meet and validate those requirements, and run the tests.

The aim of a test-management process is to create one central point of control that is accessible to all members of the testing team. This central point houses all testing assets and provides a clear foundation for the entire testing process — from deciding what needs to be tested, to building tests, running scenarios and tracking defects. The test-management methodology also supports the analysis of test data and coverage statistics, to provide a clear picture of the application's accuracy and quality at each point in its lifecycle.

WHAT IS TEST MANAGEMENT?

Test management is a method of organizing application test assets and artifacts — such as test requirements, test plans, test documentation, test scripts and test results — to enable easy accessibility and reusability. Its aim is to deliver quality applications in less time. Test management is firmly rooted in the concepts of better organization, collaboration and information sharing. Planning, designing and running tests represent a considerable amount of work, but these efforts are well rewarded when all testing assets are shared by testers, developers and managers alike; preserved when a tester leaves the team; and reused throughout the application's lifecycle.

DO IT RIGHT OR DO IT OVER — THE REASONS FOR TEST MANAGEMENT

Despite the claim that test project management is widely accepted as a common practice, most organizations don't have a standard process for organizing, managing and documenting their testing efforts. Often, testing is conducted as an ad-hoc activity, which changes with every new

project. Without a standard foundation for test planning, execution and defect tracking, testing efforts are non-repeatable, non-reusable and difficult to measure.

Better, Faster, Cheaper

Testing without planning and design standards can result in tests, designs and plans that are not repeatable and therefore cannot be reused for future iterations of the test. Organizations that think a testing process is difficult to implement, and even more challenging to enforce and maintain, should consider the cost of redundant work and of accidentally lost and overwritten testing assets. Without a central point of control and clear, repeatable methodologies, it's difficult to keep the testing project on track and deliver quality applications on time with limited resources.
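To make the contrast between ad-hoc and repeatable testing concrete, the sketch below shows how even a minimal, structured test-case record makes a test reusable and its results measurable. It is illustrative only — the record fields and naming convention are assumptions for the example, not a schema from this paper.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Naming convention (an invented example): <area>_<feature>_<nnn>
    test_id: str
    description: str
    owner: str
    steps: list = field(default_factory=list)    # documented, repeatable steps
    results: list = field(default_factory=list)  # one entry per execution

    def record_run(self, build: str, passed: bool) -> None:
        """Log an execution so the same test can be measured across builds."""
        self.results.append({"build": build, "passed": passed})

    def pass_rate(self) -> float:
        """A simple quality measure that ad-hoc, undocumented testing cannot provide."""
        if not self.results:
            return 0.0
        return sum(r["passed"] for r in self.results) / len(self.results)

tc = TestCase("login_auth_001", "Valid user can log in", "tester-a")
tc.record_run("build-41", True)
tc.record_run("build-42", False)
print(f"{tc.test_id}: {tc.pass_rate():.0%} pass rate over {len(tc.results)} runs")
```

Because each run is recorded against the same documented test, the history survives when a tester leaves the team and can be rolled up into the coverage statistics discussed earlier.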

Daily Builds and Smoke Tests

The process of releasing a new build every day and then checking it for consistency, functionality and compatibility is becoming increasingly popular, not only in the Web environment but in any organization that builds complex, dynamic applications. While smoke tests are fairly simple, the sheer number of tests and versions can make the testing process overwhelming and nearly impossible to manage. A well-defined, systematic approach to testing and a centralized repository for tests, plans and execution results can significantly increase the accuracy of smoke tests and add value to having frequent builds.
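A daily smoke test does not need to be elaborate to be useful; what matters is that the same checks run against every build and that the per-build results are kept. The sketch below is a generic illustration of that shape — the individual checks are placeholders invented for the example, where a real suite would exercise the application under test.

```python
# Minimal smoke-test harness: run the same quick checks against every daily
# build and return a per-build result record for the central repository.

def check_startup() -> bool:
    return True  # placeholder: e.g. the application process launches and reports ready

def check_login_page() -> bool:
    return True  # placeholder: e.g. the login page is reachable

def check_core_transaction() -> bool:
    return True  # placeholder: e.g. one representative end-to-end transaction succeeds

SMOKE_CHECKS = [check_startup, check_login_page, check_core_transaction]

def run_smoke(build: str) -> dict:
    """Run every check; a build passes smoke only if all checks pass."""
    checks = {check.__name__: check() for check in SMOKE_CHECKS}
    return {"build": build, "checks": checks, "passed": all(checks.values())}

print(run_smoke("nightly-2024-04-10"))
```

Keeping the returned records in one place turns a pile of nightly runs into a trend that managers can actually read, which is the point of pairing frequent builds with a systematic process.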

Managing Changing Requirements

Complete requirements-based testing is the way to ensure that the finished system meets the user's needs. In an ideal world, each requirement would be tested at least once, and some requirements would be tested several times. But constant time and resource constraints make it


extremely difficult to conduct comprehensive testing of all requirements. As a result, testers must prioritize the requirements they wish to test. And like any other area of application design and development, requirements undergo frequent revisions and changes, which have to be reflected in the tests.

Without a test-management process that ties test plans to application functional requirements and allows organizations to track requirements changes to test cases (and vice versa), it's nearly impossible to design tests to verify that the system contains the specified functionality.

Global Testing

Testing, like most other IT departments, is seeing the effects of globalization. Even small companies often have their testing teams distributed throughout the country, or even the world. Cost considerations, higher availability of qualified testers and better retention of skilled employees force companies to turn to a "distributed" testing model. It's not uncommon to have multiple development and testing teams, spread around the world, working on the same project. True, the Internet is making communication easier, but how does a distributed team like this ensure that its members are not accidentally deleting each other's files or working on duplicate defects? Without a clearly defined testing process and an easy-to-use, intuitive collaboration method, any attempt to set up a geographically distributed "virtual" testing team will likely bring more problems than benefits.

Many Tests, Many Projects

Effective testing requires a systematic process. When a testing team is handling multiple projects, each with its own cycles and priorities, it's easy to lose focus. Almost no organization has the luxury of dedicating a testing team to one development project. At best, a testing team is part of the development group that is working on a particular product. Still, there are multiple functional areas, builds and baselines; different platforms and environments; and an endless array of integrations that need to be validated. To keep track of multiple test cases, testers need a process that allows them to manage multiple projects and clearly define the objectives of each one.

More than Bug Tracking

In some IT shops, testing is still regarded mostly as a bug-tracking activity. It's true — finding defects is the testing team's primary responsibility. However, there's much more to the testing process than recording bugs and passing them over to R&D. Testing today focuses on verifying that the application's design and functionality meet the business constraints and user requirements. To successfully achieve those goals, a testing process requires clearly defined system specifications and application business rules.

Early in the Lifecycle

As mentioned earlier, testing no longer fits between the end of development and the beginning of implementation. Early problem detection (which is also up to 10 times cheaper than finding issues at the end of the lifecycle) requires that testing and development begin simultaneously. However, for testing to be implemented earlier in the lifecycle, testers need a clearly defined set of goals and priorities to help them better understand system design, requirements and testing objectives. Only a well-defined, structured and intuitive test-management process can help ensure that the testing process meets its goals and contributes to enhancing application quality.

The Test-Management Process

No matter what the system does, how it's written or what platform it runs on, the basic principles of test management are the same. The process begins with gathering and documenting the testing requirements; continues through designing and developing tests; and proceeds to running the tests — both manual and automated, functional and load — and analyzing application defects. The testing process is not linear, and naturally it differs depending on each organization's practices and methodologies. The underlying principles of every testing process, however, are the same. This section of the white paper delves deeper into each phase of the test-management process and explores what it means to have an effective method for organizing, documenting, prioritizing and analyzing an organization's entire testing effort.

REQUIREMENTS MANAGEMENT

The testing process begins with defining clear, complete requirements that can be tested. Requirements development and management play a critical role in both the development and testing of software applications. After all, engineers cannot build, and testers will not test, what business analysts were not able to define. To use a simple metaphor: if a team of architects builds a house without first consulting the homeowner and thoroughly documenting his or her requirements, they may end up with something completely different from what the owner had envisioned. Building a house without requirements is difficult enough, but the problem becomes greater when dealing with software applications, which are intangible, complex and constantly changing.

Research shows that more than 30 percent of all software projects fail before ever being completed, and that five out of eight main reasons for failure are ambiguous, poorly worded and badly defined requirements. All too often, requirements are neglected in the testing effort, leading to a chaotic process in which testers fix what they can and accept that certain functionality will not be verified.

Quality can be achieved only by testing the system against its requirements. Although unit testing of individual components may be sufficient during the development process, only testing the entire system against its requirements will ensure that the application functions as expected.

Of the many types of requirements, functional and performance requirements are the ones that most often apply to the testing process. Functional requirements help validate that the system contains the specified functionality and should be derived directly from the use cases that developers use to build the system. Performance requirements cover any performance standards and specifications identified by the project team, such as transaction response times or maximum numbers of users. These requirements — also referred to as service-level objectives (SLOs) — are aimed at ensuring that the system can scale to the expected number of users and provide a positive user experience.

Both functional and performance requirements are designed to give the testing team a clear, concise and functional blueprint with which to develop test cases. Most software testing managers will agree that it's impossible to test everything — even in a rather simple system. To test each feature and functional permutation, you would potentially need to develop thousands, if not millions, of test cases, which is not realistic given time-to-market and other pressures. Requirements-based testing is one way to help prioritize the testing effort. Requirements can be grouped according to how critical they are to mission success. Those that affect the core functionality of the system must be extensively tested, while less critical requirements can be covered by minimal testing effort or be tested later in the lifecycle.

For example, if the application under test is a new release of a mature product with only a few functional changes, the testing effort should be concentrated on the modified or enhanced areas. These changes are usually well documented in the development and/or marketing specification documents, and testing requirements can be derived directly from them.

Recognizing the importance of requirements-based development and testing, the testing industry began marketing a variety of tools designed to aid the creation, traceability and management of application requirements. Some of these enterprise requirements tools are sophisticated and complex; others offer basic functionality designed to assist in creating, storing and documenting requirements. The key to selecting the appropriate tool is the functionality. Most organizations are still using word-processing tools or spreadsheets to track requirements; however, commercial tools designed for requirements management are a better choice for organizations that want to create a solid, flexible requirements-based testing process.

Requirements-based testing is the way to keep the testing effort on track — even when priorities shift, resources become tight or time for testing runs out. It is also the way to measure quality against end-user needs.

PLANNING TESTS

Comprehensive planning is critical, even with short testing cycles. Planning tests and designing applications simultaneously will ensure a complete set of tests that covers each function the system is designed to perform. If test planning is not addressed until later in the application lifecycle, much of the design knowledge will be lost, and testers will have to return to the analysis stage to try to recreate what has already been done.

During the test-planning phase, the testing team identifies test-creation standards and guidelines; selects hardware and software for the test environment; assigns roles and responsibilities; defines the test schedule; and sets procedures for running, controlling and measuring the testing process. The test plan itself contains test designs, test procedures, naming conventions and other test attributes. The planning phase is used to define which tests need to be performed, how these tests must be executed, and which risks and contingencies require planning.

Set the Ground Rules

Testers don't need overly bureaucratic processes to slow them down, but ambiguities and a lack of clearly defined procedures can make testing a bottleneck for product deliverables. This stage can be used to set the ground rules for keeping test logs and documentation, assigning roles within the team, agreeing on the naming convention for tests and defects, and defining the procedure for tracking progress against the project goals.
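Ground rules are easiest to enforce when they are written down in a form the whole team shares and can be checked mechanically. A minimal sketch of that idea follows; the specific naming conventions and log fields are invented examples, not recommendations from this paper.

```python
# Illustrative "ground rules" a team might agree on during test planning,
# expressed as data so that tools can enforce them. Conventions are examples only.
import re

GROUND_RULES = {
    "test_name_pattern": r"^[a-z]+_[a-z]+_\d{3}$",  # e.g. login_auth_001
    "defect_id_pattern": r"^DEF-\d{4}$",            # e.g. DEF-0042
    "log_required_fields": ["tester", "build", "date", "result"],
}

def valid_test_name(name: str) -> bool:
    """Check a proposed test name against the agreed naming convention."""
    return re.fullmatch(GROUND_RULES["test_name_pattern"], name) is not None

def valid_defect_id(defect_id: str) -> bool:
    """Check a defect identifier against the agreed convention."""
    return re.fullmatch(GROUND_RULES["defect_id_pattern"], defect_id) is not None

print(valid_test_name("login_auth_001"), valid_defect_id("DEF-0042"))
```

Codifying the conventions this way keeps them from living only in one tester's head, which is exactly the failure mode the planning phase is meant to prevent.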

Page 5: Implementing an Effective Test Management Process

4 5

worded and badly defined requirements. All too

often, requirements are neglected in the testing

effort, leading to a chaotic process in which

testers fix what they can and accept that certain

functionality will not be verified.

Quality can be achieved only by testing the

system against its requirements. Although unit

testing of individual components may be suffi-

cient during the development process, only

testing the entire system against its require-

ments will ensure that the application functions

as expected.

Of the many types of requirements, functional

and performance requirements are the ones that

most often apply to the testing process.

Functional requirements help validate that the

system contains specified functionality and

should be derived directly from the use cases

that developers use to build the system.

Performance requirements cover any perform-

ance standards and specifications identified by

the project team, such as transaction response

times or maximum numbers of users. These

requirements — also referred to as service-level

objectives (SLOs) — are aimed at ensuring that

the system can scale to the expected number of

users and provide a positive user experience.

Both functional and performance requirements

are designed to give the testing team a clear,

concise and functional blueprint with which to

develop test cases. Most software testing

managers will agree that it's impossible to test

everything — even in a rather simple system.

To test each feature and functional permutation,

you would potentially need to develop thou-

sands if not millions of test cases, which is not

realistic given time-to-market and other

pressures. Requirements-based testing is one

way to help you prioritize the testing effort.

Requirements can be grouped according to how

critical they are to mission success. The ones

that affect the core functionality of the system

must be extensively tested, while the not-so-

critical requirements can be covered by minimal

testing effort or be tested later in a lifecycle.

For example, if the application under test is a

new release of a mature product with only a

few functional changes, the testing effort

should be concentrated on the modified or

enhanced areas. These changes are usually

well-documented in the development and/or

marketing specification documents, and testing

requirements can be derived directly from them.

Recognizing the importance of requirements-

based development and testing, the testing

industry began marketing a variety of tools

designed to aid the creation, traceability and

management of application requirements. Some

of these enterprise requirements tools are

sophisticated and complex. Others offer the

basic functionality designed to assist in

creating, storing and documenting require-

ments. The key to selecting the appropriate tool

is the functionality. Most organizations are still

using word-processing tools or spreadsheets to

track requirements; however, commercial tools

designed for requirements management are a

better choice for organizations that want to

create a solid, flexible requirements-based

testing process.

Requirements-based testing is the way to

keep the testing effort on track — even when

priorities are shifting, resources become tight or

time for testing runs out. Requirements-based

testing is the way to measure quality against the

end-user needs.

PLANNING TESTS

Comprehensive planning is critical, even with

short testing cycles. Planning tests and

designing applications simultaneously will

ensure a complete set of tests that covers each

function the system is designed to perform. If

test planning is not addressed until later in the

application lifecycle, much of the design knowl-

edge will be lost, and testers will have to return

to the analysis stage in order to try to recreate

what has already been done before.

During the test planning phase, the testing

team identifies test-creation standards and

guidelines; selects hardware and software for

the test environment; assigns roles and respon-

sibilities; defines the test schedule; and sets

procedures for running, controlling and meas-

uring the testing process.

The test plan itself contains test designs, test

procedures, naming conventions and other test

attributes. The planning phase is used to define

which tests need to be performed, how these

tests must be executed, and which risks and

contingencies require planning.

Set the Ground Rules:

Testers don’t need overly bureaucratic processes

to slow them down, but ambiguities and lack of

clearly defined procedures can make testing a

bottleneck for product deliverables. This stage

can be used for setting the ground rules for

keeping test logs and documentation, assigning

roles within the team, agreeing on the naming

convention for tests and defects, and defining

the procedure for tracking progress against the

project goals.

extremely difficult to conduct comprehensive

testing of all requirements. As a result, testers

must prioritize the requirements they wish to

test. And like any other area of application

design and development, requirements undergo

frequent revisions and changes, which have to

be reflected in the tests.

Without a test-management process that ties test

plans to application functional requirements and

allows organizations to track requirements

changes to test cases (and vice versa), it’s nearly

impossible to design tests to verify that the

system contains specified functionality.

Global Testing

Testing, like most other IT departments, is

seeing the effects of globalization. Even small

companies often have their testing teams

distributed throughout the country, or even the

world. Cost considerations, higher availability of

qualified testers and better retention of skilled

employees force companies to turn to a “distrib-

uted” testing model. It’s not uncommon to have

multiple development and testing teams, spread

around the world, working on the same project.

True, the Internet is making communication

easier, but how does a distributed team like this

ensure that its members are not accidentally

deleting each other’s files or working on dupli-

cate defects? Without a clearly defined testing

process and an easy-to-use, intuitive collabora-

tion method, any attempt to set up a

geographically distributed “virtual” testing team

will likely bring more problems than benefits.

Many Tests, Many Projects

Effective testing requires a systematic process. When a testing team is handling multiple projects, each with its own cycles and priorities, it's easy to lose focus. Almost no organization has the luxury of dedicating a testing team to one development project. At best, a testing team is part of the development group that is working on a particular product. Still, there are multiple functional areas, builds and baselines; different platforms and environments; and an endless array of integrations that need to be validated. To keep track of multiple test cases, testers need a process that will allow them to manage multiple projects and clearly define the objectives of each one.

More than Bug Tracking

In some IT shops, testing is still regarded mostly as a bug-tracking activity. It's true — finding defects is the testing team's primary responsibility. However, there's much more to the testing process than recording bugs and passing them over to R&D. Testing today focuses on verifying that the application's design and functionality meet the business constraints and user requirements. To successfully achieve those goals, a testing process requires clearly defined system specifications and application business rules.

Early in the Lifecycle

As mentioned earlier, testing no longer fits between the end of development and the beginning of implementation. Early problem detection (which is also up to 10 times cheaper than finding issues at the end of the lifecycle) requires that testing and development begin simultaneously. However, for testing to be implemented earlier in the lifecycle, testers need a clearly defined set of goals and priorities that help them better understand system design, requirements and testing objectives. Only a well-defined, structured and intuitive test-management process can ensure that the testing process meets its goals and contributes to enhancing application quality.

The Test-Management Process

No matter what the system does, how it's written or what platform it's running on, the basic principles of test management are the same. It begins with gathering and documenting the testing requirements; continues through designing, developing and running the tests — both manual and automated, functional and load; and ends with analyzing application defects. The testing process is not linear, and naturally it differs depending on each organization's practices and methodologies. The underlying principles of every testing process, however, are the same. This section of the white paper delves deeper into each phase of the test-management process and explores what it means to have an effective method for organizing, documenting, prioritizing and analyzing an organization's entire testing effort.

REQUIREMENTS MANAGEMENT

The testing process begins with defining clear, complete requirements that can be tested. Requirements development and management play a critical role in both the development and testing of software applications. After all, engineers cannot build, and testers will not test, what business analysts were not able to define.

To use a simple metaphor: If a team of architects builds a house without first consulting the homeowner and thoroughly documenting his/her requirements, they may end up with something completely different than what the owner had envisioned. Building a house without requirements is difficult enough, but the problem becomes greater when dealing with software applications, which are intangible, complex and constantly changing.

Research shows that more than 30 percent of all software projects fail before ever being completed, and that five out of eight main reasons for failure are ambiguous, poorly

Run Manual Tests:

Running manual tests involves manually interacting with an application, then recording actual results against expected outcomes. While some manual tests may be routine, they are an essential part of the testing procedure, allowing the tester to verify functionality and conditions that automated tools are unable to handle.

Schedule Automated Test Runs:

After the decision to automate has been made and tests have been developed and assigned to the host machines, the testing team needs to create an execution schedule. Scheduling is another way to avoid conflicts with hardware and system resources — tests can be scheduled to run unattended, overnight or when the system is in least demand for other tasks.
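
As a rough sketch, this scheduling idea can be expressed with Python's standard `sched` module; the test names, lab hosts and `run_test` stub are illustrative assumptions, not part of any particular tool:

```python
import sched
import time

executed = []

# Hypothetical runner: in practice this would launch an automated
# test on the assigned host machine.
def run_test(name, host):
    executed.append((name, host))

scheduler = sched.scheduler(time.time, time.sleep)

# Schedule each test at a delay (seconds from now); in a real lab the
# delays would target an overnight window of low machine demand.
planned = [("login_test", "lab-01", 0.01), ("order_test", "lab-02", 0.02)]
for name, host, delay in planned:
    scheduler.enter(delay, 1, run_test, argument=(name, host))

scheduler.run()  # blocks until every scheduled run has executed
```

A real schedule would also record each run's result so that unattended overnight runs can be reviewed in the morning.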

Analyze Test-Run Results:

During the test-execution phase, testers will uncover application inconsistencies, broken functionality, missing features and other problems commonly referred to as "bugs" or "defects." The next step is to review the list of all failed tests and determine what caused each test to fail. If the tester determines that a test failed due to an application defect, the defect has to be entered into the defect-tracking system for further investigation, correction and re-test.

MANAGING ISSUES

Managing or "tracking" issues and defects is a critical step in the testing process. As today's systems become more complex and mission-critical, so do the consequences of their defects. A well-defined method for defect management will benefit not just the testing team. Developers, managers, customer support, QA and even Beta customers can effectively contribute to the testing process by having access to an open, easy-to-use, functional defect-tracking system.

The key to a good defect-reporting and resolution process is setting up the defect workflow and assigning permission rules. This way, a tester can clearly define how the lifecycle of a defect should progress: who has the authority to open a new defect, who can change its status to fixed, and under what conditions the defect can be officially closed. It's also critical to maintain a complete history and audit trail throughout the defect lifecycle. Extra time spent on documenting the defect and its history is often well rewarded by easier analysis, shorter resolution times and better application quality.

Analyzing defects is what essentially helps managers make the go/no-go decisions about application deployment. By analyzing the defect statistics, a tester can take a snapshot of the application under test and tell exactly how many defects it currently has, their status, severity, priority, age, etc. By giving different team members access to defect information, a tester can greatly improve communication in the organization and get everyone on the same page as to the current status of the application.

Agree on the Naming Convention:

The key to effective defect management is communication among the different parties involved in the process. Before reporting mechanisms can be put into place, the testing team needs to set the ground rules, such as defining the severity of bugs and agreeing on what information must be included in the defect report. For example, many testing organizations categorize defects as follows:

• Cosmetic/UI
• Inconsistency
• Loss of functionality
• System crash
• Loss of data

Cosmetic and inconsistency defects are the easiest to report and handle. Although they make it more difficult to use the system, they don't affect system functionality. Defects that result in loss of functionality are more severe and therefore more urgently need to be fixed. Defects that cause the system to crash, or worse — lose data — are commonly referred to as "show-stoppers." These must be documented as thoroughly as possible, and must be fixed before the system goes live.
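
The five categories above can be sketched as an ordered severity scale; the enum values and the show-stopper cutoff are illustrative assumptions, since every organization defines its own levels:

```python
from enum import IntEnum

# One common categorization, ordered from least to most severe.
class Severity(IntEnum):
    COSMETIC = 1
    INCONSISTENCY = 2
    LOSS_OF_FUNCTIONALITY = 3
    SYSTEM_CRASH = 4
    LOSS_OF_DATA = 5

def is_show_stopper(severity: Severity) -> bool:
    # Crashes and data loss must be fixed before the system goes live.
    return severity >= Severity.SYSTEM_CRASH
```

Making the scale an ordered type lets triage rules ("fix everything above level X before release") be expressed as simple comparisons.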

Establish the Reporting Procedure:

The key to making a good defect report is supplying developers with as much information as they will need to reproduce and fix the problem. A defect report generally includes a summary and detailed description of the problem, version and system configuration information, a list of steps needed to reproduce the problem, and any relevant information and attachments that will help illustrate the issue. It is critical that a defect report not only provides information describing the bug, but also includes all data necessary to help fix the problem.
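
A minimal sketch of the fields such a defect report generally carries; the field names and example values are illustrative assumptions, not any particular tool's schema:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    summary: str                  # one-line statement of the problem
    description: str              # detailed description
    version: str                  # application version/build
    configuration: str            # OS, browser, hardware, etc.
    steps_to_reproduce: list[str]
    attachments: list[str] = field(default_factory=list)  # screenshots, logs

report = DefectReport(
    summary="Order total not updated",
    description="Changing the quantity leaves the order total unchanged.",
    version="2.1 build 347",
    configuration="Windows 2000, IE 5.5",
    steps_to_reproduce=[
        "Log in", "Add an item to the order", "Change its quantity",
    ],
)
```

Structuring the report this way makes the "reproduce and fix" data mandatory rather than an afterthought.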

Set the Permission Rules:

For each step in the test-management process, there must be different access privileges. This is especially true in the defect-management stage, since defect information helps management make a "go/no go" decision on the application release. Before testing begins, testers must decide which members of the team will have permission to report, open or re-open defects, who can adjust the status, and who has the authority to determine that a bug has been fixed and is now closed.
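
One way to sketch such permission rules is a table of allowed status transitions per role; the roles, statuses and transitions shown here are assumptions for illustration, not a prescribed workflow:

```python
# Which (current -> target) status changes each role may perform.
PERMISSIONS = {
    "tester":    {("new", "open"), ("fixed", "reopened"), ("fixed", "closed")},
    "developer": {("open", "fixed")},
    "manager":   {("new", "open"), ("open", "fixed"), ("fixed", "closed")},
}

def can_transition(role: str, current: str, target: str) -> bool:
    # Deny by default: anything not explicitly granted is forbidden.
    return (current, target) in PERMISSIONS.get(role, set())
```

With a deny-by-default table, adding a new role or tightening a rule is a data change rather than a code change.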

Set Up Test Environment:

A test environment needs to be set up to support all testing activities. All issues that have long lead-times need to be considered as early in the testing process as possible. The planning phase is the time to complete any hardware and software procurement, network installation and test-environment design, and to make any special arrangements that may be necessary during the actual testing phase.

Define Test Procedures:

During the planning stage, testers determine if tests are going to be manual or automated. If the test is automated, testers need to understand which tools will be used for automation, and estimate what techniques and skills will be needed to effectively use these tools.

Develop Test Design:

Test design may be represented as a sequence of steps that need to be performed to execute a test (in the case of a manual test), or a collection of algorithms and code to support more sophisticated, complex tests. During test planning, testers create a detailed description of each test, as well as identify what kind of data or ranges of data are required for the tests.

Map Test Data:

Test-data requirements need to be mapped against test procedures. The testing team must understand what types of data need to be obtained to support each type of test, and how this data could be obtained or generated.

Design Test Architecture:

Test architecture helps testers get a clear picture of the test building blocks and assists in developing the actual tests. Test architecture helps plan for data dependencies, map the workflow between tests and identify common scripts that can potentially be reused for future testing.

Communicate the Test Plan:

Once the test cases have been created, it's helpful to create a master test plan and communicate it to the rest of the organization — specifically project leaders, developers and marketing — letting them know the areas on which the testing group will be working. This way, more people in the organization have visibility into the project and can add their input, questions or comments before the actual testing begins.

RUNNING TESTS

After the test design and development issues have been addressed, the testing team is ready to start running tests. To test the system as a whole, testers need to perform various types of testing — functional, regression, load, unit and integration — each with its own set of requirements, schedules and procedures.

The test environment was set up back in the planning phase, so at this point testers can view all available hardware and choose which tests will run on which machines. Most applications must be tested on different operating systems, different browser versions or other configurations. During the test-run phase, testers can set up groups of machines to most efficiently use their lab resources.

Scheduling automated tests is another way to optimize the use of lab resources, as well as save testers time by running multiple tests at the same time across multiple machines on the network. Tests can be scheduled to run unattended, overnight or when the system is in least demand for other tasks. Scheduling also makes scripts more reusable and easier to maintain, because testers can create more modular tests and run them in a specific sequence.

An organized, documented process does not only help with automated tests; it also makes manual test runs more accurate by providing testers with a clearly defined procedure that specifies the tasks to be performed at each step of manual testing. For both manual and automated tests, testers need to keep a complete history of all test runs, creating audit trails that help trace the history of tests and test runs.

Create Test Sets:

A popular way to manage multiple tests is grouping them into test sets, according to the business process, environment or feature. For example, all tests designed to verify the login process can be grouped together under the "Login" test set. Individual tests (both manual and automated) can be assigned to test sets to help testers ensure thorough coverage of each functional area of the application.
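
The grouping described above can be sketched as a simple mapping from functional area to its tests; the set names and test names are illustrative assumptions:

```python
# Test sets grouped by functional area (manual and automated alike).
test_sets = {
    "Login":  ["valid_login", "invalid_password", "locked_account"],
    "Orders": ["insert_order", "cancel_order"],
}

def coverage_report(test_sets):
    # One quick check a test set enables: does every functional
    # area have at least one test assigned to it?
    return {area: len(tests) > 0 for area, tests in test_sets.items()}
```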

Set Execution Logic:

In order to verify application functionality and usability, tests have to realistically emulate end-user behavior. To achieve this, test execution should follow predefined logic, such as running certain tests only after other tests have passed, failed or completed. Consider this example: A user logs into the system, enters a new order and then exits the system. To emulate this simple business process, it makes sense to run the tests in the exact same sequence: login, insert order, logout. The execution logic rules should be set prior to executing the actual tests.
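
A minimal sketch of this execution logic, assuming each test simply reports pass or fail: run the steps of the business process in order and stop when a step fails, since later steps depend on it. The lambda stand-ins replace real test implementations:

```python
def run_sequence(tests):
    # Run (name, test) pairs in order; abort the sequence on failure
    # because later steps depend on earlier ones having passed.
    results = {}
    for name, test in tests:
        passed = test()
        results[name] = "passed" if passed else "failed"
        if not passed:
            break
    return results

# Illustrative business process: login -> insert order -> logout.
steps = [
    ("login", lambda: True),
    ("insert_order", lambda: False),
    ("logout", lambda: True),  # never runs if insert_order fails
]
```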

Manage Test Resources:

After the test environment has been set up and configured, testers define which tests or groups of tests should be run on which machines. Hardware can be set up to test the application on a particular operating system, platform or browser. Additionally, managing test lab resources by assigning tests to individual machines helps ensure that hardware and network resources are being used most efficiently and effectively.

their testing needs at all stages of the testing process. If a testing requirement changes, they can immediately identify which tests and defects are affected, and who is responsible. They can group and sort requirements in the tree, monitor task allocation and progress of requirements, and generate detailed reports and graphs.

At any stage of the testing process, TestDirector will help testers generate quick graphs and reports to gain accurate status information on the application under test. For example, once the requirements-creation stage has been completed, testers can quickly extract information on what percentage of requirements are covered by tests, what the status is of those tests, how many requirements were rejected in the review process and have to be repaired, as well as which requirements have "urgent" priority and need to be validated as soon as possible.
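
A sketch of how such coverage figures could be computed, assuming each test records the requirement it covers and its latest run status; this data layout is an assumption for illustration, not TestDirector's internal model:

```python
def requirements_coverage(requirements, tests):
    # Percentage of requirements covered by at least one test,
    # plus a count of tests per run status.
    covered = {t["requirement"] for t in tests}
    pct = 100.0 * len(covered & set(requirements)) / len(requirements)
    status_counts = {}
    for t in tests:
        status_counts[t["status"]] = status_counts.get(t["status"], 0) + 1
    return pct, status_counts

reqs = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
tests = [
    {"requirement": "REQ-1", "status": "passed"},
    {"requirement": "REQ-2", "status": "failed"},
    {"requirement": "REQ-2", "status": "passed"},
]
```

Here two of the four requirements are covered, so coverage is 50 percent.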

In TestDirector, all filters, reports and graphs can be saved as "Favorites" — public or private. Only the person who creates a private favorite view can access it. Public favorites, however, can be saved for use by the entire team. For example, for the weekly meeting, a manager needs updated information on the status of all testing requirements for a particular application. She only needs to create the report once and save the filter under Favorites. TestDirector will automatically update her view with new information, and the manager will have an updated status of all requirements every week for her meetings.

TestDirector also helps organizations maintain an audit trail by providing the history of any changes made to a particular requirement. This helps preserve information integrity and ensure that testers can trace every event throughout the requirement lifecycle — from initial creation through any changes to its status, priority or test coverage.

PLANNING TESTS

The test-plan tree in TestDirector is a graphical representation of the organization's test plan. It is a hierarchical list of tests, organized by topic, that describes the set of tests that must be implemented to meet the quality requirements defined in the previous steps. In TestDirector, testers can also associate a test with specific defects. This is useful, for example, when a new test is created specifically for a known defect. By creating an association, testers can determine whether the test should be run based on the status of the defect.

In TestDirector, testers can link each test in the test-plan tree with a requirement in the requirements tree. By defining requirements coverage for a test, testers can keep track of the relationship between the tests in their test plan and their original testing requirements.

For each test, TestDirector allows testers to create test steps describing which operations need to be performed, what specific areas must be checked, as well as the expected results. After testers define the test steps, they can decide whether to perform the test manually or to automate it.

In addition to supporting both manual and automated tests, TestDirector supports the migration from manual to automated tests. If testers choose to automate a certain test, TestDirector will create a template for the specific type of automated test based on the design steps. Testers only need to use an automated testing tool (such as WinRunner) to record the business process and complete the test.

Establish the Process Flow:

After a defect has been found, it is submitted into the defect repository. The next step is to have it reviewed by a developer, who determines whether the reported issue is indeed a defect and, if it is, assigns it a "new" status. Since few development organizations have the bandwidth to repair all known defects, a developer or project manager must prioritize. If R&D is under pressure to release the next version of the application, a project manager may decide to fix only the "show-stopper" level defects and leave the rest for after the release cycle.

Re-Test Repairs:

Whatever fixes or changes have been made to repair a known defect, the application needs to be re-tested to verify that the changes have taken effect and that the fix did not introduce additional problems or unexpected "side effects." If the defect does not appear during the re-testing phase, its status can be changed to "closed." If the problem persists and/or the fix has introduced additional problems, the defect is reopened and the cycle is repeated.

Analyze Defects:

Analyzing defects is the most critical part of the defect-tracking process. It allows testers to take a snapshot of the application under test and view the number of known defects, their status, severity, priority, age, etc. Based on defect analysis, management is able to make an informed decision as to whether the application is ready to be deployed.

MERCURY INTERACTIVE TESTDIRECTOR

The test-management process is the main principle behind Mercury Interactive's TestDirector — the industry's first Web-enabled test-management tool that combines the entire test-management process in one powerful, scalable and flexible solution. TestDirector's four modules — Requirements Management, Test Planning, Test Lab and Defect Manager — are designed with the testing process flow in mind. The seamless integration between these modules enables smooth information flow among the different stages of the testing process.

GLOBAL WEB-BASED TEST MANAGEMENT

TestDirector is the global test-management tool. By being completely Web-enabled, TestDirector supports communication and collaboration among distributed testing teams — whether they are located in different parts of the world or separated by organizational boundaries.

In today's organizations, different groups are involved in the testing process — managers, developers, customer support and even customers. By simply using a browser, all these groups can easily access testing information. In addition, TestDirector's ability to configure user groups and set up permissions helps maintain access privileges and preserve information integrity.

Another tremendous benefit of a Web-based tool is its ability to keep everyone in sync by instantly upgrading the version of the tool or installing new modules. As a result, users no longer need to be taken off-line while their test-management tool is being upgraded. They can simply refresh a browser and automatically be in sync with the rest of the organization.

REQUIREMENTS MANAGEMENT

TestDirector’s Requirements Manager links

test cases to testing requirements, ensuring

traceability throughout the testing process.

TestDirector enables the user to easily see what

percentage of the application functional require-

ments are covered by tests, how many of these

tests have been run, and how many have passed

or failed.

The process of developing testing requirements

starts with reviewing all available documenta-

tion on the application under test, such as

marketing and business requirements docu-

ments, system requirements specifications and

design documents. These documents are used

to obtain a thorough understanding of the appli-

cation under test and determine the testing

scope — test goals, objectives and strategies.

Based on the testing scope and goals, QA managers can start developing requirements, log them into the TestDirector requirements tree and assign responsibilities for specific areas. TestDirector’s requirements tree is a graphical representation of the organization’s specific requirements and displays the hierarchical relationship between the different requirements. Each requirement in the tree is described in detail, including its review status, priority and creation date, and can include any relevant attachments.

TestDirector’s Requirements Manager provides testers with two ways to link their requirements to tests. Testers can automatically generate tests based on application requirements. Or, if they have an existing test plan, they can link requirements to tests that are in turn associated with defects. In this way, testers can keep track of their testing needs at all stages of the testing process. If a testing requirement changes, they can immediately identify which tests and defects are affected, and who is responsible. They can group and sort requirements in the tree, monitor task allocation and progress of requirements, and generate detailed reports and graphs.

At any stage of the testing process, TestDirector will help testers generate quick graphs and reports to gain accurate status information on the application under test. For example, once the requirements-creation stage has been completed, testers can quickly extract information on what percentage of requirements are covered by tests, what the status is of those tests, how many requirements were rejected in the review process and have to be repaired, as well as which requirements have “urgent” priority and need to be validated as soon as possible.

In TestDirector, all filters, reports and graphs can be saved as “Favorites” — public or private. Only the person who creates a private favorite view can access it; public favorites, however, can be saved for use by the entire team. For example, for the weekly meeting, a manager needs to have updated information on the status of all testing requirements for a particular application. She only needs to create the report once and save the filter under Favorites. TestDirector will automatically update her view with new information, and the manager will have an updated status of all requirements every week for her meetings.

TestDirector also helps organizations maintain an audit trail by providing the history of any changes made to a particular requirement. This helps preserve information integrity and ensure that testers can trace every event throughout the requirement lifecycle — from initial creation through any changes to its status, priority or test coverage.

PLANNING TESTS

The test-plan tree in TestDirector is a graphical representation of the organization’s test plan. It is a hierarchical list of tests, organized by topic, that describes the set of tests that must be implemented to meet the quality requirements defined in the previous steps. In TestDirector, testers can also associate a test with specific defects. This is useful, for example, when a new test is created specifically for a known defect. By creating an association, testers can determine whether the test should be run based on the status of the defect.
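The test-to-defect association can be sketched as a simple selection rule. The status names and the particular rule (run the test while its linked defect is open or reported fixed, skip it once closed) are assumptions chosen for illustration, not TestDirector’s built-in behavior.

```python
# Hedged sketch: select a defect-linked test for execution based on the
# status of its associated defect. Statuses are hypothetical.

def should_run(linked_defect_status):
    # Keep running the test while the defect is open, and re-run it once
    # the defect is reported fixed; skip it after the defect is closed.
    return linked_defect_status in ("New", "Open", "Fixed")

tests = {"T-204 verify crash fix": "Fixed", "T-117 old issue": "Closed"}
to_run = [name for name, status in tests.items() if should_run(status)]
```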

In TestDirector, testers can link each test in the test-plan tree with a requirement in the requirements tree. By defining requirements coverage for a test, testers can keep track of the relationship between the tests in their test plan and their original testing requirements.

For each test, TestDirector allows testers to create test steps describing which operations need to be performed, what specific areas must be checked, as well as the expected results. After testers define the test steps, they can decide whether to perform the test manually or to automate it.
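A design step of the kind described above carries three pieces of information: the operation to perform, what to check, and the expected result. The field names below are illustrative, not TestDirector’s actual schema.

```python
# Minimal representation of manual test design steps: operation,
# item to check, expected result. A hypothetical "Login" test follows.
from dataclasses import dataclass

@dataclass
class TestStep:
    operation: str   # what the tester does
    check: str       # what area or value to verify
    expected: str    # the expected result

login_test = [
    TestStep("Enter user name and password, click Log In",
             "main window", "home page appears for the logged-in user"),
    TestStep("Click Log Out", "login screen", "session ends"),
]
```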

In addition to supporting both manual and automated tests, TestDirector supports the migration from manual to automated tests. If testers choose to automate a certain test, TestDirector will create a template for the specific type of automated test based on the design steps. Testers only need to use an automated testing tool (such as WinRunner) to record the business process and complete the test.


Establish the Process Flow:

After a defect has been found, it is submitted into the defect repository. The next step is to have it reviewed by a developer, who determines whether the reported issue is indeed a defect and, if it is, assigns it a “new” status. Since few development organizations have the bandwidth to repair all known defects, a developer or project manager must prioritize. If R&D is under pressure to release the next version of the application, a project manager may decide to fix only the “show-stopper” level defects, and leave the rest for after the release cycle.

Re-Test Repairs:

Whatever fixes or changes have been made to repair a known defect, the application needs to be re-tested to verify that the changes have taken effect and that the fix did not introduce additional problems or unexpected “side effects.” If the defect does not appear during the re-testing phase, its status can be changed to “closed.” If the problem persists and/or the fix has introduced additional problems, the defect is reopened and the cycle is repeated.
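The re-test rule above reduces to a small state transition. The status names are assumptions for illustration:

```python
# Sketch of the re-test decision: a fixed defect is closed if it no
# longer reproduces, reopened if it does; other statuses are untouched.

def retest(defect_status, reproduced):
    if defect_status != "Fixed":
        return defect_status          # only fixed defects are re-tested
    return "Reopened" if reproduced else "Closed"
```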

Analyze Defects:

Analyzing defects is the most critical part of the defect-tracking process. It allows testers to take a snapshot of the application under test and view the number of known defects, their status, severity, priority, age, etc. Based on defect analysis, management is able to make an informed decision as to whether the application is ready to be deployed.

MERCURY INTERACTIVE TESTDIRECTOR

The test-management process is the main principle behind Mercury Interactive’s TestDirector — the industry’s first Web-enabled test-management tool that combines the entire test-management process in one powerful, scalable and flexible solution. TestDirector’s four modules — Requirements Management, Test Planning, Test Lab and Defect Manager — are designed with the testing process flow in mind. The seamless integration between these modules enables smooth information flow among the different stages of the testing process.


In TestDirector, testers can run tests locally or remotely, on any available machine on the network. Through the Host Manager, they can define which machines will be available for running tests and group them by task (functional or load testing), operating system (Windows or UNIX), browser type and version (Netscape or Internet Explorer), or by any other specification or configuration.
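Grouping execution hosts by attributes, as described above, amounts to filtering a machine inventory. The host records and attribute names here are hypothetical, not Host Manager’s actual configuration:

```python
# Hedged sketch: select execution hosts matching every given attribute.

hosts = [
    {"name": "lab-01", "task": "functional", "os": "Windows", "browser": "IE 6"},
    {"name": "lab-02", "task": "load",       "os": "UNIX",    "browser": None},
    {"name": "lab-03", "task": "functional", "os": "Windows", "browser": "Netscape 7"},
]

def select_hosts(**criteria):
    """Return names of hosts whose attributes match all criteria."""
    return [h["name"] for h in hosts
            if all(h.get(key) == value for key, value in criteria.items())]

windows_functional = select_hosts(task="functional", os="Windows")
```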

TestDirector offers tight integration with Mercury Interactive’s automated functional and load-testing tools. It can also be configured through its open API to run tests using a third-party testing tool. With TestDirector, testers can schedule their automated tests to run unattended, either overnight or when the test lab machines are in least demand for other tasks. Using its scheduling mechanism, TestDirector will invoke the automated testing tool, run tests and report results back into the central repository.

Before moving to the execution phase, it’s essential to review the test plan to determine how well it meets the goals that testers defined at the start of the testing process. They can analyze their test plan by generating TestDirector reports and graphs. For example, they can create a report that displays design step data for each test in a test-plan tree. They could use this report to help determine their test design priorities.

As with reports and graphs in the requirements tab, testers can save their test plan views as Favorites and ensure that this information is available at any time.

If an organization already has planning information stored in a word-processing or spreadsheet tool, such as Microsoft Word or Excel, testers can reuse this information and import it into TestDirector, preserving their investment and eliminating the need for redundant planning efforts.

Additionally, TestDirector allows testers to create attachments so they can add any information — such as design documents or feature specifications — to a specific test.

RUNNING TESTS

TestDirector’s Test Lab helps the testing team manage the scheduling and running of many different types of tests. To help organize the tests that need to be run, TestDirector supports the concept of a “test set.” A test set is a group of tests in a TestDirector project database that is designed to achieve specific testing goals. Test sets may include all tests that validate specific functionality (such as the login process) or that verify the application works on a particular version of the browser. In TestDirector, tests can be added to the test set directly from the planning tree by simply dragging and dropping. Testers also can create a filter inside the planning tree and export into a test set all tests that satisfy the specified criteria.

In TestDirector, testers can define execution conditions and control the execution of automated tests in a test set. They can set conditions, schedule the date and time for executing their automated tests, and set the sequence in which to execute the tests.

For example, to verify a simple business process in which the user logs into the system, creates a reservation, enters payment information and logs out, it makes sense to create four modular tests: “Login,” “Create Reservation,” “Insert Payment” and “Logout.” To realistically emulate this business process, testers can arrange the four tests sequentially, beginning with “Login,” followed by “Create Reservation,” “Insert Payment” and “Logout.” Execution conditions then refine the sequence: the “Create Reservation” test can be set to run only after the “Login” test has passed. The “Logout” test, on the other hand, can be run regardless of whether the previous test passes or fails. So, the tester can set the condition to run “Logout” after “Insert Payment” has finished, regardless of the result.
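The scheduling logic of this example can be sketched as follows. The test names and the two condition semantics (run only if the predecessor passed, or run after it finishes regardless of result) come from the text; the implementation itself is illustrative, not TestDirector’s scheduler.

```python
# Sketch of condition-driven test-set execution. conditions maps a test
# name to (predecessor, must_pass); results simulates each test's outcome.

def run_test_set(order, conditions, results):
    executed = {}
    for name in order:
        condition = conditions.get(name)
        if condition:
            prev, must_pass = condition
            if prev not in executed:
                continue                    # predecessor never ran
            if must_pass and executed[prev] != "Passed":
                executed[name] = "Blocked"  # e.g. skip reservation if login failed
                continue
        executed[name] = results[name]
    return executed

order = ["Login", "Create Reservation", "Insert Payment", "Logout"]
conditions = {
    "Create Reservation": ("Login", True),            # only if Login passed
    "Insert Payment": ("Create Reservation", True),
    "Logout": ("Insert Payment", False),              # runs regardless of result
}
outcome = run_test_set(order, conditions,
                       {"Login": "Passed", "Create Reservation": "Failed",
                        "Insert Payment": "Failed", "Logout": "Passed"})
```

In this simulated run, the failed “Create Reservation” blocks “Insert Payment,” while “Logout” still executes.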


FEATURES AND BENEFITS

• Supports the Entire Testing Process: TestDirector incorporates all aspects of the testing process — requirements management, planning, scheduling, running tests, issue management and project status analysis — into a single browser-based application.

• Provides Anytime, Anywhere Access to Testing Assets: Using TestDirector’s Web interface, testers, developers and business analysts can participate in and contribute to the testing process by collaborating across geographic and organizational boundaries.

• Provides Traceability Throughout the Testing Process: TestDirector links requirements to test cases, and test cases to issues, to ensure traceability throughout the testing cycle. When a requirement changes or a defect is fixed, the tester is notified of the change.

• Integrates with Third-Party Applications: Whether a tester uses an industry-standard configuration-management solution, Microsoft Office or a home-grown defect-management tool, any of these applications can be integrated into TestDirector. Through the open API, TestDirector preserves the users’ investment in their existing solutions and enables them to create an end-to-end lifecycle-management solution.

• Manages Manual and Automated Tests: TestDirector stores and runs both manual and automated tests, and can help jumpstart an automation project by converting manual tests to automated test scripts.

• Accelerates Testing Cycles: TestDirector’s Test Lab manager accelerates test execution cycles by scheduling and running tests automatically — unattended, even overnight. The results are reported into TestDirector’s central repository, creating an accurate audit trail for analysis.

• Facilitates a Consistent, Repeatable Testing Process: By providing a central repository for all testing assets, TestDirector facilitates the adoption of a more consistent testing process, which can be repeated throughout the application lifecycle or shared across multiple applications or lines of business.

• Provides Analysis and Decision-Support Tools: TestDirector’s integrated graphs and reports help analyze application readiness at any point in the testing process. Using information about requirements coverage, planning progress, run schedules or defect statistics, managers are able to make informed decisions on whether the application is ready to go live.


SUMMARY

The application-testing process is unique for every organization. However, many testing teams follow a similar methodology that includes identifying the functional requirements, creating a test plan, running tests and tracking application defects. This approach to testing requires a powerful tool that can promote communication and collaboration between teams, while adding organization and structure to the testing process.

Mercury Interactive’s TestDirector pioneered the concept of global test management. By integrating requirements management with test planning, test scheduling, test execution and defect tracking in a single Web-based application, TestDirector streamlines and accelerates the testing process.

MANAGING ISSUES

Information about issues or defects is critical for testers who must determine the status of an application and decide whether it is ready for deployment. TestDirector’s Defect Manager is a complete system for logging, tracking, managing and analyzing application defects. It allows different types of users — testers, project managers, developers, beta customers and many others — to contribute to the testing process by entering defects directly into TestDirector’s database.

An effective defect-tracking process is firmly rooted in the concepts of well-defined workflow and permission rules. In TestDirector, testers can define exactly how a defect should progress through its lifecycle — from initial problem detection through solving the problem and verifying the fix — as well as set permission rules and access privileges for any member of the organization.

Using TestDirector’s many customization options, organizations can set up the workflow rules that are most appropriate for their process. For example, a defect must start its cycle with the status “new.” It cannot be closed without first being reviewed by the QA manager, after which it is transferred to R&D. A member of the R&D team either rejects the defect or fixes the problem and assigns it the new status — “fixed.” But the defect cannot be closed until the project manager reviews the fixes and changes the status to “closed.” By setting the workflow and permission rules for different members of the team, testers can configure TestDirector to reflect their organization’s process flow and organizational requirements.
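The example workflow above can be sketched as a transition table keyed by role. The roles, statuses and allowed moves follow the example; this is not TestDirector’s workflow configuration format:

```python
# Sketch of role-based workflow rules: each entry is an allowed
# (role, current status, new status) transition for a defect.

ALLOWED = {
    ("QA manager",      "New",      "Reviewed"),
    ("R&D",             "Reviewed", "Fixed"),
    ("R&D",             "Reviewed", "Rejected"),
    ("Project manager", "Fixed",    "Closed"),
}

def transition(role, current, new):
    """Apply a status change, refusing moves the role is not allowed."""
    if (role, current, new) not in ALLOWED:
        raise PermissionError(
            f"{role} may not move a defect from {current} to {new}")
    return new
```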


Based on their role in the organization, users can be restricted from viewing certain records in the defect-tracking process. For example, some members of the development team can be restricted from viewing defects other than the ones that have been assigned to them. Similarly, beta customers who have access to the defect database can be prevented from viewing the “target fix date” for any defect.

Similar to using attachments in other modules, testers can attach information to a defect, such as a description file or a snapshot of the application under test, to help illustrate a problem. To further help R&D reproduce the issue, testers can attach information on all system components, such as memory, operating system or color settings. Attached as a text file, all this data can help reproduce the problem and accelerate problem resolution.
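Capturing system details for such a text attachment can be sketched with the standard library. The exact fields TestDirector records are not reproduced here; this simply gathers comparable environment data:

```python
# Sketch: assemble basic system information as text suitable for
# attaching to a defect report.
import platform

def environment_report():
    lines = [
        f"OS: {platform.system()} {platform.release()}",
        f"Machine: {platform.machine()}",
        f"Python: {platform.python_version()}",
    ]
    return "\n".join(lines)

report = environment_report()  # attach this text to the defect
```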

The filters, reports, graphs and Favorites in the defect-tracking module can be used to help assess whether an application is ready to be deployed. Are testers finding more defects than they are fixing? Are developers overloaded with too many urgent bugs assigned to them? Are there more defects in the current version than in the previous release? All this information is available through TestDirector’s customizable analysis tools, and can be shared across the organization.



MERCURY INTERACTIVE CORPORATE HEADQUARTERS
1325 Borregas Avenue, Sunnyvale, CA 94089 U.S.A. Phone: 408-822-5200 or 800-837-8911
www.mercuryinteractive.com

Optane, TestDirector, Mercury Interactive and the Mercury Interactive logo are registered trademarks of Mercury Interactive Corporation. All other company, brand and product names are marks of their respective holders. © 2003 Mercury Interactive Corporation. Patents pending. All rights reserved. WP-0461-0203.

ABOUT MERCURY INTERACTIVE
Mercury Interactive is the global leader in business technology optimization (BTO). Our Optane suite of testing, tuning and performance-management solutions enables companies to unlock the value of their information technology (IT) investments by optimizing business and technology performance to meet business requirements. With Mercury Interactive, customers can measure the quality of their IT-enabled business processes, maximize technology and business performance at every stage of the application lifecycle, and manage their IT operations for continuous optimization throughout the lifecycle. Our leading-edge BTO software and services are complemented by technologies and services from our global business partners, and are used by more than 30,000 customers — including 75 percent of Fortune 500 companies — to improve quality, reduce costs and align IT with business goals.