
Page 1

BIO
PRESENTATION
SUPPLEMENTAL MATERIALS

International Conference On Software Testing Analysis and Review

May 17-21, 2004
Orlando, Florida USA

T8

May 20, 2004, 11:15 AM

TEST METRICS: A PRACTICAL APPROACH TO TRACKING AND INTERPRETATION

Shaun Bradshaw, Questcon Technologies Inc.

Page 2

Shaun Bradshaw

Shaun Bradshaw is the Director of Quality Solutions for Questcon Technologies and is responsible for managing Questcon's team of Senior Practice Managers in the areas of quality solutions development and service delivery. Shaun is a graduate of the University of North Carolina at Greensboro with a BS in Information Systems. He has been working in the IT industry for 12 years, during which he has worked to improve Quality Assurance and software test processes for clients in a variety of industries, including financial services, manufacturing, and telecommunications. Shaun is the co-author and editor of Questcon's QuestAssured™ suite of methodologies, including the Test Methodology, Test Management Methodology, and Test Metrics Methodology. Shaun is also the author of an industry white paper on software test metrics.

Page 3

Test Metrics: A Practical Approach to Tracking & Interpretation

Presented By: Shaun Bradshaw, Director of Quality Solutions

May 20, 2004

Quality - Innovation - Vision

Page 4

Slide 2

Objectives

• Why Measure?
• Definition
• Metrics Philosophy
• Types of Metrics
• Interpreting the Results
• Metrics Case Study
• Q & A

Page 5

Slide 3

Why Measure?

“Software bugs cost the U.S. economy an estimated $59.5 billion per year. An estimated $22.2 billion could be eliminated by improved testing that enables earlier and more effective identification and removal of defects.”

- US Department of Commerce (NIST)

Page 6

Slide 4

Why Measure?

It is often said, “You cannot improve what you cannot measure.”

Page 7

Slide 5

Definition

Test Metrics:
• Are a standard of measurement.
• Gauge the effectiveness and efficiency of several software development activities.
• Are gathered and interpreted throughout the test effort.
• Provide an objective measurement of the success of a software project.

Page 8

Slide 6

Metrics Philosophy

When tracked and used properly, test metrics can aid in software development process improvement by providing pragmatic & objective evidence of process change initiatives.

• Keep It Simple
• Make It Meaningful
• Track It
• Use It

Page 9

Slide 7

Metrics Philosophy: Keep It Simple

• Measure the basics first
• Clearly define each metric
• Get the most “bang for your buck”

Page 10

Slide 8

Metrics Philosophy: Make It Meaningful

• Metrics are useless if they are meaningless (use the Goal-Question-Metric (GQM) model)
• Must be able to interpret the results
• Metrics interpretation should be objective

Page 11

Slide 9

Metrics Philosophy: Track It

• Incorporate metrics tracking into the Run Log or defect tracking system
• Automate the tracking process to remove time burdens
• Accumulate throughout the test effort & across multiple projects

Page 12

Slide 10

Metrics Philosophy: Use It

• Interpret the results
• Provide feedback to the Project Team
• Implement changes based on objective data

Page 13

Slide 11

Types of Metrics

Base Metrics

# Blocked: The number of distinct test cases that cannot be executed during the test effort due to an application or environmental constraint. Defines the impact of known system defects on the ability to execute specific test cases.

• Raw data gathered by the Test Analyst
• Tracked throughout the test effort
• Used to provide project status and evaluations/feedback

Examples: # Test Cases, # Executed, # Passed, # Failed, # Under Investigation, # Blocked, # 1st Run Failures, # Re-Executed, Total Executions, Total Passes, Total Failures

Page 14

Slide 12

Types of Metrics

Calculated Metrics

1st Run Fail Rate: The percentage of executed test cases that failed on their first execution. Used to determine the effectiveness of the analysis and development process. Comparing this metric across projects shows how process changes have impacted the quality of the product at the end of the development phase.

• Tracked by the Test Lead/Manager
• Converts base metrics to useful data
• Combinations of metrics can be used to evaluate process changes

Examples: % Complete, % Test Coverage, % Test Cases Passed, % Test Cases Blocked, 1st Run Fail Rate, Overall Fail Rate, % Defects Corrected, % Rework, % Test Effectiveness, Defect Discovery Rate

Page 15

Slide 13

Sample Run Log

Sample System Test

TC ID    Run Date  Actual Results                        Run Status  Current Status  # of Runs
SST-001  01/01/04  Actual results met expected results.  P           P               1
SST-002  01/01/04  Sample failure                        F           F               1
SST-003  01/02/04  Sample multiple failures              F F P       P               3
SST-004  01/02/04  Actual results met expected results.  P           P               1
SST-005  01/02/04  Actual results met expected results.  P           P               1
SST-006  01/03/04  Sample Under Investigation            U           U               1
SST-007  01/03/04  Actual results met expected results.  P           P               1
SST-008            Sample Blocked                        B           B               0
SST-009            Sample Blocked                        B           B               0
SST-010  01/03/04  Actual results met expected results.  P           P               1
SST-011  01/03/04  Actual results met expected results.  P           P               1
SST-012  01/03/04  Actual results met expected results.  P           P               1
SST-013  01/03/04  Actual results met expected results.  P           P               1
SST-014  01/03/04  Actual results met expected results.  P           P               1
SST-015  01/03/04  Actual results met expected results.  P           P               1
SST-016                                                                              0
SST-017                                                                              0
SST-018                                                                              0
SST-019                                                                              0
SST-020                                                                              0

Page 16

Slide 14

Sample Run Log

Base Metrics                Calculated Metrics
Total # of TCs       100    % Complete            11.0%
# Executed           13     % Test Coverage       13.0%
# Passed             11     % TCs Passed          84.6%
# Failed             1      % TCs Blocked         2.0%
# UI                 1      % 1st Run Failures    15.4%
# Blocked            2      % Failures            20.0%
# Unexecuted         87     % Defects Corrected   66.7%
# Re-executed        1      % Rework              100.0%
Total Executions     15
Total Passes         11
Total Failures       3
1st Run Failures     2
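A minimal sketch of how these calculated metrics follow from the base counts above; the formula definitions are inferred from the slide's values and may not match the QuestAssured methodology's exact definitions:

```python
# Base counts from the sample run log (slide 14).
base = {
    "total_tcs": 100, "executed": 13, "passed": 11,
    "blocked": 2, "first_run_failures": 2,
    "total_executions": 15, "total_failures": 3,
}

def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100.0 * numerator / denominator, 1) if denominator else 0.0

# Formulas inferred from the slide's values (an assumption, not the methodology).
print(pct(base["passed"], base["total_tcs"]))                 # % Complete: 11.0
print(pct(base["executed"], base["total_tcs"]))               # % Test Coverage: 13.0
print(pct(base["passed"], base["executed"]))                  # % TCs Passed: 84.6
print(pct(base["blocked"], base["total_tcs"]))                # % TCs Blocked: 2.0
print(pct(base["first_run_failures"], base["executed"]))      # % 1st Run Failures: 15.4
print(pct(base["total_failures"], base["total_executions"]))  # % Failures: 20.0
```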

Page 17

Slide 15

Metrics Program - No Analysis

Issue: The test team tracks and reports various test metrics, but there is no effort to analyze the data.

Result: Potential improvements are not implemented, leaving process gaps throughout the SDLC. This reduces the effectiveness of the project team and the quality of the applications.

Page 18

Slide 16

Metrics Analysis & Interpretation

Solution:
• Closely examine all available data
• Use the objective information to determine the root cause
• Compare to other projects
  o Are the current metrics typical of software projects in your organization?
  o What effect do changes have on the software development process?

Result: Future projects benefit from a more effective and efficient application development process.

Page 19

Slide 17

Metrics Case Study

Volvo IT of North America had little or no testing involvement in its IT projects. The organization's projects were primarily maintenance related and operated in a COBOL/CICS/Mainframe environment. The organization had a desire to migrate to newer technologies and felt that testing involvement would assure and enhance this technological shift.

While establishing a test team, we also instituted a metrics program to track the benefits of having a QA group.

Page 20

Slide 18

Metrics Case Study

Project V
• Introduced a test methodology and metrics program
• Project was 75% complete (development was nearly finished)
• Test team developed 355 test scenarios
• 30.7% 1st Run Fail Rate
• 31.4% Overall Fail Rate
• Defect Repair Costs = $519,000

Page 21

Slide 19

Metrics Case Study

Project T
• Instituted requirements walkthroughs and design reviews with test team input
• Same resources comprised both project teams
• Test team developed 345 test scenarios
• 17.9% 1st Run Fail Rate
• 18.0% Overall Fail Rate
• Defect Repair Costs = $346,000

Page 22

Slide 20

Metrics Case Study

                      Project V   Project T   Reduction
1st Run Fail Rate     30.7%       17.9%       41.7%
Overall Failure Rate  31.4%       18.0%       42.7%
Cost of Defects       $519,000    $346,000    $173,000

Reduction of 33.3% in the cost of defect repairs

Every project moving forward, using the same QA principles, can achieve the same type of savings.
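The reductions on this slide can be checked with a few lines (an illustrative helper, not part of the original deck):

```python
def reduction(before: float, after: float) -> float:
    """Percentage reduction from `before` to `after`."""
    return round(100.0 * (before - after) / before, 1)

print(reduction(30.7, 17.9))        # 41.7 -> 1st Run Fail Rate
print(reduction(31.4, 18.0))        # 42.7 -> Overall Failure Rate
print(reduction(519_000, 346_000))  # 33.3 -> cost of defect repairs
```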

Page 23

Slide 21

Q & A

Page 24

Test Metrics: A Practical Approach to Tracking & Interpretation

Prepared By: Shaun Bradshaw, Director of Quality Solutions, Questcon Technologies

Abstract

It is often said, “You cannot improve what you cannot measure.” This paper discusses Questcon's philosophy regarding software test metrics and their ability to show the objective evidence necessary to make process improvements in a development organization. When used properly, test metrics assist in the improvement of the software development process by providing pragmatic, objective evidence of process change initiatives. This paper also describes several test metrics that can be implemented, a method for creating and interpreting the metrics, and illustrates one organization's use of test metrics to prove the effectiveness of process changes.

Page 25

Page 2

TABLE OF CONTENTS

TABLE OF CONTENTS ................................................. 2
INTRODUCTION ...................................................... 3
DEFINITION ........................................................ 3
THE QUESTCON TEST METRICS PHILOSOPHY .............................. 4
    Keep It Simple ................................................ 4
    Create Meaningful Metrics ..................................... 5
    Track the Metrics ............................................. 5
    Use Metrics to Manage the Project ............................. 6
TYPES OF METRICS .................................................. 6
    Base Metrics .................................................. 6
    Calculated Metrics ............................................ 7
THE FINAL STEP - INTERPRETATION AND CHANGE ........................ 8
METRICS CASE STUDY ................................................ 8
    Background .................................................... 8
    Course of Action .............................................. 8
    Results ....................................................... 9
CONCLUSION ........................................................ 9

Page 26

Page 3

Introduction

In 1998, Questcon Technologies assembled a special Methodology Committee to begin the process of defining, collecting, and creating a set of methodologies to be used by our consultants in instances where our clients had no existing or preferred QA or testing process. These methodologies were crafted using various sources, including industry standards, best practices, and the practical experience of the Methodology Committee members. The intent was to develop a set of practical, repeatable, and useful approaches, tasks, and deliverables in each of the following areas:

• Software Testing,
• Software Test Management,
• Software Configuration Management,
• Software Test Automation, and
• Software Performance Testing.

During the development of these methodologies the Committee recognized the need to measure the effectiveness of the tasks and deliverables suggested in the various methodologies, which led to the inclusion of a Metrics section in each of the methodologies. However, as time went on we found that there was significant interest on the part of our clients in the metrics themselves. For that reason we developed a stand-alone Metrics methodology. The goal of this paper is to briefly describe the resulting Metrics methodology.

Definition

Metrics are defined as “standards of measurement” and have long been used in the IT industry to indicate a method of gauging the effectiveness and efficiency of a particular activity within a project. Test metrics exist in a variety of forms. The question is not whether we should use them, but rather, which ones should we use. Simpler is almost always better. For example, it may be interesting to derive the Binomial Probability Mass Function [1] for a particular project, although it may not be practical in terms of the resources and time required to capture and analyze the data. Furthermore, the resulting information may not be meaningful or useful to the current effort of process improvement.

[1] The Binomial Probability Mass Function is used to derive the probability of having x successes out of N tries. This might be used by a QA organization to predict the number of passes (or failures) given a specific number of test cases to be executed. For further information on this metric go to: http://www.itl.nist.gov/div898/handbook/eda/section3/eda366i.htm.
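For reference, the footnote's binomial probability mass function can be stated as follows (a standard formula, added here for convenience rather than taken from the paper): for N executed test cases, each passing independently with probability p, the probability of exactly x passes is

\[ P(X = x) = \binom{N}{x}\, p^{x} (1 - p)^{\,N - x}, \qquad x = 0, 1, \ldots, N. \]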

Metrics are a standard of measurement, used to gauge the effectiveness & efficiency of project activities.

Page 27

Page 4

One thing that makes Test Metrics unique, in a way, is that they are gathered during the test effort (towards the end of the SDLC), and can provide measurements of many different activities that have been performed throughout the project. Because of this attribute, Test Metrics can be used in conjunction with Root Cause Analysis to quantitatively track issues from points of occurrence throughout the development process. Finally, when Test Metrics data is accumulated, updated, and reported on a consistent basis, it allows trend analysis to be performed on the information, which is useful in observing the effect of process changes over multiple projects.

The Questcon Test Metrics Philosophy

Keep It Simple

Questcon's Metrics methodology begins by showing the value of tracking the easy metrics first. So, what are “easy metrics”? Most Test Analysts are required to know the number of test cases they will execute, the current state of each test case (Executed/Unexecuted, Passed/Failed/Blocked, etc.), and the time and date of execution. This is basic information that is generally tracked in some way by every Test Analyst.

When we say “keep it simple” we also mean that the metrics should be easy to understand and objectively quantifiable. Metrics are easy to understand when they have clear, unambiguous definitions and explanations. Below is an example of the definition and explanation of a ‘Blocked’ test case.

Definition: The number of distinct test cases that cannot be executed during the test effort due to an application or environmental constraint.

Explanation: This metric defines the impact that known system defects are having on the test team's ability to execute the remaining test cases. A ‘Blocked’ test is one that cannot be executed due to a known environmental problem. A test case is also ‘Blocked’ when a previously discovered system defect is known to cause test failure. Because of the potential delays involved, it is useful to know how many cases cannot be tested until program fixes are received and verified.

Questcon believes it is important to both define the metric and explain why it is important and how it is used. In the example above, the definition and explanation clearly indicate that the metric is intended to track unintentional blocks. This eliminates confusion for the Test Analyst as to when to set a test case to a ‘Blocked’ status.

Track the easiest metrics early in the development of a metrics program.

Page 28

Page 5

Furthermore, this metric is objectively quantifiable. Based on the definition, a test case can be counted as ‘Blocked’ or not, but not both, ensuring that a blocked test case cannot be placed in multiple status “buckets”. Finally, by sticking to the definition and explanation, a Test Analyst cannot apply some other subjective criteria that would allow a test case to be flagged as ‘Blocked’.
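As an illustration of "one bucket at a time", here is a sketch using the status codes from the sample Run Log; the class and field names are illustrative assumptions, not code from the paper:

```python
from enum import Enum

class Status(Enum):
    """Mutually exclusive status buckets, matching the sample Run Log codes."""
    UNEXECUTED = "-"
    PASSED = "P"
    FAILED = "F"
    UNDER_INVESTIGATION = "U"
    BLOCKED = "B"

class TestCase:
    """A test case holds exactly one status at any time, so it can never be
    counted in two buckets."""
    def __init__(self, tc_id: str) -> None:
        self.tc_id = tc_id
        self.status = Status.UNEXECUTED

tc = TestCase("SST-008")
tc.status = Status.BLOCKED  # counted as Blocked, and nothing else
```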

Create Meaningful Metrics

Test Metrics are meaningful if they provide objective feedback to the Project Team regarding any of the development processes – from analysis, to coding, to testing – and support a project goal. If a metric does not support a project goal, then there is no reason to track it – it is meaningless to the organization. Tracking meaningless metrics wastes time and does little to improve the development process.

Metrics should also be objective. As indicated in the sample definition shown in the previous section, an objective metric can only be tracked one way, no matter who is doing the tracking. This prevents the data from being corrupted and makes it easier for the project team to trust the information and analysis resulting from the metrics. While it would be best if all metrics were objective, this may be an unrealistic expectation. The problem is that subjective metrics can be difficult to track and interpret on a consistent basis, and team members may not trust them. Without the trust, objectivity, and solid reasoning provided by the Test Metrics, it is difficult to implement process changes.

Track the Metrics

Tracking Test Metrics throughout the test effort is extremely important because it allows the Project Team to see developing trends, and provides a historical perspective at the end of the project. Tracking metrics requires effort, but that effort can be minimized through the simple automation of the Run Log (by using a spreadsheet or a database) or through customized reports from a test management or defect tracking system. This underscores the ‘Keep It Simple’ part of the philosophy, in that metrics should be simple to track, and simple to understand. The process of tracking test metrics should not create a burden on the Test Team or Test Lead; otherwise it is likely that the metrics will not be tracked and valuable information will be lost. Furthermore, by automating the process by which the metrics are tracked, it is less likely that human error or bias can be introduced into the metrics.

Ensure that all metrics are meaningful to your organization by clearly defining them and ensuring they are used to achieve an established goal.

Automate the process by which metrics are tracked. This reduces the amount of time required for collection and the possibility of introducing human error or bias.
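As a sketch of that kind of automation, assuming a simple CSV export of the Run Log with columns like those on Slide 13 (the file layout and column names here are illustrative assumptions, not Questcon's tooling):

```python
import csv
from collections import Counter

# Map Run Log status codes to base-metric bucket names.
BUCKETS = {"P": "passed", "F": "failed", "U": "under_investigation", "B": "blocked"}

def tally_base_metrics(path: str) -> Counter:
    """Tally base metrics from a CSV run log with columns:
    tc_id, run_date, actual_results, current_status, runs."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts["total_tcs"] += 1
            runs = int(row["runs"] or 0)
            counts["total_executions"] += runs
            if runs > 0:
                counts["executed"] += 1
            if runs > 1:
                counts["re_executed"] += 1
            counts[BUCKETS.get(row["current_status"].strip().upper(), "unexecuted")] += 1
    return counts

print(tally_base_metrics("run_log.csv"))  # feeds directly into the calculated metrics
```

A spreadsheet formula or a defect-tracking report can do the same job; the point is that no one retypes the counts by hand.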

Page 29

Page 6

Use Metrics to Manage the Project

Many times, members of a Project Team intuitively understand that changes in the development lifecycle could improve the quality and reduce the cost of a project, but they are unwilling to implement changes without objective proof of where the changes should occur. By meeting on a regular basis throughout a project, but especially at the end of a project, the team can review the Test Metrics and other available information to determine what improvements might be made. Here is an example of how metrics can be used to make process changes during a test effort.

Imagine that the Test Team is halfway through the test execution phase of a project, and the Project Team is reviewing the existing metrics. One metric stands out: less than 50% of the test cases have been executed. The Project Manager is concerned that half of the testing time has elapsed, but less than half the tests are completed. Initially this looks bad for the test team, but the Test Lead points out that 30% of the test cases are ‘Blocked’. The Test Lead explains that one failure in a particular module is preventing the Test Team from executing the ‘Blocked’ test cases. Moreover, the next test release is scheduled in 4 days and none of the ‘Blocked’ test cases can be executed until then. At this point, using objective information available from the metrics, the Project Manager is able to make a decision to push an interim release to the Test Team with a fix for the problem that is causing so many of the test cases to be ‘Blocked’.

As seen in this example, metrics can provide the information necessary to make critical decisions that result in saving time and money, making it easier for a project team to move from a “blame” culture to a “results-oriented” culture.
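A small sketch of how the review in this example could be supported by an automated check (the 25% threshold and the function are illustrative choices, not from the paper):

```python
def review_progress(executed: int, blocked: int, total: int,
                    blocked_threshold: float = 0.25) -> str:
    """Distinguish 'testers are behind' from 'tests are blocked' when
    execution progress looks low at the halfway point."""
    pct_executed = executed / total
    pct_blocked = blocked / total
    if pct_executed < 0.5 and pct_blocked >= blocked_threshold:
        return (f"{pct_blocked:.0%} of test cases are blocked: "
                "consider pushing an interim release with a fix.")
    return "No action suggested by these metrics alone."

# The situation above: halfway through, <50% executed, 30% blocked.
print(review_progress(executed=45, blocked=30, total=100))
```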

Types of Metrics

Base Metrics

Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics. Questcon suggests that every project should track the following Test Metrics:

It is not enough to have a set of metrics that are tracked on a regular basis. The metrics must also be reviewed and analyzed regularly, as they can provide valuable feedback during and after a software development project.

Base metrics are the raw data gathered by a Test Analyst during a test effort and are made up of basic counts of data elements.

Page 30

Page 7

• # of Test Cases
• # of Test Cases Executed
• # of Test Cases Passed
• # of Test Cases Failed
• # of Test Cases Under Investigation
• # of Test Cases Blocked
• # of Test Cases Re-executed
• # of First Run Failures
• Total Executions
• Total Passes
• Total Failures
• Test Case Execution Time*
• Test Execution Time*

As seen in the ‘Keep It Simple’ section, many of the Base Metrics are simple counts that most Test Analysts already track in one form or another. While there are other Base Metrics that could be tracked, Questcon believes this list is sufficient for most Test Teams that are starting a Test Metrics program.

Calculated Metrics

Calculated Metrics convert the Base Metrics data into more useful information. These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project). The following Calculated Metrics are recommended for implementation in all test efforts:

• % Complete
• % Test Coverage
• % Test Cases Passed
• % Test Cases Blocked
• 1st Run Fail Rate
• Overall Fail Rate
• % Defects Corrected
• % Rework
• % Test Effectiveness
• % Test Efficiency
• Defect Discovery Rate
• Defect Removal Cost

These metrics provide valuable information that, when used and interpreted, oftentimes leads to significant improvements in the overall SDLC. For example, the 1st Run Fail Rate, as defined in the Questcon Metrics Methodology, indicates how clean the code is when it is delivered to the Test Team. If this metric has a high value, it may be indicative of a lack of unit testing or code peer review during the coding phase. With this information, as well as any other relevant information available to the Project Team, the Project Team may decide to institute some preventative QA techniques that they believe will improve the process. Of course, in the next project, when the metric is observed it should be noted how it has trended, to see if the process change was in fact an improvement.

* The unit of measure for time should be represented equally for all time-related metrics (i.e. if Test Case Execution Time is measured in quarter-hour increments, then Test Execution Time should also be measured in quarter-hour increments).

Calculated metrics use the base data to develop more useful information for doing metrics analysis and identifying potential SDLC process improvements.
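A minimal sketch of that kind of trend check; the function is an assumption about the metric's definition, and the example counts are illustrative numbers chosen to be consistent with the case-study rates reported later in the paper:

```python
def first_run_fail_rate(first_run_failures: int, executed: int) -> float:
    """1st Run Fail Rate: percentage of executed test cases that failed
    on their first execution."""
    return 100.0 * first_run_failures / executed if executed else 0.0

# Compare the metric before and after a process change (illustrative counts).
before = first_run_fail_rate(109, 355)  # ~30.7%
after = first_run_fail_rate(62, 345)    # ~18.0%
print("process change helped" if after < before else "investigate further")
```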

Page 31

Page 8

The Final Step - Interpretation and Change

As mentioned earlier, test metrics should be reviewed and interpreted on a regular basis throughout the test effort, and particularly after the application is released into production. During the review meetings, the Project Team should closely examine ALL available data, and use that information to determine the root cause of identified problems. It is important to look at several of the Base Metrics and Calculated Metrics in conjunction with one another, as this will allow the Project Team to have a clearer picture of what took place during the test effort.

If metrics have been gathered across several projects, then a comparison should be done between the results of the current project and the average or baseline results from the other projects. This makes trends across the projects easy to see, particularly when development process changes are being implemented. Always determine whether the current metrics are typical of software projects in your organization. If not, observe whether the change is positive or negative, and then follow up by doing a root cause analysis to ascertain the reason for the change.
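One way to picture this baseline comparison (a hypothetical helper; the 10% tolerance and the history numbers are arbitrary illustrations, not data from the paper):

```python
def flag_deviations(current: dict, history: list, tolerance: float = 0.10) -> None:
    """Flag metrics deviating from the multi-project baseline average by more
    than `tolerance` (relative), as candidates for root cause analysis."""
    for name, value in current.items():
        baseline = sum(project[name] for project in history) / len(history)
        if baseline and abs(value - baseline) / baseline > tolerance:
            print(f"{name}: {value:.1f} vs baseline {baseline:.1f} (investigate)")

history = [{"1st Run Fail Rate": 30.7}, {"1st Run Fail Rate": 28.9}]  # prior projects
flag_deviations({"1st Run Fail Rate": 17.9}, history)
```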

Metrics Case Study

Background

A recent client, the IT department of a major truck manufacturer, had little or no testing in its IT projects. The company's projects were primarily maintenance related and operated in a COBOL / CICS / Mainframe environment. The company had a desire to migrate to more up-to-date technologies for some upcoming new development projects and felt that testing involvement should accompany this technological shift. Another consulting firm was brought in to provide development resources experienced in the new technologies, and Questcon was hired to help establish a testing process and aid in the acquisition and training of new test team members.

Course of Action

As a first step, Questcon introduced the Test, Test Management, and Metrics Methodologies to the new test team. The team's first project, ‘Project V’, was primarily developed in Visual Basic and HTML, and was accessed via standard web browser technologies. By the time the Test Team became involved in the project, all of the analysis and most of the development (approximately 70%) had been completed.

Use all available information when interpreting the metrics gathered, including other metrics and intangible information that may not be quantifiable. Also, when available, compare the metrics across several projects.

The establishment of a metrics program for one organization led to development process changes resulting in a reduction of rework costs totaling more than $170,000 for one three-month project, a savings that continues with each subsequent project.

Page 32

Page 9

The test team tracked test metrics throughout the test effort and noted the following key metrics data:

o Developed 355 test scenarios,

o Had a 30.7% 1st Run Fail Rate, and

o Had an Overall Failure Rate of 31.4%.

Using the information available from the analysis of the test metrics, Questcon suggested that earlier Test Team involvement, in the form of requirements walkthroughs and design reviews, would improve the failure rate. Working with the same project team from ‘Project V’, Questcon convinced the Project Manager to institute these changes for the second project, ‘Project T’. While ‘Project T’ was more complex than ‘Project V’ (it added XML to the new development environment of Visual Basic and HTML), the overall size of the project was very similar, requiring 345 test scenarios.

Results

The 1st Run Failure Rate for ‘Project T’ was only 17.9% and the Overall Fail Rate was 18.0%, dramatically better than the results from ‘Project V’. Given that the project team remained the same between the two projects and that the team was already highly skilled in the technologies being used, our root cause analysis indicated that the addition of the requirements walkthroughs and design reviews was the primary factor in reducing the number of defects delivered to test. Furthermore, when costs were reviewed for each of the projects, it was noted that the accumulated costs of rework were reduced by just over $170,000.

Conclusion

There are several key factors in establishing a metrics program, beginning with determining the goal for developing and tracking metrics, followed by the identification and definition of useful metrics to be tracked, and ending with sufficient analysis and interpretation of the resulting data to be able to make changes to the software development lifecycle. This paper has laid out Questcon's philosophy in regards to those key factors. Furthermore, we have demonstrated, through the case study, that while the test metrics themselves do not create improvements, the objective information that they provide can be analyzed, leading to reductions in the number of defects created and in the cost of rework that would normally result from those defects.

While metrics themselves do not create improvements, they do provide the objective information necessary to understand what changes are necessary and when they are working.

Page 33

Page 10

About Questcon

Questcon Technologies is a Quality Assurance (QA) and software testing company, headquartered in Greensboro, NC, that has focused on researching, developing, and delivering innovative quality solutions for its clients since 1991. Utilizing its proprietary QuestAssured™ methodologies, Questcon furnishes QA and testing best practices that are customized to each client's processes and business objectives. We have helped hundreds of companies, from large government and private sector enterprises to smaller software organizations, establish QA, development, and testing standards.

Copyright 2004 by Questcon Technologies, a division of Howard Systems International, Inc. All rights reserved. All information included in this publication is the exclusive property of Questcon Technologies and may not be copied, reproduced, or used without express permission in writing from Questcon Technologies. Information in this document is subject to change without notice.