The Automated Testing Life-Cycle Methodology (ATLM)

By Elfriede Dustin

September 2000, Journal of Software Testing Professionals, http://www.softdim.com


Software project managers and software developers building today's applications face the challenge of doing so within an ever-shrinking schedule and with minimal resources. As part of their attempt to do more with less, organizations want to test software adequately, but as quickly and thoroughly as possible. To accomplish this goal, organizations are turning to automated testing.

Faced with this reality, and realizing that many tests cannot be executed manually, such as simulating 1,000 virtual users for volume testing, software professionals are introducing automated testing to their projects. Even so, software professionals may not know what's involved in introducing an automated test tool to a software project, and they may be unfamiliar with the breadth of application that automated test tools have today. The Automated Testing Life-cycle Methodology (ATLM), depicted in Figure 1, provides guidance in these areas.

By using the systematic approach outlined within the ATLM, organizations can organize and execute test activities in such a way as to maximize test coverage within the limits of testing resources. This structured test methodology involves a multi-stage process, supporting the detailed and inter-related activities required to introduce and utilize an automated test tool; develop test design; develop and execute test cases; develop and manage test data and the test environment; and document, track, and obtain closure on issue/trouble reports.

Clearly, the emphasis on automated testing represents a paradigm change for the software industry. This change does not simply involve the application of tools and the performance of test automation. Rather, it is pervasive across the entire test life cycle and the system development life cycle. The ATLM implementation takes place in parallel with the system development life cycle. For software professionals to successfully make the leap to automated testing, structured approaches to testing must be embraced. The ATLM is revolutionary in that it promulgates a structured, building-block approach to the entire test life cycle, one that enables software test professionals to approach software testing in a methodical and repeatable fashion.

The growth of automated test capability has stemmed in large part from the growing popularity of the iterative and incremental development life cycle, a software development methodology that focuses on minimizing the development schedule while providing frequent, incremental software builds. The objective of this incremental and iterative development is to engage the user and the test team early throughout the design and development of each build so as to refine the software, thereby ensuring that it more closely reflects the needs and preferences of the user, thus addressing the riskiest aspects of development in early builds.



Figure 1 - The Automated Testing Lifecycle Methodology (ATLM): 1. Decision to Automate Test; 2. Test Tool Acquisition; 3. Automated Testing Introduction Process; 4. Test Planning, Design & Development; 5. Execution and Management of Tests; 6. Test Program Review & Assessment.



In this environment of continual changes and additions to the software through each software build, software testing takes on an iterative nature itself. Each new build is accompanied by a considerable number of new tests as well as rework to existing test scripts, just as there is rework on previously released software modules. Given the continual changes and additions to software applications, especially web applications, automated software testing becomes an important control mechanism to ensure the accuracy and stability of the software through each build.

The ATLM, invoked to support test efforts involving automated test tools, incorporates a multi-stage process. The methodology supports the detailed and inter-related activities required to decide whether to acquire an automated testing tool. It includes the process of introducing and utilizing an automated test tool, covers test development and test design, and addresses test execution and management. The methodology also supports the development and management of test data and the test environment, and addresses test documentation, including problem reports.

The ATLM represents a structured approach that depicts a process with which to approach and execute testing. This structured approach is necessary to help steer the test team away from the common test program mistakes listed below.

· Implementing the use of an automated test tool without a testing process in place, resulting in an ad-hoc, non-repeatable, non-measurable test program.

· Implementing a test design without following any design standards, resulting in the creation of test scripts that are not repeatable and therefore not reusable for incremental software builds.

· Attempting to automate 100% of test requirements, when the tools or in-house developed automated test harnesses being applied do not support automation of all of the required tests.

· Using the wrong tool, or developing an overly elaborate in-house test harness.

· Initiating test tool implementation too late in the application development life cycle, not allowing sufficient time for tool setup and the test tool introduction process (i.e., the learning curve).

· Initiating test engineer involvement too late in the application development life cycle, resulting in a poor understanding of the application and system design, which in turn results in incomplete testing.

The Automated Test Life-cycle Methodology (ATLM) comprises six primary processes or components. Each primary process is further composed of subordinate processes, as described below.

1. Decision to Automate Test

The Decision to Automate Test represents the first phase of the ATLM. This phase covers the entire process that goes into the automated testing decision. During this phase it is important for the test team to manage automated testing expectations and to outline the potential benefits of automated testing when implemented correctly. A test tool proposal needs to be outlined, which will be helpful in acquiring management support.

1.1 Overcoming False Expectations for Automated Testing

While it has been proven that automated testing is valuable and can produce a return on investment, there isn't always an immediate payback on the investment. It is important that some of the misconceptions that persist in the software industry are addressed and that expectations of an automated testing utopia are managed. What follows is a list of just a few of the misconceptions that need to be addressed. It is important to note that people often see test automation as a silver bullet; when they find that test automation requires a significant short-term investment of time and energy to achieve a long-term return on investment (ROI) of, for example, faster and cheaper regression testing, the testing tool often becomes "shelfware." This is why, in order to introduce automated testing correctly into a project, expectations must be managed.

Automatic Test Plan Generation

Currently, there is no commercially available tool that can automatically create a comprehensive test plan while also supporting test design and execution.

Throughout a software test career, the test engineer can expect to witness test tool demonstrations and review an abundant amount of test tool literature. Often the test engineer will be asked to stand before a senior manager or a small number of managers to give a test tool functionality overview. As always, the presenter must bear in mind the audience. In this case, the audience may represent individuals with just enough technical knowledge to make them enthusiastic about automated testing, while not being aware of the complexity involved with an automated test effort. Specifically, the managers may have obtained information about automated test tools third hand, and may have reached the wrong interpretation of the actual capability of automated test tools.

What the audience at the management presentation may be waiting to hear is that the tool you are proposing automatically develops the test plan, designs and creates the test procedures, executes all the test procedures, and analyzes the results automatically. You, meanwhile, start out the presentation by informing the group that automated test tools should be viewed as enhancements to manual testing, and that automated test tools will not automatically develop the test plan, design and create the test procedures, or execute the test procedures.

Soon into the presentation, and after several management questions, it becomes very apparent just how much of a divide exists between the reality of the test tool capabilities and the perceptions of the individuals in the audience. The term automated test tool seems to bring with it a great deal of wishful thinking that is not closely aligned with reality.



An automated test tool will not replace the human factor necessary for testing a product. The proficiencies of test engineers and other quality assurance experts will still be needed to keep the testing machinery running. A test tool can be viewed as an additional part of the machinery that supports the release of a good product.

Test Tool Fits All

Currently, no single test tool exists that can be used to support all operating system environments.

Generally, a single test tool will not fulfill all the testing requirements of an organization. Consider the experience of one test engineer encountering such a situation. The test engineer, Dave, was asked by a manager to find a test tool that could be used to automate the testing of all of the department's applications. The department was using various technologies, including mainframe computers and Sun workstations; operating systems such as Windows 3.1, Windows 95, Windows NT, and Windows 2000; programming languages such as VC++ and Visual Basic; other client-server technologies; and web technologies such as DHTML, XML, and ASP.

After conducting a tool evaluation, the test engineer determined that the tool of choice was not compatible with the VC++ third-party add-ons (in this case, Stingray grids). Another tool, compatible with this specific application, had to be brought in.

It is important to manage expectations and to make clear that currently no single tool on the market is compatible with all operating systems and programming languages simultaneously. More than one tool is required to test the various technologies.

Immediate Reduction in Schedule

An automated test tool will not immediately minimize the testing schedule.

Another automated test misconception is the expectation that the use of an automated testing tool on a new project will immediately minimize the test schedule. Since the testing effort actually increases at first, as previously described, the testing schedule will not experience the anticipated decrease right away, and an allowance for a schedule increase is required when initially introducing an automated test tool. This is because, when rolling out an automated test tool, the current testing process has to be augmented, or an entirely new testing process has to be developed and implemented. The entire test team, and possibly the development team, needs to become familiar with this new automated testing process (i.e., the ATLM) and needs to follow it. Once an automated testing process has been established and effectively implemented, the project can expect gains in productivity and turnaround time that have a positive effect on schedule and cost.

1.2 Benefits of Automated Testing

The previous discussion points out and clarifies some of the false automated testing expectations that exist. The test engineer will also need to be able to elaborate on the benefits of automated testing when it is implemented correctly and a process is followed. The test engineer must evaluate whether the potential benefits fit the required improvement criteria and whether the pursuit of automated testing on the project is still a logical fit, given the organizational needs. There are three significant benefits of automated testing (in combination with manual testing): a) producing a reliable system, b) improving the quality of the test effort, and c) reducing the test effort and minimizing the schedule.

Many return on investment case studies have been done with regard to the implementation of automated testing. One example is a research effort conducted by imbus GmbH (see www.imbus.de). They conducted a test automation value study in order to collect test automation measurements, with the purpose of studying the benefits of test automation versus the implementation of manual test methods. Their research determined that the break-even point of automated testing lies, on average, at 2.03 test runs (see www.imbus.de for more detail).[1]
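The arithmetic behind such a break-even figure is straightforward: automation pays off once its one-time setup cost is spread across enough runs. The following minimal sketch, in Python for illustration, uses purely invented cost figures (the variable names and numbers are assumptions, not imbus data):

# Break-even estimate for test automation (illustrative figures only).
manual_cost_per_run = 400.0      # effort to execute the suite by hand
automation_setup_cost = 700.0    # one-time effort to script the suite
automated_cost_per_run = 50.0    # effort to run and maintain the automated suite

# Break-even: setup + auto * n = manual * n  =>  n = setup / (manual - auto)
break_even_runs = automation_setup_cost / (manual_cost_per_run - automated_cost_per_run)
print(f"Automation breaks even after {break_even_runs:.2f} test runs")  # 2.00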

1.3 Acquiring Management Support

Whenever an organization tries to adopt a new technology, it faces a significant effort in determining how to apply the technology to its needs. Even with completed training, organizations wrestle with time-consuming false starts before they become capable with the new technology. For the test team interested in implementing automated test tools, the challenge is how best to present the case for a new test automation technology and its implementation to the management team.

Test engineers need to influence management's expectations for the use of automated testing on projects. Test engineers can help manage the expectations of others in the organization by forwarding helpful information to the management staff. Bringing up test tool issues during strategy and planning meetings can also help develop a better understanding of test tool capabilities among everyone involved on a project or within the organization. A test engineer can develop training material on the subject of automated testing and can advocate to management that a seminar be scheduled to conduct the training.

The first step in moving toward a decision to automate testing on a project requires that the test team adjust management's understanding of the appropriate application of automated testing for the specific need at hand. The test team, for example, needs to check early on whether management is cost averse and would be unwilling to accept the estimated cost of automated test tools for a particular effort. If so, test personnel need to convince management of the potential return on investment by conducting a cost-benefit analysis.

If management is willing to invest in an automated test tool, but is not able or willing to staff a test team with individuals having the proper software skill level, or to provide for adequate test tool training, the test team will need to point out the risks involved and may need to reconsider a recommendation to automate testing.


[1] Linz, T., Daigl, M. GUI Testing Made Painless: Implementation and Results of ESSI Project Number 24306, 1998, www.imbus.de.



Management also needs to be made aware of the additional cost involved when introducing a new tool: not only the tool purchase, but also the initial schedule/cost increase, additional training costs, and the cost of enhancing an existing testing process or implementing a new one.

Test automation is a highly flexible technology, which provides several ways to accomplish an objective. Use of this technology requires new ways of thinking, which only amplifies the problem of test tool implementation. Many organizations can readily come up with examples from their own experience of technology that failed to deliver on its potential because of the difficulty of overcoming the "Now what?" syndrome. The issues that organizations face when adopting automated test systems include those outlined below.

· Finding/hiring test tool experts.

· Using the correct tool for the task at hand.

· Developing and implementing an automated testing process, which includes developing automated test design and development standards.

· Analyzing various applications to determine which are best suited for automation.

· Analyzing the test requirements to determine which are suitable for automation.

· Training the test team on the automated testing process, automated test design, development, and execution.

· Absorbing an initial increase in schedule and cost.

2. Test Tool Acquisition

Test Tool Acquisition represents the second phase of the ATLM. This phase guides the test engineer through the entire test tool evaluation and selection process, starting with confirmation of management support. Since a tool should support most of the organization's testing requirements whenever feasible, the test engineer will need to review the systems engineering environment and other organizational needs and come up with a list of tool evaluation criteria. A review of the different types of tools available to support aspects of the entire testing life cycle is provided in the book Automated Software Testing (as part of the ATLM), enabling the reader to make an informed decision with regard to the types of tests to be performed on a particular project. The test engineer then needs to define an evaluation domain in which to pilot the test tool. Finally, after all those steps have been completed, the test engineer can make vendor contact to bring in the selected tool(s). Test personnel then evaluate the tool, based on sample criteria provided.
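One lightweight way to make such an evaluation repeatable is a weighted scoring matrix. The sketch below, in Python for illustration, is a minimal example; the criteria, weights, and tool names are invented for the example, not drawn from the article:

# Weighted scoring matrix for test tool evaluation (illustrative criteria).
criteria = {                      # criterion -> weight (relative importance)
    "platform_support": 5,
    "scripting_language": 4,
    "vendor_support": 3,
    "license_cost": 3,
}

scores = {                        # tool -> criterion -> score (1 = poor, 5 = excellent)
    "Tool A": {"platform_support": 4, "scripting_language": 5,
               "vendor_support": 3, "license_cost": 2},
    "Tool B": {"platform_support": 3, "scripting_language": 3,
               "vendor_support": 5, "license_cost": 4},
}

# The weighted total ranks the candidates against the evaluation criteria.
for tool, per_criterion in scores.items():
    total = sum(criteria[c] * s for c, s in per_criterion.items())
    print(f"{tool}: weighted score {total}")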

3. Automated Testing Introduction Phase

The process of introducing automated testing to a new project team represents the third phase of the ATLM. This phase outlines the steps necessary to successfully introduce automated testing to a new project; these steps are summarized below.

Test Process Analysis

Test process analysis ensures that an overall test process and strategy are in place and are modified, if necessary, to allow automated testing to be introduced in a successful fashion. The test engineers define and collect test process metrics in order to allow for process improvement. Here, test goals, objectives, and strategies need to be defined, and the test process needs to be documented and communicated to the test team. In this phase, the kinds of testing applicable to the technical environment are defined, as are the tests that can be supported by automated tools.

During the test process analysis, techniques are defined. Best practices, such as conducting performance testing during the unit-testing phase, are laid out.

Plans for user involvement are assessed, and test team personnel skills are analyzed against test requirements and planned test activities. Early test team participation is emphasized, supporting the refinement of requirement specifications into terms that can be adequately tested, while also supporting test team understanding of application requirements and design.

Test Tool Consideration

The Test Tool Consideration phase includes steps that investigate whether incorporating automated test tools that have been brought into the company without a specific project in mind would now benefit a specific project, given the project's testing requirements, the available test environment and personnel resources, the user environment, the platform, and the product features of the application under test. The schedule is reviewed to ensure sufficient time for test tool setup and development of the requirements hierarchy; potential test tools and utilities are mapped to test requirements; test tool compatibility with the application and environment is verified; and workaround solutions are investigated for incompatibility problems surfaced during compatibility tests.
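The mapping step can be kept as simple as a table of requirements against candidate tools, flagging any requirement with no tool coverage. A minimal Python sketch follows; the requirement descriptions and tool names are hypothetical:

# Map test requirements to the tools that can support them, and flag gaps.
tool_capabilities = {
    "GUI capture/replay tool": {"regression GUI tests", "smoke tests"},
    "Load test tool": {"1,000 virtual user load test", "stress tests"},
}

test_requirements = [
    "regression GUI tests",
    "1,000 virtual user load test",
    "printer output verification",   # no candidate tool covers this one
]

for requirement in test_requirements:
    supporting = [t for t, caps in tool_capabilities.items() if requirement in caps]
    if supporting:
        print(f"{requirement}: covered by {', '.join(supporting)}")
    else:
        print(f"{requirement}: NOT covered - plan a manual test or workaround")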

4. Test Planning, Design and Development

Test Planning, Design and Development is the fourth phase of the ATLM. Its subjects are summarized below.

Test Planning

The Test Planning phase represents the need to review long-lead-time test planning activities. During this phase, the test team identifies test procedure creation standards and guidelines; the hardware, software, and network required to support the test environment; test data requirements; a preliminary test schedule; performance measure requirements; a procedure to control the test configuration and environment; and a defect tracking procedure and associated tracking tool.

The test plan contains the results of each preliminary phase of the structured test methodology (ATLM). The test plan will define roles and responsibilities, the project test schedule, test planning and design activities, test environment preparation, test risks and contingencies, and an acceptable level of thoroughness (i.e., test acceptance criteria). Test plan appendices may include test procedures, a naming convention description, test procedure format standards, and a test procedure traceability matrix.




The test environment setup is part of test planning. It represents the need to plan, track, and manage test environment setup activities, where material procurements may have long lead times. The test team needs to schedule and track environment setup activities; install test environment hardware, software, and network resources; integrate and install test environment resources; obtain and refine test databases; and develop environment setup scripts and test bed scripts.

Test Design

The Test Design component addresses the need to define the number of tests to be performed, the ways that testing will be approached (paths, functions), and the test conditions that need to be exercised. Test design standards need to be defined and followed.

An effective test program, incorporating the automation of software testing, involves a mini-development life cycle of its own, complete with strategy and goal planning, test requirement definition, analysis, design, and coding. As in software application development, test requirements must be specified before the test design is constructed. Test requirements need to be clearly defined and documented, so that all project personnel understand the basis of the test effort. Test requirements are defined within requirement statements as an outcome of test requirement analysis.

After test requirements have been derived using the described techniques, test procedure design can begin. Test procedure definition consists of the definition of logical groups of test procedures and a naming convention for the suite of test procedures. With a test procedure definition in place, each test procedure is then identified as either an automated or a manual test. During the test planning phase, the test team gains an understanding of the number of test techniques being employed and an estimate of the number of test procedures that will be required. The test team will also have an estimate of the number of test procedures that will need to be performed manually, as well as with an automated test tool.
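A naming convention can be as simple as an identifier that encodes the logical group, the test technique, and a sequence number. The following Python sketch illustrates one possible scheme; the project, group, and technique codes are invented for the example:

# Generate unique test procedure IDs from a simple naming convention:
# <project>-<logical group>-<technique>-<sequence>, e.g. ATM-LOGIN-FUNC-003.
from itertools import count

def make_id_factory(project: str, group: str, technique: str):
    """Return a callable that yields sequentially numbered procedure IDs."""
    seq = count(1)
    return lambda: f"{project}-{group}-{technique}-{next(seq):03d}"

next_login_functional_id = make_id_factory("ATM", "LOGIN", "FUNC")
print(next_login_functional_id())  # ATM-LOGIN-FUNC-001
print(next_login_functional_id())  # ATM-LOGIN-FUNC-002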

Much like a software development effort, the test program must be mapped out and consciously designed to ensure that the test activities performed represent the most efficient and effective tests for the system under test. Test program resources are limited, yet the ways of testing the system are endless. A test design is developed, portraying the test effort, in order to give project and test personnel a mental framework for the boundary and scope of the test program.

Following test analysis, the test team develops the test program design models. The first of these design models, the Test Program Model, consists of a graphic illustration that depicts the scope of the test program. This model typically depicts the test techniques required to support the dynamic test effort and also outlines static test strategies.

Having defined a test program model, the test team constructs a test architecture, which depicts the structure of the test program and defines the way that test procedures will be organized in support of the test effort.

The next step in the test procedure design process, as depicted in Table 1, is to identify those test procedures that stand out as being more sophisticated and, as a result, need to be defined further as part of detailed test design. These test procedures are flagged, and a detailed design document is prepared in support of the more sophisticated test procedures. Following detailed test design, test data requirements are mapped against the defined test procedures. In order to create a repeatable, reusable process for producing test procedures, the test team needs to create a document that outlines test procedure design standards. Only when these standards are followed can the automated test program achieve real efficiency and success, by being repeatable and maintainable.

The exercise of developing the test procedure definition not only aids in test development, but also helps to quantify or bound the test effort. The development of the test procedure definition involves the identification of the suite of test procedures that will need to be developed and executed in support of the test effort. The design exercise involves the organization of test procedures into logical groups and the definition of a naming convention for the suite of test procedures.

At the system level, it may be worthwhile to develop a detailed test design for sophisticated tests. These tests might involve test procedures that perform complex algorithms, consist of both manual and automated steps, or use test programming scripts that are modified for use in multiple test procedures. The first step in the detailed design process is to review the test procedure definition at the system test level. This review is conducted for the purpose of identifying those test procedures that stand out as being more sophisticated and, as a result, need to be defined further as part of detailed test design.

Detailed test design may take the form of test program pseudocode when test programming is required. The detailed design may be represented simply as a sequence of steps that need to be performed in support of a test. When programming variables and multiple data values are involved, the detailed design may reflect the programming construct of a loop supporting an iterative series of tests involving different values, together with a list or table identifying the kinds of data or ranges of data required for the test.
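Such a loop is the essence of data-driven testing: the procedure logic is written once, and a table of input values and expected results drives repeated execution. A minimal Python sketch follows; the withdraw function and its values are hypothetical stand-ins for the application under test:

# Data-driven detailed design: one test loop iterating over a value table.
def withdraw(balance: float, amount: float) -> float:
    """Hypothetical function under test: debit an account, rejecting overdrafts."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

# Table of test data: (starting balance, amount, expected outcome).
test_table = [
    (100.0,  40.0,   60.0),        # nominal case
    (100.0, 100.0,    0.0),        # boundary: exact balance
    (100.0, 100.01, ValueError),   # boundary: one cent over
    (100.0,  -5.0,  ValueError),   # invalid input
]

for balance, amount, expected in test_table:
    try:
        result = withdraw(balance, amount)
        status = "PASS" if result == expected else f"FAIL (got {result})"
    except ValueError:
        status = "PASS" if expected is ValueError else "FAIL (unexpected error)"
    print(f"withdraw({balance}, {amount}): {status}")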

Following the performance of detailed test design, test data requirements need to be mapped against the defined test procedures. Once test data requirements are outlined, the test team needs to plan the means for obtaining, generating, or developing the test data.




The structure of the test program (the test architecture) is commonly portrayed in two different ways. One test procedure organization method involves the logical grouping of test procedures with the system application design components, and is referred to as a design-based test architecture. Another method takes a test technique perspective and associates test procedures with the various kinds of test techniques represented within the test program model; it is referred to as a technique-based test architecture.

An understanding of test techniques is necessary when developing the test design and the test program design models. Personnel performing testing need to be familiar with the test techniques associated with the white box and black box test approach methods. White box test techniques are aimed at exercising software program internals, while black box techniques generally compare the behavior of the application under test against requirements, testing via established, public interfaces such as the user interface or the published application programming interface (API).

Test Development

In order for automated tests to be reusable, repeatable, and maintainable, test development standards need to be defined and followed.

After performing test analysis and design, the test team is ready to perform test development. Keep in mind that the test design and development activities follow an iterative and incremental approach, in order to address the highest-risk functionality up front. Table 2 correlates the development process phases with the test process phases. The testing processes and steps outlined in the table are strategically aligned with the development process, and the execution of these steps results in the refinement of test procedures at the same time as developers are creating the software modules. Automated and/or manual test procedures are developed during the integration test phase with the intention of reusing them during the system test phase.

Many preparation activities need to take place before test development can begin. A test development architecture is developed (Figure 2), which provides the test team with a clear picture of the test development preparation activities, or building blocks, necessary for the efficient creation of test procedures. The test team will need to modify and tailor the sample test development architecture in order to reflect the priorities of their particular project. Part of these setup and preparation activities includes the need to track and manage test environment setup activities, where material procurements may have long lead times. Prior to the commencement of test development, the test team also needs to perform analysis to identify the potential for reuse of existing test procedures and scripts within the Automation Infrastructure (reuse library).

The test team needs to develop test procedures according to a test procedure development/execution schedule. This schedule needs to allocate personnel resources and reflect development due dates, among other factors. The test team needs to monitor development progress and produce progress status reports. Prior to the creation of a complete suite of test procedures, the test team performs a modularity relationship analysis. The results of this analysis help to incorporate data dependencies, plan for workflow dependencies between tests, and identify common scripts that can be repeatedly applied to the test effort. As test procedures are being developed, the test team needs to ensure that configuration control is performed for the entire test bed, including test design, test scripts, and test data, as well as for each individual test procedure. The test bed needs to be baselined using a configuration management tool.

Test development involves the development of test procedures that are maintainable, reusable, simple, and robust, which in itself can be as challenging as the development of the application under test. Test procedure development standards need to be in place to support structured and consistent development of automated tests. Test development standards can be based on the scripting language standards of a particular test tool. For example, Rational's Robot uses SQABasic, a Visual Basic-like scripting language, so script development standards could be based on the Visual Basic development standards outlined in a number of books on the subject. Usually, internal development standards exist that can be followed if the organization is developing in a language similar to the tool's scripting language. The adoption or slight modification of existing development standards is generally a better approach than creating a standard from scratch.

Table 1 - Test Procedure Design Process

1. Test Architecture Review. The test team reviews the test architecture in order to identify the test techniques that apply.

2. Test Procedure Definition (Development Level). A test procedure definition is constructed at the development test level that identifies the test procedure series applying to the different design components and test techniques.

3. Test Procedure Definition (System Level). A test procedure definition is constructed at the system test level that identifies the test procedure series applying to the different test techniques.

4. Test Procedure Design Standards. Design standards are adopted, along with a naming convention that uniquely distinguishes the project's test procedures from test procedures developed in the past or on other projects.

5. Manual Versus Automated Tests. Test procedures are designated as being either performed manually or as part of an automated test.

6. Test Procedures Flagged for Detailed Design. Test procedures that stand out as more sophisticated are flagged. These test procedures are further defined as part of detailed test design.

7. Detailed Design. Those test procedures flagged in the step above are designed in further detail within a detailed test design file or document. Test procedure detailed design may consist of pseudocode of algorithms, preliminary test step definitions, or pseudocode of test automation programs.

8. Test Data Mapping. The test procedure matrix is modified to reflect test data requirements for each test procedure.



If no development standards exist within the organization for the particular tool scripting language, it is important for the test team to develop script development guidelines. Such guidelines can include directions on context independence, which addresses the particular place where a test procedure should start and where it should end. Additionally, modularity and reusability guidelines need to be addressed.

By developing test procedures based on development guidelines, the test team creates the initial building blocks for an Automation Infrastructure. The Automation Infrastructure will eventually contain a library of common, reusable scripts. Throughout the test effort and in future releases, the test engineer can make use of the Automation Infrastructure in order to support reuse of archived test procedures, minimize duplication, and thus enhance the entire automation effort.
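In code, context independence and modularity usually mean that every procedure starts from, and returns to, a known application state, and that shared steps live in one reusable helper rather than being copied into each script. A minimal Python sketch of that idea follows; the AppSession class and its methods are hypothetical:

# Context-independent, modular test procedures: each test drives the
# application to a known start state, does its work, and restores that state.
class AppSession:
    """Hypothetical wrapper around the application under test."""
    def go_to_main_menu(self): print("at main menu")
    def login(self, user): print(f"logged in as {user}")
    def logout(self): print("logged out")

def run_procedure(session: AppSession, steps) -> None:
    """Reusable harness: establish context, run the steps, restore context."""
    session.go_to_main_menu()        # known start state (context independence)
    try:
        steps(session)               # the procedure-specific body
    finally:
        session.go_to_main_menu()    # known end state, even on failure

def test_login_logout(session: AppSession) -> None:
    session.login("testuser")
    session.logout()

run_procedure(AppSession(), test_login_logout)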

Test Development Architecture

Test team members responsible for test development need to be prepared with the proper materials. Test team personnel need to follow a test development architecture that includes, for example, a listing of the test procedures assigned to them and a listing of the outcome of the automated-versus-manual test analysis. Test team personnel will also need to decide when to automate. At times, a test team might want to avoid automating with a GUI testing tool before the interface (whether API, character UI, or GUI) is stabilized, to avoid re-engineering the automated tests in response to non-bug-related changes. At other times, the test team can find workaround solutions when automating an unstable GUI, such as focusing automation on the known stable parts only.

The test engineer needs to adhere to the test procedure development and execution schedule, test design information, automated test tool user manuals, and test procedure development guidelines. Armed with the proper instructions, documentation, and guidelines, test engineers will have the foundation that allows them to develop a more cohesive and structured set of test procedures. It is important to note that the test team's ability to repeat a process and repeatedly demonstrate a strong test program depends upon the availability of documented processes and standard guidelines such as the test development architecture.



Table 2 - Development / Test Relationship

Module (Unit) Development phase:
· Development: Design module from requirements. / Test: Perform test planning and test environment setup.
· Development: Code module. / Test: Create test design and develop test data.
· Development: Debug module. / Test: Write test scripts or record test scenario using the module.
· Development: Unit test module. / Test: Debug automated test script by running it against the module; use tools that support unit testing.
· Development: Correct defects. / Test: Rerun automated test script to regression test as defects are corrected.
· Development: Conduct performance testing. / Test: Verify the system is scalable and will meet performance requirements.

Integration phase:
· Development: Build system by connecting modules; integration test connected modules; review trouble reports. / Test: Combine unit test scripts and add new scripts that demonstrate module interconnectivity; use a test tool to support automated integration testing.
· Development: Correct defects and update defect status. / Test: Rerun automated test scripts as part of regression testing as defects are corrected.
· Development: Continue performance testing activities. / Test: Verify the system is scalable and will meet performance requirements.

System Test phase:
· Development: Review trouble reports. / Test: Integrate automated test scripts into system-level test procedures where possible, develop additional system-level test procedures, then execute the system test and record test results.
· Development: Correct defects and update defect status. / Test: Rerun automated test scripts as part of regression testing as defects are corrected.

Acceptance Test phase:
· Development: Review incident reports. / Test: Perform a subset of the system test as part of the user acceptance test demonstration.
· Development: Correct defects. / Test: Rerun automated test scripts as part of regression testing as defects are corrected.

Figure 2 - Building Blocks of the Test Development Architecture (read from bottom to top): Technical Environment (facilities and hardware); Environment Readiness Check; Test Tool, all Software and Application Installed; Manual Test Procedures (Test Plan); Test Design Standards; Test Development Standards; Reuse Analysis; Test Procedure Execution Schedule; Modularity Relationship Analysis; Automated Testing Tool User Manual; Calibrate Test Tool; Test Tool Compatibility Workaround Solutions; Develop Automated Test Procedures; Peer Review; Automated Framework and Reuse Library; Configuration Management (CM).



An example of the graphical illustration containing the major activities to be performed as part of the test development architecture is depicted in Figure 2. Test development starts with the test environment setup and preparation activities discussed previously. Once they are concluded, the test team needs to make sure that the pertinent information necessary to support development has been documented or gathered. The test team will need to modify and tailor the sample test development architecture, depicted in Figure 2, in order to reflect the priorities of their particular project. Note that Figure 2 should be read from bottom to top.

Technical Environment

Test procedure development needs to be preceded by several setup activities. The test development activity needs to be supported by a technical environment that facilitates the development of test procedures. As a result, the test environment needs to be set up and ready to go. The test environment includes the technical environment, which may include facility resources as well as the hardware and software necessary to support test development and execution. The test team needs to ensure that there are enough workstations to support the entire team. The various elements of the test environment need to be outlined within the test plan, as discussed previously.

Environment setup activities can also include the use of an environment setup script to load test data, restore a drive image, and calibrate the test tool to the environment. When test tool compatibility problems arise with the application under test, workaround solutions have to be identified. When developing test procedures, it is important that the schedule for developing test procedures be consistent with the test execution schedule. It is also important that the test team follow test procedure development guidelines.
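An environment setup script can be as simple as a sequence of idempotent steps that leave the environment in a known baseline state. A minimal Python sketch follows; the paths, database name, and psql restore command are hypothetical assumptions, not taken from the article:

# Environment setup script: restore a known baseline before test execution.
import shutil
import subprocess
from pathlib import Path

TEST_DATA_SRC = Path("baseline/test_data")     # hypothetical baseline data
TEST_DATA_DST = Path("env/test_data")

def restore_test_data() -> None:
    """Replace the working test data with a pristine baseline copy."""
    if TEST_DATA_DST.exists():
        shutil.rmtree(TEST_DATA_DST)
    shutil.copytree(TEST_DATA_SRC, TEST_DATA_DST)

def reload_test_database() -> None:
    """Reload the test database from a dump (hypothetical psql invocation)."""
    subprocess.run(
        ["psql", "testdb", "-f", "baseline/testdb_dump.sql"], check=True
    )

if __name__ == "__main__":
    restore_test_data()
    reload_test_database()
    print("test environment restored to baseline")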

The test team will need to ensure that the proper test room or laboratory facilities are reserved and set up. Once the physical environment is established, the test team will need to ensure that all necessary equipment is installed and operational. The test plan defined the required technical environment and addressed test environment planning. Also within the test environment section of the test plan, the test team has already identified the operational support required to install and check out the operational readiness of the technical environment. The test team needs to ensure that operational support activities have been properly scheduled and must monitor the progress of these tasks.

Specific tasks and potential issues outlined in the test plan should by now have been addressed and resolved. Such issues could include network installation, network server configuration and allocated disk space, network access privileges, required desktop computer processing speed and memory, the number and types of desktop computers (clients), video resolution requirements, and any additional software required to support the application, such as browser software. Applicable automated test tools should have been scheduled for installation and checkout. These tools should now be configured to support the test team and be operational within the test environment.

The test environment setup activity includes the need to track and manage setup activities, where material procurements may have long lead times. These activities include the need to schedule and track environment setup activities; install test environment hardware, software, and network resources; integrate and test installed test environment resources; obtain and refine test databases; and develop environment setup scripts and test bed scripts.

The hardware supporting the test environment must be sufficient to ensure complete functionality of the production application. Test environment hardware needs to be sufficient to support performance analysis. In cases where the test environment utilizes hardware resources that also support other development or management activities, special arrangements may be necessary during actual performance testing. During system test, the software configuration loaded within the test environment must be a complete, fully integrated release with no patches and no disabled sections. The hardware configuration supporting the test environment needs to be designed to support processing, storage, and retrieval activities, which may be performed across a local or wide area network, reflecting the target environment.

The test environment design will also need to consider stress testing requirements. Stress and load tests may require the use of multiple workstations that run multiple test procedures simultaneously, although some automated test tools include virtual user simulation functionality that greatly minimizes or eliminates the need for multiple workstations.
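The idea behind virtual user simulation is to multiplex many concurrent user sessions from one machine instead of dedicating a workstation per user. A toy Python sketch of the concept using threads follows; the user_session function is a hypothetical stand-in, and commercial load tools do this at far larger scale:

# Toy virtual-user simulation: many concurrent "users" from one workstation.
import threading
import time
import random

def user_session(user_id: int, results: list) -> None:
    """Hypothetical user transaction; a real tool would drive the application."""
    start = time.time()
    time.sleep(random.uniform(0.01, 0.05))   # simulated server response time
    results.append((user_id, time.time() - start))

results: list = []
threads = [threading.Thread(target=user_session, args=(i, results)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

latencies = [latency for _, latency in results]
print(f"{len(results)} virtual users, avg response {sum(latencies)/len(latencies):.3f}s")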

Test data will need to be obtained with enough lead time to support its refinement and manipulation to meet test requirements. Data preparation activities include the identification of conversion data requirements, the preprocessing of raw data files, the loading of temporary tables (possibly in a relational database management system format), and the performance of consistency checks. Identifying conversion data requirements involves performing in-depth analysis on data elements, which includes defining data mapping criteria, clarifying data element definitions, confirming primary keys, and defining data-acceptability parameters.
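Consistency checks of this kind are easy to automate: confirm that primary keys are unique and that every field falls within its acceptable parameters before the data enters the test bed. A minimal Python sketch over an in-memory table follows; the field names and rules are invented for the example:

# Consistency checks on prepared test data: unique keys and value ranges.
records = [
    {"account_id": 1001, "balance": 250.00, "status": "open"},
    {"account_id": 1002, "balance": -75.50, "status": "open"},    # bad balance
    {"account_id": 1002, "balance": 10.00,  "status": "closed"},  # duplicate key
]

def check_consistency(rows) -> list:
    """Return a list of human-readable problems found in the data set."""
    problems = []
    seen_keys = set()
    for row in rows:
        key = row["account_id"]
        if key in seen_keys:
            problems.append(f"duplicate primary key: {key}")
        seen_keys.add(key)
        if row["balance"] < 0:                       # acceptable-parameter rule
            problems.append(f"negative balance for account {key}")
        if row["status"] not in {"open", "closed"}:
            problems.append(f"unknown status for account {key}")
    return problems

for problem in check_consistency(records):
    print("DATA ERROR:", problem)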

During test planning, the test team defined and scheduled the test environment activities. Now the team will need to track the test environment setup activities. Resources need to be identified to install hardware, software, and network resources into the test environment, and to integrate and test the installed test environment resources. The test environment materials and the application under test need to be baselined within a configuration management tool. Additionally, test environment materials may include test data and test processes.

The test team will need to obtain and modify the test databases necessary to exercise the software applications, and develop environment setup scripts and test bed scripts. The test team should perform product reviews and validation of all test source materials. The location of the test environment for each project or task should be defined within the test plan. Early identification of the test site is critical to cost-effective test environment planning and development.




5. Execution and Management of Tests

At this stage, the test team has addressed test design and test development. Test procedures are now ready to be executed in support of exercising the application under test. Test environment setup planning and implementation have also been addressed, consistent with the test requirements and guidelines provided within the test plan.

With the test plan in hand and the test environment now operational, it is time to execute the tests defined for the test program. When executing test procedures, the test team will need to comply with a test procedure execution schedule, as discussed previously. The test procedure execution schedule implements the strategy defined within the test plan. Plans for unit, integration, system, and user acceptance testing are executed. Together, these testing phases make up the steps that are required to test the system as a whole.

The various steps involved during the execution and management of tests are outlined below.

When executing test procedures, the test team will need to comply with a test procedure execution schedule. Following test execution, test outcome evaluations are performed and test result documentation is prepared.

Plans for unit, integration, system, and user acceptance testing are executed; together they make up the steps required to test the system as a whole. During the unit test phase, code profiling can be performed. Traditionally, profiling is a tuning process that determines whether an algorithm is inefficient or a function is being called too frequently. Profiling can discover instances of improper scaling of algorithms, instantiations, and resource utilization.
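In a Python environment, for instance, the standard library's cProfile module already provides this kind of call-count and timing data. A minimal usage sketch follows; the profiled function is a deliberately inefficient example written for illustration:

# Profiling during unit test: spot inefficient algorithms and hot functions.
import cProfile

def contains_duplicates(values) -> bool:
    """Deliberately O(n^2) implementation, used here as a profiling target."""
    for i, a in enumerate(values):
        for b in values[i + 1:]:
            if a == b:
                return True
    return False

# The report lists call counts and cumulative time per function, which makes
# quadratic behavior or overly frequent calls stand out.
cProfile.run("contains_duplicates(list(range(2000)))")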

Integration testing, which focuses on the application internals, is performed next. During integration testing, units are incrementally integrated and tested together based upon control flow. Since units may consist of other units, some integration testing, also called module testing, may take place during unit testing.

During system test, the test engineer tests the integration of the parts that comprise the entire system. A separate test team usually performs system-level tests. The test team implements the test procedure execution schedule and the system test plan.

The test team also performs analysis to identify particular components or functionality that are experiencing a greater relative number of problem reports. As a result of this analysis, additional test procedures and test effort may need to be assigned to those components. Test results analysis can also confirm whether executed test procedures are proving worthwhile in terms of identifying errors. Each test team needs to perform problem-reporting operations in compliance with a defined process. The documentation and tracking of software problem reports is greatly facilitated by an automated defect-tracking tool.
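Even without a commercial defect-tracking tool, this analysis amounts to tallying problem reports per component and ranking the results. A minimal Python sketch follows; the report IDs and component names are invented:

# Rank components by problem-report count to direct additional test effort.
from collections import Counter

problem_reports = [
    ("PR-101", "billing"), ("PR-102", "login"), ("PR-103", "billing"),
    ("PR-104", "reports"), ("PR-105", "billing"), ("PR-106", "login"),
]

counts = Counter(component for _, component in problem_reports)
for component, n in counts.most_common():
    print(f"{component}: {n} problem reports")
# Components at the top of this list are candidates for added test procedures.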

The test team manager is responsible for ensuring that tests are executed according to schedule, and that test personnel are allocated and redirected when necessary to handle problems that arise during the test effort. In order to perform this oversight function effectively, the test manager needs to perform test program status tracking and management reporting.

Test metrics provide the test manager with key indicators of test coverage, progress, and the quality of the test effort. During white box testing, the test engineer measures the depth of testing by collecting data on path coverage and test coverage. During black box testing, metrics collection focuses on the breadth of testing, including the amount of demonstrated functionality and the amount of testing that has been performed.
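A handful of these indicators can be derived directly from the execution log. A minimal Python sketch computing progress and quality figures follows; all the counts are invented for the example:

# Simple test program status metrics from execution counts (illustrative data).
total_procedures = 240       # procedures planned for this build
executed = 180               # procedures executed so far
passed = 162                 # executed procedures that passed
covered_paths = 410          # white box: paths exercised
total_paths = 520            # white box: paths identified for coverage

progress = executed / total_procedures          # breadth: how much has run
pass_rate = passed / executed                   # quality of what has run
path_coverage = covered_paths / total_paths     # depth of white box testing

print(f"execution progress: {progress:.0%}")
print(f"pass rate:          {pass_rate:.0%}")
print(f"path coverage:      {path_coverage:.0%}")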

6. Test Program Review and Assessment

Test program review and assessment activities need to be conducted throughout the testing life cycle, in order to allow for continuous improvement. Throughout the testing life cycle, and following test execution activities, metrics need to be evaluated, and final review and assessment activities need to be conducted to allow for process improvement.

The various steps necessary for test program review and assessment are outlined below.

Following test execution, the test team needs to review the performance of the test program in order to determine where improvements can be implemented to improve its performance on the next project. This test program review represents the final phase of the Automated Test Life-cycle Methodology (ATLM).

Throughout the test program, the test team collected various test metrics. The focus of the test program review includes an assessment of whether the application satisfies acceptance criteria and is ready to go into production. The review also includes an evaluation of earned value progress measurements and of the other metrics collected.

The test team needs to adopt, as part of its culture, an ongoing, iterative process of lessons-learned activities. Such a program encourages test engineers to take responsibility for raising corrective action proposals immediately, when such actions potentially have significant impact on test program performance.

Throughout the entire test life cycle, it is good practice to document and begin to evaluate lessons learned at each milestone. The metrics collected throughout the test life cycle, and especially during the test execution phase, help pinpoint problems that need to be addressed.

Lessons learned, metrics evaluations, and the corresponding improvement activities or corrective actions need to be documented throughout the entire test process in a central repository that is easily accessible.




After collecting lessons learned and other metrics, and defining the corrective actions, test engineers also need to assess the effectiveness of the test program, including an evaluation of the test program's return on investment. Test engineers capture measures of the benefits of automation realized throughout the test life cycle in order to support this assessment.

Test teams can perform their own surveys to inquire about the potential value of process and tool changes. A sample survey form is provided that can be used to solicit feedback on the potential use of requirement management tools, design tools, and development tools. Surveys are helpful for identifying potential misconceptions and for gathering positive feedback.

Summary

The ATLM is a structured methodology geared toward ensuring the successful implementation of automated testing. The ATLM approach mirrors the benefits of modern rapid application development efforts, which engage the user early in the development cycle. The end user of the software product is actively involved throughout the analysis, design, development, and test of each software build, which is augmented in an incremental fashion.

The ATLM incorporates a multi-stage process consisting of six components. This methodology supports the detailed and inter-related activities required to decide whether to acquire an automated testing tool. The methodology includes the process of how to introduce and best utilize an automated test tool, and addresses test planning, analysis, design, development, execution, and management. The ATLM requires that the scope of the test program be outlined within the test plan, as a top-level description of test approach and implementation. The scope is further formulated through the definition of test goals, objectives, and strategies, and through the definition of test requirements.

As in software application development, test requirements are specified before the test design is constructed. Likewise, the test program must be mapped out and consciously designed to ensure that the test activities performed represent the most efficient and effective tests for the application under test. A test design is developed, graphically portraying the test effort, in order to give project and test personnel a mental framework for the boundary and scope of the test program.

Elfriede Dustin may be contacted via her website at www.autotestco.com. Elfriede is an SQA Certified test engineer and has supported test efforts for a multitude of applications. She is a frequent speaker at Quality Assurance and Software Test conferences. Elfriede has a BS degree in Computer Science and has worked as a Computer Systems Analyst/Programmer developing software applications and utilities, performing process and data modeling using CASE tools, and building system design simulation models. In support of software test efforts, Elfriede has been responsible for implementing automated test, or has served as the lead consultant guiding the implementation of automated software test. Elfriede has led the successful rollout of automated testing tools at three companies, and has applied her rollout strategy on over nine different projects.

Elfriede Dustin is the lead author of the best-selling book "Automated Software Testing," published by Addison-Wesley (ISBN 0201432870, July 1999). The book has been receiving excellent reviews throughout the testing community. Elfriede is currently the president of the Washington, DC area Rational Testing User Group, a group that meets quarterly in the Greater Washington, DC area to discuss testing issues, strategies, and testing approaches using Rational tools and other automated testing tools.

