
Page 1: 11 Automated Testing

Automated Testing

Page 2: 11 Automated Testing

Introduction

• Software is seldom simple, and applications are inherently complex.

• At some level, flaws are always hiding, waiting to be exposed.

• Testing must be integrated into every phase of the computing life cycle.

Page 3: 11 Automated Testing

Testing tools landscape

• Automated testing software falls into one of several categories:

– development test tools,

– GUI and event-driven test tools,

– load testing tools, and

– bug tracking/reporting tools.

• Error-detection products identify specific kinds of bugs that slip past compilers and debuggers.

Page 4: 11 Automated Testing

• Problems typically caught with this type of testing:

– memory leaks,

– out-of-bounds arrays,

– pointer misuse,

– type checking,

– object defects, and

– bad parameters.

• Catching the problem early saves a lot of time in later phases of development.
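To make these defect classes concrete, here is a small, hypothetical C fragment of the sort an error-detection tool would flag. It is written to be analyzed, not executed, and every identifier in it is illustrative:

    #include <stdlib.h>
    #include <string.h>

    /* Do not run this; each statement demonstrates a defect class
       that an error-detection tool would report. */
    void defects(void)
    {
        char *buf = malloc(8);
        int a[4];
        int *p;

        strcpy(buf, "far too long for 8 bytes"); /* out-of-bounds write      */
        a[4] = 1;                                /* array index past the end */
        *p = 7;                                  /* misuse of an
                                                    uninitialized pointer    */
        memset(a, 0, -1);                        /* bad parameter: the size
                                                    underflows to SIZE_MAX   */
    }   /* buf is never freed: memory leak */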

Page 5: 11 Automated Testing

• Graphical User Interface (GUI) testing tools automatically exercise elements of application screens.

• Test scripts can usually be defined manually or by capturing user activity and then simulating it.

• This kind of regression testing often simulates hours of user activity at the keyboard and mouse.

• Since the testing is based on scripts, it can be saved and repeated.

• It is crucial to enable developers to validate an application interface after even minor changes have been made.

Page 6: 11 Automated Testing

• GUI testing evolved into client/server testing as the feasibility of testing more features in a distributed environment seemed within reach.

• The dividing line between GUI, client/server, and load testing tools is one of degree.

• Load testing tools permit complex applications to be run under simulated conditions.

• This addresses not only quality, but performance and scalability as well.

Page 7: 11 Automated Testing

• Such stress tests exercise the network, client software, server software, database software, and the server hardware.

• By simultaneously emulating multiple users, load testing determines if an application can support its audience.

• A capture program similar to those in GUI testing tools helps automate building scripts.

• Those scripts can be varied and replayed to simulate not only many users, but varied tasks as well.

• Load testing charts the time a user must wait for screen responses, finds bottlenecks, and gives developers the chance to correct them.
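As a rough sketch of the mechanism (not any particular product), the following C program uses POSIX threads to emulate simultaneous users and chart their worst response times; the request() function is a hypothetical stand-in for the client operation under test:

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define USERS 50
    #define REQUESTS_PER_USER 20

    /* Stand-in for the client operation under test (hypothetical). */
    static void request(void) { usleep(10000); }

    static void *user(void *arg)
    {
        double *worst = arg;
        for (int i = 0; i < REQUESTS_PER_USER; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            request();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                        (t1.tv_nsec - t0.tv_nsec) / 1e6;
            if (ms > *worst)
                *worst = ms;   /* track this user's worst response time */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[USERS];
        double worst[USERS] = {0};

        /* Emulate USERS simultaneous users, as a load tool would. */
        for (int i = 0; i < USERS; i++)
            pthread_create(&tid[i], NULL, user, &worst[i]);
        for (int i = 0; i < USERS; i++)
            pthread_join(tid[i], NULL);

        for (int i = 0; i < USERS; i++)
            printf("user %2d worst response: %.1f ms\n", i, worst[i]);
        return 0;
    }

Compile with cc -pthread. In a real load test, request() would drive the network client rather than sleep, and the user count and transaction mix would come from a script.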

Page 8: 11 Automated Testing

• Hardware, software, database, and middleware components are stress tested as a unit, providing more accurate performance numbers.

• Again, because testing is controlled via scripts, tests are repeatable.

• If you add an index to a database and rerun the test, you can quantify the specific performance impact of that change.

• Load testing can help predict how a system will perform as usage increases.

Page 9: 11 Automated Testing

• Tools permit user loads to be incremented and tracked so that performance degradation can be isolated.

• When applications must support a greater number of users, load testing quickly determines the outcome regarding quality and response time.

• Developers can re-use scripts to alter the user levels, transaction mixes and rates, and the complexity of the application.

• Load testing is the only way to verify the scalability of components as they work together.

Page 10: 11 Automated Testing

Regression testing

• Regression testing is selective retesting of software to detect faults introduced during modifications of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets specified requirements.

• Regression testing answers the question: "Does everything still work after my fix?"

• Developers should run regression tests before submitting changes into the system environment.

• The group should also run regression testing after each major build or delta.

Page 11: 11 Automated Testing

Example: BENEFITS of TestWorks/Regression

• Automated capture/replay of realistic user sessions.

• Tree-oriented test suite management and PASS/FAIL reporting.

• Early detection of latent defects due to unexpected changes in application behavior and appearance.

• Early detection of errors for reduced error content in released products.

• Easy interface to a full coverage-plus-regression quality process architecture.

Page 12: 11 Automated Testing

APPLICATIONS of TestWorks/Regression:

• Test suites which are very large (thousands of tests).

• Test suites that need to extract ASCII information from screens.

• Integration with the companion TestWorks/Coverage product for C/C++ applications.

Page 13: 11 Automated Testing

Making a Point

• Even as testing tools catch up with development technologies, IT managers are learning that quality and performance are not ensured simply by selecting good testing tools.

• Proper testing processes and strategies must be ingrained into the corporate culture.

• Rapid application development (RAD) without quality accomplishes nothing.

• IT managers need to stop fixating on which testing tools to use and focus on how to get the job done well.

• In client/server computing, many applications depend upon several computers, various application modules, and the network to function well.

Page 14: 11 Automated Testing

• Even if all the pieces work well independently, it does not mean they will perform well as a unit.

• Automated testing tools not only free up a great deal of manpower, they also provide greater control.

• The use of quality assurance testing tools in the development process will not suffice by itself, however.

• An application that works well when deployed is not guaranteed to keep functioning problem-free over time.

• Many complex applications scale well within certain parameters, but beyond them everything can fall apart.

Page 15: 11 Automated Testing

A Tour of Testing Tools

• Capture-replay tools are among the most widely known software testing tools.

• Capture-replay tools take care of only one small part of software testing: the running, or executing, of test cases.

• In order to automate specification-based testing fully, we need tools that create, execute, and evaluate test cases.

• A basic testing toolkit contains an execution tool and an evaluation tool.

• Many other helpful testing tools are available.

Page 16: 11 Automated Testing

Tools for Requirements Phase

• Cost-effective software testing starts when requirements are recorded.

• All testing depends on having a reference to test against.

• Software should be tested against the requirements.

• If requirements contain all the information needed, in a usable form, the requirements are test-ready.

• Test-ready requirements minimize the effort and cost of testing.

• If requirements are not test-ready, testers must search for missing information.

Page 17: 11 Automated Testing

Requirements Recorder

• To capture requirements, practitioners may use requirements recorders.

• Some teams write their requirements in a natural language such as English and record them with a text editor.

• Other teams write requirements in a formal language such as LOTOS or Z and record them with a syntax-directed editor.

• Others use requirements modeling tools to record information graphically.

Page 18: 11 Automated Testing

• Requirements modeling tools were used mostly by analysts or developers.

• These tools seldom were used by testers.

• Currently, requirements modeling tools have been evolving in ways that help testers as well as analysts and developers.

• First, a method for modeling requirements called use cases was incorporated into some of these tools.

• Then the use cases were expanded to include test-ready information.

Page 19: 11 Automated Testing

Requirements Verifiers

• Use cases are test-ready when data definitions are added.

• With such definitions, a tool will have enough information from which to create test cases.

• Requirements verifiers are relatively new tools.

• To be testable, requirements information must be unambiguous, consistent, and complete.

• A term or word in a software requirements specification is unambiguous if it has one, and only one, definition.

Page 20: 11 Automated Testing

• Every action statement must have a defined input, function, and output.

• The tester needs to know that all statements are present.

• Requirements verifiers quickly and reliably check for ambiguity, inconsistency, and statement completeness.

• An automated verifier has no way to determine that the set of requirements itself is complete — it can only check that each statement is.

• Requirements verifiers are usually embedded in other tools.

Page 21: 11 Automated Testing

Spec.-Based Test Case Generators

• The recorder captures requirements information, which is then processed by the generator to produce test cases.

• A test case generator creates test cases by statistical, algorithmic, or heuristic means:

– Statistical test case generation chooses input values to form a statistically random distribution, or a distribution that matches the usage profile of the software under test (see the sketch after this list).

– Algorithmic test case generation follows a set of rules or procedures, commonly called test design strategies or techniques.

– Often, test case generators employ action-, data-, logic-, event-, and state-driven strategies. Each of these strategies probes for a different kind of software defect, as shown next.
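A minimal C sketch of the statistical approach, assuming a hypothetical usage profile of four operations and their observed frequencies:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Hypothetical usage profile: operation names and how often real
       users perform them; the generator draws test steps to match it. */
    static const char  *op[]   = { "browse", "search", "add_to_cart", "checkout" };
    static const double freq[] = { 0.50,     0.30,     0.15,          0.05 };

    static const char *draw(void)
    {
        double r = (double)rand() / RAND_MAX, cum = 0.0;
        for (int i = 0; i < 4; i++) {
            cum += freq[i];
            if (r <= cum)
                return op[i];
        }
        return op[3];
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        /* Emit one 10-step test case matching the usage profile. */
        for (int i = 0; i < 10; i++)
            printf("step %d: %s\n", i + 1, draw());
        return 0;
    }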

Page 22: 11 Automated Testing
Page 23: 11 Automated Testing

Requirements to Test Tracers

• In the old days, coming up with test cases was a slow, expensive, and labor-intensive process.

• With modern test case generators, test case creation and revision time is reduced to a matter of seconds.

• Requirements-to-test tracers record testing behavior to determine how every specified function was tested.

• Test tracers can take over much of the work that once consumed human time.

• Tracers exist as individual tools, or are included in testing tools such as specification-based test case generators.

Page 24: 11 Automated Testing

Tools for the Design Phase

• In the requirements phase, the recorder, verifier, test case generator, and tracer are used at the system level.

• In the design phase, the same tools may be used again to test small systems.

• Designers may record their designs as either object or structured models, depending on which methodology is used.

• Then they can use a validator-like tool to generate test cases from the designs.

Page 25: 11 Automated Testing

Tools for the Programming Phase

• In efficient code development, programmers must write comments in their code describing what the code will do.

• They must also create the algorithms that the code will implement.

• Finally, they write the code.

• The comments, algorithms, and code are all inputs to testing tools used during the programming phase.

• Requirements tools may be used once again.

• The metrics reporter, code checker, and instrumentor can also be used for testing during the programming phase.

• These tools are classified as static analysis tools.

Page 26: 11 Automated Testing

Metrics Reporter

• The metrics reporter reads source code and displays metrics information, often in graphical formats.

• It reports complexity metrics in terms of data flow, data structure, and control flow, and code size in terms of modules, operands, operators, and lines of code.

• This tool helps the programmer correct and groom code, and helps the tester decide which parts of the code need the most testing.
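As a toy illustration of what a metrics reporter computes, the following C program counts lines of code and decision keywords as a crude stand-in for a complexity measure; real tools parse the code properly and are far more precise:

    #include <stdio.h>
    #include <string.h>

    /* Toy metrics pass: physical lines plus decision keywords. */
    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file.c\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "r");
        if (!f) { perror(argv[1]); return 1; }

        char line[1024];
        long loc = 0, decisions = 0;
        const char *kw[] = { "if", "for", "while", "case", "&&", "||" };

        while (fgets(line, sizeof line, f)) {
            loc++;
            for (size_t i = 0; i < sizeof kw / sizeof kw[0]; i++)
                for (char *p = line; (p = strstr(p, kw[i])); p += strlen(kw[i]))
                    decisions++;   /* naive: also matches inside strings/comments */
        }
        fclose(f);
        printf("%s: %ld lines, ~%ld decision points (complexity ~ %ld)\n",
               argv[1], loc, decisions, decisions + 1);
        return 0;
    }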

Page 27: 11 Automated Testing

Code Checker

• The earliest code checker most people remember is LINT, offered as part of Unix.

• Other code checkers are available for other systems.

• LINT was aptly named: it goes through code and picks out all the fuzz that makes programs messy and error-prone.

• The checker looks for misplaced pointers, uninitialized variables, deviations from standards, and so on.

• Development teams that use software inspections as part of static testing can save many staff hours by letting a code checker identify nitpicky problems before inspection time.

• The next pages show sample LINT output for a source file pre.c.

Page 28: 11 Automated Testing

name defined but never used

bufferCount pre.c(6)

value type used inconsistently

strlen llib-lc:string.h(64) unsigned int () :: pre.c(69) int ()

strlen llib-lc:string.h(64) unsigned int () :: pre.c(79) int ()

strlen llib-lc:string.h(64) unsigned int () :: pre.c(138) int ()

strlen llib-lc:string.h(64) unsigned int () :: pre.c(142) int ()

strlen llib-lc:string.h(64) unsigned int () :: pre.c(256) int ()

strlen llib-lc:string.h(64) unsigned int () :: pre.c(264) int ()

strlen llib-lc:string.h(64) unsigned int () :: pre.c(283) int ()

strlen llib-lc:string.h(64) unsigned int () :: pre.c(330) int ()

strlen llib-lc:string.h(64) unsigned int () :: pre.c(332) int ()

strlen llib-lc:string.h(64) unsigned int () :: pre.c(392) int ()

Page 29: 11 Automated Testing

function argument ( number ) used inconsistently

strncmp (arg 3) llib-lc:string.h(47) unsigned int :: pre.c(146) int

strncmp (arg 3) llib-lc:string.h(47) unsigned int :: pre.c(266) int

strncpy (arg 3) llib-lc:string.h(39) unsigned int :: pre.c(271) int

strncmp (arg 3) llib-lc:string.h(47) unsigned int :: pre.c(285) int

strncpy (arg 3) llib-lc:string.h(39) unsigned int :: pre.c(290) int

function returns value which is always ignored

getInput getMatchingRunBuffer fprintf sprintf

sscanf strcpy strncpy

declared global, could be static

runbufferNumber pre.c(7)

Ready pre.c(10)

ReadyString pre.c(11)

End pre.c(12)

Receive pre.c(13)

Page 30: 11 Automated Testing

outFlush pre.c(14)

funcend pre.c(17)

on701 pre.c(18)

on702 pre.c(19)

getString pre.c(112)

getBufferNumberandSize pre.c(124)

FindString pre.c(132)

CountChars pre.c(153)

getInput pre.c(164)

putOutput pre.c(181)

ReplaceXuser701 pre.c(192)

ReplaceXuser702 pre.c(227)

ReplaceStrings pre.c(242)

getMatchingRunBuffer pre.c(304)

ExtractSocketNumberSt pre.c(413)

ExtractSocketNumberEn pre.c(429)

FlushRun pre.c(446)
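For illustration only, here is a fragment of the kind of C code that provokes messages like those above; it is hypothetical, not the actual pre.c:

    #include <stdio.h>
    #include <string.h>

    int bufferCount;                 /* "name defined but never used"      */
    int runbufferNumber;             /* "declared global, could be static" */

    void copyName(char *dst, const char *src)
    {
        int n = strlen(src);         /* "value type used inconsistently":
                                        strlen returns unsigned, n is int  */
        strncpy(dst, src, n);        /* "function argument used
                                        inconsistently": arg 3 is unsigned */
        printf("%s\n", dst);         /* "function returns value which is
                                        always ignored"                    */
    }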

Page 31: 11 Automated Testing

Code Instrumentor

• The code instrumentor helps programmers and testers measure structural coverage by reading source code.

– For example, the instrumentor might choose to make a measurement after a variable is defined or a branch is taken.

• The tool inserts a new line of code, a test probe, that will record information such as the number and duration of test executions.
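A hand-written sketch of what instrumented code looks like; in practice the instrumentor inserts the probe calls automatically, and probe_hit() here is a hypothetical probe runtime:

    #include <stdio.h>

    /* Hypothetical probe runtime: counts how often each probe fires. */
    static long probe_count[8];
    static void probe_hit(int id) { probe_count[id]++; }

    int classify(int x)
    {
        probe_hit(0);                     /* probe: function entered    */
        if (x < 0) {
            probe_hit(1);                 /* probe: negative branch     */
            return -1;
        }
        probe_hit(2);                     /* probe: non-negative branch */
        return x > 100;
    }

    int main(void)
    {
        classify(-5); classify(7); classify(200);
        for (int i = 0; i < 3; i++)
            printf("probe %d fired %ld times\n", i, probe_count[i]);
        return 0;
    }

A probe that never fires reveals an unexercised statement or branch, which is exactly what the coverage analyzer on a later slide reports.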

Page 32: 11 Automated Testing

Tools for the Testing Phase

• All the tools discussed so far are used before developers get to the testing phase.

• The tools discussed next are dynamic analyzers that must have test cases to run.

Page 33: 11 Automated Testing

Capture-Replay Tool

• The capture-replay tool works like a VCR or a tape recorder.

• When the tool is in the capture mode, it records all information that flows past it.

• The recording is called a script that is a procedure that contains instructions to execute one or more test cases.

• When the tool is in replay mode, it blocks incoming information and plays back a recorded script.
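A bare-bones sketch of the replay side in C, assuming a recorded script of timestamped input events; the event structure and dispatch() function are illustrative, not any tool's actual format:

    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative recorded event: what was done, where, and how long
       to wait before doing it (captured from a real user session). */
    struct event { const char *action; int x, y; unsigned delay_ms; };

    static const struct event script[] = {
        { "click",  120,  45, 500 },
        { "type_a",   0,   0, 120 },   /* keystroke into the focused field */
        { "click",  300, 210, 800 },   /* press the OK button              */
    };

    /* Stand-in for driving the application under test. */
    static void dispatch(const struct event *e)
    {
        printf("replay: %s at (%d,%d)\n", e->action, e->x, e->y);
    }

    int main(void)
    {
        /* Replay mode: honor the recorded timing, then re-issue each event. */
        for (size_t i = 0; i < sizeof script / sizeof script[0]; i++) {
            usleep(script[i].delay_ms * 1000u);
            dispatch(&script[i]);
        }
        return 0;
    }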

Page 34: 11 Automated Testing

• Two features exist in some commercial capture-replay tools:

– First, an object-level record-playback feature that enables capture-replay tools to record information at the object, control, or widget level.

– Second, a load simulator: a facility that lets the tester simulate hundreds or even thousands of users simultaneously working on the software under test.

• Companies are confronted with a "build or buy" decision.

Page 35: 11 Automated Testing

• Most such tools are helpful only to people who are testing GUI-driven systems.

• Many unit testers, integration testers, and embedded system testers do not deal with large amounts of software that interacts with graphical user interfaces (GUIs).

• Therefore, most capture-replay tools will not satisfy these testers' needs.

• Capture-replay tools may be packaged with other tools such as test managers.

• A tool called a test manager helps testers control large numbers of scripts and report on the progress of testing.

Page 36: 11 Automated Testing

Test Harness

• A capture-replay tool connects with software under test through an interface, usually located at the screen or terminal.

• But the software under test will probably also have interfaces with an operating system, a database system, and other application systems.

• Each such interface needs to be tested too, using a test harness.

• If some parts of the software being developed are not available during testing, testers build software packages, called stubs and drivers, to simulate the missing parts.
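A minimal C sketch of the stub-and-driver idea, with hypothetical names: greet() is the unit under test, db_lookup() is the interface the missing database layer would provide, the stub fakes it, and main() acts as the driver:

    #include <stdio.h>
    #include <string.h>

    /* Interface normally provided by the (unavailable) database layer. */
    const char *db_lookup(int id);

    /* Unit under test: greets a user looked up by id. */
    void greet(int id, char *out, size_t n)
    {
        snprintf(out, n, "Hello, %s!", db_lookup(id));
    }

    /* Stub: stands in for the missing database component. */
    const char *db_lookup(int id)
    {
        return id == 7 ? "Ada" : "unknown";
    }

    /* Driver: exercises the unit and checks the result. */
    int main(void)
    {
        char buf[64];
        greet(7, buf, sizeof buf);
        printf("%s\n", strcmp(buf, "Hello, Ada!") == 0 ? "PASS" : "FAIL");
        return 0;
    }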

Page 37: 11 Automated Testing

• Test harnesses have been custom-built per application for years.

• Most harnesses did not become off-the-shelf products.

• Recently, interface standards and standard ways of describing application interfaces through modern software development tools have enabled commercial test harness generators.

Page 38: 11 Automated Testing

Comparator

• The comparator compares actual outputs to expected outputs and flags differences.

• Software passes a test case when actual and expected output values are within allowed tolerances.

• When the complexity and volume of outputs are low, the "diff" utility will provide all the comparison information testers need.

• Sometimes diff cannot compare data precisely enough to satisfy testers.

• Then testers may turn to comparators.

• Most of today's capture-replay tools include a comparator.
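A sketch of the tolerance idea for numeric outputs, where an exact diff-style comparison would be too strict:

    #include <math.h>
    #include <stdio.h>

    /* Pass when actual is within an absolute tolerance of expected;
       real comparators also support relative and field-wise rules. */
    static int within_tolerance(double actual, double expected, double tol)
    {
        return fabs(actual - expected) <= tol;
    }

    int main(void)
    {
        double expected = 98.60, actual = 98.61;  /* e.g., a computed reading */
        printf("%s\n", within_tolerance(actual, expected, 0.05) ? "PASS" : "FAIL");
        return 0;
    }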

Page 39: 11 Automated Testing

Structure Coverage Analyzer

• The structure coverage analyzer tells the tester which statements, branches, and paths in the code have been exercised.

• Structure coverage analyzers fall into two categories:

– intrusive, and

– nonintrusive.

• Intrusive analyzers use a code instrumentor to insert test probes into the code.

• The code with the probes is compiled and exercised.

Page 40: 11 Automated Testing

• Nonintrusive analyzers gather information in a separate hardware processor that runs in parallel with the processor being used for the software under test.

• If sold commercially, nonintrusive analyzers usually come with the parallel processor(s) included as part of the tool package.

• A special category of coverage analyzers, called memory leak detectors, find reads of uninitialized memory as well as reads and writes beyond the legal boundaries of a program.

• Since these tools isolate defects, they may be classified as debuggers.