7/27/2019 Whitepaper Agile Testing
A SmartBear White Paper
Every day, Agile development teams are challenged to deliver high-quality software as quickly as
possible. Yet testing can slow down the go-to-market process. This white paper suggests time-saving
techniques that make the work of Agile testing easier and more productive. Comprised of precise
and targeted solutions to common Agile testing challenges, Smart Agile Testing offers tips and
advice to ensure adequate test coverage and traceability, avoid build-induced code breakage, identify
and resolve defects early in the development process, improve API code quality, and ensure that new
releases don't cause performance bottlenecks.
Five Challenges for Agile Testing Teams
Solutions to Improve Agile Testing Results
Contents
What Are the Most Common Challenges Facing Agile Testing Teams?
Challenge 1: Inadequate Test Coverage
Challenge 2: Accidentally Broken Code Due to Frequent Builds
Challenge 3: Detecting Defects Early, When They're Easier and Cheaper to Fix
Challenge 4: Inadequate Testing for Your Published API
Challenge 5: Ensuring That New Releases Don't Create Performance Bottlenecks
About SmartBear Software
Ensuring Software Success℠
www.smartbear.com/agile
What Are the Most Common Challenges Facing Agile Testing Teams?
Agile development is a faster, more efficient, and cost-effective method of delivering high-quality software.
However, agile presents testing challenges beyond those of waterfall development. That's because agile requirements
are more lightweight, and agile builds happen more frequently to sustain rapid sprints. Agile testing
requires a flexible and streamlined approach that complements the speed of agile.
Smart Agile Testing is a set of timesaving techniques specifically designed to make the work of agile testing
teams easier and more productive. It is an empowering process that produces great results and has a simple
mission: Get the best possible testing results with the least amount of work.
These challenge- and solution-based techniques do not require major changes in your existing workflow. You
can adopt them in increments, which enables you to focus on one specific challenge and meet it head-on with a
precise, targeted solution. Following are five common challenges agile teams face and recommended solutions
to handle them quickly and effectively.
Recommended Solutions to the Five Common Challenges
Challenge 1: Inadequate Test Coverage
Challenge 2: Accidentally Broken Code Due to Frequent Builds
Challenge 3: Finding Defects Early, When They're Cheaper and Easier to Fix
Challenge 4: Inadequate Testing for Your Published API
Challenge 5: Ensuring That New Releases Don't Create Performance Bottlenecks
Linking tests to user stories (traceability) for insight into test coverage for each user story
Integration with source check-in to find changed code that was not anticipated or planned for
Analyzing specific metrics to identify traceability and missing test coverage
Running automated regression tests on every build to discover broken code
Analyzing specific metrics to identify regression runs and broken code
Performing peer reviews of source code and test artifacts to find early-stage defects
Using static analysis tools to identify early-stage defects
Analyzing peer review statistics and defect aging to address defects early, when they're least costly to fix
Running automated API tests on every build to ensure your API is working as designed
Running load testing on your API calls to be sure your API is responsive
Analyzing specific metrics to determine API test coverage and responsiveness
Conducting application and API load testing to ensure that performance is not impacted by the new release
Implementing production monitoring to detect how your application is performing in production
Analyzing specific metrics to identify bottlenecks in application/API performance
Challenge 1: Inadequate Test Coverage
Inadequate test coverage can cause big problems. It's often the result of too few tests written for each user
story and a lack of visibility into code that was changed unexpectedly. As we all know, developers sometimes
change code beyond the scope of the features being released. They do it for many reasons, such as to fix defects,
to refactor the code, or because they are bothered by the way the code works and just want to improve
it. Often these code changes are not tested, particularly when you are only writing tests for planned new features
in a release.
To eliminate this problem, it's important to have visibility into all the code being checked in. By seeing the code
check-ins, you can easily spot any missing test coverage and protect your team from unpleasant surprises once
the code goes into production.
How Can You Ensure Great Test Coverage of New Features?
Before you can have adequate test coverage, you first must have a clear understanding of the features being delivered
in the release. For each feature, you must understand how the feature is supposed to work, its constraints
and validations, and its ancillary functions (such as logging and auditing). Agile developers build features based
on a series of user stories (sometimes grouped by themes and epics). Creating your test scenarios at the user
story level gives you the best chance of achieving optimal test coverage.
Once your QA and development teams agree on the features to be delivered, you can begin creating tests for
each feature. Coding and test development should be done in parallel to ensure the team is ready to test each
feature as soon as it's published to QA.
Be sure to design a sufficient number of tests to ensure comprehensive results:
Positive Tests: Ensure that the feature is working as designed, with full functionality, is cosmetically correct,
and has user-friendly error messages.
Negative Tests: Users often (okay, usually) start using software without first reading the manual. As a result,
they may make mistakes or try things you never intended. For example, they may enter invalid dates, key in
characters such as dollar signs or commas into numeric fields, or enter too many characters (e.g., enter 100
characters into a field designed for no more than 50). Users also may attempt to save records without
completing all mandatory fields, delete records that have established relationships with other records (such
as master/detail scenarios), or enter duplicate records. When you design tests, it's important to understand
the constraints and validations for each requirement and create enough negative tests to ensure that each
constraint and validation is fully vetted. The goal is to make the code dummy-proof.
Performance Tests: As we will see later in this paper, it's a good idea to test new features under duress.
There are ways to automate this, but you should conduct some manual tests that perform timings with large
datasets to ensure that performance doesn't suffer too much when entering a large amount of data
or when there are many concurrent users.
Ancillary Tests: Most well-designed systems write errors to log files, record changes to records via audits,
and use referential integrity so that whenever a master record is deleted, its detail records are deleted
simultaneously. Many systems regularly run purge routines; be certain that as you add new features they
are covered by ancillary tests. Finally, most systems have security features that limit access to applications
so specific people only have rights to specific functions. Ancillary tests ensure that log files and audits are
written, referential integrity is preserved, security is enforced, and purge routines cleanly remove all related
data as needed.
Applying a sufficient number of tests to each feature to fully cover all the scenarios above is called traceability.
Securing traceability is as simple as making a list of features, including the number and breadth of tests
that cover positive, negative, performance, and ancillary test scenarios. Listing these by feature ensures that
no feature is insufficiently tested or not tested at all.
To learn more about traceability, view this video: http://smartbear.com/support/screencasts/almcomplete/requirements-traceability/
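The feature-by-test-type list described above can be sketched in a few lines. This is a minimal illustration, not a QAComplete feature; the user story names and test inventory are hypothetical:

```python
from collections import defaultdict

TEST_TYPES = ("positive", "negative", "performance", "ancillary")

# Illustrative test inventory: (user story, test type) pairs.
tests = [
    ("US-101 Add order", "positive"),
    ("US-101 Add order", "negative"),
    ("US-101 Add order", "performance"),
    ("US-101 Add order", "ancillary"),
    ("US-102 Cancel order", "positive"),
]

def traceability_matrix(tests):
    """Count tests per user story, broken out by test type."""
    matrix = defaultdict(lambda: {t: 0 for t in TEST_TYPES})
    for story, test_type in tests:
        matrix[story][test_type] += 1
    return dict(matrix)

def coverage_gaps(matrix):
    """Return, for each under-covered story, the test types it lacks."""
    return {story: [t for t, n in counts.items() if n == 0]
            for story, counts in matrix.items() if 0 in counts.values()}

m = traceability_matrix(tests)
print(coverage_gaps(m))  # flags US-102: no negative/performance/ancillary tests
```

Reviewing such a matrix per sprint makes it immediately obvious which features are insufficiently tested or not tested at all.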
How Can You Detect Changes to Code Made Outside the Scope of New Features?
It's common for developers to make changes to code that go beyond the scope of features being released.
However, if your testing team is unaware of the changes made, you may end up with an unexpected defect because
you couldn't test them.
So, what's the solution? One approach is to branch the code using a source control management (SCM) system
to ensure that any code outside the target release remains untouched. For source code changes, you need
visibility into each module changed and the ability to link each module with a feature. By doing this, you can
quickly identify changes made to features that aren't covered by your testing.
Consider assigning a testing team member to inspect all code changes to your source control system daily and
ferret out changes made to untested areas. This can be cumbersome; it requires diligence and time. A good
alternative is to have your source control system send daily code changes to a central place so your team can
review them. Putting these changes in a central location helps implement rules that associate specific modules
with specific features. That way, you'll see when a change is made outside of a feature that's being developed
in the current release, and check off the reviewed changes so you can be certain each check-in is covered.
One way to achieve this is to set up a trigger in your SCM that alerts you whenever a file is changed. Alternatively,
you can use a feature that inspects your source control system and sends all check-ins to a central repository
for review. If you haven't already built something like this, consider SmartBear's QAComplete; it has a feature
built specifically for this important task. Through its OpsHub connector, QAComplete can send all source code
changes into the Agile Tasks area of the software, which you can then use to build rules that link source modules
with features and to flag check-ins as being reviewed.
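The rule-based review described above can be sketched as follows. This is an illustrative stand-in for an SCM trigger or QAComplete's Agile Tasks rules, not either tool's actual API; the file patterns and feature names are invented for the example:

```python
import fnmatch

# Hypothetical rules linking source modules to features in the current release.
FEATURE_RULES = {
    "orders/*.py": "US-101 Add order",
    "billing/*.py": "US-205 Invoice export",
}

def classify_checkins(changed_files):
    """Split a day's check-ins into (file, feature) matches and
    out-of-scope files that need tester attention."""
    in_scope, out_of_scope = [], []
    for path in changed_files:
        for pattern, feature in FEATURE_RULES.items():
            if fnmatch.fnmatch(path, pattern):
                in_scope.append((path, feature))
                break
        else:
            out_of_scope.append(path)  # changed outside any planned feature
    return in_scope, out_of_scope

scoped, flagged = classify_checkins(["orders/create.py", "utils/dates.py"])
print(flagged)  # files no planned feature accounts for
```

Anything landing in the flagged list is exactly the "unexpected change" this section warns about: code that was modified but has no test coverage plan behind it.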
What Are the Most Important Test Coverage Metrics?
Metrics for adequate test coverage focus on traceability, test run progress, defect discovery, and defect fix rate.
These include:
Traceability Coverage: Count the number of tests you have for each requirement (user story). Organize the
counts by test type (positive, negative, performance, and ancillary). Reviewing by user story shows if
you have sufficient test coverage, so you can be confident of the results.
Blocked Tests: Use this technique to identify requirements that cannot be fully tested because of
defects or unresolved issues.
Test Runs by Requirement (User Story): Count the number of tests you have run for each requirement,
as well as how many have passed, failed, and are still awaiting a run. This metric indicates how close you are
to test completion for each requirement.
Test Runs by Configuration: If you're testing in different operating systems and browsers, it's
important to know how many tests you have run against each supported browser and OS. These counts
indicate how much coverage you have.
Daily Test Run Trending: Test run trending helps you visualize, day by day, how many tests have passed,
failed, and are waiting to be run. This shows whether you can complete all testing before the test cycle is
complete. If it shows you're falling behind, run your highest priority tests first.
Defects by Requirement (User Story): Understanding the number of defects discovered by requirement
can trigger special focus on specific features so that you can concentrate on those with the most bugs. If
you find that specific features tend to be most buggy, you'll be able to run those tests more often to ensure
full coverage of the buggy areas.
Daily Defect Trending: Defect trending helps you visualize, day by day, how many defects are found
and resolved. It also shows whether you can resolve all high-priority defects before the testing cycle is
complete. If you know you are lagging, focus the team on the most severe, highest priority defects first.
Defect Duration: This shows how quickly defects are being fixed. Separating them by priority ensures that
the team addresses the most crucial items first. A long duration on high-priority items also signals slow
or misaligned development resources, which your team and the development team should resolve jointly.
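As one example, the Defect Duration metric above reduces to a simple calculation: each defect ages from the day it is opened until the day it is fixed (or until today, if still open), averaged by priority. A minimal sketch, with illustrative records:

```python
from datetime import date

# Illustrative defect records: (priority, date opened, date fixed or None).
defects = [
    ("high", date(2019, 7, 1), date(2019, 7, 3)),    # fixed in 2 days
    ("high", date(2019, 7, 2), None),                # still open
    ("low",  date(2019, 6, 20), date(2019, 7, 10)),  # fixed in 20 days
]

def defect_duration(defects, today):
    """Average defect age in days, grouped by priority.
    Open defects keep aging until 'today'."""
    totals = {}
    for priority, opened, fixed in defects:
        age = ((fixed or today) - opened).days
        count, total = totals.get(priority, (0, 0))
        totals[priority] = (count + 1, total + age)
    return {p: total / count for p, (count, total) in totals.items()}

print(defect_duration(defects, today=date(2019, 7, 8)))
# prints {'high': 4.0, 'low': 20.0}
```

Because open defects keep aging, a rising high-priority average is visible in the daily metric review before the sprint ends, which is exactly when it is cheapest to act on.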
How Can You Ensure Your Test Coverage Team Is Working Optimally?
As a best practice, we recommend that each day your testing team:
Participates in Standup Meetings: Discuss impediments to test progress.
Reviews Daily Metrics: When you spot issues such as high-priority defects becoming stale, work with the
development leader to bring attention to them. If your tests are not trending to completion by the sprint
end date, mitigate your risk by focusing on the highest priority tests. When you discover code changes that
are not covered by tests, immediately appoint someone to create them.
Challenge 2: Accidentally Broken Code Due to Frequent Builds
Performing daily builds introduces the risk of breaking existing code. If you rely solely on manual test runs, it's
not practical to fully regress your existing code each day. A better approach is to use an automated testing tool
that records and runs tests automatically. This is a great way to test more stable features to ensure that new
code has not broken them.
Most agile teams perform continuous integration, which simply means that they check in source code frequently
(typically several times a day). Upon code check-in, an automated process creates a software
build. An automated testing tool can then perform regression testing whenever you launch a new build. There
are many tools on the market for continuous integration, including SmartBear's Automated Build Studio, CruiseControl,
and Hudson. It's a best practice to have the build system automatically launch automated tests to
verify the stability and integrity of the build.
How Can You Get Started with Automated Testing?
The best way to get started is to proceed with baby steps. Don't try to create automated tests for every feature.
Focus on the tests that provide the biggest bang for your buck. Here are some proven methods:
Assign a Dedicated Resource: Few manual testers can do double duty and create both manual and
automated regression tests. Automated testing requires a specialist with both programming and analytical
skills. Optimize your efforts by dedicating a person to work solely on automation.
Start Small: Create positive automated tests that are simple. For example, imagine you are creating an
automated test to ensure that the order processing software can add a new order. Start by creating the test
so that it adds a new order with all valid data (a positive test). You'll drive yourself crazy if you try to create
a set of automated tests to perform every negative scenario that you can imagine. Don't sweat it. You can
always add more tests later. Focus on proving that your customers can add a valid order and that new code
doesn't break that feature.
Conduct High-Use Tests: Create tests that cover the most frequently used software features. For example,
in an order processing system, users create, modify, and cancel orders every day; be sure you have tests
for that. However, if orders are exported rarely, don't waste time automating the export process until you
complete all the high-use tests.
Automate Time-Intensive Tests/Test Activities: Next, focus on tests that require a long setup time. For
example, you may have tests that require you to set up the environment (i.e., create a virtual machine
instance, install a database, enter data into the database, and run a test). Automating the setup process
saves substantial time during a release cycle. You may also find that a single test takes four hours to run by
hand. Imagine the amount of time you will recoup by automating that test so you can run it by clicking a
button!
Prioritize Complex Calculation Tests: Focus on tests that are hard to validate. For example, maybe your
mortgage software has complex calculations that are very difficult to verify because the formulas for
producing the calculation are error-prone if done manually. By automating this test, you eliminate the
manual calculations. This speeds up testing, ensures the calculation is repeatable, reduces the chance of
human error, and raises confidence in the test results.
Use Source Control: Store the automated tests you create in a source control system. This safeguards
against losing your work due to hard drive crashes and prevents overwriting of completed tests. Source
control systems provide a safeguard by allowing you to check code in and out and retain prior test versions
without fear of accidental overwriting.
Once you create a base set of automated tests, schedule them to run on each build. Daily, identify tests that
failed. Confirm whether they flag a legitimate issue or whether the failure is due to an unexpected change to the code. When
a defect is identified, you should be very pleased that your adoption of test automation is paying dividends.
Remember, start small and build your automated test arsenal over time. You'll be very pleased by how much of
your regression testing has been automated, which frees you and your team to perform deeper functional testing
of new features.
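The "start small" advice above (one simple positive test proving that a valid order can be added) might look like this. The OrderBook class is a hypothetical stand-in for the application under test, not a real SmartBear or customer API:

```python
class OrderBook:
    """Minimal stand-in for the order processing software under test."""

    def __init__(self):
        self.orders = []

    def add_order(self, customer, amount):
        # The validation a negative test would later exercise.
        if not customer or amount <= 0:
            raise ValueError("invalid order")
        self.orders.append({"customer": customer, "amount": amount})
        return len(self.orders)  # order number

def test_add_valid_order():
    """Positive test: a customer can add an order with all valid data."""
    book = OrderBook()
    order_no = book.add_order("ACME", 250.0)
    assert order_no == 1
    assert book.orders[0]["customer"] == "ACME"

test_add_valid_order()
print("positive test passed")
```

Once this single test runs on every build, it proves the highest-value scenario still works; negative and edge-case tests can be layered on later without blocking the rollout of automation.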
Reliable automated testing requires a proven tool. As you assess options, remember that SmartBear's
TestComplete is easy to learn and offers the added benefit of integrating with QAComplete, so you can schedule
your automated tests to run unattended and view the run results in a browser.
To learn more about TestComplete integration, see this video: http://smartbear.com/support/screencasts/almcomplete/testcomplete-integration/
What Is Test-Driven Development?
Agile practitioners sometimes use test-driven development (TDD) to improve unit testing. Using this approach,
the agile developer writes code by using automated testing as the driver to code completion.
Imagine a developer is designing an order entry screen. She might start by creating a prototype of the screen
without connecting any logic, and then create an automated test of the steps for adding an order. The automated
test would validate field values, ensure that constraints were being enforced properly, and so on. The test would be run
before any logic was written into the order entry screen. The developer would then write code for the order entry
screen and run the automated test to see if it passes. She would only consider the screen to be done when the
automated test runs to completion without errors.
To illustrate further, let's say you're writing an object
that, when called with a specific input, produces a specific
output. By implementing a TDD approach, you can write
code, run the automated tests, and continue that process
iteratively until attaining the expected output for each input.
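That loop can be shown in miniature. Here the test exists before the implementation, fails against a stub, and passes once just enough logic is written; order_total is an invented example function, not part of any named product:

```python
# Step 0: the test is written first and drives the implementation.
def test_order_total():
    assert order_total([("widget", 2, 5.0), ("gadget", 1, 9.5)]) == 19.5

# Step 1: a stub that makes the test fail ("red").
def order_total(lines):
    raise NotImplementedError

try:
    test_order_total()
except NotImplementedError:
    pass  # red: the failing test tells us what to build next

# Step 2: write just enough code to make the test pass ("green").
def order_total(lines):
    # Sum quantity * unit price across the order lines.
    return sum(qty * price for _name, qty, price in lines)

test_order_total()  # green: the feature counts as "done" only now
print("TDD cycle complete")
```

In practice the developer repeats this red/green cycle for each new behavior, so the automated test suite grows in lockstep with the code it verifies.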
Which Metrics Are Most Important for Successful Automated Testing?
As you grapple with this challenge, focus on metrics that
analyze automated test coverage, automated test run
progress, defect discovery, and defect fix rate, including:
Feature Coverage: Count the number of automated tests for each feature. You'll know when you have
enough tests to be confident that you are fully covered from a regression perspective.
Requirement/Feature Blocked: Use this metric to identify which requirements are blocking automation. For
example, third-party controls may require custom coding, and current team members may lack the expertise to
write it.
Daily Test Run Trending: This shows you, day by day, the number of automated tests that are run, passed,
and failed. Inspect each failed test and post defects for issues you find.
Daily Test Runs by Host: When running automated tests on different host machines (i.e., machines with
different operating system or browser combinations), analyzing your runs by host alerts you to specific OS
or browser combinations that introduce new defects.
What Can You Do to Ensure Your Automated Test Team Is Working Optimally?
We recommend that testing teams perform these tasks every day:
Review Automated Run Metrics: When overnight automated test runs flag defects, do an immediate manual
retest to rule out false positives. Then log real defects for resolution.
Use Source Control: Review changes you've made to your automated tests and check them into your source
control system for protection.
Continue to Build on Your Automated Tests: Work on adding more automated tests to your arsenal, following
the guidelines described previously.
Challenge 3: Detecting Defects Early, When They're Easier and Cheaper to Fix
You know that defects found late in the development cycle require
more time and money to fix. And defects not found until production
are an even bigger problem. A primary goal of development and
testing teams is to identify defects as early as possible, reducing
the time and cost of rework. There are two ways to accomplish
this: Implement peer reviews, and use static analysis tools to scan
code to identify defects as early as possible. Is there value in using
more than one approach? Capers Jones, the noted software quality
expert, explains the need for multiple techniques in a recent white
paper available for download on SmartBear's website.
What Should You Review?
As you're developing requirements and breaking them into user
stories, also conduct team reviews. You need to ensure every story:
Is clear
Supports the requirement
Identifies constraints and validations the programmer and testers need to know
"A synergistic combination of formal inspections, static analysis, and formal testing can achieve combined
defect removal efficiency levels of 99%. Better, this synergistic combination will lower development costs and
schedules and reduce technical debt by more than 80% compared to testing alone." 1
- Capers Jones
1 Capers Jones, "Combining Inspections, Static Analysis, and Testing to Achieve Defect Removal Efficiency Above 95%," January 2012.
Complimentary download from www.smartbear.com
There are two ways to accomplish this: Hold regular meetings so the team can review the user stories, or use a
tool for online reviews to identify missing constraints and validations before you get too far into the development
cycle. Defining the user stories also helps prevent defects that arise because the programmer failed to add
logic for those constraints and validations. Tools such as SmartBear's QAComplete make it easy to conduct user
story reviews online.
In addition to automated tests, your testers also need to define manual tests that offer comprehensive coverage
for each user story. Then the team formally reviews the manual tests to ensure that nothing critical is missing.
Programmers often suggest additional tests the tester has not considered, which prevents defects before the
code goes into production.
The second option is to simply allow the team to go online and review the tester's plan to ensure nothing was
overlooked. That's the better approach because it allows team members to contribute during lulls in their work
schedules. Tools such as SmartBear's QAComplete make it easy to conduct manual test reviews online.
What Are the Best Ways to Review Automated Tests?
While developing or updating automated tests, it's always a good idea to have another set of eyes take a look.
Code reviews of scripted automated tests identify logic issues, missing test scenarios, and invalid tests. As
discussed above, it's critical that you check all automated tests into a source control system so you can roll back
to prior versions if necessary.
There are several techniques you can apply to automated code reviews. Over-the-shoulder reviews entail
sitting with the automated test designer and reviewing the automated tests together. As you do this, you can
recommend changes until you're comfortable with the level of testing provided by your automation engineer.
Remote teams cannot easily perform over-the-shoulder reviews. Although you could set up an online meeting
and collaborate by sharing computer screens, it's a much bigger challenge to manage time zone differences and
find a convenient time to conduct a review.
A convenient alternative to over-the-shoulder reviews is peer code review. Peer code review tools streamline
the entire process by enabling an online workflow.
Here's an example: Your automation engineer checks the automation scripts into a source control system. Upon
check-in, he uses the peer review tool to identify automation scripts that need code review. Using the tool, he
selects a specific reviewer. The reviewer views the script in a window and makes notes. The reviewer can write
notes on specific code lines to identify which lines of the automation script are being questioned. The tool also
enables the reviewer to create a defect for the automation engineer to resolve when changing scripts. For the
broadest integrations and most flexible workflow, consider SmartBear's CodeCollaborator, the most agile peer
code review tool on the market.
What Do You Gain from Peer Review?
The lack of formal code reviews can adversely affect the overall testing effort, so you should strongly advocate
for them with the agile development team. When developers perform regular code reviews, they identify logic errors,
missing error routines, and coding errors (such as overflow conditions and memory leaks), and they dramatically
reduce the number of defects. While code reviews can be done over the shoulder, it's much more effective
and efficient to use the right tool.
How Can Static Analysis Help?
The testing team also should strongly advocate the use of static analysis tools. Static analysis tools automatically
scan source code and identify endless loops, missing error handling, poor implementation of coding standards,
and other common defects. By running static code analysis on every build, developers can prevent defects that
might not otherwise be discovered until production.
Although many static analysis tools can supplement the quality process by identifying bugs, they really
only tell you that you have a bug. They can't tell you how dangerous the bug is or the damage it could
cause. Code review provides insight into the impact of the bug (is it a showstopper or is it trivial?) so developers
can prioritize the fix. This is the main reason that SmartBear recommends that you use both code review
and static analysis tools.
What Are the Most Important Peer Code Review Metrics?
Focus your metrics around peer code review progress
and the number of defects discovered from peer reviews.
Consider these:
Lines of Code Inspected: Quantifies the amount of
code that's been inspected.
Requirement (User Story) Review Coverage: Identifies
which user stories have been reviewed.
Manual Test Review Coverage: Reports which manual
tests have been reviewed.
Automated Test Review Coverage: Shows which
automated tests have been reviewed.
Code Review Coverage: Identifies which source code has been reviewed and which source check-ins remain
to be reviewed.
Static Analysis Issues Found: Identifies issues the static analysis scans found.
Defects Discovered by Peer Review: Reports the number of defects discovered by peer reviews. You may
categorize them by type of review (user story review, manual test review, etc.).
What Can You Do Each Day to Ensure Your Testing Team Is Working Optimally?
Each day, your testing team should:
Perform Peer Reviews of user stories, manual tests, automated tests, and source code. Log defects found
during the review so you can analyze the results of using this strategy.
Automatically Run Static Analysis to detect coding issues. Review the identified issues, configure the
tolerance of your static analysis tool to ignore false positives, and log any true defects.
Review Defect Metrics related to peer reviews to determine how well this strategy helps you reduce defects
early in the coding phase.
Challenge 4: Inadequate Testing for Your Published API
Many testers focus on testing the user interface and miss the opportunity to perform API testing. If your software
has a published API, your testing team needs a solid strategy for testing it.
API testing often is omitted because of the misperception that it takes programming skills to call the properties
and methods of your API. While programming skill can be helpful for both automated and API testers, it's not
essential if you have tools that allow you to perform testing without programming.
How Do You Get Started with API Testing?
Similar to automated testing, the best way to get started with API testing is to take baby steps. Dont try to
create tests or every API unction. Focus on the tests that provide the biggest bang or your buck. Here are some
guidelines to help you ocus:
Dedicated Resource: Dont have your manual testers develop API tests. Have your automation engineer
double as an API tester; the skill set is similar.
High Use Functions: Create tests that cover the most requently called API unctions. The best way to
determine the most called unctions is to log the calls or each API unction.
Usability Tests: When developing API tests, be sure to create negative tests that force the API function to
return an error. Because APIs are a black box to the end user, they often are difficult to debug. Therefore,
if a function is called improperly, it's important that the API returns a friendly and actionable message that
explains what went wrong and how to fix it.
Security Tests: Build tests that attempt to call functions without the proper security rights. Create tests that
exercise the security logic. It can be easy for developers to enforce security constraints in the user interface
but forget to enforce them in the API.
Stopwatch-level Performance Tests: Time methods (entry and exit points) to analyze which methods take
longer to process than anticipated.
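The negative-testing guideline above can be sketched in a few lines. Everything here is hypothetical, including the `create_order` function and the response fields; it stands in for whatever your real API returns, and simply shows what a "friendly and actionable" error check looks like:

```python
# Minimal sketch of a negative API test. The function and response
# shape below are hypothetical stand-ins for a real API call.

def create_order(quantity):
    """Stand-in for a real API function; returns a JSON-like dict."""
    if not isinstance(quantity, int) or quantity <= 0:
        return {
            "status": 400,
            "error": "quantity must be a positive integer",
            "hint": "Pass an integer greater than zero, e.g. quantity=1.",
        }
    return {"status": 200, "order_id": 12345}

def test_negative_create_order():
    resp = create_order(-5)
    # A negative test should produce an error, not a success or a crash.
    assert resp["status"] == 400
    # The message should say what went wrong and how to fix it.
    assert "quantity" in resp["error"]
    assert resp["hint"]

test_negative_create_order()
```

The same pattern extends to the security tests above: call the function without the proper rights and assert you get a clear permission error rather than data.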
Once you create a base set of API tests, schedule them to run automatically on each build. Every day, review
any tests that failed and confirm that they represent legitimate issues, not just an expected change you weren't
aware of. If a test identifies a real issue, be happy that your efforts are paying off.
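The "stopwatch-level" timing guideline above can be sketched as a simple decorator that records each method's entry-to-exit time. The `lookup_item` function and its delay are illustrative, not from any real API:

```python
import time
from collections import defaultdict

# Record entry-to-exit times per function so slow API methods stand out.
timings = defaultdict(list)

def stopwatch(func):
    """Decorator that times every call to func, even when it raises."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            timings[func.__name__].append(time.perf_counter() - start)
    return wrapper

@stopwatch
def lookup_item(item_id):   # hypothetical API method
    time.sleep(0.01)        # stand-in for real processing work
    return {"id": item_id}

lookup_item(7)
worst = {name: round(max(ts), 3) for name, ts in timings.items()}
print(worst)
```

Reviewing the worst-case time per method each day makes it obvious which functions take longer to process than anticipated.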
API testing can be done by writing code to exercise each function, but if you want to save time and effort, use a
tool. Remember, our mission is to get the most out of testing efforts with the least amount of work.
When considering tools, take a look at SmartBear's soapUI Pro. It's easy to learn and has scheduling capabilities
so your API tests can run unattended, and you can view the results easily.
Which API Metrics Should You Watch?
Focus on API function coverage, API test run progress, defect discovery, and defect fix rate. Here are
some metrics to consider:
Function Coverage: Identifies which functions your API tests cover. Focus on the functions that
are called most often. This metric enables you
to determine if your testing completely covers
your high-use functions.
Blocked Tests: Identifies API tests that are
blocked by defects or external issues (for example, compatibility with the latest version of .NET).
Coverage within Function: Most API functions contain several properties and methods. This metric identifies
which properties and methods your tests cover, to ensure that all functions are fully tested (or at least the
ones used most often).
Daily API Test Run Trending: Shows, day by day, how many API tests are run, passed, and failed.
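As a rough illustration of the function-coverage metric above, you can combine API call logs with the list of functions your tests exercise. The call log and function names below are made-up data, not from any real system:

```python
from collections import Counter

# Hypothetical call log: which API functions production clients invoke.
call_log = ["get_user", "get_user", "create_order", "get_user",
            "create_order", "delete_user", "export_report"]

# Functions your API test suite currently exercises (hypothetical).
tested = {"get_user", "create_order"}

calls = Counter(call_log)
top = [name for name, _ in calls.most_common(3)]   # highest-use functions
covered = [name for name in top if name in tested]

coverage = len(covered) / len(top)
print(f"high-use coverage: {coverage:.0%}")
```

Tracking this number over time tells you whether your testing keeps pace with the functions your users actually call.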
What Can You Do Each Day to Ensure Your API Testing Team Is Working Optimally?
Testing teams should perform these tasks every day:
Review API Run Metrics: Review your key metrics. If the overnight API tests found defects, retest them
manually to rule out false positives. Log all real defects for resolution.
Continue to Build on Your API Tests: Work on adding more API tests to your arsenal using the guidelines
described above.
Challenge 5: Ensure That New Releases Don't Create Performance Bottlenecks
In a perfect world, adding new features in your current release would not cause any performance issues. But
we all know that as software matures with the addition of new features, the possibility of performance
issues increases substantially. Don't wait until your customers complain before you begin testing performance.
That's a formula for very unhappy customers.
System slowdowns can be introduced in multiple places: your user interface, batch processes, and API. Create
processes that ensure performance is monitored in all of them and issues are mitigated. Last but by no means
least, you should also implement automatic production monitoring to check your systems for speed, which
provides valuable statistics that enable you to improve performance.
How Do You Get Started with Application Load Testing?
Your user interface is the most visible place for performance issues to crop up. Users are very aware when they
are waiting too long for a new record to be added.
When you're ready for load testing, it is important to set a performance baseline for your application, website, or
API:
The response time for major features (e.g., adding and modifying items, running reports)
The maximum number of concurrent users the software can handle
Whether the application fails or generates errors when too many visitors are using it
Compliance with specific quality-of-service goals
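The baseline idea above can be sketched with standard-library threads: simulate a batch of concurrent users, time each request, and record the statistics. The `fetch_page` function here is a stub with a fixed delay standing in for a real HTTP call; the user count and timings are illustrative:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def fetch_page():
    """Stand-in for one user's request; replace with a real HTTP call."""
    start = time.perf_counter()
    time.sleep(0.02)                     # simulated server work
    return time.perf_counter() - start   # response time in seconds

# Simulate 50 concurrent users and record every response time.
with ThreadPoolExecutor(max_workers=50) as pool:
    times = list(pool.map(lambda _: fetch_page(), range(50)))

baseline = {
    "users": 50,
    "avg_s": round(statistics.mean(times), 3),
    "p95_s": round(sorted(times)[int(0.95 * len(times))], 3),
}
print(baseline)
```

A real load-testing tool does far more (ramp-up schedules, distributed agents, server-side counters), but even this sketch shows why a stopwatch and a spreadsheet cannot simulate hundreds of simultaneous users.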
It's very difficult to set a baseline without a tool. You could easily record the response
time of every feature manually, using a stopwatch and recording the times in a spreadsheet. But trying to
simulate hundreds or thousands of concurrent users is impossible without the right tool: you won't have enough
people connecting at the same time to get those statistics. When you're looking at tools, consider SmartBear's
LoadComplete. It's easy to learn, inexpensive, and can handle all the tasks needed to create your baseline.
Once you establish a baseline, you need to run the same tests after each new software release to ensure it
didn't degrade performance. If performance suffers, you need to know which functions degraded so the technical
team can address them. Once you create your baseline with LoadComplete, you can run those same load tests
on each release without any additional work. That enables you to collect statistics to determine if a new release
has adversely affected performance.
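The release-over-release comparison described above amounts to diffing two sets of timings against a tolerance. The feature names, response times, and 20% threshold below are all made-up values for illustration:

```python
# Compare a new release's response times (seconds) against the baseline
# and flag anything that degraded beyond a tolerance. Numbers are made up.
baseline = {"login": 0.30, "search": 0.80, "report": 2.10}
release = {"login": 0.32, "search": 1.40, "report": 2.05}

TOLERANCE = 1.20  # flag anything more than 20% slower than baseline

regressions = {
    feature: (baseline[feature], release[feature])
    for feature in baseline
    if release[feature] > baseline[feature] * TOLERANCE
}
print(regressions)  # → {'search': (0.8, 1.4)}
```

The tolerance keeps normal run-to-run noise from raising false alarms while still catching real degradations like the doubled search time here.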
How Do You Get Started with API Load Testing?
Another place performance issues can surface is within your Web services API. If your user interface uses your
API, a slowdown can impact not only the performance of the API but also the overall user interface experience for your
customers. Similar to application load testing, you need to create a baseline so you know what to expect in terms of
average response time and learn what happens when a large number of users are calling your API.
Use the same approach as with application load testing to set baselines and compare them against each code
iteration. You could try to set the baseline manually, but it requires a lot of difficult work. You'd have to write
harnesses that call your API simultaneously and also write logic to record performance statistics. A low-cost API
load-testing tool like SmartBear's loadUI Pro saves you all that work.
Run the same API performance tests with each new software release to ensure it does not degrade API
performance, and to help identify which functions slow down the system.
What Is Production Monitoring?
Once you've shipped your software to production, do you know how well it's performing? Is your application
running fast, or does it slow down during specific times of the day? Do your customers in Asia get similar
performance to those in North America? How does your website's performance compare to your competitors'? How
soon do you learn if your application has crashed?
These are just some of the thorny questions about software performance that can be difficult to answer. But not
knowing the answers can adversely affect how your customers respond to your application and your company.
Fortunately, you can address all of these critical questions by implementing production-monitoring tools. With
them you can:
Assess website performance
Receive an automatic e-mail or other notification if your website crashes
Detect API, e-mail, and FTP issues
Compare your website's performance to your competitors' sites
When searching for the right performance-monitoring tool, consider SmartBear's AlertSite products, the best on
the market.
What Are Some Metrics to Watch?
Performance monitoring metrics need to focus on
performance statistics and peer code review status.
Here are some to consider:
Load Test Metrics
Basic Quality: Shows the effect of ramping up
the number of users and what happens with the
additional load.
Load Time: Identifies how long your pages take
to load.
Throughput: Identifies the average response
time for key actions taken.
Server-Side Metrics: Isolates the time your server
takes to respond to requests.
Production Monitoring Metrics
Response Time Summary: Shows the response
time your clients are receiving from your website.
Also separates the DNS, redirect, first byte, and
content download times so that you can better
understand where time is being spent.
Waterfall: Shows the total response time with
detailed information by asset (images, pages,
etc.).
Click Errors: Errors your clients see when they click on specific links on your web page, making it easier to
identify when a user goes down a broken path.
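The Response Time Summary breakdown above can be sketched by averaging per-phase timings from monitoring samples. The sample data, phase names, and millisecond values here are invented for illustration:

```python
import statistics

# Hypothetical monitoring samples: per-phase timings (ms) for one URL.
samples = [
    {"dns": 20, "redirect": 0, "first_byte": 180, "content": 250},
    {"dns": 25, "redirect": 0, "first_byte": 210, "content": 300},
    {"dns": 18, "redirect": 0, "first_byte": 170, "content": 240},
]

# Average each phase so you can see where response time is being spent.
phases = samples[0].keys()
summary = {p: round(statistics.mean(s[p] for s in samples), 1) for p in phases}
summary["total"] = round(sum(summary.values()), 1)
print(summary)
```

Separating the phases this way shows at a glance whether slowness comes from DNS resolution, a sluggish server (first byte), or heavy page content.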
What Can You Do Every Day to Ensure You're Working Optimally?
Each day, testing teams should:
Review Application Load Testing Metrics: Examine your key metrics and create defects for performance
issues.
Review API Load Testing Metrics: If your APIs are not performing as they should, create defects for resolution.
Review Performance Monitoring Metrics: Review your key metrics and create defects for monitoring issues.
Learn More
If you'd like to learn more about any of the SmartBear products discussed in this whitepaper, request a free
trial, or receive a personalized demo of any of the products, contact SmartBear Software at +1 978-236-7900.
You'll find additional information at http://www.smartbear.com.
Learn about other Agile Solutions
Visit our website to see why testers choose SmartBear products.
About SmartBear Software
SmartBear Software provides tools for over one
million software professionals to build, test, and
monitor some of the best software applications and
websites anywhere, on the desktop, mobile, and in
the cloud. Our users can be found worldwide, in small
businesses, Fortune 100 companies, and government
agencies. Learn more about the SmartBear Quality
Anywhere Platform and our award-winning tools, or join
our active user community at www.smartbear.com,
on Facebook, or follow us on Twitter @smartbear.
SmartBear Software, Inc. | 100 Cummings Center, Suite 234N | Beverly, MA 01915 | +1 978.236.7900 | www.smartbear.com | © 2012 by SmartBear Software Inc. Specifications subject to change. WP-5CHA-032012-WEB