
AXA Winterthur Performance Test Environment Requirement Document

AXA Winterthur Wealth Management & Corporate

Change History

Version | Issued | Details of Change | CR No. | Author
V01.00 | 17 March 2010 | Initial Draft | N/A | Chris Dale
V02.00 | 18 March 2010 | Amendments following walkthrough with Environments Manager | N/A | Chris Dale
V03.00 | 04 April 2010 | Appendix D – J added | N/A | Richard Moyce
V04.00 | 12 April 2010 | Amendments following walkthrough with Operational Acceptance Manager | N/A | Chris Dale
V05.00 | 16 April 2010 | LAN & WAN comments added | N/A | Chris Dale
V06.00 | 26 April 2010 | Amendments following walkthrough with reviewers | N/A | Chris Dale


Table of Contents

1 Introduction and Project Outline
  1.1 Document Purpose
  1.2 Background
  1.3 Management Summary
    1.3.1 Tools
    1.3.2 Resources
    1.3.3 Environment
    1.3.4 Environment Requirements
2 Recommendations
    2.1.1 Tools
    2.1.2 Resource
    2.1.3 Environment
    2.1.4 Local Area Network
    2.1.5 Stubs & Drivers
    2.1.6 Security
3 Costs
    3.1.1 Table of environment costs (Hardware and build costs)
    3.1.2 Complete Loadrunner Course
    3.1.3 Additional Performance Center controller
4 Other considerations and supporting evidence
  4.1 Ring Fenced or dedicated Local Area Network (LAN)
  4.2 WAN (Bristol – Basingstoke)
  4.3 Stubs and Drivers
  4.4 Data
  4.5 Security
  4.6 Software & Retrofits
  4.7 Other support requirements
  4.8 Risks
5 Performance Test Requirements for 2010
    5.1.1 SVoA (May / June, July, Sept)
      5.1.1.1 Example Technical Performance Tests for SVoA
      5.1.1.2 Performance Tests
      5.1.1.3 Load Tests
      5.1.1.4 Stress Tests
      5.1.1.5 Volume Tests
      5.1.1.6 Soak Tests
      5.1.1.7 Alternative performance tests for SVoA
    5.1.2 NAC, CV & DocX
6 Document Particulars
Appendix A: Non Functional Performance Testing Overview
Appendix B: Specifying Timing NFRs
Appendix C: Automated Test Tools Overview
Appendix D: Siebel
Appendix E: Enterprise Service Bus – Basingstoke Distributed ESB
Appendix F: BizTalk
Appendix G: Client View - Unknown
Appendix H: Online Services (eCom)
Appendix I: Quotes
Appendix J: Embassy


1 Introduction and Project Outline

1.1 Document Purpose

This document discusses the need for an AXA Winterthur Performance Test environment and gives an example overview of the performance testing required for the Single View of Agent (SVoA), Client View (CV) and New Access Channels (NAC) projects due throughout 2010.

For a definition and description of Performance Testing – see Appendix A.

For information on specific performance NFRs – see Appendix B.

For a brief description of the Test Tools we use – see Appendix C.

For an illustration of the environments – see Appendices D – I.

1.2 Background

A requirement exists to perform performance testing within Basingstoke. To achieve this we need an environment, tools and trained resources.

Throughout 2010 there are many high profile projects, for example SVoA, CV and NAC, that will require volume and load testing. It is therefore necessary to have a Performance Testing environment within Basingstoke (similar to the Tech Test environment that exists within AXA Bristol). More and more projects need to link to the performance environment in Bristol so that we can follow through the complete test.

External facing systems must be measured and scrutinised to ensure they meet the highest standards within the industry. This will support the company's ambition of becoming the trusted market leader.

The Performance Test environment will be used to perform Performance, Load, Stress, Volume and Soak testing for all future releases (where necessary). It will help ensure that future releases do not degrade system performance, and that business non-functional requirements and service level agreements are measured and achieved.

The Performance Test environment will be available to test at both Project level (to ensure that new functionality performs as required) and Release level (to ensure that each new release of a system has no overall performance degradation, by using performance regression tests to check key processes within the system).

In recent projects performance testing has disrupted functional testing by loading the environment to a point where functional testing can no longer operate. An example of this occurred during the OCC project, when BizTalk queued up many messages, preventing functional tests from being completed for a prolonged period of time. The other disruption is that functional testing needs to stop during performance testing so that load and performance can be measured correctly, with known activity in the environment. This impacts the test estimates, time and resource for the project. This is impossible to factor into any plans as the performance work can vary depending on the issues encountered.


1.3 Management Summary

1.3.1 Tools

Tools are required so we can generate load on the system and simulate multiple/concurrent actions. They generate reusable scripts that can be used for performance regression testing of future releases of the system.

AXA's preferred tool is HP Loadrunner. Licences for this tool do exist in Tunbridge Wells/Bristol; however, when we have tried to use it before we have been informed that it is in use and there are no available licences. A further problem is that the licences are not floating, so if one person is using the tool no one else can.

Another problem is that release cycles tend to be similar for other areas of AXA – for example everyone releases in November just before Xmas and so everyone wants to use Loadrunner during September/October.

AXA Tunbridge Wells are intending to upgrade to HP Performance Center (which is a new name for Loadrunner but uses floating licences). However, we have just received an update from the Test Manager, Chris Bryant: Performance Center is still in the process of being purchased and is going through the AXA Tech European process in order to be installed in MEDC. As such it is not available for use yet and no timescale has been given for this installation.

An additional Performance Center controller should be purchased to ensure availability to Basingstoke. In the short term we will approach transversal services in Bristol to discuss sharing the current unsupported Loadrunner 8.1.

1.3.2 Resources

Currently we have one performance test engineer; we need at least two so that projects can be rotated, ensuring projects are assessed, performed and closed correctly. This will also mitigate the impact of any absence and allow knowledge sharing across the job family. Both resources need to be given appropriate training: they will need to be sent on the HP Loadrunner or Performance Center course provided by HP or an accredited training company.

1.3.3 Environment

AXA Bristol has a dedicated Tech Test environment used for performance testing. It represents the production environment as closely as possible (although in reality the hardware specifications lag slightly behind and it does not include every system).

The intention is to replicate this for the development and test work required in AXA Winterthur.

Advantages:

As the same environment/hardware is always used when performance testing future releases of systems, a reliable timing benchmark exists to compare against.

If we perform Load testing in the UAT/System Test environment it is likely to slow the system down to a degree where we will get complaints from users/testers, as evidenced by the Connect projects.

Performance Testing tools do not run applications through the normal user application interface (they go through the back door using protocols such as HTTP/HTML), so the possibility exists that they could enter invalid data into the Application Under Test (AUT) database; this could have an adverse effect on other manual test results.

It obviously takes time to develop the Performance tests and an environment needs to be available to develop them in.

Because a dedicated performance environment exists, we do not have to keep having arguments over whether a project should pay for one or not; proper performance testing is then more likely to take place.


1.3.4 Environment Requirements

The following table shows a high level estimate of the costs involved in creating a Performance Test environment matching the server specification of the LIVE environment. All server costs include the Microsoft Windows operating system (including licences) and the basic Microsoft software included with the server software.

The following estimates do not include the cost of the manpower required to initially build the environment, or any costs for physical servers. The assumption is that all new application servers will be built on virtual servers, and further software and hardware updates to this environment will be paid for under project budgets, as per the existing test environments.

Appendices D to I illustrate the applications and databases required for each environment.


2 Recommendations

2.1.1 Tools

Recommendation 1: Work with AXA PPP to assist in installation, as this is the most cost effective option in the short and long term; HP support contracts will be maintained in line with the test strategy for the AXA Group. The purchase of an additional controller would ensure that Basingstoke has guaranteed availability. In the short term we will approach transversal services in Bristol to discuss sharing the current unsupported Loadrunner 8.1.

2.1.2 Resource

Recommendation 2: Currently we have one performance test engineer; we need at least two so that projects can be rotated, ensuring projects are assessed, performed and closed correctly. This will also mitigate the impact of any absence and allow knowledge sharing across the job family. Both resources need to be given appropriate training: they will need to be sent on the HP Loadrunner or Performance Center course provided by HP or an accredited training company.

2.1.3 Environment

Recommendation 3: We should have a dedicated Performance Test environment. This can be staged as opposed to a big bang approach. The figures for the cost of building a Performance Test environment are detailed under the heading "Costs".

2.1.4 Local Area Network

Recommendation 4: No separate LAN is required. We should continue with the LIVE LAN, but monitoring should be in place to ensure that there is no adverse effect on LIVE traffic and applications.

2.1.5 Stubs & Drivers

Recommendation 5: Stubs and drivers should be considered at project level. Any stubs, drivers and related work should be covered by the project.

2.1.6 Security

Recommendation 6: An Active Directory group should be created to enable control over this environment. This will be owned by the Performance Test Lead and Environments Manager.


3 Costs

3.1.1 Table of environment costs (Hardware and build costs)

Application | Virtual Servers | Estimate | Additional Information
Siebel | 10 x Application Servers | £18,000 p.a. | Additional Siebel licences may be required
Siebel | 1 x Database Server | £3,630 p.a. |
Siebel | Disc Storage Space (200Gb) | £1,400 p.a. |
Siebel | Build Days*: Infrastructure – 5, Development – 8, DBA – 2 | £3,990 |
Enterprise Service Bus (ESB) | Application Server – tbc | n/k |
Enterprise Service Bus (ESB) | Database Server – tbc | n/k |
Client View | Application Server – tbc | n/k |
Client View | Database Server – tbc | n/k |
OLS (eCom) | 12 x Application Servers | £21,600 p.a. |
OLS (eCom) | 2 x Database Servers | £7,260 p.a. |
OLS (eCom) | Disc Storage Space (620Gb) | £4,340 p.a. |
OLS (eCom) | Build Days*: Infrastructure – 6, Development – 20, DBA – 4 | £7,980 |
WebQuotes | 16 x Application Servers | £28,800 p.a. |
WebQuotes | 1 x Database Server | £3,630 p.a. |
WebQuotes | Disc Storage Space (200Gb) | £1,400 p.a. |
WebQuotes | Build Days*: Infrastructure – 8, Development – 20, DBA – 2 | £8,265 |
Embassy | 5 x Application Servers | £9,000 p.a. |
Embassy | 1 x Database Server | £3,630 p.a. |
Embassy | Disc Storage Space (520Gb) | £3,640 p.a. |
Embassy | Build Days*: Infrastructure – 2.5, Development – 5, DBA – 2 | £2,423 |
BizTalk | 2 x Application Servers | £3,600 p.a. | Additional BizTalk licence may be required
BizTalk | 2 x Database Servers | £7,260 p.a. |
BizTalk | Disc Storage Space (240Gb) | £1,680 p.a. |
BizTalk | Build Days*: Infrastructure – 3, Development – 3 | £1,710 |
Datapower Box | n/a | n/a | Cost required for Smart421 to come in and configure the existing box for the new environment. Not required at this time, although if considered later there would be a charge of £1,000 a day.

*These 'Build Days' are based on a single FTE committed 100% to building these environments. These estimates also assume resources are available and therefore give no indication of the actual duration of building this environment.

Hardware: £118,870

Man days (90.5 x £285): £25,792.50

Total: £144,662.50

3.1.2 Complete Loadrunner Course

Each performance test engineer will need to be given appropriate training: they will need to be sent on the HP Loadrunner course provided by HP or an accredited training company. http://www.edgewordstraining.co.uk/ is an example of a specialist company that can provide this at a cost of £1,600 (5 days).

3.1.3 Additional Performance Center controller

An additional HP Performance Center controller will provide Basingstoke with guaranteed access to the Performance Center application. This will carry a one-off cost of £34,771 with an additional support contract of £7,490 p.a.


4 Other considerations and supporting evidence

4.1 Ring Fenced or dedicated Local Area Network (LAN)

An alternative to ring fencing a local area network would be to use the existing LAN; any scheduled stress/load performance testing carried out during normal working hours will be carefully monitored to ensure that it has no adverse effect on the current production environment network traffic. For larger stress tests, where necessary, testing will be completed outside of normal working hours, i.e. evenings or weekends. This will remove the risks to the Production environment associated with running load, stress and performance tests on the same network, although we could run into difficulties with backups and scheduled jobs, for example Embassy and WASP.

Assuming we follow our current IT strategy when provisioning this separate test environment then all the servers listed above (47 App and 9 Database) would be Virtual servers.

This is achievable with our current infrastructure and we would also be able to provision dedicated networking connections at a physical level. However, while this would avoid issues with impacting Live systems on the network, with our shared virtual environment other areas could still be affected.

Storage: Application and database servers require different tiers of storage and whilst we can, and do, split the storage up so that LIVE and TEST systems use different spindles within our storage arrays, it should be appreciated that without providing a dedicated storage system heavy load on the SAN could have an impact on LIVE and other test systems.

There is also the issue of disk I/O. Experience has shown that database servers require a high tier of storage, whereas most application servers will run well on mid to low tier storage. Again, we cater for this in our current environment, but to provision a dedicated section of storage on both our high and mid level tiers they would need to be reconfigured. This not only takes time but would reduce the number of disk spindles available to Live/Test/Perf environments and thus reduce performance all round. We would also lose more of the raw disk space due to the extra RAID overheads needed to provision more dedicated storage groups.

To get round this, dedicated storage would need to be purchased, and to be similar in set-up to the Live systems we would need high and mid tier storage arrays.

ESX Host: Again, we can easily provision a dedicated VM cluster but we would expect to need at least 6 hosts to do this, possibly 8 depending on the workload and DB requirements. To avoid any impact on Live and other systems we’d need to either provision a dedicated C-class enclosure or dedicated ports/interconnects in an existing enclosure. Again, reducing the ports available to LIVE systems in any of our enclosures obviously runs the risk of affecting their performance.

We should also be aware that, assuming this is all provisioned on Windows 2008, we’d need to include a further 1.8Tb of storage for the boot and system drives for these servers and hosts.

Active Directory domain: We will also need to look at how the AD side of this would be set up. Not having a separate AD domain could have an impact on live systems when testing. To be truly independent this would need to be considered and would require at least one extra server to run DNS and act as the domain controller.

4.2 WAN (Bristol – Basingstoke)

Since the figures below were produced, the utilisation of the WAN link between Basingstoke and Bristol has significantly increased, due to the migration of much of the Traditional business to Capita and to routing changes. As a consequence, we now have periods when our traffic hits its maximum capacity.

As there is no current understanding of the potential WAN usage of this system, it is very difficult to estimate what impact WAN testing will have. There is, however, a substantial possibility that WAN testing will either saturate or further saturate the WAN link, with a knock-on impact both to current connectivity and to the testing itself – potentially skewing the results.

Beyond this, an additional issue that has to be considered is ensuring that testing is undertaken using an infrastructure similar to the final proposed infrastructure. If a proposal suggests that some elements should be located in Basingstoke and other elements in another location, this must be the solution tested, because latency issues will affect the test results. If this is not done, the test will be invalid.

The current links between Basingstoke & Bristol are 15Mbps, with 4 Mbps reserved for voice traffic, leaving 11Mbps for data.

The following diagram shows current utilisation during a typical day.

Figure 1 Basingstoke-Bristol Link (Day)

The average utilisation is around 2.5–3 Mbps, leaving enough scope for growth, but this may not be enough for performance or stress testing.
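As an illustration, the sketch below (Python) works through the headroom arithmetic described above; the bandwidth consumed per simulated user is an assumed, illustrative figure only and would need to be measured for the actual application before drawing any conclusions.

# Rough headroom estimate for the Basingstoke-Bristol link.
# The per-user bandwidth figure is an illustrative assumption, not a measured value.

LINK_CAPACITY_MBPS = 15.0      # total link capacity
VOICE_RESERVED_MBPS = 4.0      # reserved for voice traffic
TYPICAL_USAGE_MBPS = 3.0       # upper end of the observed 2.5-3 Mbps average

ASSUMED_KBPS_PER_VUSER = 50.0  # assumption: average bandwidth per simulated user

data_capacity = LINK_CAPACITY_MBPS - VOICE_RESERVED_MBPS   # 11 Mbps left for data
headroom_mbps = data_capacity - TYPICAL_USAGE_MBPS         # roughly 8 Mbps spare

max_vusers = (headroom_mbps * 1000) / ASSUMED_KBPS_PER_VUSER
print(f"Data capacity: {data_capacity} Mbps, headroom: {headroom_mbps} Mbps")
print(f"Approx. simulated users supportable at {ASSUMED_KBPS_PER_VUSER} kbps each: {max_vusers:.0f}")

Under these assumed figures the spare capacity would support in the region of 160 low-bandwidth virtual users, but a heavier per-user footprint would reduce that number quickly, which is why WAN testing could saturate the link.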

4.3 Stubs and Drivers

With the suggested staged approach there may be a need for stubs and drivers so that testing can continue without the need for a full-blown performance environment.

4.4 Data

Currently a full copy of the production data is used for test. Because performance testing can corrupt data rapidly, there will be a need to restore this data from a signed-off backup. These restores will be controlled and coordinated by the Performance Test Lead and Environments Manager, supported by the development team.

4.5 Security

When Performance Testing is taking place, the environment being used must be kept under strict control with no unauthorised changes being made (such as the installation of service packs or tweaking of configuration settings) which might falsely influence test results, leading to possibly incorrect conclusions. The environments should therefore be tightly change controlled from a software and infrastructure viewpoint during these periods. This will be owned by the Performance Test Lead and Environments Manager.


4.6 Software & Retrofits

All server software should replicate the Production environment at all times, unless it has been changed/enhanced specifically for Performance Testing before being released into the Production environment. This would be necessary for external systems such as FNZ and Portals, for example.

If a project is not making use of the performance test environment or the risk assessment outcome confirms that no Performance Test is required then it will be the responsibility of that project to retrofit all relevant code during the project warranty period as per all other environments.

4.7 Other support requirements

The day-to-day support of these environments will require the aid of:

Development Managers to ensure that deployments are configured and working as per the Production environment.

Environments & Implementations Manager to ensure that existing environments continue to replicate the Production environment software and hardware following on from successful project releases or architecture/infrastructure changes.

Operations and Infrastructure – to provide continued support to ensure that these environments are readily available to be used and support any changes to the infrastructure as required. Any planned architecture/infrastructure changes that could affect these environments will need to be reviewed and agreed, by the Development Managers, Environments & Implementations Manager, Performance Test Lead and the Ops and Infrastructure team.

It will be the responsibility of the Environments Manager to highlight any infrastructure changes that are required for the performance test environment.

4.8 Risks

Testing will be performed across the WAN/internet, which will make it hard to measure as we have no control over the amount of traffic passing to and from the companies/applications involved.

The average utilisation of the WAN between Basingstoke and Bristol is around 2.5–3 Mbps, leaving enough scope for growth, but this may not be enough for performance or stress testing.


5 Performance Test Requirements for 2010

The releases planned for SVoA, CV, DocX and NAC all have performance testing requirements, as summarised below.

5.1.1 SVoA (May / June, July, Sept)

The latest version of Siebel 8.1 will involve:

Drop1a:

Screen alignment to TDM (Target Data Model)

FSA Feed

Web Quotes

OLS Interface

Drop 1b:

Blue to Orange Siebel Migration

Orange to Blue Siebel Synchronisation via webMethods ESB

Drop 1c:

Interfaces to Embassy

What follows is an example of part of a Technical Test Plan, illustrating how requirements and tests should be specified. The NFRs are examples only.

5.1.1.1 Example Technical Performance Tests for SVoA

The minimum NFR is that all screens and data items should meet, and not be less than, the existing performance level for Online Service. This will be verified by monitoring performance during System Testing and UAT.

Example NFRs that will be tested using Loadrunner and QTP are detailed below.

5.1.1.2 Performance Tests

The performance baseline test is executed to give an indication of response times for the common processes.

The baseline test will be run at peak volumes TBC.

5.1.1.3 Load Tests

The load scalability test will consist of a single script that performs a process (TBC).

This test will determine at what stage the process response times fail to meet the NFR.

document.doc INTERNAL V6 DraftPage 14 of 30

Page 15: AXA Winterthur Performance Test Environment V06

Below are other NFRs that will be incorporated into the full Technical Test Plan (if we decide to do one) and be tested in a similar way to the NFR above.

- Page transitions and validation averaging 5 seconds
  o Peak times – TBC
  o Off peak – TBC
  o Low usage – TBC
  o Trigger point will be on the selection of next or the wizard navigation bar

- Summary page averaging 3 seconds
  o Peak times – TBC
  o Off peak – TBC
  o Low usage – TBC
  o Trigger point will be on the selection of next on the confirmation screen

- Users and concurrency
  o Peak times – TBC
  o Off peak – TBC
  o Low usage – TBC
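As an illustration only, the following Python sketch shows how response times gathered during a test run might be checked against the example averages above; the transaction names, timings and targets are placeholders, not agreed NFRs.

# Minimal sketch: compare measured response times against the example NFR averages above.
# Transaction names, targets and timings are placeholders, not agreed NFRs.
from statistics import mean

nfr_average_targets = {
    "page_transition": 5.0,   # seconds, example NFR above
    "summary_page": 3.0,      # seconds, example NFR above
}

# Hypothetical timings captured during a test run, grouped by transaction.
measured = {
    "page_transition": [4.2, 4.8, 5.1, 4.6, 4.9],
    "summary_page": [2.7, 3.4, 2.9, 2.8],
}

for transaction, target in nfr_average_targets.items():
    avg = mean(measured[transaction])
    status = "PASS" if avg <= target else "FAIL"
    print(f"{transaction}: average {avg:.2f}s against target {target:.1f}s -> {status}")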

5.1.1.4 Stress Tests

Test cases developed for Load testing will be ramped up and used as the basis to stress test the system to try and find break points and limits.

5.1.1.5 Volume Tests

Volume tests will be considered if a dedicated performance test environment is provided as per recommendation three above.

5.1.1.6 Soak Tests

Soak tests that continuously request quotes over a 24 hour period will be performed.

5.1.1.7 Alternative performance tests for SVoA

Without a dedicated performance test environment and Loadrunner we can only perform some load testing in the current System Test/UAT environment. This will consist of getting a number of testers/users to use the system simultaneously and monitoring the performance; the risk of user error will need to be accepted. This is a method that has been used in the past and has proved adequate for low throughput systems. It will be of very limited use for high throughput ecommerce-type systems such as Siebel, where we need to simulate 1,000 users.

We can also use HP Quick Test Professional (QTP) to automate the Web Quote User process and run this process simultaneously on a number of PCs (up to 5) and monitor performance.

5.1.2 NAC, CV & DocX

NAC, CV and DocX are further examples of projects that will require performance tests. NFRs and other performance related documentation have not yet been circulated for review.


6 Document Particulars

Review & Sign Off

As part of the review and sign off process, it is important to refer to the following criteria to ensure that the document is fit for purpose. There are two types of review criteria:

Generic criteria – which can apply to any document

Specific criteria – that assess specific areas within a particular type of document.

Generic Criteria

The Generic criteria that must be used when considering deliverables are:

1. Has each section of the document been addressed to the expected level?

2. Could this deliverable, in its current state, be used as an input into a subsequent document or subsequent activity?

e.g. If you are reviewing a Requirements Definition Document, is it of sufficient quality to be used as an input into the Business System Design?

Specific Criteria

For this deliverable, the following specific criteria should be applied to the review:

1. Does the Environment Requirements Specification adequately describe the background to the project?

2. Does the Environment Requirements Specification adequately describe the purpose of the required environment(s)?

3. Does the Environment Requirements Specification adequately describe the components of the required environment(s)?

4. Have the services and support required from Delivery Services been documented?

5. Have the services and support required from other sources been documented?

The following review table applies to each baselined version of the document.

Approvers (Accountable)

Name Role Sign Off Date

Bhavesh Bhagodia Operational Acceptance Manager

Nigel Palmer IT Strategy & Design Manager

Steve Myers Head of Technology Services

Paul Smith Corp & AWWM IT Head of Strategy, Design and Dev


Authors (Responsible)

Name Role Contribution

Chris Dale Performance Test Lead Author

Richard Moyce Environments Manager Contributor

Bhavesh Bhagodia Operational Acceptance Manager Contributor

Stuart Coleshill Principal Technical Analyst Contributor

David Scanlan Senior Support Analyst - Communications Services Contributor

Reviewers (Consulted)

Name Role Level of Review / Criteria

Eric Chapman Test Manager Review

Gareth Harries Development Manager Review

Barry Webb Senior Development Manager Review

Simon Rose Development Manager Review

Roger Faithfull Manager Technology Services Review

Andy Weightman Information Services Manager Review

Dee Geliot Development Manager - Siebel Review

Gus McConnell Siebel Developer (Contract) Review

Philip Palmer IT Development Manager Review

Dave Phelps Solution Consultant Review

Mike Evans Senior Development Manager Review

Catherine Beaven Development Team Leader Review

John Goodyer Senior Project Manager Review

Distribution (Informed)

Name Role

Steve Rawlings Development Team Leader

Tim Higgins Development Team Leader

Mark Fox Siebel Developer

John Bennett Automated Test Lead

Stuart O’loughnane Analysis and Senior Test Manager

Aaron Ashworth Senior Solutions Consultant

Ian Bowman Principal Communications Analyst


Glossary and Acronyms

For Glossary and Acronyms, refer to ARK.

Derived from Template: PRJD-TMPL-016 Environment Requirements Specification Word Template V05.04

Document Storage Location: <TBD>


Appendix A: Non Functional Performance Testing Overview

There are a number of types of Non Functional performance tests and confusion exists as to what they actually do. A brief definition of each follows:

Performance Tests

Performance Tests determine the end to end timing (benchmarking) of various time critical business processes and transactions, while the system is under low load but with a production sized database. This sets the 'best possible' performance expectation under a given configuration of infrastructure. It also highlights very early in the testing process whether changes need to be made before load testing is undertaken. For example, a customer search may take 15 seconds in a full sized database if indexes have not been applied correctly, or if an SQL 'hint' was incorporated in a statement that had been optimised with a much smaller database. By raising such issues prior to commencing formal load testing, developers and DBAs can check that indexes have been set up properly.

Performance problems that relate to size of data transmissions also surface in performance tests when low bandwidth connections are used. For example, some data, such as images and "terms and conditions" text are not optimized for transmission over slow links. 

It is 'best practice' to develop performance tests with an automated tool, such as Winrunner or QTP so that response times from a user perspective can be measured in a repeatable manner with a high degree of precision.  The same test scripts can later be re-used in a load test and the results can be compared back to the original performance tests.

A key indicator of the quality of a performance test is repeatability.  Re-executing a performance test multiple times should give the same set of results each time.  If the results are not the same each time, then the differences in results from one run to the next can not be attributed to changes in the application, configuration or environment.

The best time to execute performance tests is at the earliest opportunity after the content of a detailed load test plan has been determined. Developing performance test scripts at such an early stage provides an opportunity to identify and remediate serious performance problems and expectations before load testing commences.

For example, management expectations of response time for a new web system that replaces a block mode terminal application are often articulated as 'sub second'.  However, a web system, in a single screen, may perform the business logic of several legacy transactions and may take 2 seconds.  Rather than waiting until the end of a load test cycle to inform the stakeholders that the test failed to meet their formally stated expectations, a little education up front may be in order.  Performance tests provide a means for this education. 

Load Tests

Load Tests are end to end performance tests under anticipated production load. The objective of such tests is to determine the response times for various time critical transactions and business processes and to ensure that they are within documented expectations (or Service Level Agreements – SLAs).

Load Tests are major tests, requiring substantial input from the business, so that anticipated activity can be accurately simulated in a test environment.  If the project has a pilot in production then logs from the pilot can be used to generate ‘usage profiles’ that can be used as part of the testing process, and can even be used to ‘drive’ large portions of the Load Test.  

Load testing must be executed on “today’s” production size database, and optionally with a “projected future size” database.  If some database tables will be much larger in some months time, then Load testing should also be conducted against a projected database.  It is important that such tests are repeatable, and give the same results for identical runs.  They may need to be executed several times in the first year of wide scale deployment, to ensure that new releases and changes in database size do not push response times beyond prescribed SLAs. 

Load on a system is typically generated by a performance testing tool within a designed scenario, which simulates or generates numbers of transactions, of processes, of users, volume of data etc. The system's resources – memory, CPUs, licences, bandwidth – may be constrained purposefully, or as a result of test environment limitations. A load profile describes the combination of load that will be applied for a given test, and how it will vary over time.

Testing under load typically makes many measurements of different types of response times, which may be averaged to give an overall value, or displayed as a scatter graph to show populations, trends and emergent behaviours. Measurements may be interpreted with graphs of memory, CPU or network usage to give context – such measurements are often made with a monitoring tool.
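As an illustration, a load profile can be expressed as a simple schedule of virtual-user levels and durations. The Python sketch below uses purely illustrative figures, not agreed volumes for any project.

# Minimal sketch of a stepped load profile: how many virtual users are applied
# and for how long. All figures are illustrative, not agreed volumes.

load_profile = [
    # (virtual_users, duration_minutes)
    (10, 10),    # warm-up at low load
    (50, 20),    # ramp to anticipated off-peak load
    (200, 30),   # hold at anticipated peak load
    (50, 10),    # ramp down
]

total_duration = sum(duration for _, duration in load_profile)
peak_users = max(users for users, _ in load_profile)
print(f"Profile runs for {total_duration} minutes with a peak of {peak_users} concurrent users")

The same stepped shape, extended upwards beyond the anticipated peak, is also the basis of the ramp-up stress tests described below.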

Stress Tests

Stress testing is conducted to evaluate a system or component at or beyond the limits of its specified requirements.

Stress Tests determine the load under which a system fails, and how it fails. This is in contrast to Load Testing, which attempts to simulate anticipated load. It is important to know in advance if a 'stress' situation will result in a catastrophic system failure, or if everything just "goes really slow". There are several varieties of Stress Tests, including spike, stepped and gradual ramp-up tests. Catastrophic failures require restarting various infrastructures and contribute to downtime, a stressful environment for support staff and managers, as well as possible financial losses. If a major performance bottleneck is reached, then the system performance will usually degrade to a point that is unsatisfactory, but performance should return to normal when the excessive load is removed.

For the sake of simplicity, one can just increase the number of users using the business processes and functions coded in the Load Test. However, one must then keep in mind that a system failure with that type of activity may be different to the type of failure that may occur if a special series of 'stress' navigations were utilized for Stress testing.

Volume Tests

Often the most misunderstood form of NF Testing. This is testing where the system is subjected to large volumes of data. Volume Tests are often most appropriate to Messaging, Batch and Conversion processing type situations.  In a Volume Test, there is often no such measure as Response time.  Instead, there is usually a concept of Throughput. 

A key to effective volume testing is the identification of the relevant capacity drivers.  A capacity driver is something that directly impacts on the total processing capacity.  For a messaging system, a capacity driver may well be the size of messages being processed. 

Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file (could be any file such as .dat, .xml); this interaction could be reading and/or writing on to/from the file. You will create a sample file of the size you want and then test the application's functionality with that file in order to test the performance.
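As an illustration of the interface-file case above, the Python sketch below generates a data file of a chosen size for a volume test; the file name, record layout and target size are hypothetical, not taken from any project specification.

# Minimal sketch: generate an interface file of a target size for a volume test.
# The file name, record layout and target size are hypothetical.

TARGET_SIZE_MB = 100
RECORD = "POLICY0001|2010-04-26|RENEWAL|GBP|1250.00\n"  # placeholder record layout

records_needed = (TARGET_SIZE_MB * 1024 * 1024) // len(RECORD)

with open("volume_test_interface.dat", "w") as f:
    for i in range(records_needed):
        # Vary the record slightly so the file is not made up of identical lines.
        f.write(RECORD.replace("0001", f"{i % 10000:04d}"))

print(f"Wrote {records_needed} records (approx. {TARGET_SIZE_MB} MB) to volume_test_interface.dat")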

Soak Tests

Soak tests leave the application running at high levels of load for prolonged periods of time – this will typically find degradation in performance due to memory leaks or log files/tables becoming full.


Appendix B: Specifying Timing NFRs

It is important that response time is clearly defined, and that the response time requirements (or expectations) are stated in such a way as to ensure that unacceptable performance is flagged during the load and performance testing process.

One simple suggestion is to state an average and a 90th percentile response time for each group of transactions that are time critical. In a set of 100 values that are sorted from best to worst, the 90th percentile simply means the 90th value in the list. The specification is as follows:

Time to display order details:

- Average time to display order details: less than 5.0 seconds.

- 90th percentile time to display order details: less than 7.0 seconds.

The above specification, or response time service level agreement, is a reasonably tight specification that is easy to validate against.

For example, a customer 'display order details' transaction was executed 20 times under similar conditions, with response times in seconds, sorted from best to worst, as follows -

2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 10, 10, 10, 20

Average = 4.45 seconds, 90th percentile = 10 seconds

The above test would fail when compared against the above stated criteria, as too many transactions were slower than seven seconds, even though the average was less than five seconds.

If the performance requirement was a simple "Average must be less than five seconds" then the test would pass, even though every fifth transaction was ten seconds or slower.

This simple approach can easily be extended to include the 99th percentile and other percentiles as required, for even tighter response time service level agreement specifications.
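The worked example above can be expressed as a short script. The Python sketch below follows the "value at the Nth position of the sorted list" definition used in this appendix and is illustrative only.

# The worked example above as a short script. The percentile is taken as the
# value at the given position in the sorted list, mirroring the definition above.
import math

def percentile(values, pct):
    """Return the value at the pct-th position of the sorted list (e.g. the 18th of 20 for the 90th)."""
    ordered = sorted(values)
    index = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[index]

# Response times (seconds) for 'display order details', from the example above.
times = [2] * 10 + [3] * 5 + [4, 10, 10, 10, 20]

average = sum(times) / len(times)
p90 = percentile(times, 90)

print(f"Average: {average:.2f}s (target < 5.0s) -> {'PASS' if average < 5.0 else 'FAIL'}")
print(f"90th percentile: {p90:.1f}s (target < 7.0s) -> {'PASS' if p90 < 7.0 else 'FAIL'}")
# The average of 4.45s passes, but the 90th percentile of 10s fails the 7.0s target,
# so the test as a whole fails - matching the conclusion above.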


Appendix C: Automated Test Tools Overview

We will focus on the most commonly used types of automated test tool and explain the significant but often misunderstood difference between them.

GUI Record and Playback

A Record and Playback testing tool in its simplest form allows you to record keystrokes, mouse movements and button clicks made by the user and then play them back – the key point here is that the actions are played back through the user interface (GUI) just as if the user were typing themselves. If the GUI were not present the script would not be able to run, as it is looking for buttons, pull-downs and so on.

The tool used by AXA is HP Quick Test Pro (QTP). This is basically the same as WinRunner (another HP tool), the only difference being that QTP uses a BASIC-type scripting language while WinRunner uses a C-type language.

WinRunner was the most widely used tool but has now been overtaken by QTP (basically because the scripting language is easier to understand and WinRunner is no longer supported).

Performance Testing

A performance testing tool allows you to record the low level messages that are sent by the user interface (GUI) as a result of the user hitting keys, moving the mouse and clicking in the user interface.

So when you play back, you are sending the messages that have been recorded, usually in the HTTP/HTML protocol. Because you don't actually use the GUI when you play the messages back, it is easy to send multiple instances of these messages, in effect simulating lots of users using the system.

The disadvantage when doing timing tests is that because the tool isn’t looking at the GUI it will often think an operation has been completed before it would appear completed to the user.

For example, if you recorded someone typing in the BBC web site URL www.bbc.com and then played the script back to see how long it took the web site to appear after the user had entered the URL, the following would happen: as soon as the web site was found, the performance test would complete and report that it took, say, 1.217 seconds. However, the GUI may still be trying to display all the graphical components of the web site, which may take another 3 seconds.

So when we do performance testing we often have to use the Performance Testing tool to create the load by simulating many users sending messages, and also have one instance of the Record and Playback testing tool running to measure the true user experience through the GUI.
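As an illustration of what a protocol-level tool does under the covers, the Python sketch below has several simulated users issue HTTP requests concurrently and time the raw responses, without driving any GUI. The URL and user count are placeholders and this is not the Loadrunner/VuGen API.

# Minimal sketch of protocol-level load generation: simulated users send HTTP
# requests concurrently and time the responses, without driving any GUI.
# The URL and user count are placeholders; this is not the Loadrunner/VuGen API.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://www.bbc.com/"   # example URL from the text above
VIRTUAL_USERS = 10                   # placeholder; real tests may simulate hundreds

def virtual_user(user_id):
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
        response.read()              # download the raw HTTP response only
    elapsed = time.perf_counter() - start
    return user_id, elapsed

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

for user_id, elapsed in results:
    print(f"user {user_id}: {elapsed:.3f}s")

# Note: these timings cover the HTTP exchange only. As described above, the GUI
# may still take extra time to render, so a GUI-level tool (such as QTP) is also
# needed to measure the true end-user experience.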

One final point – performance testing tools were primarily designed to test ecommerce applications, so they only speak HTTP/HTML-type protocols. You would therefore not be able to use one to performance test Embassy, as it is not an internet-type application, but we can use it to test the ecommerce/OLS-type applications.

The tool used by AXA is HP Loadrunner (also known as Performance Center).

It has two main components: VuGen and the Controller.

VuGen is used to record the script, while the Controller is used to play back the script and simulate a number of users. The simulated virtual users are the licensed element, and costs depend on how many users we want to simulate.


Appendix D: Siebel


Appendix E: Enterprise Service Bus – Basingstoke Distributed ESB


Appendix F: BizTalk


Appendix G: Client View - Unknown


Appendix H: Online Services (eCom)


Appendix I: Quotes


Appendix J: Embassy
