
Methodology for testing System Integration Projects

WIP-SHARP

Wipro's IV & V Approach for System Integration Projects

V&V For Scalability, High Availability, Reliability, Performance

Wipro Technologies

Bangalore, India

Table of Contents

1. Introduction

2. Characteristics and Challenges for testing

3. Applications involved in OSS/BSS Project

4. IV & V Methodology

4.1 Testing Process

5. Types of Testing

5.1 Functionality Testing

5.2 Out of Box functionality Testing

5.3 GUI (Web Interface)

5.4 Multi-user

5.5 Recovery & Restart

5.6 Security

5.7 Interface Tests

5.8 Data Integrity Testing

5.9 Data Backup and Restore Testing

5.10 Compatibility testing

5.11 Performance testing

5.12 Reliability & Availability Tests (includes Failover tests)

5.13 Localization/Globalization Testing

5.14 Workflow Testing

6. Stages of Testing

6.1 Application Level Testing

6.2 System Integration Testing (End-To-End Integration testing)

6.3 Acceptance Testing

6.4 Regression Testing

7. Test Tools

8. Environmental Requirements for testing

9. Test Artifacts

10. Resource Planning

11. Risks and Contingencies

12. Appendix

1. Introduction

An OSS/BSS project or system integration project typically involves multiple applications with a mind-boggling combination of protocols, hardware and software. The very characteristics of these projects pose many challenges for testing. Wipro has developed a framework, WIP-SHARP, to measure Scalability, High Availability, Reliability and Performance by carrying out a complete suite of IV & V activities throughout the project life cycle. The objective of this methodology is to provide effective IV & V services by making use of best practices. This document does not cover the process details.

2. Characteristics and Challenges for testing

1. Characteristics: Involves multiple applications (customized/extended) and different system architecture and design aspects
Challenges: Business solution mapping
Type/Method of testing: Functionality testing of these applications

2. Characteristics: Interaction between various applications using EAI tools & adapters (off the shelf/custom built)
Challenges: Choosing the right adapters and data transformation; data integrity
Type/Method of testing: Rigorous integration testing, business scenario testing, exceptional scenario testing, adapter testing (custom built)

4. Characteristics: Multiple layers, components, protocols, OS, software and hardware that have an impact on the performance
Challenges: Meeting performance requirements
Type/Method of testing: Performance, scalability, availability and reliability tests; use of load testing tools

5. Characteristics: Interfaces with other hardware/network elements etc.
Challenges: Reliability, performance and functionality
Type/Method of testing: Interface testing; development of test drivers/stubs/scripts/simulators

6. Characteristics: Business scenarios as requirement specification; new business processes/enhancements
Challenges: Implementing the business process
Type/Method of testing: Test plans, ATPs etc. to be mapped to the business processes of the customer; choice of appropriate environment, and scope and coverage of testing

7. Characteristics: Upgrading applications, decommissioning legacy systems, data migration
Challenges: Data consistency; smooth transition from the existing systems/applications to the new ones
Type/Method of testing: Rigorous regression testing; automation

8. Characteristics: Multiple regions, currencies, languages
Challenges: Localization
Type/Method of testing: Localization testing

3. Applications involved in OSS/BSS Project

Typically, the following applications are involved in an OSS/BSS integration project. Each of these applications would be covered under the system tests/system integration tests. The applications are:

CRM

Billing

Service Provisioning

Service Assurance

Web Portal

Information System

Mediation

EAI

4. IV & V Methodology

The testing of system integration implementations would be in two parts: functional verification and validation against the requirement specification (business processes), and performance evaluation against the stated performance requirements.

The IV & V team is involved right from the beginning of the project, and we follow the IV & V model shown in the Appendix.

4.1 Testing Process

The overall testing process would be organized into seven process groups, each consisting of one or more processes, as described below.

The overlap of the process groups and the variation in their level of activity within a phase are illustrated in the Appendix (figure: Overlap of process groups in a phase).

4.1.1 Test Strategy

This activity is carried out in parallel with the requirements-gathering phase. The objective of this phase is to develop an overall test strategy by understanding the solution architecture, the applications involved in the solution, and the hardware, software, protocols, etc., used. The various types and stages of tests required would be identified during this phase.

4.1.2 Test Planning

Test planning would be carried out in parallel with the project planning activity. The objective of this phase is to develop an overall test plan, which includes:

Estimation & Schedule

Test Suspension Criteria

Test Pass/Fail Criteria definition

Defect Classification

Test Environment definition

Resource Requirements identification

Test Coverage

Test Deliverables

4.1.3 Test Case Design

In parallel with the design phase, test case design and test data preparation would be carried out based on appropriate proven methodologies. This activity includes:

Development of stubs/drivers/simulators

Test environment creation

Test data generation

4.1.4 Test Execution

Based on the test plan and test cases, both manual and automated test cases would be executed. This phase includes the following activities:

Test Execution Environment setup

Automated/Manual Test script development

Test Script/Cases Execution

Capturing the Test Results

4.1.5 Defect Reporting

The IV&V team will use a defect-tracking tool to log the defects resulting from the tests. The following activities will be carried out in this phase:

Record Defects

Assign severity & Priority

Report defects to development team

The Severity of an incident will be classified as follows:

Fatal: The functionality cannot be performed using the system; no workaround exists

Major: The functionality cannot be performed using the system; a workaround exists

Minor: A specific case does not function for a required functionality

The Priority of a defect will be classified as follows:

Immediate: Requires an immediate fix

High: Requires immediate attention

Medium: The bug fix can be scheduled for the next release

Low: The bug fix can be delayed

On Hold: The fix is withheld
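For illustration only, a defect record carrying this classification could be modelled as in the minimal Python sketch below; the field names are hypothetical rather than those of any particular defect-tracking tool.

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum

    class Severity(Enum):
        FATAL = "Fatal"    # no workaround exists
        MAJOR = "Major"    # a workaround exists
        MINOR = "Minor"    # a specific case fails

    class Priority(Enum):
        IMMEDIATE = "Immediate"
        HIGH = "High"
        MEDIUM = "Medium"
        LOW = "Low"
        ON_HOLD = "On Hold"

    @dataclass
    class Defect:
        summary: str
        severity: Severity
        priority: Priority
        reported_on: datetime = field(default_factory=datetime.now)

    # Example: a failure with no workaround, needing an immediate fix.
    bug = Defect("Login fails for all users", Severity.FATAL, Priority.IMMEDIATE)
    print(bug.severity.value, "/", bug.priority.value, "-", bug.summary)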

4.1.6 Summary Report & Analysis

During this phase the following metrics and reports will be prepared:

Test Coverage

Error Discovery Rate

Defect Trend Analysis

Test Metrics

Performance Metrics

Performance Graphs

List of non-compliances

Verification evidences

The overall summary of the testing process would be as follows:

Task: Test Strategy
Input: Business requirement; Application details; Overall solution architecture
Tools & Techniques: Testing knowledge; Testing methodology
Output: Master test strategy; Test strategy for individual applications

Task: Test Planning
Input: Test strategy documents; Application information; Project plan
Tools & Techniques: Test management tool; Testing knowledge; Test methodology; Templates; Automated tools knowledge
Output: Test plan; Support details

Task: Test Case Design
Input: Test plan; Solution mapping; Business scenarios; Detailed design document
Tools & Techniques: Testing knowledge; Templates
Output: Test cases/scripts/stubs/drivers

Task: Test Execution
Input: Test plans; Test cases/scripts
Tools & Techniques: Testing skills; Test tools
Output: Performance graphs; Test results; Defect reports; Summary reports

5. Types of Testing

A gray-box approach, i.e. a combination of the white-box and black-box approaches, would be applied for system integration projects. The following types of tests would be carried out.

5.1 Functionality Testing

These will be a representative set of test cases for each of the functional modules. Each test case will have data, operations and expected results associated with it. These are black-box tests targeted at ensuring that the following are correctly provided for (a minimal sketch follows the list):

Business Rules are implemented correctly.

Processing Logic is correct.

Correctness of all computations
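As a concrete illustration, a table-driven functional test might look like the following minimal Python sketch; calculate_bill and its business rule are hypothetical stand-ins, not part of the methodology.

    import unittest

    def calculate_bill(units, rate, tax_pct):
        """Hypothetical business rule: bill = units * rate, plus tax."""
        return round(units * rate * (1 + tax_pct / 100), 2)

    class FunctionalityTests(unittest.TestCase):
        # Each case carries data, the operation, and the expected result.
        CASES = [
            (100, 0.50, 10.0, 55.00),  # normal usage
            (0,   0.50, 10.0,  0.00),  # boundary: zero consumption
            (1,   0.50,  0.0,  0.50),  # boundary: no tax
        ]

        def test_billing_computation(self):
            for units, rate, tax, expected in self.CASES:
                self.assertEqual(calculate_bill(units, rate, tax), expected)

    if __name__ == "__main__":
        unittest.main()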

5.2 Out of Box functionality Testing

The Out-of-Box functionality testing is used to identify the functional limitations of the applications.

5.3 GUI (Web Interface)

The basic focus of these tests would be to ensure that the front-end screens (package-specific screens and browser-based) are properly designed according to the USI screen templates. The key points that will be covered by these tests are:

a) Navigation

The screen flow & links should be checked, including the movement & control of screens backwards and forwards through links and browser options.

The navigation aspects for the CRM users who would be operating Clarify would also be covered.

b) Validations

Check for field/screen level validations of all fields on the screen when a save or submit is performed (formats like date, currency, and other mandatory fields)

Check for Enabling/Disabling of buttons & links based on access control defined for the user (Add and Modify)

c) Multilingual Features

The ability of the system to handle users using different languages simultaneously needs to be checked

d) Error handling

The error messages will be stored in the database and based on the type of error a suitable error message will be retrieved.

On wrong/bad field inputs, the behavior of the system and the legibility and meaningfulness of the displayed messages will be checked.

Exception handling

5.4 Multi-user

The basic objective of these tests will be to ensure that the system handles multiple requests in parallel without getting into deadlocks or losing the integrity of the transaction data. This set of tests would identify transactions that can happen in parallel and build tests around these conditions (see the sketch below).
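The sketch below illustrates the idea with plain Python threads and an in-memory account standing in for a shared transactional resource; all names are hypothetical, and a real test would drive the deployed system instead.

    import threading

    class Account:
        """Toy stand-in for a shared transactional resource."""
        def __init__(self):
            self.balance = 0
            self._lock = threading.Lock()

        def credit(self, amount):
            with self._lock:                  # guards the read-modify-write
                current = self.balance
                self.balance = current + amount

    def worker(account):
        for _ in range(1000):
            account.credit(1)

    account = Account()
    threads = [threading.Thread(target=worker, args=(account,)) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Integrity check: no update may be lost under parallel load.
    assert account.balance == 10 * 1000, account.balance
    print("no lost updates across", len(threads), "parallel users")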

5.5 Recovery & Restart

Recovery and restart testing will focus on maintaining the integrity of the system during a database failure or crash, or during a network failure. The various failures, such as database failures, process failures and transaction failures (e.g. order entry & authorisation, customer registration; the specific transactions will be identified in the system test cases), would be simulated to ensure that the system can recover without loss of integrity of information. These tests would cover some of the standard failure scenarios, as follows (a minimal restart sketch follows the list):

Transaction recovery

Process recovery/restart

Browser recovery/restart
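A minimal restart sketch, assuming a hypothetical checkpoint-file mechanism: the first run crashes midway, and the rerun must resume without losing or re-processing work.

    import json
    import os
    import tempfile

    CHECKPOINT = os.path.join(tempfile.gettempdir(), "recovery_demo.json")
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)                       # start from a clean state

    def process_orders(orders):
        """Resume from the last durably recorded position after a restart."""
        done = 0
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                done = json.load(f)["done"]         # recover committed progress
        for i in range(done, len(orders)):
            # ... handle orders[i] here ...
            with open(CHECKPOINT, "w") as f:
                json.dump({"done": i + 1}, f)       # record progress durably
            if i == 2 and done == 0:
                raise RuntimeError("simulated process crash")

    orders = [f"ord-{n}" for n in range(6)]
    try:
        process_orders(orders)                      # first run crashes midway
    except RuntimeError:
        pass
    process_orders(orders)                          # restart resumes at ord-3
    print("all orders processed exactly once after the restart")
    os.remove(CHECKPOINT)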

5.6 Security

These tests would verify that the system meets its security policies and requirements:

Authentication features

Authorisation features related to access control on resources

Auditing & logs

All the events from the web will be audited

The security will be checked for both the internal and external users of the system. More details will be added once the security design is finalised (a toy access-control check is sketched below).
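As a toy illustration of authorisation checks only (not the project's actual security design), an access-control matrix can be probed role by role; the roles, resources and actions below are hypothetical.

    # Minimal authorisation-matrix probe; all names are hypothetical.
    PERMISSIONS = {
        "admin":    {"/orders": {"read", "write"}, "/audit-log": {"read"}},
        "csr":      {"/orders": {"read", "write"}},
        "customer": {"/orders": {"read"}},
    }

    def is_allowed(role, resource, action):
        """Authorisation check: is this action on this resource open to the role?"""
        return action in PERMISSIONS.get(role, {}).get(resource, set())

    assert is_allowed("csr", "/orders", "write")
    assert not is_allowed("customer", "/orders", "write")   # privilege ceiling
    assert not is_allowed("anonymous", "/orders", "read")   # unauthenticated user
    print("access-control matrix verified for the sampled roles")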

5.7 Interface Tests

These tests are designed to check the various systems participating in data and transaction sharing. The interfaces between the components would be tested.

5.8 Data Integrity Testing

Data integrity is a key requirement for business-critical computing. Stock exchanges, banks, telecommunications companies, and the transaction processing applications of most businesses cannot afford to risk the integrity of their data.

Data integrity testing is used to test the data integrity of the systems/applications. Distributed transactions assure data integrity at a global level through the ACID properties: Atomicity, Consistency, Isolation and Durability.
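A minimal atomicity check in Python, using sqlite3 purely as a stand-in for the real database: a transfer that fails midway must leave no partial update behind.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO account VALUES ('A', 100), ('B', 0)")
    conn.commit()

    try:
        with conn:  # one transaction: both updates commit together, or neither does
            conn.execute("UPDATE account SET balance = balance - 100 WHERE name = 'A'")
            raise RuntimeError("simulated failure before the matching credit to B")
    except RuntimeError:
        pass

    # The debit must have been rolled back along with the failed transfer.
    balance = conn.execute("SELECT balance FROM account WHERE name = 'A'").fetchone()[0]
    assert balance == 100, "partial update escaped the transaction"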

5.9 Data Backup and Restore Testing

Data backup and restore testing is another area of concern. All database and file servers should have defined backup procedures and defined restore procedures. These procedures would be tested by simulating failures from a range of possible causes. Backup and recovery testing can also provide a benchmark for the time and resources required when restoring databases.

5.10 Compatibility testing

Compatibility testing is necessary to ensure that localized products function properly in the local hardware and software environment, including local operating systems, peripheral devices, and networking and communications standards.

5.11 Performance testing

The objective of the performance test is to check whether the system can meet the transaction-time requirements for provisioning and for customer support. It checks that the response times do not degrade and remain within acceptable limits during peaks. The objective of the performance tests is to ensure that the system will be able to handle the projected loads within the expected service levels on the production hardware. The performance tests would focus on:

1. The ability of the server to handle the predicted loads

2. The response times and performance of the screens involved in the transaction at peak hours

3. The ability to handle large volumes of data over a period of time (volume & stress).

The performance of the network is not a part of these tests. Hence all these tests would be performed on a Local Area Network.

In addition, the following parameters would be checked, depending on the monitoring tools available in the given environment:

CPU loads

Disk loads

Network traffic involved

Performance testing includes the following types of testing (a minimal load-test sketch follows these descriptions):

a) Load Testing

A load test has many concurrent users running the same program, to see whether the system handles the load without compromising functionality or performance.

b) Volume Testing

The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries, such as across servers and across disk partitions on one server.

c) Stress Testing

The purpose of Stress Testing is to show that the system has the capacity to handle large numbers of processing transactions during peak periods.

In a batch environment a similar situation would exist when numerous jobs are fired up after down time.
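To make the load-test idea concrete, here is a minimal Python sketch; transaction() is a hypothetical stand-in for one user operation, and a real run would drive the deployed system through a load-testing tool rather than time.sleep.

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    def transaction():
        """One simulated user operation (hypothetical stand-in for a real call)."""
        start = time.perf_counter()
        time.sleep(0.01)                      # placeholder for real server work
        return time.perf_counter() - start

    def load_test(concurrent_users=50, iterations=20):
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            futures = [pool.submit(transaction)
                       for _ in range(concurrent_users * iterations)]
            timings = sorted(f.result() for f in futures)
        # Pass/fail criterion: response times must stay within agreed limits.
        mean_ms = statistics.mean(timings) * 1000
        p95_ms = timings[int(len(timings) * 0.95)] * 1000
        print(f"mean {mean_ms:.1f} ms, 95th percentile {p95_ms:.1f} ms")

    load_test()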

5.12 Reliability & Availability Tests (includes Failover tests)

The reliability of a system is the conditional probability that the system will operate correctly during a specified period of time. A system may be considered highly reliable (that is, it may fail very infrequently), but, if it is out of service for a significant period of time as a result of a failure, it will not be considered highly available.

One measure of a system's reliability is its Mean Time To Failure (MTTF), the average interval for which the system or element can provide service without failure. Another measure is the Mean Time To Repair (MTTR), which represents the time it takes to resume service after a failure has been experienced. Together these give the steady-state availability: Availability = MTTF / (MTTF + MTTR); for example, an MTTF of 999 hours with an MTTR of 1 hour yields 99.9% availability.

5.13 Localization/Globalization Testing

The goal of localization/globalization testing is to ensure that local-language versions of the product perform consistently with the source-language version.

Localization testing includes

Functional Verification Testing

Multi-Lingual Operating System Verification

Translation Verification Testing

5.14 Workflow Testing

The workflow testing type is used to test the business workflow process definitions and scenarios. In the integrated applications, the workflow process definitions will be tested independently.

6. Stages of Testing

The various stages of testing identified for system integration projects are:

Application level Testing

System Integration Testing

Acceptance Testing

Regression Testing

Regression tests would be carried out at the application and system integration levels.

6.1 Application Level Testing

Application level testing would be carried out on individual applications and includes the following tests.

Out-of-the-Box functionality tests

Functionality

Usability tests (UI, navigation)

Localization (if required)

Exceptional scenario tests

Security tests

Workflow testing

Data integrity

Failover tests

Functional Area/Application: CRM
- Functionality / Out-of-box: Black box; to identify the limitations in the system
- Functionality / Business scenario: Black box; to test the business rules
- Functionality / Customisation: Black box; carried out only in case of customisation
- Usability tests / Navigation: Black box
- Workflow testing: Black box
- Localization testing: Black box; if multiple regions and currencies are supported

Functional Area/Application: Billing
- Functionality / Out-of-box: Black box; to identify the limitations in the system
- Functionality / Business scenario: Black box; to test the business rules
- Functionality / Customisation: Black box; carried out only in case of customisation
- Usability tests / Navigation: Black box

Functional Area/Application: EAI
- Functionality / Message broker: Message brokers are key components in the integrated system; various types of input messages would be fed to test these brokers
- Functionality / Adapter: All adapters/connectors are tested with message simulators and sets of XML files; specific scripts are developed for analyzing the log files of individual adapter instances; monitoring tools provided by the tool vendor would be used to monitor adapters
- Data integrity testing: Black box
- Business scenario: Black box
- Negative testing (exceptional scenarios): Creating and feeding negative test data to adapters; sending improper messages to message brokers; killing processes (through external systems such as TIB/Hawk) during the message flow; shutting down a machine in the distributed environment; disconnecting a machine from the network

Functional Area/Application: Service Provisioning (involves multiple products, such as network modeling, inventory management and automated service provisioning tools)
- Functionality / Adapter testing: Test drivers for different services; adapters could be custom built
- Business scenario / Interaction between multiple products: Off-the-shelf simulators/stubs/drivers; network elements and simulators are required for these tests

Functional Area/Application: Service Assurance (again involves multiple products, such as performance management and fault management tools)
- Functionality / Adapter testing: Test drivers for different services; adapters could be custom built
- Business scenario: Off-the-shelf simulators for generating faults and alarms/drivers; network elements/simulators are required for these tests

Functional Area/Application: Data warehousing
- Functionality / Business scenario: Black box
- Functionality / Adapter testing: Test drivers for different services; adapters could be custom built
- Functionality / Out-of-box: Black box

Functional Area/Application: SAP
- Functionality / Business scenario: Black box
- Functionality / Adapter testing: Test drivers for different services; adapters could be custom built
- Functionality / Out-of-box: Black box

Functional Area/Application: Legacy
- Functionality / Business scenario: Black box
- Functionality / Adapter testing: Test drivers for different services; adapters could be custom built
- Functionality / Out-of-box: Black box
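As an illustration of the adapter negative testing listed above, the following minimal Python sketch feeds well-formed and malformed XML messages to a hypothetical adapter_parse entry point; a real test would target the deployed adapter and inspect its log files.

    import xml.etree.ElementTree as ET

    def adapter_parse(message):
        """Hypothetical adapter entry point: transforms an XML order message."""
        root = ET.fromstring(message)
        return {"order_id": root.findtext("id"), "status": root.findtext("status")}

    GOOD = "<order><id>42</id><status>NEW</status></order>"
    BAD = [
        "<order><id>42</id>",    # truncated message
        "not xml at all",        # garbage payload
        "<order></order>",       # missing mandatory fields
    ]

    assert adapter_parse(GOOD)["order_id"] == "42"

    for msg in BAD:
        try:
            result = adapter_parse(msg)
            if result["order_id"] is None:
                print(f"defect: adapter silently accepted {msg!r}")
        except ET.ParseError:
            print(f"ok: malformed message rejected: {msg!r}")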

Summary

The overall summary of application-level testing would be:

Task: Perform Application Test
Input: Application test plan; Application test criteria; Unit/module tested code
Output: Test report; Defect report; Application-level tested code; Stubs/drivers/scripts
Review Mechanism: Reviews; Test audit

6.2 System Integration Testing (End-To-End Integration testing)

These tests include Interface tests, Performance tests (Refer to Performance tests section), Stress and Reliability Tests. This is carried out incrementally.

6.2.1 Interface Tests

End-to-End Business Scenario Testing

End-to-End Data Integrity testing

End-to-End Business Scenario Testing

The objective is to verify the end-to-end business scenario across application processes. Black box techniques are used for these tests. Regression test tools could be used for Regression testing.

End-to-End Data Integrity Testing

The objective of these tests is to verify the data integrity across the complete business process and to ensure database access methods and processes function properly and without data corruption.
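A minimal sketch of such a check, assuming hypothetical record extracts from two applications in the chain: the record counts and an order-insensitive fingerprint must agree after the business flow completes.

    import hashlib

    def fingerprint(rows):
        """Order-insensitive checksum of the records extracted from one system."""
        digest = 0
        for row in rows:
            digest ^= int(hashlib.sha256(repr(row).encode()).hexdigest(), 16)
        return digest

    # Hypothetical extracts of the same orders from two applications in the chain.
    crm_rows = [(1, "ADSL-512"), (2, "ISDN-BRA")]
    billing_rows = [(2, "ISDN-BRA"), (1, "ADSL-512")]

    assert len(crm_rows) == len(billing_rows), "record counts differ"
    assert fingerprint(crm_rows) == fingerprint(billing_rows), "data diverged"
    print("CRM and Billing hold consistent data for the tested flow")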

6.2.2 Performance Tests

The objective of these tests is to measure the response times for various business processes, assess reliability and identify bottlenecks. The process flow for performance testing is illustrated in the Appendix.

6.2.2.1 Performance Requirements Study

This activity is carried out during the requirements phase. The objective is to understand the Hardware & Software components, Usage Model, Performance criteria, Performance requirements and Nature & Volume of load and develop a high-level performance strategy. It is important to understand as accurately and as objectively as possible the nature of load that must be generated.

The usage model would indicate the users of the system, the number of each type of user and each user's common tasks. Along with the common tasks, the task distribution is also identified, which helps in determining the peak database activity and which activities typically occur during peak time.

This could be part of the test strategy document.

6.2.2.2 Tool Identification & Evaluation

Performance tool identification and evaluation would be carried out during the design phase. Tools would be identified based on the protocols, software and hardware used in the solution. A POC would be carried out if required. The objective of this activity is to ensure that the identified tools support all the applications used in the solution and help in measuring against the performance goals.

6.2.2.3 Performance Strategy

A detailed strategy would be developed during the design phase, which would indicate the usage pattern, traffic pattern, peak load levels etc.

Based on the strategy, the scenarios for the creation of virtual users would be developed. The scenarios will depend on the business criticality and frequency (usage model). The test environment would be decided based on the usage model and the number and types of virtual users.

6.2.2.4 Test Design

It involves the following activities (a minimal parameterization sketch follows the list):

Script Recording

Script Programming

Script Customization (Delay, Checkpoints, Synchronisation points)

Data Generation

Parameterization/Data pooling
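A minimal parameterization/data-pooling sketch in Python: each virtual-user iteration draws distinct data from a pool and applies think-time delays and a checkpoint. The data pool, script steps and checkpoint are hypothetical; a real script would be recorded and customized in the chosen performance tool.

    import csv
    import io
    import random
    import time

    # Data pool: in a real run this would be an external file prepared in advance.
    DATA_POOL = io.StringIO("username,service\nalice,ADSL\nbob,ISDN\ncarol,ADSL\n")
    USERS = list(csv.DictReader(DATA_POOL))

    def vuser_iteration(user):
        """One virtual-user iteration: log in, place an order, log out."""
        time.sleep(random.uniform(0.1, 0.5))        # think-time delay between steps
        response = {"service": user["service"]}     # stand-in for the real response
        # Checkpoint: the confirmation must echo the requested service.
        assert response["service"] == user["service"], "checkpoint failed"

    for user in USERS:        # each virtual user draws distinct data from the pool
        vuser_iteration(user)
    print(f"{len(USERS)} parameterized iterations completed")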

6.2.2.5 Test Execution

Load and stress tests are carried out in conjunction with performance tests. Virtual users are simulated based on the usage pattern and load levels applied as stated in the performance strategy.

6.2.2.6 Performance Analysis report

The various reports, graphs and data generated are collated to analyse performance under load, transaction performance, transaction distribution, the transaction performance summary, the transaction performance summary by virtual user, etc.

Sample graphs are shown in the Appendix.

6.2.3 Failover Tests

In the System Integration testing phase, the following failover tests would be conducted:

Application level failover test

Database level failover test

Interface level failover test

Application Server/Web server failover tests

Message broker failover tests

Application level failover test

The different failover mechanisms of the application components would be verified by running:

Steady State Workload test: to verify the system's reliability when it is exposed to continuous usage over a period of time

Instant Workload test: to verify the system's reliability when it is suddenly exposed to a heavy workload

Failover transactions and conditions are created to bring the database down, and recovery is tested.

Application server/web server failover mechanisms are also tested by running multiple instances.

The failover and load-balancing features of the message broker are tested by running multiple instances of message brokers.
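The retry-on-failure idea behind these tests can be sketched minimally as follows; the instances and dispatch logic are toy stand-ins for a real cluster and load balancer.

    class Instance:
        """Toy application instance; a stand-in for a real server process."""
        def __init__(self, name):
            self.name, self.up = name, True

        def handle(self, request):
            if not self.up:
                raise ConnectionError(f"{self.name} is down")
            return f"{request} served by {self.name}"

    def dispatch(request, instances):
        """Failover: retry the request on the next live instance."""
        for inst in instances:
            try:
                return inst.handle(request)
            except ConnectionError:
                continue                      # try the next instance
        raise RuntimeError("total outage: no instance available")

    primary, standby = Instance("primary"), Instance("standby")
    print(dispatch("req-1", [primary, standby]))   # served by primary
    primary.up = False                             # simulated instance failure
    print(dispatch("req-2", [primary, standby]))   # transparently served by standby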

Summary

The overall summary of the end-to-end system integration test would be:

Task: Perform End-to-End System Integration Test
Input: System integration test plan; Acceptance criteria; System-tested code
Output: Test reports; Defect report; System integration tested code; Stubs/scripts/drivers
Review Mechanism: Reviews; Test audit

Task: Perform Performance Testing
Input: Performance testing plan
Output: Performance report; Test reports; Defect report; Performance-tested code; Stubs/scripts/drivers
Review Mechanism: Reviews; Test audit

6.3 Acceptance Testing

The final product deliverable will be tested as per the acceptance test plan.

The test cases are designed from a user point of view and would cover all the business scenarios.

The steps described in the Testing Procedure will be followed while testing the final deliverable.

All the defects captured during acceptance testing will be logged into the defect tracking system.

The result of Acceptance Testing will be recorded in the Test Report.

On completion of acceptance testing and on meeting the acceptance criteria, Customer signoff will be obtained for the project.

Summary

Task: Perform Acceptance Test
Input: Acceptance test plan; Acceptance criteria; System integration tested code
Output: Test report; Defect report; Acceptance-tested code
Review Mechanism: Reviews; Test audit

6.4 Regression Testing

Rigorous regression testing would be carried out to ensure that the fixes/changes made to the application do not cause any new errors. This test is executed on a baselined system or product when a part of the total system or product has been modified. Regression tests would be carried out at all stages of the test cycle.

Regression testing would be carried out after each release of a previously tested application. Automated test tools will be used to save cost and manpower. A traceability matrix would be used to find the test scripts that need to be run in regression testing (a minimal selection sketch is given below).
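A minimal sketch of traceability-driven selection, with a hypothetical matrix mapping modules to test scripts:

    # Hypothetical traceability matrix: module/requirement -> regression scripts.
    TRACEABILITY = {
        "billing.rating":  ["TC_BIL_001", "TC_BIL_007", "TC_E2E_003"],
        "crm.order_entry": ["TC_CRM_002", "TC_E2E_003"],
        "eai.adapter.sap": ["TC_EAI_004"],
    }

    def select_regression_suite(changed_modules):
        """Pick only the scripts traced to the modules touched by a release."""
        selected = set()
        for module in changed_modules:
            selected.update(TRACEABILITY.get(module, []))
        return sorted(selected)

    print(select_regression_suite(["billing.rating", "crm.order_entry"]))
    # -> ['TC_BIL_001', 'TC_BIL_007', 'TC_CRM_002', 'TC_E2E_003']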

7. Test Tools

Automated testing tools will be used for verifying the functionality and performance of the applications. The following test tools will be used after tools evaluation.

Test Management Tools

Ex: TestDirector (Mercury), TestStudio (Rational)

Regression Test tools

Ex: WinRunner (Mercury)

Performance Test Tools

Ex: LoadRunner (Mercury), PerformanceStudio (Rational)

Defect Reporting Tools

Ex: Rational ClearQuest

Tool identification and evaluation for performance and regression testing would be carried out during the requirements or design phase. Tools would be identified based on the protocols, software and hardware used in the solution. A POC would be carried out if required.

The summary of automated tools identification, evaluation and procurement would be:

Task: Tools Identification
Input: Test requirement; Test environment; Applications description; Tool vendors' information
Tools & Techniques: Proof of concept
Output: Tools identified

8. Environmental Requirements for testing

The following dedicated test environments are required for each level of testing activities:

a) Application Level Testing Environment

- Architecturally similar to the production environment

b) System Integration Testing Environment

- Architecturally similar to the production environment

c) Performance Testing Environment

- Architecturally identical to the production environment, with the same hardware configurations

However, access to the production environment may be required in certain cases, such as commissioning/decommissioning.

9. Test Artifacts

The following test artifacts would be generated during the test life cycle

1. Test Strategy: Master test strategy, system integration test strategy, performance test strategy, and application/system test strategies for individual applications

2. Test Plans: Master test plan and test plans for individual applications

3. Test case documents: For all the tests mentioned in this document; includes test cases for positive, negative and exception scenarios

4. Test drivers/stubs/scripts/simulators etc.: For all test cases

5. Defect Reports: Includes all identified defects

6. Test summary report (metrics): As per the test plan

7. Performance graphs and analysis reports: As per the performance test plan

8. Traceability Matrix: Test case traceability matrix for regression testing

9. Test Checklists: Review checklists for the strategy, test plan, test cases and test audit

10. Resource Planning

The IV&V team, as a whole, will be responsible for fulfilling all test requirements for a project and performing all test program tasks. Resource planning for the IV&V team is one of the most important tasks for enhancing project performance. This activity involves:

Resource identification

Developing individual and group skills

Assigning roles and responsibilities

The summary of resource planning would be:

Task: Resource Planning
Input: Project requirements; Constraints; Estimation and schedule
Tools & Techniques: Human resources practices; Templates
Output: Organization chart; Roles and responsibility assignments; IV&V team management plan

Task: Staff Acquisition
Input: Recruitment practices; Staffing pool description
Tools & Techniques: Pre-assignment; Procurement
Output: IV&V team resources assigned; Team directory

Task: Team Development
Input: IV&V team; Project plan; Training requirements; Performance reports
Tools & Techniques: Training; Team-building activities
Output: Performance improvement; Input to performance appraisal

The composition of the test team, with roles and responsibilities, would be as shown below:

Test Manager: Test management, test plan, customer interaction, team management, work allocation to the team, reviews and status reporting

Test Architect: Test strategy, integration test plan, performance test plan, reviews and status reporting

Test Lead: Test planning, customer interaction, team management, reviews and status reporting

Test Engineer: Development of test cases and scripts, test execution, result capturing and analysis, defect reporting and status reporting

Automation Engineer: Automated tools, test execution and status reporting

11. Risks and Contingencies

Risk identification would be accomplished by identifying causes-and-effects or effects-and-causes. The identified risks are classified into internal and external risks. Internal risks are things that the test team can control or influence; external risks are beyond the control or influence of the test team.

Once Risks are identified and classified, the following activities will be carried out

Identify the Impact and affected groups

Identify the frequency of occurrence

Assign priority

Develop mitigation and contingency plan

The summary of risk identification and contingency planning would be:

Task: Risk Identification
Input: Project requirement & applications description; Development and other planning output; Historical information
Tools & Techniques: Checklist
Output: Sources of risk

Task: Contingency Planning
Input: Sources of risk
Tools & Techniques: Risk analysis
Output: Mitigation plan; Contingency plan

Some of the Major Risks identified for System Integration projects are as given below:

1. Completeness: Incomplete business requirement specification

2. Multiple Region: Local-language knowledge for testing

3. Complexity: Wrong process mapping; understanding various system architectures and protocols; scheduling asynchronous real-time events

4. Product Specification: Inadequate product information

5. Product Control: Poor analysis when new requirements are added to the system; tool vendor support

6. Usability: Documentation of the existing system

7. Co-operation: Cooperation across functional boundaries

8. Schedule: Dependencies on development groups

9. Staff: Knowledge about critical applications; legacy system information

10. Test Environment: Inadequate system integration testing environment; non-availability of a dedicated environment

11. Training: Training for already-implemented packages and tools

12. Customer: User support for testing

12. Appendix

[Figure: WIP-SHARP IV & V model. Test strategy, test planning, test case design, test execution, defect reporting and summary report & analysis run alongside the project phases, from business requirements & solution architecture, through solution mapping/development/customization (HLD/LLD for each application, strategy for individual applications, test case design, system/application-level testing), application integration (incremental integration testing, performance testing, business cycle testing) to integrated solution release (acceptance tests & certification), with review points (R) marked throughout and traceability maintained from the master test strategy onwards.]

[Figure: Overlap of process groups in a phase. Level of activity of each process group (test management, test strategy, test planning, test case design, test execution, defect reporting, summary report) plotted over time, from test start to test end.]

[Figure: Process flow for performance testing. Performance requirements study -> tool identification & evaluation -> performance strategy -> test design -> test execution -> performance analysis report, with review points.]

[Sample graphs: a generic graph displaying the number of transactions that passed or failed, and a generic graph showing performance under load.]
