  • Software Testing Process

    Presented By CTS

  • Software Testing Process: Contents

    Software Testing Process
    Performing Risk Analysis
    Perry's Approach
    Risk Dimensions
    Risk Assessment Methodology
    Establish, Identify, Prioritize, and Operationalize Test Objectives
    Construct Test Plan
    Construct Test Cases
    Execute Unit, Integration, and System Tests
    Analyze and Report Test Results

  • Software Testing Process

    Defining the testing process
    Making the testing process repeatable
    Ensuring that high-risk components of the system are tested
    Avoiding problems of omission (inadequate testing)
    Avoiding problems of commission (testing parts of the system that are not important)
    Addressing the effects of skilled and unskilled testers
    Adding intelligence to testing
    Providing metrics for managing and controlling testing
    Providing a basis for test automation
    Producing specific test deliverables

  • Software Testing Process: Basics of the Testing Process

    The test team should be organized concurrently with the development team. The purpose of the test team is to perform verification and validation as they relate to implementation. For a specific project, the test team's purpose is to perform verification and validation of the deliverables from development and solution delivery, and to act as consultants to the development team during unit testing.

  • Software Testing Process: Tasks

    Identify key application areas
    Identify the testing group's responsibilities
    Identify key individuals: those involved in testing directly and indirectly. Testing team members act directly; others include QA managers, QA analysts, the test manager, test analysts, the PM, PL, analysts, database service personnel, network service personnel, and customers.

  • Software Testing Process: Tasks (cont.)

    Assign individual responsibilities:
    Developing the test plan
    Defining the required test resources
    Designing test cases
    Constructing test cases
    Executing test cases in accordance with the test plan
    Managing test resources
    Analyzing test results
    Issuing test reports
    Recommending application improvements
    Maintaining test statistics

  • Software Testing Process: Outputs

    Application areas involved in testing, per team member
    Team member responsibilities
    Team work plan: the work plan defines milestones and tentative completion dates for all assigned tasks. A project management tool such as Microsoft Project makes this task easy; the resulting Gantt chart shows who is responsible for what and when.

  • Software Testing Process: Performing Risk Analysis

    The purpose of business risk analysis during software testing is to identify high-risk application components, and error-prone components within specific applications, that must be tested more thoroughly. The key to a successful risk analysis is to formalize the process; an informal approach leads to an ineffective analysis.

  • Software Testing Process: Perry's Approach

    The first method is Judgment and Instinct. This is the most common approach to performing risk assessment during testing. It is an informal approach that uses the tester's knowledge of, and experience with, past projects to estimate the amount of testing required for the current project. Perry says this can be an effective method, but he sees its primary fault in the fact that the tester's experience is not readily transferable to other testers.

  • Software Testing Process: Perry's Approach (cont.)

    In terms of the SEI maturity model, this risk assessment process sits somewhere between levels 1 and 2. It is a repeatable approach, but it is not formally written down for others to use.

    The second method is Dollar Estimation. This approach quantifies business risk using dollars as its unit of measure. It requires a great deal of precision and is difficult to use because results are based on estimates of frequency of occurrence and the loss per occurrence.

  • Software Testing Process: Perry's Approach (cont.)

    The third approach is Identifying and Weighting Risk Attributes. This approach identifies the attributes that cause a risk to occur; the relationship of the attributes to the risk is determined by weighting. The tester uses weighted numerical scores to determine the high-risk areas. The scores can be used to decide which application components should be tested more thoroughly than others, and total weighted scores can be used to compare one application to another.

  • Software Testing Process: Perry's Approach (cont.)

    The fourth approach is the Software Risk Assessment Package. This is an automated approach that involves purchasing an assessment package from a vendor. Its advantages are ease of use and the ability to do what-if analysis with risk characteristics. Automated risk assessment packages exist that use the second and third approaches above; however, testers can create their own risk assessment questionnaires with MS Word and do the what-if analysis with MS Excel.

  • Software Testing Process: Risk Dimensions

    The risk dimensions Perry identified are project structure, project size, and experience with technology. With respect to project structure, the more structured a project, the lower the risk; software development projects that employ some type of project management/development life cycle approach should be at less risk. Project size is directly proportional to risk: the larger the project in terms of cost, staff, time, number of functional areas involved, etc., the greater the risk.

  • Software Testing Process: Risk Dimensions (cont.)

    Technology experience is inversely proportional to project risk: the more experience the development team has with the hardware, the operating system, the database, the network, and the development language, the lower the risk.

  • Software Testing Process: Tasks

    Perry advocates a five-part risk assessment methodology. The first task is ascertaining the risk score. This involves conducting a risk assessment review with the development team, using a risk assessment questionnaire to structure the session: each question is put to the development team members in a group session and a consensus is reached on the perceived level of risk. (An alternative approach is to have individual developers each fill out the questionnaire and then consolidate the responses.)

  • Software Testing Process

    The questions are closed-ended, with possible responses of Low, Medium, High, and Not Applicable. A numeric rating is assigned to each response, and a final score is calculated by multiplying the numeric rating by a weighting factor. The weighted scores are used to identify error-prone areas of the application and to compare the application with other applications.

  • Software Testing Process: Creating the Risk Profile

    A weighting system is used to compute a score that reflects the importance of each area. An area that is twice as important can be given a weight of two: for example, an area with medium risk (2 points) that is considered twice as important as the others has a final weighted score of 2 * 2 = 4 points (the weight times the risk points). A total score is computed for the project, but the individual scores are used to develop the risk profile. Use the following approach (Perry's, with some extensions) to construct the profile.
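
    A minimal Python sketch of the weighted-scoring arithmetic described above; the point values and weights are illustrative assumptions, not prescribed figures.

      # Rating points per questionnaire response; the values are assumptions.
      RATING_POINTS = {"Low": 1, "Medium": 2, "High": 3, "N/A": 0}

      def weighted_score(rating, weight):
          # Weighted score = risk points for the rating times the area's weight.
          return RATING_POINTS[rating] * weight

      # A medium-risk area (2 points) that is twice as important (weight 2)
      # scores 2 * 2 = 4 points, as in the example above.
      print(weighted_score("Medium", 2))  # 4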

  • Software Testing Process

    Once the risk data have been collected, create an MS Excel spreadsheet that computes the weighted scores. Sort the tabulated scores from highest to lowest (a pseudo frequency distribution of sorts) and perform a Pareto analysis (the 80/20 rule) [6] to determine which project areas fall into the upper 20% of the distribution. These are the areas most at risk, and they are the areas that will need the most testing. The results are obvious when charted with Kiviat (radar) charts: the high-risk areas stand out visually. (The charting should be done on data sorted in ascending order by question number, not on the highest-to-lowest ordering used for the Pareto analysis.)
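
    The Pareto step can be sketched in a few lines of Python; the area labels and scores below are invented sample data.

      # Invented sample weighted scores, keyed by questionnaire area.
      scores = {"documentation": 9, "staffing": 4, "size": 2,
                "technology": 6, "planning": 1}

      ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
      cutoff = max(1, round(len(ranked) * 0.20))        # upper 20% of the areas
      high_risk = [area for area, _ in ranked[:cutoff]]
      print(high_risk)  # ['documentation'] -- test these areas the most

      # For the Kiviat (radar) chart, re-sort in the questionnaire's own
      # question order before plotting, as noted above.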

  • Software Testing Process: Modify the Risk Characteristics (optional for testing)

    Perry argues that once the areas at risk have been identified, a proactive approach can reduce risk. He suggests taking steps to change the development approach or the project structure. When these alternatives are not feasible, using the risk information to decide what areas to test becomes even more critical.

  • Software Testing Process: Allocate the Resources

    Perry suggests allocating the most test resources to high-risk areas, less to medium-risk areas, and minimal resources to low-risk areas. A sound strategy is to ensure that as many as possible of the medium- and high-risk areas are tested within the scope of the allotted testing resources. A possible split is to commit 80% of the testing resources to medium- and high-risk areas and the remaining 20% to low-risk areas; this again applies the 80/20 rule.
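
    A minimal sketch of that 80/20 split, assuming an hour budget and an even share within each risk group (both assumptions for illustration):

      def allocate(total_hours, high_medium_areas, low_areas):
          # 80% of the budget to medium/high-risk areas, 20% to low-risk,
          # shared evenly within each group.
          per_hm = total_hours * 0.80 / len(high_medium_areas)
          per_low = total_hours * 0.20 / len(low_areas)
          plan = {area: per_hm for area in high_medium_areas}
          plan.update({area: per_low for area in low_areas})
          return plan

      print(allocate(100, ["billing", "order entry"], ["help screens"]))
      # {'billing': 40.0, 'order entry': 40.0, 'help screens': 20.0}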

  • Software Testing Process: Compile a Risk Assessment Database

    A risk assessment database has two important functions. First, it can be used to improve the risk assessment process itself. Second, the data can be used to help management plan and structure development projects.

  • Software Testing Process: The Risk Assessment Review Session

    The entire test team, along with end-user representatives, should be included in the session. Perry recommends conducting the session early in the test process. It should last no more than two hours and should be run formally by the test team manager, who facilitates it. The session has two objectives: first, to answer each question on the risk assessment questionnaire; second, to brainstorm and let the participants voice their concerns about the system under development.

  • Software Testing Process: Outputs - Completed Assessment Questionnaire

    As the risk assessment review is completed, a preliminary score for each question should be recorded on the questionnaire. After the review session, the scores should be converted to weighted scores and entered into a spreadsheet for subsequent analysis. The original questionnaires should be filed away and kept for future reference.

  • Software Testing Process: Outputs - Risk Score Analysis Results

    The results in the sample Kiviat chart were taken from the General Risk Assessment Questionnaire data. They are easy to interpret: the further out along its axis a risk factor is placed, the greater the risk. In the sample data it is obvious that the informality of software development and the lack of documentation are major risk areas.

  • Software Testing Process: Risk-Based Test Resource Allocation Plan

    A risk-based test resource allocation plan is a document containing recommendations for allocating test resources based on the risk analysis results. It should list the risk areas from highest to lowest risk and assign a percentage of the available testing resources to each risk sector.

  • Software Testing Process: Risk Database

    A risk database should be maintained that contains the name of the project under test, the risk assessment results on a question-by-question basis, the analysis of those results on a question-by-question basis, graphs/charts of the results, and the test resource allocation recommendations.
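
    A minimal sqlite3 sketch of such a risk database; the table layout and column names are assumptions derived from the fields listed above, not a prescribed schema.

      import sqlite3

      conn = sqlite3.connect("risk_assessment.db")
      conn.executescript("""
      CREATE TABLE IF NOT EXISTS project (
          project_id INTEGER PRIMARY KEY,
          name       TEXT NOT NULL             -- project under test
      );
      CREATE TABLE IF NOT EXISTS risk_result (  -- one row per question
          project_id     INTEGER REFERENCES project(project_id),
          question_no    INTEGER,
          rating         TEXT,                 -- Low / Medium / High / N/A
          weighted_score REAL,
          analysis       TEXT                  -- question-by-question notes
      );
      CREATE TABLE IF NOT EXISTS allocation (
          project_id   INTEGER REFERENCES project(project_id),
          risk_area    TEXT,
          resource_pct REAL                    -- recommended share of resources
      );
      """)
      conn.commit()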

  • Software Testing Process: Establish Test Objectives

    Establishing software testing objectives is a critical part of planning the testing process, and it is also one of the most difficult test planning activities: people frequently do not have a clear idea of what they want to do until they begin to do it. This means the best-laid test plans change during test execution. There is no complete solution to this problem, but there are actions testers can take to improve test planning, and establishing clear testing objectives goes a long way toward offsetting future execution problems.

  • Software Testing Process: Establishing Test Objectives (cont.)

    An objective is a testing "goal": a statement of what the tester wants to accomplish when implementing a specific testing activity. Each testing activity may have several objectives, and there are two levels of objective specification. A test plan should contain both high-level general objectives in the overview section and specific low-level "provable" objectives for each particular type of testing being implemented, the latter being operational goals for specific tests. A good set of operational objectives can intuitively explain why we are executing a particular step in the testing process.

  • Software Testing Process: Establishing Test Objectives (cont.)

    Inputs: risk analysis results; system requirements document.

    The system requirements document should have been created as part of the overall project documentation. If it has not been created, or is not complete, then the project, as well as your testing effort, is already in trouble. Test objectives are really test requirements, and they are a direct reflection of the system requirements as described in the Software Requirements Specification.

  • Software Testing Process

    The requirements specification can take many forms, from homegrown documents to deliverables from a commercial development methodology. Regardless of their source, the requirements must be as complete as possible to assure the completeness of the testing effort; an important task is to review them before using them to establish test objectives. The software design document, also known as the functional specification, should be made available as part of the ongoing project documentation. It is a functional decomposition of the system's functions and sub-functions that will be translated into software programs and program modules.

  • Software Testing Process

    At a minimum it should describe all of the system functions, the interdependencies among those functions, the system interface, and the internal design details of each function. The descriptions can take the form of narratives, structure charts, Structured English, Tight English, flow charts, pseudocode, etc. Regardless of form, these levels of design information are required to formulate white box testing objectives.

  • Software Testing Process: Tasks - Identify Test Objectives

    Three methods can be used to specify test objectives. The first is brainstorming: the test team uses "creative" interaction to construct a list of test objectives. This is not a free-for-all; it should be based on analysis products such as the diagrams and text of the requirements specification, and performed as a walkthrough of the specification, section by section. The second approach is to identify "key" system functions and then specify test objectives for each function; this too can be performed as a section-by-section walkthrough of the functional requirements. The third method is to identify business transactions and base objectives on them. This can be thought of as scenario-based, since business cycles drive the process.

  • Software Testing Process: Define Completion Criteria

    A completion criterion is the standard by which a test objective is measured. Completion criteria can be either quantitative or qualitative; the important point is that the test team must somehow be able to determine when a test objective has been satisfied. One or more completion criteria must be specified for each test objective.

    Prioritize test objectives: the test objectives should be prioritized based on the risk analysis findings. Priority should be assigned using the following scale.

  • Software Testing Process: Prioritize Test Objectives (cont.)

    High: most important tests; must be executed.
    Medium: second-level objectives; should be executed only after high-priority tests.
    Low: least important; should be tested last, and only if there is enough time.

    High and Medium test objectives should be assigned more resources than Low priority objectives.

  • Software Testing Process: Operationalize Test Objectives

    Manual test objectives should be implemented in the form of quality checklists, with one or more checklist items satisfying a specific objective. (A single checklist item can also satisfy more than one objective, as with the date-field objectives.) Objectives are internally linked to testing activities because they drive the activities. Objectives can be worded in either a positive or a negative fashion.
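
    A minimal sketch of checklist items linked to objectives, where one item may satisfy several objectives; the objective IDs and item wording are invented.

      checklist = [
          {"item": "Date fields reject 31-Feb and other invalid dates",
           "objectives": ["OBJ-07 input validation", "OBJ-12 date handling"]},
          {"item": "Error messages name the offending field",
           "objectives": ["OBJ-07 input validation"]},
      ]

      # Which checklist items help satisfy objective OBJ-07?
      covering = [c["item"] for c in checklist
                  if any(o.startswith("OBJ-07") for o in c["objectives"])]
      print(covering)  # both items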

  • Software Testing Process: Operationalize Test Objectives (cont.)

    Automated test objectives should be translated into an appropriate form for the automated test tool being used. For example, when using SQA TeamTest Test Manager, a test requirements hierarchy would be created. Automated test requirements are stored in the tool's test repository and used as the basis for constructing automated test scripts.

  • Software Testing Process: Outputs - Test Objectives

    The statement of the test objectives is really a statement of the test requirements. It can be created using any word processing or spreadsheet package, or implemented with automated testing tools such as SQA TeamTest. In SQA's Manager product, the test objectives/requirements are input as a test requirements hierarchy and stored in the test repository; each branch within the requirements tree can have sub-branches, and those sub-branches can have sub-branches of their own. SQA is only one example; other automated testing tools have their own forms of test objectives/requirements documentation.
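
    A minimal sketch of such a requirements hierarchy as a generic tree in which every branch can carry sub-branches; this is not SQA's actual storage format.

      class Requirement:
          def __init__(self, name):
              self.name = name
              self.children = []   # sub-branches, which may have sub-branches

          def add(self, child):
              self.children.append(child)
              return child

      root = Requirement("Retailer profiles")
      view = root.add(Requirement("View profile"))
      view.add(Requirement("View profile with no purchase history"))
      root.add(Requirement("Delete profile"))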

  • Software Testing Process: Objectives Completion

    The important consideration is that each requirement, and how it is validated, is documented; test requirements are useless unless they can be satisfied. Important test metrics that should be calculated and reported are the percentage of test requirements covered by test cases and the percentage of test requirements that have been successfully validated. The statement of objective completion criteria does not have to be a separate document.
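
    A minimal sketch of the two metrics named above; the requirement records are invented sample data.

      requirements = {
          "REQ-1": {"cases": ["TC-1", "TC-2"], "validated": True},
          "REQ-2": {"cases": ["TC-3"],         "validated": False},
          "REQ-3": {"cases": [],               "validated": False},
      }

      total = len(requirements)
      covered = sum(1 for r in requirements.values() if r["cases"])
      validated = sum(1 for r in requirements.values() if r["validated"])
      print(f"covered by test cases:  {100 * covered / total:.0f}%")   # 67%
      print(f"successfully validated: {100 * validated / total:.0f}%") # 33%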


  • Software Testing Process: Construct Test Plans

    The purpose of the test plans is to specify the WHO, WHAT, WHEN, and WHY of the test design, test construction, test execution, and test analysis process steps. The test plan also describes the test environment and required test resources, provides measurable goals by which management can gauge testing, and facilitates communication within the test team, between the test team and the development team, and between the test team and management.

  • Software Testing Process: Construct Test Plans (cont.)

    The test plan is an operational document that is the basis for testing. It describes test strategies and test cases. Although operational in nature, it has administrative aspects, so it can be used for both new development and maintenance. It should also be considered an evolving document.

  • Software Testing Process

    Test plan development requires about a third of the entire testing effort. Test plans should be constructed from a standardized template that follows an accepted test plan standard; it is also acceptable to combine pieces from several outlines to construct a hybrid standard. In addition, test plans should be designed with test automation in mind.

  • Software Testing Process: Inputs

    Requirements document
    Software design document
    Risk analysis results
    Prioritized test objectives

    Tasks - Construction of Test Plans

    Unit test plan
    System test plan
    Integration test plan
    User acceptance test plan

  • Software Testing Process: System Test Plan

    Construct the system test: identify the business scenarios to be tested. The user will employ the application system to conduct business day in and day out, according to daily, weekly, monthly, and/or yearly business cycles. This task identifies the business processes that can be translated into scripted test scenarios. A system test scenario is a set of test scripts that reflects users' behavior in a typical business situation: for example, the user logs on to the system, views some retailer profile accounts, updates other retailer profiles, and deletes still others.
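
    A minimal sketch of that scenario as an ordered set of scripted steps with expected results; the step wording is illustrative.

      scenario = [
          ("log on to the system",          "main menu is displayed"),
          ("view retailer profile R-100",   "profile details match the records"),
          ("update retailer profile R-200", "changes are saved and redisplayed"),
          ("delete retailer profile R-300", "profile no longer appears in search"),
      ]

      for step, (action, expected) in enumerate(scenario, start=1):
          print(f"Step {step}: {action} -> expect: {expected}")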

  • Software Testing Process: System Test Plan (cont.)

    Installation and checkout test plan
    Release test plan
    Integration test plan: specifies the build process

  • Software Testing Process: Unit Test Plan

    Unit testing phases:

    Test planning: general approach, resources, schedule, features to test
    Develop the test set: design the tests, implement the test plan
    Execute the tests: run test procedures, evaluate the test results

  • Software Testing Process: Unit Testing Activities

    Plan the approach
    Determine the features to be tested
    Define the tests
    Design, construct, and execute the tests
    Evaluate the unit test effort

  • Software Testing Process: Design and Construct Test Cases

    The purpose of this step is to apply test case design techniques to design and build a set of "intelligent" test data. The data must address the system as completely as possible, but must also focus on high-risk areas and on system/data components where weaknesses are traditionally found (system boundaries, input value boundaries, output value boundaries, etc.). The test data set will be a compromise between economics and need: it is not economically feasible to test every possible situation, so the test data will contain a representative sampling of test conditions.

  • Software Testing Process: Inputs

    System test plan
    Integration test plan
    Unit test plan

    Tasks

    Specify the test case design strategies: black box, white box, and other testing techniques

  • Software Testing Process: Design Test Cases

    This task involves applying the test case design techniques to identify the test data values that will be constructed. The developers are responsible for designing unit testing test cases; the test team will aid them in the use of the techniques. For integration, system, and regression testing, the test team applies the techniques themselves. The test case description can be documented manually or stored in the test repository of an automated testing tool suite.

  • Software Testing Process: Design Test Cases (cont.)

    If the test cases are documented automatically, the format and content will be limited to what the input forms and test repository can accept, and each tool vendor will probably store a different set of descriptive elements. Whether manual or automated, the test case description should minimally contain the elements illustrated in the sample output document. The test data MUST include a description of the expected behavior for each test case.
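
    A minimal sketch of a test case description that always carries its expected behavior; the field list is an assumption, since the actual minimum elements are given in the sample output document cited above.

      from dataclasses import dataclass

      @dataclass
      class TestCase:
          case_id: str
          objective: str          # the test objective/requirement it covers
          inputs: dict            # the test data values
          expected_behavior: str  # MUST be present for every test case

      tc = TestCase("TC-17", "OBJ-07 input validation",
                    {"date_field": "31-02-2024"},
                    "field is rejected with an 'invalid date' message")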

  • Software Testing Process: Construct Test Data

    Construct test data: construction of the actual physical data.
    Construct test logs: separate test logs for unit testing, integration testing, system testing, and user acceptance testing.
    Quality review of test data: this task assures that the test data are complete, correct, and consistent. It is a simple review process that follows the walkthrough format. The output should be a list of corrections, modifications, and additions necessary to strengthen the test data.

  • Software Testing Process: Execute Unit and Integration Testing

    The purpose of unit testing is to prove that the software functions as individual units, to the level that each unit does what it is intended to do. The developer who built the software unit under test should conduct the unit test, with the testers acting as consultants. The purpose of integration testing is to prove that the software units work together properly: that the integrated units do what they are intended to do, and do not do things they are not intended to do. The test team should conduct the integration test.

  • Software Testing Process: Tasks - Approve Test Environment

    The purpose of this task is to verify that the required test environment (e.g. test lab, networked client machines, etc.) is in place before testing starts. There are many types of test environments, ranging from very informal ones where testing is completed at developer/tester workstations to very formal test laboratories where software is tested under conditions approximating the production milieu. The formality of the test climate is a function of available resources, and most test environments lie somewhere between formal and informal. What this means to the test team is that some resources will be scarce and some will not be available.

  • Software Testing Process: Approve Test Environment (cont.)

    Some classes of tests may have to be omitted. It is important that the testing manager, along with the test team members, inform the development team and their management of any shortcomings that will affect testing. A formal statement of test environment/resource deficiencies should be written, and all responsible parties must sign off on the list.

  • Software Testing Process: Approve Test Resources

    The purpose of this task is to review and verify that all the required resources have been allocated and are available before testing begins. The resources can include people, funding, time, software tools, and the test environment. The required test resources should be identified and put in a written statement that will be distributed to development management. Both the development manager and the testing manager should review the resources, and the test team must review any changes that are requested.

  • Software Testing Process: Approve Test Resources (cont.)

    When consensus has been achieved, all involved parties should sign off on the list of required resources. It is important to identify all of the required testing resources before testing begins; however, that is not always possible. Whenever identified test resources will be scarce or unavailable, it is the test team's responsibility to make the testing manager aware of the consequences. The testing manager must either find the resources or make development managers aware of the problem and its possible consequences.

  • Software Testing Process: Approve Test Resources (cont.)

    Deficiencies in the test resources may ultimately affect the implementation of the test plans, and some classes of tests may have to be omitted. It is important that the testing manager, along with the test team members, inform the development team and their management of any missing resources that will affect testing. A formal statement of test resource deficiencies should be written, and all responsible parties must sign off on the list.

  • Software Testing Process: Execute Unit Tests

    This task is the responsibility of each developer. The tests should be conducted so that the developer is able to certify that his or her software does what it is supposed to do. Test team members act as consultants to developers during unit test execution.

  • Software Testing Process: Retest Problem Areas

    This task is usually done informally by the software developer without much supervision. It can be improved considerably if the developers can be persuaded to use an automated defect-tracking tool: a product such as SQA TeamTest Test Manager helps management track and control unit testing, as well as track the types of problems that occur. Getting developers to agree to use an automated defect-tracking tool is not an easy task, but it can be done.

  • Software Testing Process: Retest Problem Areas (cont.)

    It requires support from the project team lead and from the project manager, and reassurance that the defects entered will not be used against the programmers for adverse purposes. When buy-in has been accomplished, the results are excellent: developers ultimately see the value of the product and how it can help them organize their own work. This task is cyclic in nature: retesting continues until pre-specified stopping criteria are met. For example, retesting could continue until all severity 1, 2, and 3 errors have been corrected, or it could be a function of the time allocated to retesting.
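
    A minimal sketch of such a stopping criterion; the defect records and time budget are invented.

      defects = [
          {"id": "D-1", "severity": 1, "open": False},
          {"id": "D-2", "severity": 3, "open": True},
          {"id": "D-3", "severity": 4, "open": True},  # below the threshold
      ]

      def may_stop_retesting(defects, hours_left):
          # Stop when no severity 1-3 defect remains open, or time runs out.
          blocking = [d for d in defects if d["open"] and d["severity"] <= 3]
          return not blocking or hours_left <= 0

      print(may_stop_retesting(defects, hours_left=8))  # False: D-2 still open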

  • Software Testing Process: Retest Problem Areas (cont.)

    Retesting problems found during unit testing is the responsibility of the developer. The test team will aid developers during unit testing to retest executable files when necessary.

  • Software Testing Process: Analyze and Approve Unit Test Results

    The purpose of this task is to determine which test cases failed and which passed, to identify which failures should be entered into the defect database, and to provide weekly reports for defect tracking purposes. A defect and a failure are two different things. There are many definitions of the term "defect"; normally, it is thought of as a discrepancy of the observed from the expected. In software, a defect is when the observed output differs from the expected output. This is an oversimplification, but it is enough for our purposes.

  • Software Testing Process: Analyze and Approve Unit Test Results (cont.)

    A failure results when the software does not perform. A defect may or may not cause a software failure: it may cause some output to be incorrect while other output remains correct. It is the responsibility of the project-level Configuration Control Board (CCB) to determine which problems are defects and which are failures, to assign a severity level to each defect, and to set repair priorities.

  • Software Testing Process: Execute Integration Tests

    This task is the responsibility of the test team. Its focus is to prove that the integrated software units do what they should do, and do not do what they should not do. This test is conducted in a formal manner: the testers use integration test cases that have predicted outputs, and the results are recorded in structured test logs. The structured test logs and test scripts drive the integration testing process.

  • Software Testing Process: Retest Problem Areas

    This task is cyclic in nature: retesting continues until pre-specified stopping criteria are met. For example, retesting could continue until all severity 1, 2, and 3 errors have been corrected, or it could be a function of the time allocated to retesting. Problem reports drive the retesting task. The test team will retest only CCB-approved and corrected problems that are received in the form of legitimate software builds.

  • Software Testing Process: Analyze and Approve Integration Test Results

    The purpose of this task is to determine which test cases failed and which passed, to identify which failures should be entered into the defect database, and to provide weekly reports for defect tracking purposes.

    Outputs: unit test log; integration test log

  • Software Testing Process: Execute System Test

    The purpose of the system test is to use the system in a "controlled" test environment, but to do so as the user would use it in the production environment. The system test should prove that the complete system will do what it is supposed to do, and that it will not do anything it is not supposed to do.

  • Software Testing Process: Tasks

    Approve the system test environment
    Approve test resources
    Execute system tests
    Retest problem areas
    Analyze system test results

  • Software Testing Process: Approve System Test Environment

    The purpose of this task is to verify that the required test environment (e.g. test lab, networked client machines, etc.) is in place before testing starts. There are many types of test environments, ranging from very informal ones where testing is completed at developer/tester workstations to very formal test laboratories where software is tested under conditions approximating the production milieu. The formality of the test climate is a function of available resources, and most test environments lie somewhere between formal and informal.

  • Software Testing Process: Approve System Test Environment (cont.)

    What this means to the test team is that some resources will be scarce and some will not be available. Deficiencies in the test environment will ultimately affect the implementation of the test plans, and some classes of tests may have to be omitted. It is important that the testing manager, along with the test team members, inform the development team and their management of any shortcomings that will affect testing. A formal statement of test environment/resource deficiencies should be written, and all responsible parties must sign off on the list.

  • Software Testing Process: Approve Test Resources

    The purpose of this task is to review and verify that all the required resources have been allocated and are available before testing begins. The resources can include people, funding, time, software tools, and the test environment. The required test resources should be identified and put in a written statement that will be distributed to development management.

  • Software Testing Process: Approve Test Resources (cont.)

    Both the development manager and the testing manager should review the resources, and the test team must review any changes that are requested. When consensus has been achieved, all involved parties should sign off on the list of required resources. It is important to identify all of the required testing resources before testing begins; however, that is not always possible. Whenever identified test resources will be scarce or unavailable, it is the test team's responsibility to make the testing manager aware of the consequences.

  • Software Testing Process: Approve Test Resources (cont.)

    The testing manager must either find the resources or make development managers aware of the problem and its possible consequences. Deficiencies in the test resources may ultimately affect the implementation of the test plans, and some classes of tests may have to be omitted. It is important that the testing manager, along with the test team members, inform the development team and their management of any missing resources that will affect testing. A formal statement of test resource deficiencies should be written, and all responsible parties must sign off on the list.

  • Software Testing Process: System Tests

    This task is the responsibility of the test team. Its focus is to prove that the completed system does what it should do, and does not do what it should not do. This test is conducted in a formal manner: the testers use scenario-based system test scripts that have predicted outputs, and the results are recorded in structured test logs. The structured test logs and test scripts drive the system testing process.

  • Software Testing Process

    For mainframe projects, system testing is carried out during the formal testing phase of the SDLC. As far as testing is concerned, this stage consumes the largest chunk of testing resources. System testing activities are controlled by the system test plan, which was developed much earlier, during the system design phase of the SDLC, and is now implemented. The purpose of system testing is to prove that the system meets its objectives.

  • Software Testing Process

    Some experts argue that the purpose of system testing is to prove that the system meets its requirements. This is not entirely true unless you consider acceptance testing a type of system testing, because the purpose of acceptance testing is to demonstrate that the system meets the user's requirements. Acceptance testing is a validation process; system testing, in the strictest sense, is a verification process. Regardless of whether it represents verification or validation, system testing presents an "external" view of the system.

  • Software Testing Process: Types of System Test

    For clarity, we will define and discuss THREE types of system testing, one of which incorporates what is known as acceptance testing: System Verification Testing, Customer Verification Testing, and Customer Validation Testing. The first TWO are designed to verify that the system meets its design objectives; they are destructive in nature, intended to find areas where the system does not accomplish its objectives so that corrections can be made. The third is designed to validate the system and is intended to be a positive (confidence-building) experience demonstrating how the system fulfills its requirements.

  • Software Testing Process

    The key words here are "verify" and "validate." Testing is a verification process when it occurs in a non-live test environment using non-live test data. This is exactly what happens in what we commonly call system testing. A test team consisting of members of the development group, system operations staff, quality analysts, auditors, and end-user representatives is at the helm during the test. The project team leader directs the test, which is actually conducted by the IS staff with input from the quality analysts, users, and others present. This is more properly termed System Verification Testing.

  • Software Testing Process

    Another instance of testing as a verification procedure involves repeating the previous test with the users conducting it. The test, like the one before, is executed in a simulated non-production environment, hence the name Customer Verification Testing. In fact, this test is conducted in the same environment as the previous system test, detail for detail: the same typical processing day is used, together with the previously used scripts. In this case, the IS staff acts in an advisory capacity.

  • Software Testing Process

    Validation means that something actually works as it was intended to work; if so, the product being tested meets its requirements. The purpose of Customer Validation Testing is to demonstrate that the system works: the only way to prove that a product functions correctly is to use it in the real world on real data. The common thread among the three kinds of system tests is that they are formally implemented. "Formal" means according to a written plan; a major criticism of system testing in general is that it is frequently done informally.

  • Software Testing Process

    A system test must be comprised of scripts intended to thoroughly exercise each system function. Furthermore, the scripts must be "destructive": they must include attempts to make the system do things it is not supposed to do.

    Establishing and Documenting the Expected Results

    System test scripts are not complete unless each includes the expected result(s). In many situations, however, the results may vary from day to day, week to week, or month to month. When this is the case, the result associated with a particular script depends on the values stored in the database from one update cycle to another.

  • Software Testing Process

    Hard-coding the expected result would mean that the test script must be changed each time the database changes, causing a test script maintenance nightmare. The best way to handle this problem is to have the system testers run dynamic database queries while conducting the system test: the query returns the database record that is the basis of the result on the screen, so what is displayed becomes what is expected based on the record in the database. The limitation of validating a test script this way is that the tester can only say that the displayed results are correct given the current state of the database; it sheds no light on whether the data itself is correct.
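
    A minimal sketch of the dynamic-query technique; the table and column names are invented.

      import sqlite3

      def check_displayed_balance(conn, account_id, displayed_value):
          # Query the record that backs the on-screen result; what is stored
          # becomes what is expected.
          row = conn.execute(
              "SELECT balance FROM account WHERE id = ?", (account_id,)
          ).fetchone()
          # A pass only proves the screen matches the current database state;
          # it says nothing about whether the stored data itself is correct.
          return row is not None and row[0] == displayed_value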

  • Software Testing Process

    The validation of the database itself has to be completed by seeding the database loads/updates with known values that should appear in the database.

    System Testing and Automation Tools

    The ultimate use of automated testing tools would be a completely automated and unattended system test. This, however, is not yet feasible. In fact, system testing requires a tremendous amount of manual labor even with an automated testing tool: automated test scripts cannot emulate the spontaneity of users.

  • Software Testing Process: System Testing and Automation Tools (cont.)

    The paradox of automated test scripts is that they cannot be used for verification unless the system is working correctly when you record them. The tester first executes the different system features while recording and inserting test cases; the completed scripts are then played back to verify the system functions. Only after verification can the scripts be replayed to regression test the system. So system test cases must be manually written and executed the first time around.

  • Software Testing Process: Retest Problem Areas

    This task is cyclic in nature: retesting continues until pre-specified stopping criteria are met. For example, retesting could continue until all severity 1, 2, and 3 errors have been corrected, or it could be a function of the time allocated to retesting. Problem reports drive the retesting task. The test team should retest only CCB-approved and corrected problems that are received in the form of legitimate software builds.

  • Software Testing Process: Analyze System Test Results

    The purpose of this task is to determine which test cases failed and which passed, to identify which failures should be entered into the defect database, and to provide weekly reports for defect tracking purposes.

  • Software Testing Process: Design and Automate Regression Test Suites

    This task identifies how much, and what areas, of the test data should be saved for regression testing. The decision should be based on the system requirements, the risk analysis results, and the problems encountered during the unit, integration, and system tests; a small selection sketch follows the output note below.

    Output: system test log
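
    A minimal sketch of regression selection driven by the inputs named above; the thresholds and record fields are illustrative assumptions.

      def select_for_regression(cases, risk_scores, defect_counts,
                                risk_threshold=4, defect_threshold=1):
          # Keep a case if its area scored high in the risk analysis or
          # produced defects during unit/integration/system testing.
          keep = []
          for case in cases:
              area = case["area"]
              if (risk_scores.get(area, 0) >= risk_threshold
                      or defect_counts.get(area, 0) >= defect_threshold):
                  keep.append(case["id"])
          return keep

      cases = [{"id": "TC-1", "area": "billing"}, {"id": "TC-2", "area": "help"}]
      print(select_for_regression(cases, {"billing": 6}, {"help": 0}))  # ['TC-1']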

  • Software Testing Process: Analyze and Report Test Results

    The purpose of the test report is to indicate which system functions work and which do not. The report should describe the level to which the system meets the test objectives and include recommendations based on the system's success or failure in meeting them. This is a major report covering all of the testing activities that have been completed. Its audience is management (project sponsor, project manager, development manager, and user representative).

  • Software Testing Process: Tasks

    Test data analysis
    Formulate the test findings report
    Construct the test findings report
    Review the test findings report

  • Software Testing Process

    Thank You