GREY BOX TESTING: Web Apps & Networking. Session 10. Boris Grinberg, [email protected]


Page 1:

GREY BOX TESTING
Web Apps & Networking
Session 10
Boris Grinberg
[email protected]

Page 2:

Session 10 (4 Hours)

• Here are some things that we'll cover:
  – Test Conditions
    • Application-specific conditions
    • Environment-specific conditions
  – Test Types
  – Risk Management
  – Improving QA Process
  – Defect Management

Page 3:

Test Conditions

• Test conditions are critically important factors in Web application testing: they are the circumstances under which an application under test operates.

• There are two categories of test conditions, application-specific and environment-specific, which are described in the following slides.


Page 4:

Application-specific conditions

• An example of an application-specific condition is running the same word processor spell-checking test in Normal View and then again in Page View mode. If one of the tests generates an error and the other does not, you can deduce that a condition specific to the application is causing the error.

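The deduction above can be sketched as a parameterized check. Everything here is invented for illustration: the `spell_check` stand-in, its word list, and the view-mode names; a real test would drive the actual word processor in each mode.

```python
# Sketch of an application-specific condition test: the same
# spell-check exercise runs under two view-mode conditions.

def spell_check(text, view_mode):
    # Toy stand-in for the application's spell checker.
    dictionary = {"grey", "box", "testing"}
    # view_mode would select Normal View or Page View in a real run.
    return [word for word in text.split() if word not in dictionary]

def run_condition_test(view_mode):
    # Same input under each condition; only the condition varies.
    return spell_check("grey bxo testing", view_mode)

normal_view = run_condition_test("normal")
page_view = run_condition_test("page")
# If the two results differ, the view mode itself (an
# application-specific condition) is implicated in the error.
```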

Page 5:

Environment-specific conditions

• When an error is generated by conditions outside of an application under test, the conditions are considered to be environment-specific.

• In general, I find it useful to think in terms of two classes of operating environments (Static and Dynamic), each having its own unique testing implications:


Page 6:

Two classes of operating environments

• Static environments (configuration and compatibility errors). An operating environment in which incompatibility issues may exist, regardless of variable conditions such as processing speed and available memory.

• Dynamic environments (RAM, disk space, network bandwidth, etc.). An operating environment in which otherwise compatible components may exhibit errors due to memory-related or latency conditions.


Page 7:

Configuration and compatibility issues

• Configuration and compatibility issues may occur at any point within a Web system: client, server, or network. Configuration issues involve various server software and hardware setups, browser settings, network connections, and TCP/IP stack setups. Figures on the next slide illustrate two of the many possible physical server configurations, one-box and two-box, respectively.


Page 8:

Configuration Illustrations

[Figures: one-box configuration and two-box configuration]

Page 9:

Dynamic Operating Environments

• When the value of a specific environment attribute does not stay constant each time a test procedure is executed, the operating environment becomes dynamic. The attribute can be anything from resource-specific (available RAM, disk space, etc.) to timing-specific (network latency, the order of transactions being submitted, etc.).
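One way to make such dynamic attributes visible is to snapshot them alongside each test run, so failures can later be correlated with resource conditions. This is only a minimal sketch using the Python standard library; it records just two resource-specific attributes.

```python
# Sketch: capture dynamic environment attributes next to each
# test execution, for later correlation with failures.
import shutil
import time

def environment_snapshot():
    # disk_usage returns a (total, used, free) named tuple in bytes.
    usage = shutil.disk_usage("/")
    return {
        "timestamp": time.time(),
        "disk_free_bytes": usage.free,
        "disk_total_bytes": usage.total,
    }

snap = environment_snapshot()
```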

Page 10:

Test Types

• Test types are categories of tests that are designed to expose a certain class of error or verify the accuracy of related behaviors. The analysis of test types is a good way to divide the testing of an application methodically into logical and manageable groups of tasks. Test types are also helpful in communicating required testing time and resources to other members of the product team.


Page 11:

Acceptance Testing

• The two common types of acceptance tests are development acceptance tests and deployment acceptance tests.

• Release acceptance tests and functional acceptance simple tests are two common classes of test used during the development process. There are subtle differences in the application of these two classes of tests.


Page 12:

Release Acceptance Test (RAT)

• The release acceptance test (RAT), or build acceptance test (BAT), is run on each build release to check that the build is stable enough for further testing.

• Typically, this test suite consists of entrance and exit test cases, plus test cases that check mainstream functions of the program with mainstream data.

• Copies of the BAT can be distributed to developers so that they can run the tests before submitting builds to the testing group.

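A BAT suite of this kind can be sketched as a small set of must-pass checks. Both check functions below are hypothetical placeholders for real entrance/exit and mainstream-function tests.

```python
# Sketch of a build acceptance test (BAT) suite: a handful of
# must-pass checks run on every build release.

def check_app_starts():
    # Placeholder: launch the build and expect no crash.
    return True

def check_mainstream_flow():
    # Placeholder: exercise a mainstream function with mainstream data.
    return True

def run_bat(checks):
    failures = [check.__name__ for check in checks if not check()]
    # Any failure is a test stopper: suspend testing on this build,
    # report the failing criteria, and request a new build.
    return {"passed": not failures, "failures": failures}

result = run_bat([check_app_starts, check_mainstream_flow])
```

Developers could run the same suite before submitting builds to the testing group, as the slide suggests.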

Page 13:

QA procedures if the BAT fails

• If a build does not pass the BAT, it is reasonable to do the following:
  – Suspend testing on the new build and resume testing on the prior build until another build is received.
  – Report the failing criteria (there should be at least one test-stopper or catastrophic bug) to the development team.
  – Request a new build.

Page 14:

Functional Acceptance Simple Test

• The functional acceptance simple test (FAST) is run on each build release to check that key features of the program are appropriately accessible and functioning properly on at least one test configuration (preferably the minimum or common configuration).
  – This test suite consists of simple test cases that check the lowest level of functionality for each command, to ensure that task-oriented functional tests can be performed on the program. The objective is to decompose the functionality of a program down to the command level and then apply test cases to check that each command works as intended. No attention is paid to the combination of these basic commands, the context of the feature formed by these combined commands, or the end result of the overall feature.

FAST is also known as the minimum acceptance test (MAT).
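The command-level decomposition can be sketched as one simple case per command, with no attention to command combinations. The command table and expected results below are hypothetical.

```python
# Sketch of a FAST: exercise each command at its lowest level of
# functionality, in isolation. Commands and outputs are invented.

COMMANDS = {
    "open":  lambda: "opened",
    "save":  lambda: "saved",
    "close": lambda: "closed",
}

EXPECTED = {"open": "opened", "save": "saved", "close": "closed"}

def fast(commands):
    # One simple case per command; a pass/fail verdict per command name.
    return {name: fn() == EXPECTED[name] for name, fn in commands.items()}

results = fast(COMMANDS)
```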

Page 15:

Deployment Acceptance Test

• The configurations on which the Web system will be deployed often differ considerably from development and test configurations.

• Testers must consider this in the preparation and writing of test cases for installation time acceptance tests.

• This type of test usually includes the full installation of the applications to the targeted environments or configurations.


Think about a single-box deployment vs. a multiple-box deployment with firewalls, switches, routers, and so on.

Page 16:

Functionality and Feature-Level Testing

• Task-Oriented Functional Test
• Forced-Error Test
• Boundary Test
• System-Level Test
• Exploratory Test
• Load/Volume Test
• Stress Test
• Performance Test
• Fail-over Test
• Availability Test
• Regression Test
• Compatibility and Configuration Test
• Documentation Test
• Online Help Test
• Utilities/Toolkits and Collateral Test
• Install/Uninstall Test
• User Interface Tests
• Usability Tests
• Accessibility Tests

This is where we begin to do some serious testing, including boundary testing and other difficult but valid test circumstances.

Page 17:

Risk Management


Page 19:

Risk Management

• Web site risk management is a process that helps determine how an organization will be affected by exposure to risk on the Internet.
• Risk management can be used to minimize, control, or eliminate exposure to risks.
• Risk management is unavoidable for all Web development projects.
  – There are two kinds of risks examined when evaluating a project: opportunity risk, which is the loss from avoiding risk, and failure risk, which is the loss from taking a risk but failing to achieve the expected goal.

Page 20:

Risk Management and Loss

• Loss may be financial, due to downtime of a Web server, or it may be a loss of competitiveness in the Web market. It may even involve the development and acquisition of reusable software components or other valuable aspects of the Web site.

• Managing risks requires that you as the tester set up clear guidelines of how the risks related to the QA activities should be documented and tracked.


Page 21:

Risk Management Guidelines

• These guidelines should be a work in progress; the individuals who are responsible for the risk management assessment should be able to access and update them as needed. Risk management can be addressed throughout the Web planning phase.

• You need to think of the risks before testing, during testing, after testing, and then again when the Web site is actually deployed.


Page 22:

Several Risk Factors (slide 1 of 2)

Following are several risk factors:

• Probability. Probability is one measure used to determine the likelihood of the occurrence of a particular risk. The probabilities of risk are categorized as very low, low, medium, high, or very high. For example, server issues may be examined for their level of risk to the Web site. If the server goes down, it may have serious impact, which would make the risk very high.

Page 23:

Several Risk Factors (slide 2 of 2)

• Impact. Impact is used to determine the effect a risk would have on the project and how to handle the estimate of risk. Impact can be determined by categorizing risks as negligible, critical, or catastrophic.

• Overall risk. Overall risk is the risk to the project. The overall risk to the project can be determined by using estimates of risk probabilities and impacts. In calculating the overall risk, consider how this risk may affect other risks on the project, and make a note of them.


Page 24:

Risk Matrix Table

• A matrix can be used to determine the overall risk for each of effort, performance, and schedule.

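A minimal sketch of such a matrix, combining the probability and impact categories from the previous slides into an overall rating. The numeric scale and the thresholds are illustrative assumptions, not a standard.

```python
# Sketch: overall risk from a probability x impact lookup matrix.
# Scale values and thresholds are illustrative only.

PROBABILITY = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
IMPACT = {"negligible": 1, "critical": 2, "catastrophic": 3}

def overall_risk(probability, impact):
    score = PROBABILITY[probability] * IMPACT[impact]
    if score >= 10:
        return "high"
    if score >= 5:
        return "medium"
    return "low"
```

For example, a "very high" probability of a "catastrophic" server outage rates as high overall risk, while a "low" probability of a "negligible" issue rates as low.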

Page 25:

Planning for Risks

• A plan should be developed to address each risk. A good rule of thumb for your risk toolkit is to ask the questions who, what, when, where, why, and how:
  – Who is responsible for the risk management activity?
  – What information is needed to track the status of the risk?
  – When will resources be needed to perform the activity?
  – Where will the resources be used?
  – Why is the risk important?
  – How will the risk plan be implemented?

Page 26:

Action and Contingency Planning

• There are two different types of risk planning.
  – Action planning is used to mitigate the risk by way of an immediate response. The probability of the risk occurring and/or the potential impact of the risk may be mitigated by dealing with the problem early in the project. This type of planning is considered proactive.
  – Contingency planning is used to monitor the risk and invoke a predetermined response. A trigger is set up, and if the trigger is reached, the contingency plan is put into effect.

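The trigger mechanism of contingency planning can be sketched as a simple monitor. The error-rate metric and the threshold value are hypothetical; a real plan would define its own trigger criteria.

```python
# Sketch of contingency planning: watch a metric against a
# predefined trigger and invoke the contingency plan when reached.

def contingency_monitor(error_rate, trigger_threshold=0.05):
    # Below the trigger, normal operation continues; at or above it,
    # the predetermined contingency response is invoked.
    if error_rate >= trigger_threshold:
        return "contingency plan invoked"
    return "normal operation"
```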

Page 27:

Concerns in planning for risks (1 of 3)

• Anticipate risks. When you are testing the Web site, you should have some preconceived idea of what part of the application may cause you problems. An example is testing your Web application to see if it will generate the correct calculated results from the shopping basket.

• Eliminate risks. Potential problems can be identified before the testing process begins, so that the developer and programmer can deal with those issues during unit testing. It is important to make sure that the hardware you are using will work with the software before you begin any testing.


Page 28:

Concerns in planning for risks (2 of 3)

• Reduce impact of risk. You can do several things to reduce the impact of risk. It is important to know everything there is to know about the Web application and previous releases of the Web application project. To lower risks for your project, make sure the testing team understands the basic components of the application and how the testing process should progress. It is also important to make sure that unit testing is done after each phase of the coding process. Put into effect a complete test plan and document each phase of the software development. The testers should follow prewritten scripts of the anticipated outcome of each test.

Page 29:

Concerns in planning for risks (3 of 3)

• Stay in control when things do not go as expected. As you test your Web site, expect that something will go wrong. Do not panic; instead, take control of the process and anticipate the next course of action as it pertains to the Web test process. Set up an analysis of the Web testing process and revise and rerun anything that did not go according to plan.

• The best defense against certain key types of risks is to prepare a contingency and tracking plan that can be used to process and update your plan.


Page 30:

Tracking Risks

• Tracking risks is essential to the risk management process; if triggers go off, the entire team needs to be informed so that contingency plans can go into effect.

• Tracking is also useful as the project comes to the end of its development phase. Past knowledge may increase the chances of risk prevention and improvement in future projects. Resources are important as a part of the risk tracking process.

• Tracking risks will enable you to identify risks and to follow through on the likelihood that the risks will occur on your Web application.


Page 31:

Risk Tracking Document

• Risks can be tracked by creating a tracking document. Each member of the team should submit a risk document for his or her particular responsibility. Following is an example of what should be included in the risk document:
  – Name of risk
  – Description of the risk
  – Steps involved that would cause the risk to happen
  – Results
  – Probability of the risk
  – Resources affected
  – Comments
  – Related risks
  – Alternate plan
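Such a risk tracking document can be captured as a structured record. This is a minimal sketch with illustrative field values; the field names mirror the checklist above.

```python
# Sketch: the risk tracking document as a structured record.
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    name: str
    description: str
    steps_to_trigger: list      # steps that would cause the risk to happen
    results: str
    probability: str            # e.g. very low .. very high
    resources_affected: list
    comments: str = ""
    related_risks: list = field(default_factory=list)
    alternate_plan: str = ""

# Illustrative entry a team member might submit.
risk = RiskRecord(
    name="Server outage",
    description="Web server goes down during peak load",
    steps_to_trigger=["traffic spike", "no failover configured"],
    results="site unavailable",
    probability="high",
    resources_affected=["web server", "checkout flow"],
)
```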

Page 32:

Handling the Risk (slide 1 of 4)

• There are different ways to monitor how you would like to handle risks. Following are different methods that will help you analyze and address your risks:
  – Decide on the specific component of the Web application that appears to have a high risk. Will you be looking at the entire Web site, a single component, or even a list of components?
  – Determine the severity of concern. Use a scale of normal, high, and low to rank the severity. Everything is presumed to be a normal risk unless there is reason for an assessment of a higher or a lower risk. Selecting a scale of concern that is meaningful to your business is critical in the assessment of the level of severity.

Page 33:

Handling the Risk (slide 2 of 4)

– Make individual input from your team a key part of identifying and foreseeing risks. Understanding the situation in which the Web site is set will help in developing a risk assessment. The team members will determine the different levels of risk that they foresee happening with their part of the Web site project.

– After each risk is identified, decide on the importance of the risk and its severity. For each area of development a decision should be made to determine whether there would be risks in this particular area of development. You should then determine the level of severity. Record how you think this will affect your risk assessment. Determine how this type of risk is critical to the advancement of the Web site project.


Page 34:

Handling the Risk (slide 3 of 4)

– Set up a plan that will be able to handle other risks as they occur. There will surely be risks that you may not even know about. It is critical to be able to deal with the uncertainties as well as the planned or foreseeable risks.

– Record unknowns that will affect your ability to analyze the risk. During the process, you may feel that you are not able to assess the risk probability. If a certain portion of the Web application is complex, you may be unable to determine the type of risk involved. A risk is anything that may have a negative impact on your business or the performance of your business. As you progress through the risk analysis phase, it helps to make a list of risk items that are critical to your business.


Page 35:

Handling the Risk (slide 4 of 4)

– Double-check the risk distribution. It's common to end up with a list of risks in which everything is considered to be equally risky. That may indeed be the case. On the other hand, it may be that your distribution of concerns is skewed because you're not willing to make tough choices about what to test and what not to test. Once you end up with a list of distributed risks, it is important to make sure you double-check them by taking a few examples of equal risks and asking whether those risks really are equal. Take some examples of risks that differ in magnitude and ask if it really does make sense to spend more time testing the higher risk and less time testing the lower risk. Confirm that the distribution of risk magnitudes is correct.


Page 36:

Contingency Planning (slide 1 of 2)

• Contingency planning is a vital part of software development. All contingency plans should address the following areas:
  – Objectives of the plan (for example, continue normal operations, continue in a degraded mode, abort the function as quickly and as safely as possible, and so on).

– Criteria for invoking the plan (for example, missing a renovation milestone, experiencing serious system failures, and so on).

– Expected life of the plan. (How long can operations continue in contingency operating mode?)

– Roles, responsibilities, and authority.


Page 37:

Contingency Planning (slide 2 of 2)

– Plan(s) creation and checkout of resource constraints to plan for each contingency and objective.
– Training on and testing of plans.
– Procedures for invoking contingency mode.
– Procedures for operating in contingency mode.
– Resource plan for operating in contingency mode (for example, staffing, scheduling, materials, supplies, facilities, temporary hardware and software, communications, and so on).
– Criteria for returning to normal operating mode.
– Procedures for returning to normal operating mode.
– Procedures for recovering lost or damaged data.


Page 38:

Improving QA Process

The performance of the QA process can be improved by applying three readily understood and executed steps:

• Define the process
• Measure process performance
• Improve the process

Repeat this cycle continuously. Measure progress with reporting solutions.

Page 39:

Types of Reporting Solutions

• QA tool built-in reports
  – Pros: reports come with the tool
  – Cons: very limited or too generic
• Custom-built reports
  – Pros: designed for specific needs
  – Cons: extremely expensive (e.g., hiring developers)
• Third-party reporting solutions
  – Pros: variety, flexibility, affordable support
  – Cons: infrastructure changes (e.g., setting up a report server)

Page 40:

Examples of QA Oriented Reports

• Daily Reports (Bug Summary, Regression Status)
• Weekly Reports (Project Bug Progress)
• Test Plan Status Report
• Test Cases Readiness Report
• Test Plan Execution Report
• Test Case Priority Report
• Execution History Report
• Quality Index Report
• Bug Triage Report, Bug Tracking Report
• Team Activity Report

Page 41:

Defect Assignee Report

[Screenshot posted with permission of Reliable Business Reporting Inc., www.rbreporting.com]

The report shows:
• Engineer Names
• Not Resolved
• Not Verified

Page 42:

Team Progress Report

[Screenshot posted with permission of Reliable Business Reporting Inc., www.rbreporting.com]

The report shows:
• Engineer Names
• Bug Statistics
• Test Case Execution Status

Page 43:

Report Generation

• Access online; print; save as Excel or PDF
• Receive by email; save on a file server
• Flexibility, parameter customization

Reliable Business Reporting Inc., www.rbreporting.com

Page 44:

Defect Management

• Defects determine the effectiveness of the testing we do.
  – If there were no defects, we would not have a job.
• There are two points worth considering here: either the developers are so strong that no defects arise, or the test engineer is weak. In many situations, the second proves correct.


Page 45:

What is a Defect?

• For a test engineer, a defect is any of the following:
  – Any deviation from the specification
  – Anything that causes user dissatisfaction
  – Incorrect output
  – Software that does not do what it is intended to do

Page 46:

Bug / Defect / Error

• Software is said to have a Bug if its features deviate from the specification.
• Software is said to have a Defect if it has unwanted side effects.
• Software is said to have an Error if it gives incorrect output.

For a test engineer, however, all three are the same; the distinctions above are only for documentation or indicative purposes.

Page 47:

Defect Taxonomies (1 of 2)

Categories of Defects:

All software defects can be broadly categorized into the types mentioned below:
• Errors of commission: something wrong is done
• Errors of omission: something left out by accident
• Errors of clarity and ambiguity: different interpretations
• Errors of speed and capacity

Page 48:

Defect Taxonomies (2 of 2)

However, my previous slide is a broad categorization; here I have for you a host of varied types of defects that can be identified in different software applications:

1. Conceptual bugs / Design bugs
2. Coding bugs
3. Integration bugs
4. User Interface Errors
5. Functionality
6. Communication
7. Command Structure
8. Missing Commands
9. Performance
10. Output
11. Error Handling Errors
12. Boundary-Related Errors
13. Calculation Errors
14. Initial and Later States
15. Control Flow Errors
16. Errors in Handling Data
17. Race Condition Errors
18. Load Condition Errors
19. Hardware Errors
20. Source and Version Control Errors
21. Documentation Errors
22. Testing Errors

Page 49:

Life Cycle of a Defect

• The following self-explanatory figure explains the life cycle of a defect. [Figure not reproduced in this transcript.]

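Since the figure is not reproduced here, a defect life cycle can be sketched as a state machine. The states and transitions below are a commonly used pattern (new, assigned, fixed, verified, reopened, deferred, closed), not taken from the slide's figure.

```python
# Sketch: a typical defect life cycle as a state machine.
# Each state maps to the set of states it may legally move to.

TRANSITIONS = {
    "new":      {"assigned"},
    "assigned": {"fixed", "deferred"},
    "fixed":    {"verified", "reopened"},
    "reopened": {"assigned"},
    "verified": {"closed"},
    "deferred": {"assigned"},
    "closed":   set(),
}

def advance(state, target):
    # Reject transitions the life cycle does not allow.
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# Happy path: a defect is found, fixed, verified, and closed.
state = "new"
for step in ["assigned", "fixed", "verified", "closed"]:
    state = advance(state, step)
```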

Page 50:

Q & A Session