Hypothesis Based Testing (HBT) Cookbook
DESCRIPTION
A cookbook on Hypothesis Based Testing, a personal scientific test methodology. Presented by STAG Software.
© 2011-12, STAG Software Private Limited. All rights reserved. STEM is the trademark of STAG Software Private Limited. HBT is the intellectual property of STAG Software Private Limited.
This e-book is presented by STAG Software Private Limited www.stagsoftware.com
Hypothesis Based Testing (HBT) is a scientific personal test methodology that is unique in its approach to ensuring cleanliness of software. It is a goal-focused approach, commencing with the setting up of cleanliness criteria, hypothesising potential defect types that can impede these, and then performing activities to ensure that testing is purposeful and therefore effective and efficient. The central theme of HBT is constructing a hypothesis of the potential defects that are probable, and then scientifically proving that they do not in fact exist. Testing activities such as test strategy, test design, tooling and automation become purposeful because they are focused on uncovering the hypothesised defect types, ensuring that these activities are done scientifically and in a disciplined manner.
HBT is based on sound engineering principles geared to deliver the promise of guaranteeing cleanliness. Its core value proposition is hypothesising potential defects that may be present in the software and then allowing you to engineer a staged detection model to uncover the defects faster and cheaper than other typical test methodologies.
HBT fits into any development methodology and weaves into your organisational test process. HBT is powered by STEM™ (STAG Test Engineering Method), a collection of EIGHT disciplines of thinking. STEM provides the foundation for scientific thinking to perform the various activities. It is a personal scientific inquiry process assisted by techniques, principles and guidelines to decompose the problem, identify cleanliness criteria, hypothesise potential defect types, formulate test strategy, design test cases, identify metrics and build appropriate automation.
Inspirations from nature
Physical & chemical properties of matter allow us to: classify matter; understand behaviours and interactions; check purity.
How can we use a similar train of thought to identify “properties of cleanliness” and then “types of defects”?
Properties of matter
HBT has been inspired by certain ideas, discussed below. The inspirations come from "Properties of matter", "Fractional distillation", "Sherlock Holmes" and the "Picture of baby growth".
[Diagram] End user expectations define the "properties of the system", i.e. the cleanliness criteria; these are affected by issues in specifications, structure, environment and behaviour - the Potential Defect Types (PDT).
Inspirations from nature
From : http://withfriendship.com
This is a technique to separate mixtures whose components have different boiling points.
In software systems, a variety of defect types may be present. How can we apply this thought process to optimally uncover the defects, by "fractionally distilling" them?
Can we separate these types of defects on the basis of certain properties and optimally uncover the defects?
Fractional distillation
Inspirations from nature
The picture shows the health of the foetus/baby: its size, shape, parts, and the types of issues not present.
Seeking inspiration, can we depict the health of software system in a similar manner? Can we measure the ‘intrinsic quality’ at a stage?
As we progressively evaluate in a staged manner, certain types of defects are detected & removed, and therefore quality grows. Can we chart this as a "cleanliness index"?
Picture of baby growth
Source :http://www.environment.ucla.edu/media/images/Fetal_dev5.jpg
Inspirations
Sherlock Holmes was a person who applied deductive logic to solve mysteries.
How can we draw inspiration from Holmes to hypothesise the types of defects that may be present, and scientifically prove whether they are?
Sherlock Holmes
HBT - A Personal Scientific Test Methodology
Test methodologies focus on activities driven by a process and powered by tools, yet successful outcomes still depend a lot on experience.
Typically, methodologies operate at the organisational level.
HBT, on the other hand, is a personal scientific methodology enabled by STEM™, a defect detection technology, to deliver "Clean Software".
Scientific approach to detecting defects
- Cleanliness criteria: What is the end user's expectation of "good quality"?
- Potential defect types: What types of issues can result in poor quality?
- Evaluation stage: When should I uncover them?
- Test types: How do I uncover them?
- Scenarios/cases: What are the test cases? Are they enough?
- Test techniques: What techniques generate the test cases?
- Scripts: How do I execute them?
- Metrics & management: How good is it? How am I doing?
How is HBT different from other test methodologies?

The typical test methodologies in vogue rely on the strength of the process and the capability of the individual to ensure high quality within the given cost and time constraints. They lack the scientific rigour to enable full cost optimisation, and more often rely on automation as the means to drive down cost and cycle time. For example, they do not provide a strong basis for assessing the quality of test cases in terms of their defect-finding potential, and therefore for improving effectiveness and efficiency.
HBT, on the other hand, enables you to set a clear goal for cleanliness, derive potential types of defects and then devise a "good net" to ensure that these are caught as soon as they are injected. It is intensely goal-oriented and provides you with a clear set of milestones, allowing you to manage the process quickly and effectively.
[Diagram] Typical methodology: a goal drives activities that are powered by experience and hopefully result in the goal being met. HBT: the goal drives activities that are powered by defect detection technology (STEM).
Hypothesis Based Testing - HBT 2.0: A Quick Introduction
A quick introduction to HBT

HBT is a personal, scientific test methodology: a SIX-stage methodology powered by the EIGHT disciplines of thinking of STEM™. Given the system under test (SUT), cleanliness criteria are set up, potential defect types are hypothesised, and cleanliness is assessed through a nine-stage defect removal filter.

The SIX stages of DOING:
S1. Understand EXPECTATIONS
S2. Understand CONTEXT
S3. Formulate HYPOTHESIS
S4. Devise PROOF
S5. Tooling SUPPORT
S6. Assess & ANALYSE

The EIGHT disciplines of THINKING (STEM), built on 32 core concepts:
D1. Business value understanding
D2. Defect hypothesis
D3. Strategy & planning
D4. Test design
D5. Tooling
D6. Visibility
D7. Execution & reporting
D8. Analysis & management

HBT, the personal test methodology, is powered by STEM, the defect detection technology: SIX stages of DOING powered by EIGHT disciplines of THINKING.
32 core concepts of STEM

D1 Business value understanding: Landscaping, Viewpoints, Reductionist principle, Interaction matrix, Operational profiling, Attribute analysis, GQM
D2 Defect hypothesis: EFF model, Defect centricity principle, Negative thinking, Orthogonality principle, Defect typing
D3 Test strategy & planning: Orthogonality principle, Tooling needs assessment, Defect centred AB, Quality growth principle, Techniques landscape, Process landscape
D4 Test design: Reductionist principle, Input granularity principle, Box model, Behaviour-Stimuli approach, Techniques landscape, Complexity assessment, Operational profiling, Test coverage evaluation
D5 Tooling: Automation complexity assessment, Minimal babysitting principle, Separation of concerns, Tooling needs analysis
D6 Visibility: GQM, Quality quantification model
D7 Execution & Reporting: Contextual awareness, Defect rating principle
D8 Analysis & Management: Gating principle, Cycle scoping
Connecting HBT stages to the scientific approach to detecting defects:
S1: Expectations
S2: Cleanliness criteria
S3: Potential defect types
S4: Staged & purposeful detection; complete test cases
S5: Sensible automation
S6: Goal-directed measures
Clear baseline (HBT stages S1, S2): set a clear goal for quality.

Example: clean water implies (1) colourless, (2) no suspended particles, (3) no bacteria, (4) odourless.

What information (properties) can be used to identify this?
- Marketplace, customers, end users
- Requirements (flows), usage, deployment
- Features, attributes
- Stage of development, interactions
- Environment, architecture
- Behaviour, structure
A goal-focused approach to cleanliness (HBT stage S3): identify potential defect types that can impede cleanliness.

Example PDTs: data validation, timeouts, resource leakage, calculation, storage, presentation, transactional, ...

A scientific approach to hypothesising defects looks at FIVE aspects - Data, Logic, Structure, Environment & Usage - from THREE views - error injection, fault proneness & failure.

Use the STEM core concepts: Negative thinking (aspect) and the EFF model (view).

"A Holmes-ian way of looking at properties of elements"
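The FIVE-aspects-by-THREE-views grid above lends itself to a simple enumeration; a minimal Python sketch (not STAG tooling, and the prompt wording is invented) that guarantees no aspect/view cell is skipped while brainstorming:

```python
from itertools import product

ASPECTS = ["Data", "Logic", "Structure", "Environment", "Usage"]
VIEWS = ["Error injection", "Fault proneness", "Failure"]

def hypothesis_grid():
    """One brainstorming prompt per (aspect, view) cell, so no cell is missed."""
    return [f"{view} view of {aspect}: what could go wrong?"
            for aspect, view in product(ASPECTS, VIEWS)]

grid = hypothesis_grid()
assert len(grid) == 15  # 5 aspects x 3 views, every cell considered
```

Working through each of the 15 prompts in turn is one way to make the "negative mentality" systematic rather than ad hoc.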
Levels, Types & Techniques - STRATEGY (HBT stage S4)

NINE levels to cleanliness:
L1. Input cleanliness
L2. Input interface cleanliness
L3. Structural integrity
L4. Behaviour correctness
L5. Environment cleanliness
L6. Attributes met
L7. Flow correctness
L8. Clean deployment
L9. End user value

Each level is assigned the potential defect types (PDTs) it filters, and each PDT is uncovered by appropriate test types (TT) using suitable test techniques (T).
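The levels/PDT/technique strategy can be sketched as a lookup table; a hypothetical Python illustration (the PDT-to-level assignments below are invented) of the idea that every PDT has one earliest level responsible for uncovering it:

```python
# Nine cleanliness levels as named in the text; PDT assignments are invented
# examples, not HBT's published mapping.
LEVELS = {
    1: "Input cleanliness",
    2: "Input interface cleanliness",
    3: "Structural integrity",
    4: "Behaviour correctness",
    5: "Environment cleanliness",
    6: "Attributes met",
    7: "Flow correctness",
    8: "Clean deployment",
    9: "End user value",
}
PDT_LEVEL = {"PDT1": 1, "PDT2": 1, "PDT3": 4, "PDT4": 6}  # PDT -> earliest level

def filtered_at(level):
    """PDTs that the given level (1-9) is responsible for uncovering."""
    return sorted(pdt for pdt, lvl in PDT_LEVEL.items() if lvl == level)

assert filtered_at(1) == ["PDT1", "PDT2"]
assert len(LEVELS) == 9
```

The design point is the single-responsibility assignment: a PDT filtered at L1 should not be left to leak through to L9, which is what makes the staged detection cheaper.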
Countable test cases & fault coverage (HBT stage S4)

FAULT COVERAGE: test cases for a given requirement shall have the ability to detect specific types of defects. Requirements (R1, R2, R3) are therefore traced not only to test scenarios (TS) and test cases (TC) but also to potential defect types (PDT1, PDT2, PDT3) - requirements & fault traceability.

Use the STEM core concepts - Box model, Behaviour-Stimuli approach, Techniques landscape, Coverage evaluation - to model behaviour, create behaviour scenarios and create stimuli (test cases).

Irrespective of who designs, the number of scenarios/cases shall be the same - COUNTABLE.
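The "countable" claim follows from modelling behaviour as explicit conditions; a small Python sketch (the conditions are invented) showing that the test-case count is fully determined by the model, not by the designer:

```python
from itertools import product

# Behaviour modelled as explicit conditions (values are illustrative):
conditions = {
    "amount":  ["below_min", "valid", "above_max"],
    "account": ["active", "frozen"],
}

# The test cases are simply the enumeration of condition combinations,
# so any designer working from the same model derives the same set.
test_cases = [dict(zip(conditions, combo))
              for combo in product(*conditions.values())]

assert len(test_cases) == 3 * 2  # countable: same count irrespective of designer
```

Full enumeration is shown here for clarity; in practice a technique such as pairwise selection would prune the combinations, but the count remains derivable from the model.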
Focused scenarios + good automation architecture (HBT stage S5)

Test scenarios are organised by the nine cleanliness levels, from L1 (input cleanliness) through L9 (end user value). Level-based test scenarios yield shorter scripts that are more flexible to change and easier to maintain.
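Why level-based scenarios shorten scripts can be sketched as a layered design; the function names below are illustrative, not a real framework:

```python
# The action layer is the only place that knows *how* to drive the system;
# level scripts state only *what* their own level checks, so a change to
# the interface touches one layer rather than every script.

def submit_form(data):
    """Action layer: pretend-submit; accepts only fully filled forms."""
    return {"accepted": all(value != "" for value in data.values())}

def l1_input_cleanliness():
    """Level 1 script: input validation concern only."""
    assert submit_form({"name": ""})["accepted"] is False

def l4_behaviour_correctness():
    """Level 4 script: behaviour concern only."""
    assert submit_form({"name": "Ada"})["accepted"] is True

l1_input_cleanliness()
l4_behaviour_correctness()
```

Because each script exercises one level's concern through the shared action layer, the scripts stay short and a change is absorbed in one place.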
"Cleanliness Index" - improved visibility (HBT stage S6)

At each stage, a quality report records, for every requirement (R1-R5), whether each cleanliness criterion (CC1-CC4) is met, partially met or not met. As evaluation progresses through the levels (L1, L2, L3, L4, ...), the test types (TT1-TT8) uncover the PDTs assigned to each level, and cleanliness grows stage by stage - the "growth of a baby".
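One way to turn the met/partially met/not met report into a chartable number is to score each cleanliness criterion and average; a minimal Python sketch (the scoring weights are an assumption, not HBT's published formula):

```python
# Hypothetical scoring: met = 1.0, partially met = 0.5, not met = 0.0.
SCORE = {"met": 1.0, "partial": 0.5, "not_met": 0.0}

def cleanliness_index(criteria_status):
    """Average score over criteria; criteria_status maps name -> status."""
    if not criteria_status:
        return 0.0
    return sum(SCORE[s] for s in criteria_status.values()) / len(criteria_status)

index = cleanliness_index(
    {"CC1": "met", "CC2": "met", "CC3": "partial", "CC4": "not_met"})
assert index == 0.625  # (1 + 1 + 0.5 + 0) / 4
```

Charted per test cycle, such an index grows as each stage removes its assigned defect types, mirroring the baby-growth picture.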
Six-staged methodology to produce clean software

The act of validation in HBT consists of "SIX stages of DOING". It commences with the first two stages, focused on a scientific approach to understanding the customer expectations and the context of the software. One of the key outcomes of the first two stages is the "Cleanliness Criteria", which gives a clear understanding of the expectation of quality. In the third stage, the Cleanliness Criteria and the information acquired in the first two stages are used to hypothesise the potential types of defects that are probable in the software. The fourth stage consists of devising a proof to scientifically ensure that the hypothesised defects can indeed be detected cost-efficiently. The fifth stage focuses on building the tooling support needed to execute the proof. The last stage is about executing the proof and assessing whether the software does indeed meet the Cleanliness Criteria.
S1: Who are the customers and end users, what do they need, and what do they expect?
S2: What are the features of the system, what technologies are used, what is the architecture?
S3: What types of defects may be present? (In the fishing analogy: what types of fish to catch?)
S4: What is the strategy, the plan, the test scenarios/cases? (Sherlock Holmes)
S5: What tools do I need to detect the defects? (The boat in the fishing analogy)
S6: How am I doing? How is quality? (The fisherman)
Stage #1 : Understand EXPECTATIONS
- Understand the marketplace for the system
- Understand the technology(ies) used
- Understand the deployment environment
- Identify end user types & the number of users for each type
- Identify business requirements for each user type
The perception that end-users have of how well the product delivers their needs denotes the quality of the software/system. "Needs" represent the various features that the software/system must have to allow the end-user to fulfil their tasks effectively and efficiently. "Expectations", on the other hand, represent how well the needs are fulfilled.
The final software/system may be deployed in different marketplaces, addressing the needs of various types of customers. Hence it is imperative that we understand the various target markets (i.e. marketplaces) where the software or system will be deployed. There could be different types of customers in a marketplace, so it is necessary to identify the various types of customers and then the various types of end-users within each customer. What we have done is to start from the outside, i.e. the marketplace, and adopt a customer/end-user centric view to understand the needs and expectations.
Once we have identified the various types of customers and the corresponding end-users, we can move on to understand the various technologies that make up the software or system, and also the deployment environment. The intent is to get a good appreciation of the "construction components" and the target environment of deployment. We should have a good understanding of the internal aspects of the system, not merely the external aspects.
Now we are ready for a detailed analysis of the various types of end-users and the typical number of users of each type. Subsequently, we need to identify the various business requirements, i.e. the "needs", of each end-user type.
At the end of the stage, the objective is to have a good understanding of the various end-users and their needs paving the way to understanding expectations clearly.
Needs & Expectations

Product: e.g. a pencil. Customers in: Education, Drawing, Corporate. End users: Kids, Seniors, Artists, Management, Draftsmen, Engineering, Admin.

Needs are typically the features that allow the user to get the job done. Expectations are how well the need is satisfied. Remember functional & non-functional requirements?

For one end user type (say, kids in education):
- NEEDS: should write; should have an eraser
- EXPECTATIONS: should be attractive; should be non-toxic; lead should not break easily

For another (say, draftsmen in engineering):
- NEEDS: should write; should not need sharpening
- EXPECTATIONS: thickness should be consistent; a variety of thickness should be available; a variety of hardness should be available
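The needs/expectations split can be modelled directly; a minimal Python sketch using the pencil example (the `Requirement` class itself is illustrative, not part of HBT):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    kind: str  # "need" (functional) or "expectation" (how well the need is met)

# The pencil example from the slide, for one end user type:
pencil_reqs = [
    Requirement("Should write", "need"),
    Requirement("Should have an eraser", "need"),
    Requirement("Should be non-toxic", "expectation"),
    Requirement("Lead should not break easily", "expectation"),
]

needs = [r for r in pencil_reqs if r.kind == "need"]
expectations = [r for r in pencil_reqs if r.kind == "expectation"]
assert len(needs) == 2 and len(expectations) == 2
```

Keeping the two kinds distinct in the requirement map helps later, when expectations must be made testable as attributes.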
What does “understanding” involve?
A good understanding of what is expected is key to effective testing. To accomplish this, it is imperative that we commence by understanding who the various types of end users are, their requirements, and subsequently the expectations they have of these. Deep domain knowledge helps immensely. But what if this is a domain that I am not very conversant with? Is there a scientific way to understand?
Understanding is a non-linear activity: it is about identifying the various elements and establishing connections between them. In the process of connecting the dots, missing information is identified, leading to intelligent questions. Seeking answers to these questions deepens the understanding.
“Good testing is about asking intelligent questions leading to deeper understanding.”
These are some of the elements that need to be understood. Some of the information elements are "external to the system", i.e. marketplace, customer types, end users and business requirements, while some are "internal to the system", i.e. features, architecture, technology etc.
Stage #1 (Understand EXPECTATIONS) focuses on "external information" while Stage #2 (Understand CONTEXT) focuses on "internal information".
Information extracted & artefacts generated
HBT Stage #1

Information: marketplace, customers, user types, requirements, deployment environment, technology, lifecycle stage, #users per type.

Artefacts: system overview, user type list, requirement map.
At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.
In Stage #1, the focus is on external information relating to the marketplace, customers, end users and business requirements. This stage is useful to get the bigger picture of the system, its potential usage and how it is deployed.
The key outcomes, as demonstrated by the artefacts, are:
- The big picture of the system
- The various end users ascertained for different classes of customers in different marketplaces
- A list of business requirements for each type of end user
“Good understanding is key to effective testing. Identifying who will use what is the beginning to become customer-focused”
Deliverables from Stage #1
System overview: should contain a good overview of the marketplace, the various types of customers, the end-user types, the deployment environment and the technologies that will be used to build the system.

User type list: should contain a list of the various types of users for different types of customers in various market segments.

Requirement map: should contain a list of the business requirements and high-level technical features mapped to the various user types.
STEM discipline applied in Stage #1

The STEM discipline "Business value understanding" is applied in this stage of HBT. The two STEM core concepts of "Landscaping" and "Viewpoints" are useful in this stage to scientifically understand the expectations.
Stage #2 : Understand CONTEXT

In this stage the objective is to understand the technical context in terms of the various features, their relative business value and the profile of usage, and ultimately to arrive at the cleanliness criteria. Note that at this stage we are moving inward, to get a better understanding of the technical features of the system.
Having identified the various business requirements mapped to each type of end-user, the next logical step is to drill down to the various technical features for each business requirement. It is important to understand that the various technical features that constitute the entire system do not really work in isolation. Therefore it is necessary to understand the interplay of the features, i.e. the dependencies of a feature on other features. Understanding this dependency is very useful at later stages of the life cycle, particularly to regress optimally.
We now have a list of requirements and the corresponding technical features mapped to each end-user. We can now proceed logically to understand the profile of usage of each feature by the various end-users. To do this, it is important to understand the typical and maximum number of users for each user type, and then the volume of usage by each user for every technical feature. Since we already have a mapping between the end-user type and the technical feature, all we have to do is understand approximately how many times the feature will be used by a typical end-user of that type. The intent is to gain a deeper understanding of the usage profile, to enable effective strategy formulation at a later stage of HBT.
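The usage profile described above reduces to a multiplication summed per feature; a Python sketch with invented user types, features and numbers:

```python
# Number of users per user type, and uses of each feature per typical user
# (all figures are illustrative, e.g. operations per day).
users_per_type = {"clerk": 50, "manager": 5}
uses_per_user = {
    "clerk":   {"create_order": 40, "approve_order": 0},
    "manager": {"create_order": 2,  "approve_order": 30},
}

def usage_profile():
    """Total expected operations per feature across all user types."""
    profile = {}
    for utype, count in users_per_type.items():
        for feature, uses in uses_per_user[utype].items():
            profile[feature] = profile.get(feature, 0) + count * uses
    return profile

profile = usage_profile()
assert profile["create_order"] == 50 * 40 + 5 * 2   # 2010
assert profile["approve_order"] == 5 * 30           # 150
```

The resulting totals make it obvious where the bulk of real-world usage falls, which is exactly the input the later strategy stage needs.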
It is not sufficient that the features work correctly; it is equally important that the various attributes, i.e. the non-functional aspects of the features, are indeed met. Typically, non-functional aspects are identified at the highest system level and tend to be fuzzy. Good testing demands that each requirement is testable. In HBT, attributes are identified for each key feature and then aggregated to form the complete set of non-functional requirements. We do this in two stages: first identifying the critical success factors for the technical features, and thereby the business requirements, and then detailing the critical success factors to arrive at the non-functional requirements or attributes. Hence, after figuring out the usage profile, identify the success factors for each business requirement.
Stage #2 activities:
- Identify technical features and baseline them
- Understand dependencies
- Understand the profile of usage
- Prioritize the value of end user(s) and features
- Identify critical success factors
- Ensure attributes are testable
- Set up the cleanliness criteria
Good testing is not about testing all features equally; it is about learning to focus more on those requirements/features that affect the customer experience significantly. This does not imply that some requirements/features are unimportant, it simply means that some are more important. Before we start detailing the various attributes, it is worthwhile to prioritize the various requirements/features and also the various end-user types. To prioritize, start by ranking the various types of end users in terms of their importance to the successful deployment of the final system. Subsequently, rank the importance of each requirement/feature for each end-user type. At the end of this exercise, we should have a very clear understanding of the business value of each requirement/feature. Note that the understanding of the usage profile comes in very handy here.
Now we are ready to derive the various attributes from the previously identified success factors and ensure that they are testable. A testable requirement simply means that it is unambiguously possible to state whether it passed or failed after executing it. In the context of attributes, testability implies that each attribute does indeed have a clear measure/metric. Therefore it is necessary to identify the measures, and the expected value of each measure, for every attribute.
Having identified the various technical features and the corresponding attributes, the usage profile and the ranking of the requirements/features, we are now set to identify the various criteria that constitute the cleanliness of the intended software. Cleanliness criteria in HBT represent testable expectations, and provide a very strong basis for goal-focused testing: they allow one to identify potential types of defects and then formulate an effective strategy and a complete set of test cases. It is important that the cleanliness criteria are not vague or fuzzy.
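The testability rule above - every measure of every attribute needs an expected value - can be expressed directly; an illustrative Python sketch (the attribute records below are invented):

```python
def is_testable(attribute):
    """An attribute is testable only when it has at least one measure and
    each measure carries an expected value, so pass/fail is unambiguous."""
    return bool(attribute["measures"]) and all(
        m.get("expected") is not None for m in attribute["measures"])

# A fuzzy attribute with no measures vs. a crisp, measurable one:
fuzzy = {"name": "fast response", "measures": []}
crisp = {"name": "response time",
         "measures": [{"metric": "p95 latency (ms)", "expected": 200}]}

assert not is_testable(fuzzy)
assert is_testable(crisp)
```

Running such a check over the attributes list is one mechanical way to flush out fuzzy non-functional requirements before the cleanliness criteria are baselined.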
Information extracted & artefacts generated
HBT Stage #2

Information: features, usage, focus areas, attributes, interactions.

Artefacts: feature list, value prioritization matrix, usage profile, attributes list, interaction matrix, cleanliness criteria.
In Stage #2, the focus is on internal information relating to technical features, their interactions, focus areas, attributes, architecture and technology.
The key outcomes, as demonstrated by the artefacts, are:
- A clear list of technical features
- Ranking of features to focus on high-risk areas
- Profile of usage
- List of attributes
- Feature interactions
- Clarity of expectations outlined as "Cleanliness criteria"
Deliverables from Stage #2
STEM discipline applied in Stage #2

The STEM discipline "Business value understanding" is applied in this stage of HBT. The STEM core concepts of "Interaction matrix", "Operational profiling", "Attribute analysis" and "GQM" are useful in this stage to scientifically understand the context.
Feature list: should contain the list of technical features that forms the technical features baseline.

Value prioritization matrix: should contain the set of users, requirements and features ranked by importance.

Usage profile: should contain the profile of various operations by various end users over time.

Attributes list: should contain the key attributes stated objectively, i.e. with an expected value for every measure of each attribute.

Interaction matrix: should indicate which feature affects what. Note that this should list the interactions, not their details; the objective is a rapid understanding of the linkages.

Cleanliness criteria: should contain the criteria that need to be met to ensure that the deployed system is indeed clean.
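The interaction matrix deliverable can be kept as a simple adjacency mapping and reused later for regression scoping; a Python sketch with invented features:

```python
# Which feature affects what (linkages only, no details) - all names invented.
affects = {
    "login":   ["dashboard", "audit_log"],
    "pricing": ["checkout"],
}

def regression_scope(changed):
    """Features to re-test when `changed` is modified: itself plus the
    features it is recorded as affecting."""
    return [changed] + affects.get(changed, [])

assert regression_scope("login") == ["login", "dashboard", "audit_log"]
```

This is the "regress optimally" payoff mentioned earlier: the matrix answers, cheaply, what else must be re-tested when one feature changes.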
Cleanliness criteria

Cleanliness criteria are a mirror of expectations. The intention is to come up with criteria that, if met, ensure the system meets the expectations of the various end users. This is not to be confused with "Acceptance criteria", which are typically at a higher level. Acceptance criteria are typically "extrinsic" in nature, i.e. they describe aspects like long-duration running, migration of existing data, clean installation and running in the final deployment environment, and delivering stated performance under real-life load conditions.
Cleanliness criteria represent the "intrinsic quality", i.e. what properties should the final system have to be deemed clean? Use the properties of the FIVE aspects of Data, Business logic, Structure, Environment and Usage, as applied to your application, to arrive at criteria specific to your application.
Note that the cleanliness criteria should cover both the functional and non-functional requirements.
The recommended style of writing cleanliness criteria is: "That the system shall meet ...."
Examples:
- That the system is able to handle large data (need to qualify "large")
- That the system releases resources after use
- That the system displays meaningful progress for long-duration activities
- That the system is able to detect an inappropriate environment/configuration
Stage #3 : Formulate HYPOTHESIS
Having understood the expectations and the context, resulting in the formulation of the cleanliness criteria, we are ready to hypothesize the potential defects that could affect them. This is one of the most important stages of HBT: it results in a clear articulation of the various types of defects, and it forms the basis for the remaining stages of HBT.
The key idea is to use external information such as the feature's behaviour, environment, attributes and usage, and internal information such as the construction material, i.e. technology and architecture, to hypothesize the potential defects that may be present in the software under construction. Also note that the history of previous versions of the software, or of similar systems, can be used to construct and strengthen the hypothesis. Having hypothesized the potential defects, it is possible to scientifically construct a validation strategy and design adequate test cases, thereby ensuring that the final system to be deployed is indeed clean.
The FIVE key aspects useful for constructing hypotheses of defects are: data, business logic, structure, environment and usage. This HBT stage allows us to follow a structured & scientific approach to hypothesizing the potential defects, ensuring that we do not miss any.
Stage #3 : Formulate HYPOTHESIS (continued)
1. Identify potential faults for the five aspects - Data, Business logic, Structure, Environment, Usage
2. Identify potential failures of the five aspects
3. Identify potential errors that could be injected into the five aspects
4. Identify the potential defects (PDs), combine them and remove duplicates
5. Group similar PDs to form Potential Defect Types (PDTs)
6. Map PDTs to the elements-under-test, i.e. features/requirements
First, use external information such as the data specification and the business logic specification to identify potential defects. The information related to data that could help is: data type, boundaries, volumes, rate, format and data interrelationships. The intent should be to get into a "negative mentality" and think of what can go wrong with respect to all the information related to the data, and then produce a list of potential defects.
Now use the information related to the business logic to identify the potential defects. Business logic, or the intended behaviour, primarily transforms the various inputs, i.e. input data, into outputs that the user values. The intention is to identify potential transformation losses. The information specific to business logic that is useful for arriving at potential defects is: the various conditions and their linkages, values of conditions, exception handling conditions, access control, and dependencies on other parts of the software. Once again, the intent is to get into a "negative mentality" and identify erroneous flows of business logic.
Up to now the focus has been on using external information, i.e. the specification of data and business logic, to identify potential defects. Now focus on internal information, i.e. the structure of the system and the construction materials (language, technology) used to build it, to hypothesize potential defects. Structure at the highest level represents the deployment architecture, while structure at the lowest level represents the structure of the code. Some of the structural information that could be useful to hypothesize defects: flow of control, resource usage, distributed architecture, interfacing techniques, exception handling, timing information, threading, layering. As explained above, continue with the same train of thought, examining this information with the intent of identifying potential problems in the structure.
Stage #3 : Formulate HYPOTHESIS (continued)

Having identified potential defects using the behavioural and structural information, examine information related to the environment and how it can affect the deployed system. By environment, we mean the associated hardware and software on which the system is deployed, and the hardware, software and application resources used by the system. The objective is to examine carefully how these can affect the finally deployed system. Some of the key information related to the environment that could be useful: hardware/software versions, system access control, application configuration information, speed of hardware (CPU, memory, hard disk, communication links), environment configuration information (e.g. #handles, cache size etc.), and system resources (hardware, OS and other applications).
Until now we have taken a fault-centric approach, looking for potential faults (aka defects) by examining external or internal information. In addition, we can also view the system from potential failure points and then identify the potential defects. It is also possible to examine the system from an error-injection point of view: understand the kinds of errors that could be injected into the system to irritate the potential defects, if any. The objective is to ensure that we have examined the system from all three views (error-centric, fault-centric & failure-centric) and thereby have not missed any potential defects.
A failure-centric approach demands that we wear an end-user hat and identify the potential failures that could cause business loss. The cleanliness criteria formulated earlier come in very handy here, as they force us to think like a customer/end-user. What we are trying to do is ensure that we have considered all the potential failures and therefore hypothesized the potential defects.
Now move to a user-centric view and examine the various ways an end-user could abuse the system, by identifying the various ways errors could be injected into it. Note that an end user does not always connote a physical person; it could be another system that interacts with the system via some interface. So examine the various points of interaction, look at the possibilities of error injection, and then hypothesize the potential defects that could get irritated by these errors. The kinds of information that could be useful here are: workflows, data access, interesting ways of using the system, accessibility, environmental constraints faced by the physical end-user and potential deviant ways of using the system.
Then consolidate the potential defects and group similar ones into potential defect types (PDTs). Finally, map the PDTs to the various elements-under-test, i.e. features/requirements. Now we have a clear notion of what types of defects we should look forward to uncovering in which parts of the system.
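The consolidation and mapping steps above can be sketched as simple data structures. This is a minimal illustration only; the defect and feature names are hypothetical examples, not part of HBT:

```python
# Sketch: consolidating potential defects (PDs) into potential defect
# types (PDTs) and mapping PDTs to the elements-under-test.
# All defect and feature names here are hypothetical examples.

# PD catalog: each PDT groups similar potential defects
pd_catalog = {
    "PDT1: User interface issues": [
        "PD1: Spelling mistakes in UI",
        "PD2: UI elements not aligned",
    ],
    "PDT2: Data validation issues": [
        "PD3: Accepts out-of-bounds data",
    ],
}

# Fault traceability matrix: element-under-test -> PDTs expected there
fault_traceability = {
    "Feature: Login": ["PDT1: User interface issues",
                       "PDT2: Data validation issues"],
    "Feature: Report export": ["PDT1: User interface issues"],
}

for feature, pdts in fault_traceability.items():
    print(feature, "->", ", ".join(pdts))
```

This makes the goal concrete: for each feature, the matrix states what types of defects the tests for that feature are expected to uncover.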
35
Information extracted & artefacts generated
HBT Stage #3
Artefacts
PD catalog
Fault traceability matrix
Information
Data
Structure
Environment
Business logic
Usage
Attributes
Past defects
At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.
The key outcomes as demonstrated by the artefacts are:
‣ List of potential defect types
‣ Mapping between PDTs & the elements-under-test i.e. Feature/Requirement
In Stage #3, the focus is on hypothesizing PDTs using the FIVE aspects of Data, Business logic, Structure, Environment & Usage from THREE views - Error-centric, Fault-centric & Failure-centric.
36
Deliverables from Stage #3
STEM Discipline applied in Stage #3
The STEM discipline “Defect hypothesis” is applied in this stage of HBT. The STEM core concepts of “Negative thinking”, “EFF model”, “Defect centricity principle” and “Orthogonality principle” are useful in this stage to scientifically hypothesize defects.
PD catalog
Fault traceability matrix
Should contain the list of potential defects and the potential defect types
Should contain the mapping between the potential defect types/potential defects and features/requirements.
37
Stage #4 : Devise PROOF (Part #1: Test Strategy & Planning)
HBT being a goal-focused test methodology, the intent is to figure out an optimal approach to detecting the potential defects in the system. Therefore strategy in HBT is about staging the order of defect detection, identifying the tests needed to uncover the specific defect types and finally choosing the test techniques best suited for each type of test.
Typically we have looked at the levels of testing - unit, integration and system - from the aspect of the “size” of the entity-under-test. Unit test is typically understood as being done on the smallest component that can be independently tested. Integration test is typically understood as being done once the various units have been integrated. System test is typically seen as the last stage of validation and is done on the whole system.
What is not necessarily clear is the specific types of defects expected to be uncovered by each of these test levels. In HBT, the focus shifts to the specific types of defects to be detected, and therefore the act of detection is staged to ensure an efficient detection approach.
In HBT, the notion is of quality levels, where each quality level represents a milestone towards meeting the final cleanliness criteria. In other words, each quality level represents a step in the staircase of quality. The notion is to ensure that defects that can be caught earlier are indeed caught. So the first step in formulating the strategy is to stage the potential defects and thereby formulate the various quality levels.
In HBT, there are NINE pre-defined quality levels, where the lowest quality level focuses on input correctness, progressing to the highest quality level, which validates that the intended business value is indeed delivered.
38
Stage #4 : Devise PROOF (Part #1: Test Strategy & Planning)
Identify detection process
Identify test techniques
Identify tooling needs
Formulate cycles
Understand scope
Choose quality levels
Identify test types
Identify risks
Estimate effort
Having identified the potential defect types to be detected at the various levels, it is now necessary to understand the specific types of tests needed to uncover them. In HBT each test is intensely goal-focused: a given type of test uncovers only a specific type of defect. The act of test type identification results in the specific types of tests to be done at each of the quality levels.
Now that we know what types of defects need to be detected when and where, and with what type of tests, we need to know how to design adequate yet optimal test cases for each type of test. In HBT, a test technique is one that allows us to design test cases. Based on the types of defects, i.e. the types of tests, we have to identify the test technique(s) best suited for uncovering these types of defects.
Now that we have a clearer idea of the various types of defects, the levels of detection, the types of tests and the test techniques, we are ready to identify the optimal detection process best suited for design/execution of test cases. The act of identifying the detection process also allows us to understand whether we need technology support to be able to execute test cases, and therefore paves the way for the automation strategy.
At this point we have a strategy and are ready to develop the detailed test plan. Some of the key elements of the test plan are the estimation of effort and time and the formulation of the various test cycles. In HBT, cycles are formulated first and then effort and time are estimated.
Finally, potential risks that could come in the way of executing the test plan are identified and a risk management plan is put in place.
In summary, a strategy in HBT is a clear articulation of the quality levels, test types, test techniques and detection process model.
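Such a strategy could be captured as a staged record, one entry per quality level. This is only a sketch: the level names echo HBT's quality levels, but the test-type/technique/PDT pairings below are illustrative assumptions, not HBT-prescribed mappings:

```python
# Sketch: an HBT-style strategy as staged quality levels, each with a
# goal-focused test type, a chosen design technique and the PDTs it
# targets. Pairings are illustrative, not prescribed by HBT.

strategy = [
    {"level": "L1 Input cleanliness",
     "test_type": "Data validation test",
     "technique": "Boundary value analysis",   # assumed choice
     "pdts": ["That the system may accept data out of bounds"]},
    {"level": "L4 Behaviour correctness",
     "test_type": "Functionality test",
     "technique": "Decision table testing",    # assumed choice
     "pdts": ["That the system may compute results incorrectly"]},
]

# Staged detection: the order of entries is the order of detection.
for stage in strategy:
    print(f'{stage["level"]}: {stage["test_type"]} '
          f'via {stage["technique"]}')
```

The point of the structure is that every test type answers "which PDTs, at which level, designed how" - the articulation the summary above calls for.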
39
Information extracted & artefacts generated
Information
PDT
Attributes
Techniques
Deployment env.
Scope of work
#Scenarios
Risks
Cleanliness criteria
Artefacts
Test strategy
Test plan
HBT Stage 1
HBT Stage #4
At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.
The key outcomes as demonstrated by the artefacts are:
‣ Test strategy
‣ Test plan
In Stage #4 (Part 1) the focus is on identifying the quality levels, test types, test techniques and the detection process.
40
Deliverables from Stage #4 (Part #1)
Test strategy
Test plan
STEM Discipline applied in Stage #4 (Part #1)
The STEM discipline “Strategy & Planning” is applied in this stage of HBT. The STEM core concepts of “Orthogonality principle”, “Quality growth principle”, “Defect centered activity breakdown” and “Cycle scoping” are useful in this stage to scientifically develop the strategy & plan.
Should contain the quality levels, test types, test techniques & detection process
Should contain the test effort estimate, cycle details and the potential risk & mitigation plan.
41
Stage #4 : Devise PROOF (Part #2: Test Design)
The act of designing test cases is a crucial activity in the test life cycle. Effective testing demands that the test cases possess the power to uncover the hypothesized potential defects. It is necessary that the test cases are adequate and also optimal.
In HBT the design is done level-wise, and within each level, test-type-wise. Based on the level & type, the test entity may differ. The test design activity for an entity, for a type of test, at a quality level consists of two major steps: first design the test scenarios and then generate the test cases for each scenario. Test scenarios are designed entity-wise and therefore there is a built-in notion of requirements traceability. In addition to requirements traceability, it is expected that the test scenarios and corresponding test cases are traced to the potential types of defects they are expected to uncover. This is termed “Fault traceability”.
42
Stage #4 : Devise PROOF (Part #2: Test Design)
For each scenario, generate test cases
Generate the test scenarios
Trace scenarios to PDT & entity-under-test
Identify test level to design for & identify entities
Identify conditions & data
Model the intended behaviour semi-formally
Assess the test adequacy by fault coverage analysis
The act of test design commences with the identification of the quality level and then the specific type of test for which the test cases are to be designed. This allows us to identify the various test entities for which test cases have to be designed.
Having identified the test entities, it is then required to identify the conditions that govern the business logic and the data elements that drive these conditions. Subsequent to this, build the behavioural model.
Use the behavioral model to generate test scenarios. Then for every scenario, identify the data that varies and then generate values for each data element. Finally combine the data values to generate the test cases.
Since we have designed scenarios/cases entity-wise, requirements traceability is built-in i.e. the designed scenarios/cases automatically trace to the entity (or requirement). Now map the scenarios/cases to the hypothesized PDTs to build the fault traceability matrix.
Finally assess the test adequacy of the designed scenarios/cases by checking test breadth, depth & porosity.
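One way to sketch the combination step described above - for every scenario, identify the data that varies, generate values, then combine them into cases - is a Cartesian product of the data values. The scenario and data values below are hypothetical, and a full product is only the naive combination approach, not an HBT prescription:

```python
# Sketch: generating test cases for one scenario by combining values
# of the data elements that vary. Scenario and values are invented.
from itertools import product

scenario = "Transfer funds between accounts"
data_values = {
    "amount": [0, 1, 9999, 10000],   # includes assumed boundary values
    "currency": ["INR", "USD"],
}

# Each test case is one combination of data values
test_cases = [
    dict(zip(data_values, combo))
    for combo in product(*data_values.values())
]

print(f"{scenario}: {len(test_cases)} cases")  # 4 x 2 = 8 cases
```

In practice a technique such as pairwise combination would prune this set; the sketch only shows where the cases come from and why each traces back to its scenario.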
43
Information extracted & artefacts generated
HBT Stage #4
Artefacts
Test scenarios & cases
Requirements traceability matrix
Fault traceability matrix
Information
Data
Logic
Structure
PDT
Defect escapes
Conditions
Attributes
At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.
The key outcomes as demonstrated by the artefacts are:
‣ Test scenarios & cases
‣ Requirements traceability matrix
‣ Fault traceability matrix
In Stage #4 (Part 2), the focus is on designing test scenarios/cases that can be proved to be adequate and have the power to uncover the hypothesized PDTs.
44
Deliverables from Stage #4 (Part #2)
Test scenarios & cases
Requirements traceability matrix
STEM Discipline applied in Stage #4 (Part #2)
Fault traceability matrix
The STEM discipline “Test design” is applied in this stage of HBT. The STEM core concepts of the Reductionist principle, Input granularity principle, Box model, Behavior-Stimuli approach, Techniques landscape, Complexity assessment, Operational profiling and Test coverage evaluation are useful in designing test scenarios/cases scientifically.
Should contain the test scenarios/cases for each entity for all types of tests at various quality levels
Should contain the mapping between the scenarios/cases and the entity-under-test
Should contain the mapping between the scenarios/cases and the PDTs
45
Stage #4 : Devise PROOF (Part #3: Metrics Design)
For each of these goals, identify questions to ask
For each of the aspects identify the intended goal to meet
To answer these questions, identify metrics
Identify when you want to measure and how to measure
Identify progress aspects
Identify adequacy(coverage) aspects
Identify risk aspects
In this stage, the objective is to design measurements to manage the process of validation in an effective and efficient manner. Since HBT is a goal-focused test methodology, it is necessary to devise measurements that enable us to clearly show progress towards this goal.
The measurements in HBT are categorized into progress-related measures, test effectiveness measures and system risk measures. Therefore it is necessary to identify the various aspects related to progress, effectiveness and system health.
Once the aspects are identified, the key goals related to them are identified and then the metrics formulated. Finally it is necessary to understand when and how to measure.
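The aspect → goal → question → metric breakdown described above might be captured like this. The goal, questions, metric names and collection details are invented examples, not STEM-prescribed metrics:

```python
# Sketch of a Goal-Question-Metric breakdown for the "progress"
# aspect. All goals, questions and metrics are illustrative.

gqm = {
    "aspect": "Progress",
    "goal": "Complete planned test execution on schedule",
    "questions": {
        "How much of the planned scope is executed?": {
            "metric": "% test cases executed",
            "when": "daily", "how": "from execution log"},
        "Are defects being found at the expected rate?": {
            "metric": "defects found per day",
            "when": "daily", "how": "from defect tracker"},
    },
}

for question, m in gqm["questions"].items():
    print(f'{question} -> {m["metric"]} ({m["when"]})')
```

Each metric thus carries its own justification (the question), and each question ties back to a goal, which is what keeps the metrics chart goal-focused.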
46
Information extracted & artefacts generated
HBT Stage #4
Artefacts
Metrics chart
Information
Progress aspects
Process aspects
Organization goals
When & how to measure
Quality aspects
At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.
The key outcomes as demonstrated by the artefacts are:‣Chart of metrics that are goal-focused
In Stage #4 (Part 3), the focus is on designing metrics that will ensure that we stay on course towards the goal.
47
Deliverables from Stage #4 (Part #3)
Metrics chart
STEM Discipline applied in Stage #4 (Part #3)
The STEM discipline “Visibility” is applied in this stage of HBT. The STEM core concepts of GQM and the Quality quantification model are useful in designing metrics that are goal-focused.
Should contain the list of metrics, collection frequency and how these meet the goal.
48
Stage #5 : TOOLING support
Evaluate tools
Identify the order in which scenarios need to be automated
Design automation architecture
Develop scripts
Perform tooling benefit analysis
Identify automation scope
Assess automation complexity
Debug and baseline scripts
In this stage, the objective is to analyze the support we need from tooling/technology to perform the tests. Automation does not always imply only scripting, i.e. automating the designed scenarios; it could also involve development of a test bench or custom tooling to enable the system to be tested.
This stage of HBT allows you to identify the tooling needs, understand the issues/complexity involved, perform a cost-benefit analysis, evaluate existing tools for suitability/fitment and finally devise a good architecture that provides for flexibility/maintainability before embarking on automation.
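The cost-benefit step above can be as simple as comparing cumulative manual execution effort against automation build, run and upkeep effort over the planned cycles. The function and all the figures below are hypothetical illustrations, not values HBT prescribes:

```python
# Sketch: a crude automation benefit analysis. Returns the first test
# cycle at which cumulative automation effort drops below cumulative
# manual effort, or None within 100 cycles. All inputs are examples.

def breakeven_cycle(manual_hrs, build_hrs, run_hrs, upkeep_hrs):
    """First cycle at which automation becomes cheaper, else None."""
    auto_total = build_hrs      # one-time scripting investment
    manual_total = 0.0
    for cycle in range(1, 101):
        manual_total += manual_hrs
        auto_total += run_hrs + upkeep_hrs
        if auto_total < manual_total:
            return cycle
    return None

# e.g. 40 h manual regression per cycle, 120 h to build scripts,
# 4 h to run them, 8 h upkeep per cycle -> pays off at cycle 5
print(breakeven_cycle(40, 120, 4, 8))
```

If the breakeven cycle falls beyond the number of cycles actually planned, automating those scenarios fails the benefit analysis.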
49
Information extracted & artefacts generated
At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.
The key outcomes as demonstrated by the artefacts are:
‣ The reason for tooling & automation
‣ Challenges involved
‣ Requirements of tooling
‣ Scope of tooling & automation
‣ Architecture of automation
‣ Automated scripts
In Stage #5, the focus is on identifying tooling requirements and building automated scripts that deliver value & ROI.
HBT Stage #5
Artifacts
Needs & benefits document
Complexity assessment report
Tooling requirements
Automation scope
Automation architecture
Tooling & Scripts
Information
Automation objectives
Scope
Scenarios to automate
Scenario fitness
Technologies used
Tool info.
Complexity info.
50
Deliverables from Stage #5
STEM Discipline applied in Stage #5
The STEM discipline “Tooling” is applied in this stage of HBT. The STEM core concepts of Automation complexity assessment, Minimal babysitting principle, Separation of concerns and Tooling needs analysis are useful in adopting a disciplined approach to tooling & automation and delivering the ROI.
Needs & benefits document
Should contain the technical & business need for automation.
Complexity assessment report
Tooling requirements
Automation scope
Automation architecture
Tooling & Scripts
Should contain the technical challenges of automation
Should contain the requirements expected out of automation
Should contain scope of automation
Should contain the architecture adopted to building tooling/scripts
The actual tools/scripts for performing automated testing
51
Stage #6 : Assess & ANALYZE
Record status of execution
Record learnings from the activity and the context
Analyze execution progress
Quantify quality and identify risk to delivery
Identify test cases/scripts to be executed
Execute test cases, record outcomes
Record defects
Update strategy, plan, scenarios, cases/scripts
This stage is where you execute the test cases, record defects, report to the team and take appropriate action to ensure that the system/application is delivered on time with the requisite quality.
52
Information extracted & artefacts generated
At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.
The key outcomes as demonstrated by the artefacts are:
‣ Report of test execution & progress
‣ Defect report
‣ Report on cleanliness aka quality
‣ Learnings from execution resulting in improved strategy, scenarios & cases
‣ Any other key learnings
In Stage #6, the focus is on ensuring a disciplined execution, intelligent analysis and continuous learning to ensure that the goal is reached.
HBT Stage #6
Artifacts
Execution status report
Defect report
Progress report
Cleanliness report
Updated strategy, plan, scenarios & cases
Key learnings
Information
Execution information
Defect information
Context
53
Deliverables from Stage #6
STEM Discipline applied in Stage #6
The STEM disciplines of “Execution & reporting” and “Analysis and management” are applied in this stage of HBT. The STEM core concepts of Contextual awareness, Defect rating principle, Gating principle and Cycle scoping enable disciplined execution, foster continual learning and help you stay focused on the goal.
Should contain the status of test execution
Should contain defect information
Should contain the progress of execution and thereby of the cycle
Should contain the cleanliness index and how well the cleanliness criteria have been met
Updated strategy, plan, scenarios, cases based on learnings from execution
Key observations/learnings that could be useful in the future
Execution status report
Defect report
Progress report
Cleanliness report
Updated strategy, plan, scenarios & cases
Key learnings
54
Discipline #1 : Business value understanding
How to Understand a system
Landscaping | Viewpoints
How to Create a functional baseline
Viewpoints | Reductionist principle
How to Identify focus areas
Value prioritisation | Viewpoints
How to Understand usage
Operational profiling | Viewpoints
How to Understand interdependencies
Interaction matrix
How to Create an attribute baseline
Viewpoints | Reductionist principle
How to Baseline expectations
Goal-Question-Metric | Viewpoints
This discipline enables one to understand the system and create a baseline of features, attributes and finally expectations. This discipline consists of SEVEN tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.
Good quality implies meeting expectations. This requires that we understand expectations in addition to the needs as delivered by the requirements. Understanding the intended business value to be delivered is key to this.
56
Baseline provides the basis for future work
What is to be tested needs to be clear.
Remember Functional & Non-functional requirements?
Functional Baseline
Consists of the list of features to be tested. Essentially an agreed-upon list of features.
Attribute Baseline
The non-functional aspects. Agreed-upon attributes & their values.
57
Tools in D1 -Business value understanding
Tools | STEM Core Concepts | Description
How to Understand a system
Landscaping | Viewpoints
System is viewed as a collection of information elements that are interconnected. This tool enables you to come up with intelligent questions to understand the various information elements and their interconnections.
How to Create a functional baseline
Viewpoints | Reductionist principle
Commencing from an external view of end users, various use cases (requirements) are identified and then technical features that constitute the use cases. This tool enables you to clearly setup a functional baseline that is used as a basis for strategy, plan, design, tooling, reporting & management.
How to Create an attribute baseline
Attribute analysis | Viewpoints
In addition to functional correctness, it is imperative that the attributes are met. This tool enables you to identify the attributes and ensure that these are testable.
How to Identify focus areas
Viewpoints | Value prioritisation
All requirements/features are not equally valued by the end users. This tool allows you to rank the end users, requirements, features thereby enabling prioritisation of testing based on the risk and perceived value.
How to Understand usage
Viewpoints | Operational profiling
Understanding the real-life usage profile is about knowing what operations are in progress at a point in time, the #concurrent operations and the rate of arrival. This tool allows you to arrive at a close-to-reality potential usage profile of the system to ensure effective non-functional tests.
How to Understand interdependencies
Interaction matrix
Understanding how a feature/requirement affects or depends on other features/requirements is useful to understand impact & re-testing effort. This tool allows you to rapidly understand the interdependencies.
How to Baseline expectations
Viewpoints | Goal-Question-Metric
This tool allows you to derive cleanliness criteria that reflect the expectations.
58
Customers & End Users
Product e.g. Pencil
Education
Drawing
Corporate
Customers in
Kids
Seniors
Artists
Management
Draftsmen
Engineering
Admin
End users
A product or an application may be sold in different market places made up of different kinds of customers. Each class of customer may have different types of end users who use the product. It is important to understand that each end user may have different needs & expectations.
Testing is about ensuring that the product will indeed satisfy the variety of needs & expectations
59
Needs & Expectations
Product e.g. Pencil
Education
Drawing
Corporate
Customers in
Kids
Seniors
Artists
Management
Draftsmen
Engineering
Admin
End users
Needs are typically features that allow one to get the job done. Expectations are about how well the need is satisfied.
Remember Functional & Non-functional requirements ?
NEEDS
Should write
Should have an eraser
EXPECTATIONS
Should be attractive
Should be non-toxic
Lead should not break easily
NEEDS
Should write
Should not need sharpening
EXPECTATIONS
Thickness should be consistent
Variety of thickness should be available
Variety of hardness should be available
60
Customer Profile
Different customers have different types of end users, and a differing number of users for each type of end user.
Customer #1 Customer #2 Customer #3 Customer #4
61
Customer Profile & Usage
What types of users
How many users
F2
F3
F4
F5
F6
F7
F8
F1
System
What does each one use? What is the order of importance? What is the usage frequency?
Different end users may use the system differently in terms of what they use, frequency of usage and how they value each feature.
62
Business Value
Ultimately end users need the system to do their job BETTER, FASTER, CHEAPER and deliver value to their customers.
Understand that it is about “business value” of system - how does the system help my business to do BETTER, FASTER, CHEAPER.
63
Discipline #2 : Defect hypothesis
How to Hypothesise defects
Negative thinking | EFF model | Defect centricity principle
How to Setup goal-focus
Orthogonality principle
This discipline enables one to hypothesise potential defect types that may be present in the system under test and set up a clear goal-focused approach to detection/prevention. A goal-focused approach implies that we map the hypothesised potential defect types (PDT) to the elements-under-test, i.e. features/requirements.
This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.
Hypothesis is done by scientifically examining certain properties of the system and can be complemented by one's experience.
65
Tools in D2 - Defect hypothesis
Tools | STEM Core Concepts | Description
How to Hypothesise defects
Negative thinking | EFF model | Defect centricity principle
Hypothesis is done by examining properties of the system in a scientific manner. Examining the properties of the elements that make up the system from five aspects (data, business logic, structure, environment and usage) and three views (error-injection, fault-proneness and failure) allows you to scientifically come up with potential defects. Subsequently, grouping similar potential defects, we arrive at potential defect types (PDT).
How to Setup goal-focus
Orthogonality principle
Mapping the PDTs to the elements of the system enables you to be clear as to what type of defect you want to uncover in each element, enabling you to be goal-focused.
66
[Diagram: cleanliness criteria, set from end-user expectations, are “affected by” the “properties of the system” - issues in specifications, structure, environment and behaviour - which manifest as Potential Defect Types (PDT).]
67
[Diagram: the “properties of the system” - Expectations, Needs, Features, Environment, Behavior, Structure, Material - when defective, “impede” the cleanliness criteria, manifesting as Potential Defect Types (PDT).]
Expectations are delivered by Needs (Requirements) via Features that display Behavior, constructed from Materials in accordance with a Structure, in a given Environment.
68
Setting up a Clear Goal
Before we invest effort in devising a test strategy, plan & test cases, let us be clear about the goal...
What types of defects are we looking for?
What types of defects may be present? i.e. what types of fish to catch
[Diagram: the SIX stages of HBT - S1 Understand EXPECTATIONS, S2 Understand CONTEXT, S3 Formulate HYPOTHESIS, S4 Devise PROOF, S5 Tooling SUPPORT, S6 Assess & ANALYSE - powered by the EIGHT STEM disciplines (32 core concepts): D1 Business value understanding, D2 Defect hypothesis, D3 Strategy & planning, D4 Test design, D5 Tooling, D6 Visibility, D7 Execution & reporting, D8 Analysis & management]
69
Potential Defect Types
CLEAN Entity implies Functional CLEANLINESS + Attribute CLEANLINESS
What types of defects will affect my
1. Functional behavior
2. Attributes
Potential Defect Types (PDT) affect the Cleanliness criteria
70
Potential Defect (PD) & Potential Defect Type (PDT)
We may come up with a variety of potential defects for an entity-under-test. A set of similar potential defects (PD) may be grouped into a class of defects, i.e. a Potential Defect Type (PDT). The intent is to create a smaller set of classes of defects to uncover.
PDT1
PD1 PD2
PD3
PDT1 : User Interface Issues
PD1: Spelling mistakes in UI
PD2: UI elements not aligned
PD3: UI standards violated
Example:
71
Information used for hypothesis
Intended Functionality
Attributes
Expectations
Defect history
Personal experience
Potential Defect Types
used to hypothesise
72
Aspects used to hypothesise
Business Logic
Structure
Data
Environment
Usage
[Diagram relationships: used by, uses, built using, uses / lives in]
The two broad areas of validation for any entity-under-test are :‣Functionality‣Attributes
Our objective is to ensure that the functional aspects of the system are correct and that they meet the expected attributes.
So, how can we hypothesise potential defect types for a given entity-under-test?
In this discipline of HBT, we decompose the entity into FIVE elemental aspects that are:‣Data‣Business logic‣Structure‣Environment‣Usage
i.e. A feature is used by end user(s) and implements the behavior via business logic that is built using structural materials that uses resources from the environment.
73
Views on these Aspects
Each “Aspect” can be viewed from THREE angles.
Error injection
Fault proneness
Failure
What errors can we inject?
What inherent faults can we “irritate”?
What failures may be caused?
ERROR irritates FAULT
FAULT propagates resulting in FAILURE
74
Aspects & Views Combined
Each aspect examined from the three views:

Data
Error injection: What kinds of erroneous data may be injected?
Fault proneness: What kind of issues could data cause?
Failure: What kinds of bad data can be generated?

Business Logic
Error injection: What conditions/values can be missed?
Fault proneness: How can conditions be messed up?
Failure: What can be incorrect results when conditions are combined?

Structure
Error injection: How can we set up an incorrect “structure”?
Fault proneness: How can structure mess up the behavior?
Failure: What kinds of structure can yield incorrect results?

Environment
Error injection: What is an incorrect environment setup?
Fault proneness: How can resources in the environment cause problems?
Failure: How can the environment be messed up?

Usage
Error injection: In what ways can we use the entity interestingly?
Fault proneness: What kinds of usage may be inherently faulty?
Failure: What can be a poor usage experience?
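The five-aspect-by-three-view grid lends itself to a mechanical walk that generates a hypothesis checklist. The question templates below are illustrative paraphrases, not STEM wording:

```python
# Sketch: walking the FIVE aspects against the THREE views to produce
# a defect-hypothesis checklist. Templates are illustrative only.
from itertools import product

aspects = ["Data", "Business logic", "Structure", "Environment", "Usage"]
views = {
    "Error injection": "What erroneous {a} can we inject?",
    "Fault proneness": "What inherent faults could the {a} harbour?",
    "Failure": "What failures could the {a} cause?",
}

# One prompt per (aspect, view) cell of the grid
checklist = [template.format(a=aspect.lower())
             for aspect, (view, template) in product(aspects, views.items())]

print(len(checklist))  # 5 aspects x 3 views = 15 prompts
```

The value of the grid is exactly this exhaustiveness: no aspect-view combination is accidentally skipped during hypothesis.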
75
TWO Important Core Concepts used in Defect Hypothesis
EFF (Error-Fault-Failure) model - the VIEW oriented approach: Error injection, Fault proneness, Failure.
Negative Thinking - the ASPECT oriented approach: Data, Business Logic, Structure, Environment, Usage.
In real life usage, we combine both of these.
81
How to write PDTs
“Language shapes the way we think.”
Hence it is necessary to have a simple and structured approach to documenting the PDTs identified.
When writing PDTs, commence the sentence with
“That the system/entity may/may-not....”
Write this in defect-oriented form. Write each PDT as a sentence. Do not be verbose.
e.g.
That the system may accept data out of bounds.
That the system may leak resources.
82
Discipline #3 : Strategy & Planning
How to Identify scope
Cycle scoping
How to Formulate strategy
Orthogonality principle Quality growth principle Process landscape Techniques landscape
This discipline enables one to adopt a structured and disciplined approach to formulating a goal-focused strategy, estimating effort and then formulating a plan. In HBT, strategy is defined as a clear combination of what to test, when to test, how to design scenarios for test and finally how to test. This means defining the scope of test, the types of test, the quality levels, the test techniques for design and the tooling support needed to execute the strategy.
This discipline consists of SIX tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.
How to Assess tooling support
Tooling need analysis
How to Estimate effort
Defect centred activity breakdown | Approximation principle
How to Formulate cycles
Cycle scoping | Quality growth principle
How to Setup criteria
Gating principle
84
Tools in D3 - Strategy & Planning
Tools | STEM Core Concepts | Description
How to Identify scope
Cycle scoping
The focus of this tool is to allow you to clearly identify the scope of testing expected of you, by identifying the types of tests, i.e. the PDTs that you are expected to uncover.
How to Formulate strategy
Orthogonality principle | Quality growth principle | Process landscape | Techniques landscape
Strategy is about identifying levels of quality, types of tests, test techniques for ensuring adequacy and the mode of execution of cases. This tool enables you to adopt a disciplined approach to developing a goal-focused strategy that will be effective & efficient.
How to Assess tooling support
Tooling need analysis
Leveraging technology to develop custom tooling and automating scenarios is key to improving efficiency and effectiveness. This tool enables you to clearly identify the tooling & scripting requirements so that you leverage your investment in tooling & automation.
How to Estimate effort
Defect centred activity breakdown | Approximation principle
Using PDTs as the basis, this tool enables a logical way to estimate effort. Having identified the PDTs, mapped them to the elements-under-test, identified the types of test to uncover them and derived the number of test cycles via cycle scoping, this tool proceeds to estimate the effort for each element-under-test, for each type of test, for every cycle, and sums these to arrive at the potential total effort.
How toFormulate cycles
Cycle scoping Quality growth principle
Formulating cycles requires a clear focus on the scope of every cycle. This tool enables you to be clear as to what PDTs you plan to uncover at different points in time of the development, ensuring that quality grows in accordance with the quality levels.
How to Setup criteria
Gating principle
Effective & efficient testing implies that good defects are indeed found at the right stages of software development. This tool enables setting criteria for each stage of development and release.
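The defect-centred estimation approach described above (effort summed over element-under-test, test type and cycle) can be sketched as follows. The elements, test types and hour figures are hypothetical assumptions, not STAG data:

```python
# Hypothetical sketch of HBT-style effort estimation: effort is
# estimated per (element-under-test, test type, cycle) and summed.
# All names and hour figures below are illustrative assumptions.

# effort_hours[(element, test_type, cycle)] = estimated hours
effort_hours = {
    ("Login screen", "Data validation test", 1): 6,
    ("Login screen", "UI test", 1): 4,
    ("Login screen", "Data validation test", 2): 2,  # re-run in cycle 2
    ("Report module", "Functionality test", 1): 10,
    ("Report module", "Functionality test", 2): 5,
}

def total_effort(estimates):
    """Sum the fine-grained estimates to get the potential total effort."""
    return sum(estimates.values())

def effort_per_cycle(estimates):
    """Break the total down cycle-wise, which helps scheduling."""
    per_cycle = {}
    for (_element, _test_type, cycle), hours in estimates.items():
        per_cycle[cycle] = per_cycle.get(cycle, 0) + hours
    return per_cycle

print(total_effort(effort_hours))      # 27
print(effort_per_cycle(effort_hours))  # {1: 20, 2: 7}
```

The fine-grained breakdown is what makes the estimate defensible: each number can be questioned and revised independently.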
Strategy should help in Design, Assessment, Planning and Execution:
- Ensuring high coverage
- Test techniques
- Automation architecture
- Cost effective execution
- Cycle planning
- What is manual/automated?
- Staying on track
- Metrics - what & when
- How to interpret
- Clear plan of action
- Estimation, Scheduling work
- Infrastructure
Contents of a test strategy
Features to focus on
List down major features of the product.
Rate importance of each feature (Importance = Usage frequency x Failure criticality).
Potential issues to uncover
Identify the PDTs that you expect to detect.
Quality Levels
Identify the levels of quality that are applicable and map the PDTs to these levels.
Tests & Techniques
State the various tests that need to be done to uncover the above PDTs.
Identify the test techniques that may be used for designing effective test cases.
Execution approach
Outline what tests will be done manually/automated.
Outline tools that may be used for automated testing.
Test metrics to collect & analyse
Identify measurements that help analyse whether the strategy is working effectively.
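The feature-importance rating above (Importance = Usage frequency x Failure criticality) can be sketched as follows. The features and the 1-5 ratings are hypothetical assumptions for illustration:

```python
# Illustrative sketch of rating feature importance:
# Importance = Usage frequency x Failure criticality.
# Features and the 1-5 ratings below are assumed, not real data.

features = {
    "Search":       {"usage_frequency": 5, "failure_criticality": 4},
    "Checkout":     {"usage_frequency": 3, "failure_criticality": 5},
    "Edit profile": {"usage_frequency": 2, "failure_criticality": 2},
}

def importance(feature):
    return feature["usage_frequency"] * feature["failure_criticality"]

# Rank features so test effort can focus on what matters most.
ranked = sorted(features, key=lambda name: importance(features[name]),
                reverse=True)
print(ranked)  # ['Search', 'Checkout', 'Edit profile']
```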
Goal-focused strategy
Level | Cleanliness criteria | Key tests
L1 | Input cleanliness | Data validation test
L2 | Input interface cleanliness | UI test, Usability
L3 | Structural integrity | Structure test
L4 | Behaviour correctness | Functionality, Data integrity
L5 | Flow correctness | Flow correctness test
L6 | Environment cleanliness | "Good citizen" test
L7 | Attributes met | LSPS, Security, Usability, Reliability, Volume
L8 | Clean Deployment | SI, Migration, Compatibility
L9 | End user value | End to End Flow test
(Figure: PDTs PDT1 ... PDT10 mapped to stages and test types TT1 ... TT8 across the levels.)
... WHAT PDTs are to be uncovered WHEN (Quality Levels) and HOW (Test Types)?
In HBT, there exist NINE quality levels, with certain PDTs to be uncovered at each level.
Discipline #4 : Test design
This discipline enables one to come up with scenarios/cases that can be proven to be adequate. Design of scenarios/cases uses a model-based approach, possibly assisted by a tool, to help you build the behavioural model and subsequently generate test scenarios/cases from the model, ensuring these are "countable" (i.e. can be proved to be sufficient) and traced to faults (i.e. have the power to uncover the hypothesised defects).
This discipline consists of THREE tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.
The tools in this discipline pay a lot of attention to the form & structure of test cases, ensuring these conform to the HBT test case architecture. The structure of test cases is seen as crucial to ensuring adequacy and optimality.
How to Model behaviour
Box model, Techniques landscape, Operational Profiling
How to Design scenarios & cases
Behaviour-Stimuli approach, Techniques landscape, Input granularity principle
How to Ensure adequacy
Complexity assessment, Coverage evaluation
Tools in D4 - Test design
Tools | STEM Core Concepts | Description
How to Model behaviour
Box model, Techniques landscape, Operational Profiling
This tool enables you to understand the intended behaviour of the element-under-test and create a behaviour model to ensure that the scenarios & cases subsequently designed are indeed complete. This commences by identifying conditions that govern behaviour and the data elements that drive the conditions.
How to Design scenarios & cases
Behaviour-Stimuli approach, Techniques landscape, Input granularity principle
This tool enables you to design scenarios & cases that can be proved to be adequate. A scenario in HBT is a path or flow of a given behaviour, while a test case is a combination of data (stimuli) that makes the system take that path. The focus is on ensuring that the number of scenarios can be proven to be "countable" (i.e. no more, no less) and that therefore the test cases too are indeed countable.
How to Ensure adequacy
Complexity assessment, Coverage evaluation
This tool enables you to ensure the designed scenarios/cases are indeed adequate. Tracing scenarios/cases to PDTs enables "Fault coverage", i.e. ensuring the PDTs hypothesised can indeed be uncovered. In conjunction with "Countability", the adequacy can indeed be proved in a logical manner. This tool can also be used to review/assess the completeness/adequacy of existing scenarios/cases.
Objective of test design
Test design is a key activity for effective testing. This activity produces test scenarios/cases. The objective is to come up with a complete yet optimal number of scenarios/cases that have the power to uncover good defects.
Do we have a net that is broad, deep, strong, with small enough holes to catch the fishes that matter?
Therefore the design of test cases plays a crucial role in delivering clean software. Based on the fishing analogy, test cases are the "net" to catch the "fishes" (defects), and it is necessary that the net be broad, deep and strong, with a fine mesh.
In HBT, the test design activity is done quality level-wise, and within each level in two stages - design test scenarios first and then test cases. Test scenarios are designed entity-wise and therefore there is a built-in notion of requirements traceability. In addition to requirements traceability, it is expected that the test scenarios and the corresponding test cases are indeed traced to the potential types of defects that they are expected to uncover. This is termed fault traceability.
The act of test design commences with the identification of the test level and then the specific type of test for which the test cases are to be designed. This allows us to identify the various test entities for which test cases have to be designed.
Having identified the test entities it is then required to partition the problem into two parts: firstly to understand the behaviour (business logic) and then to understand the various data elements for the business logic. This allows us to identify the various conditions in the business logic and allow us to model the behaviour more formally. The behaviour model is used to generate test scenarios. Then for every given scenario, we have to understand the data elements that vary and then come up with the optimal number of values for each data element. The various values of each data element are then combined to generate the test cases.
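The two-part decomposition just described, conditions modelling the behaviour (yielding scenarios) and data values per element combined into cases, can be sketched as follows. The login feature, its conditions and its data values are assumed examples, not from the cookbook:

```python
# Minimal sketch of the behaviour/data split described above.
# The feature, conditions and values are hypothetical assumptions.
from itertools import product

# Behaviour: a login feature governed by two binary conditions.
conditions = ["user exists", "password correct"]

# Each scenario is one combination of condition outcomes (one path).
scenarios = list(product([True, False], repeat=len(conditions)))

# Data elements for one scenario, with values chosen per element
# (e.g. via boundary value / equivalence partitioning analysis).
data_values = {
    "username": ["alice", "a" * 64],   # nominal and boundary-length
    "password": ["correct-pw", ""],    # valid and empty
}

# Combining the values of each data element generates the test cases.
test_cases = [dict(zip(data_values, combo))
              for combo in product(*data_values.values())]

print(len(scenarios))   # 4 scenarios from 2 binary conditions
print(len(test_cases))  # 4 cases from 2 x 2 values
```

Because scenarios and cases are generated from the model rather than listed ad hoc, their counts can be justified, which is the basis of "countability" later in this discipline.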
Note that only the external specification, and therefore black box techniques, have been used until now to design the scenarios and cases. It is equally necessary to use the structural information of the entity under test to refine the scenarios and test cases.
Finally we have to trace the scenarios and the corresponding test cases to the potential defects that have been hypothesized for the entity under test for the given test type. This allows us to ensure that the test cases do indeed have the power to uncover the hypothesized defects and thereby ensure that the test cases are indeed adequate.
The final step involves assessment of the test breadth, depth and porosity, thereby making sure the test cases are indeed adequate.
Effective testing is the outcome of good test cases.
Approach to test design
Remember the NINE quality levels: L1 Input cleanliness, L2 Input interface cleanliness, L3 Structural integrity, L4 Behavior correctness, L5 Flow correctness, L6 Environment cleanliness, L7 Attributes met, L8 Clean Deployment, L9 End user value.
The test scenarios/cases are designed level-wise. Note that the entity to be tested at each level may be different. For example at the higher levels, the entities to be tested are requirements/business-flows, whereas at lower levels, it may be screens/APIs etc.
At each level the approach to test design is: design test scenarios first, and then come up with test cases.
What is a Test Scenario & Test Case?
When we test, our objective is to check that the intended behaviour is what is implemented.
What do we need to do?
For an entity under test, we need to come up with the various potential behaviors and check each one of these. That is, we need a set of scenarios to evaluate the behaviours.
A Test Scenario reflects a behavior and is the path from beginning to end.
How do we check a behavior?
We do this by stimulating the behavior with a combination of inputs and checking the outputs.
A Test Case is a combination of inputs to stimulate the behavior.
Positive/Negative test scenarios/cases
A positive scenario is the expected behavior of the entity under test. A negative scenario is behavior that is not expected of the entity under test.
Test cases that are part of a positive scenario are positive test cases. Test cases that are part of a negative scenario are negative test cases.
Hierarchical test design
For each entity under test, generate test scenarios first, and then test cases. This is Hierarchical Test Design.
Business logic is a collection of conditions, with inputs and outputs.
Combinations of the CONDITIONS result in test scenarios.
Combinations of the INPUTS result in test cases.
Information needed for design
Level | Cleanliness criteria | Key tests | Information needed
L1 | Input cleanliness | Data validation test | Data specification
L2 | Input interface cleanliness | UI test, Usability | Interface information and User information
L3 | Structural integrity | Structure test | Information about architecture & code structure
L4 | Behavior correctness | Functionality, Data integrity | Behavioral (conditions) & Data specification
L5 | Flow correctness | Flow correctness test | Behavioral (conditions) & Data specification
L6 | Environment cleanliness | "Good citizen" test | Environment dependencies & Resource usage info
L7 | Attributes met | LSPS, Security, Usability, Reliability, Volume | Usage profile, data sizes, access controls, security aspects and other attribute information as applicable
L8 | Clean Deployment | SI, Migration, Compatibility | Environment (HW, SW, versions), Data volumes/formats
L9 | End user value | End to End Flow test | End user scenarios of usage, End user expectations
What to do when requisite information is missing/not-available?
When analyzing a specification, look for the conditions that govern the behavior (business logic) and the data.
It is quite possible that all the conditions may not be clearly listed or the values for the conditions are not clearly stated.
What is to be done in such cases? It is a cardinal sin to ignore missing conditions!
It is imperative that you identify the list of conditions and the values that they take. In case these are not available, question!
The true value of effective testing lies in uncovering the missing information.Note that you have in effect uncovered issues in specification, which is great.
How do we know that test scenarios/cases are adequate?
1. Test Scenarios/Cases shall be COUNTABLE. That is, the number of test scenarios/cases designed shall be proven to be no more or no less. This can only be done (a) if the behavior is modeled and scenarios generated from it, and (b) if values for test inputs are generated and combined formally.
2. There shall exist scenarios/cases for each requirement/feature - REQUIREMENTS TRACEABILITY.
3. Each type of defect (PDT) hypothesized for every requirement/feature shall be traced to scenarios/cases - FAULT TRACEABILITY.
4. At the lower level, scenarios/cases shall cover all the code (statements or conditions or multiple-conditions or paths) - CODE COVERAGE.
Feature = Business Logic + Data
Business logic is implemented as a set of conditions that have to be met
For a given test entity, do we clearly understand 'all the conditions' that govern the behavior? Have all 'effective' combinations been combined to generate the test scenarios?
Do we clearly understand the specification of each test input (data)? Have we generated all the values for each input? Have we combined these values optimally?
Countable Scenarios/Cases
Requirements traceability
(Figure: test cases TC1 ... TCi mapped to requirements R1 ... Rm.)
Requirement traceability is about ensuring that each requirement does indeed have test case(s). So after we design test cases, we map test cases to requirements to ensure that all the requirements are indeed being validated. This is typically used as a measure of test adequacy.
Let us consider a situation wherein there is exactly one test case for each requirement. Now are the test cases adequate? No! Requirement traceability is a necessary condition for test adequacy but not sufficient.
Also understand that the expectation of a requirement is not merely about functional correctness; certain attributes, i.e. non-functional aspects, also have to be met. So non-functional test cases need to be traced too.
Every test case is mapped to a requirement, or: every requirement does indeed have a test case.
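The requirements-traceability check described above can be sketched as follows. The requirement IDs and the traceability matrix are assumed data for illustration:

```python
# Hypothetical sketch of a requirements-traceability check:
# every requirement must map to at least one test case.
requirements = ["R1", "R2", "R3"]

# Traceability matrix: test case -> requirement it validates (assumed).
trace = {"TC1": "R1", "TC2": "R1", "TC3": "R2", "TC4": "R3"}

def untraced_requirements(reqs, matrix):
    """Return requirements that no test case validates."""
    covered = set(matrix.values())
    return [r for r in reqs if r not in covered]

print(untraced_requirements(requirements, trace))  # [] -> all covered

# Note: an empty list establishes necessity only; it says nothing
# about sufficiency, which is what fault traceability addresses.
```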
Fault traceability
(Figure: PDTs PDT1 ... PDTi mapped to requirements R1 ... Rm.)
Having hypothesized the PDTs (Potential Defect Types) in Stage #3, the natural thing to do would be to map these to the Requirement (or entity-under-test). This is accomplished as part of Stage #4 to develop the test strategy.
Continuing further, in Stage #4 the specification of the Requirement is used to design test scenarios and cases. Note that in this approach, test cases are automatically traced to Requirements.
Given that the Requirement could have the PDTs that have been mapped earlier, let us map the designed test cases to the PDTs. The intent of this is to ensure that the designed test cases do have the power to uncover the hypothesized defects.
Mapping the PDTs to each Requirement and its associated Test cases is termed Fault Traceability in HBT.
Fault Traceability in conjunction with Requirements Traceability makes the condition for test adequacy necessary and sufficient.
(Figure: test cases TC1 ... TCn mapped to PDTs PDT1 ... PDTi.)
Fault traceability + Requirements traceability
(Figure: fault traceability maps potential defects PD1 ... PDn to requirements R1 ... Rm and to test cases TC1 ... TCi; requirements traceability maps requirements to test cases.)
Requirements traceability is "Necessary but not sufficient".
Assume that each requirement had just one test case. This implies that we have satisfied the requirements traceability objective.
What we do not know is: could there be additional test cases for some of the requirements?
So requirements traceability is a necessary condition, not a sufficient condition.
So, what does it take to be sufficient?
If we had a clear notion of the types of defects that could affect the customer experience, and then mapped these to test cases, we would have Fault Traceability. This allows us to be sure that our test cases can indeed detect those defects that will impact customer experience.
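The combined check, requirements traceability as the necessary condition and fault traceability as the sufficient one, can be sketched as follows. All the mappings below are assumed example data:

```python
# Sketch of combining requirements traceability (necessary) with
# fault traceability (sufficient). All mappings are assumptions.
req_to_tcs = {"R1": {"TC1"}, "R2": {"TC2", "TC3"}}        # req -> test cases
req_to_pdts = {"R1": {"PDT1"}, "R2": {"PDT1", "PDT2"}}    # req -> hypothesised PDTs
tc_to_pdts = {"TC1": {"PDT1"},                            # test case -> PDTs
              "TC2": {"PDT1"}, "TC3": {"PDT2"}}           # it can uncover

def requirements_traced(req_to_tcs):
    """Necessary: every requirement has at least one test case."""
    return all(tcs for tcs in req_to_tcs.values())

def faults_traced(req_to_tcs, req_to_pdts, tc_to_pdts):
    """Sufficient: every PDT hypothesised for a requirement is
    targeted by at least one of that requirement's test cases."""
    for req, pdts in req_to_pdts.items():
        targeted = set().union(*(tc_to_pdts[tc] for tc in req_to_tcs[req]))
        if not pdts <= targeted:
            return False
    return True

print(requirements_traced(req_to_tcs))                     # True
print(faults_traced(req_to_tcs, req_to_pdts, tc_to_pdts))  # True
```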
Test design documentation
Test objective
Prerequisites
Test data combination
Expected results
Test steps
Useful in manual execution and to assist in automation scripting
Useful to detect defects
Useful to setup test environment
Useful to clarify intent/ setup goal
Questions:
What is the value of each of these pieces of information? i.e. How useful are they?
What do these various pieces of information help in?
Syntax of test case documentation
Test objective
Describe the test objective in natural language.
Prerequisites
Describe the prerequisites in natural language.
Test scenario description
Write this as one sentence beginning with "Ensure that system does/does-not ..."
Test cases
For each scenario, list the test cases as a table, as shown below.
Test steps/procedure
Describe the procedure for execution as a series of steps.
1. ...
2. ...
Note: Be as terse as possible and yet be clear. The intent should be to think more rather than document more. Also, terseness forces clarity to emerge.
HBT Test Case Architecture
Organized by Quality levels, sub-ordered by items (features/modules ...), segregated by type, ranked by importance/priority, sub-divided into conformance (+) and robustness (-), classified by early (smoke)/late-stage evaluation, tagged by evaluation frequency, linked by optimal execution order, classified by execution mode (manual/automated).
A well architected set of test cases is like an effective bait that can 'attract defects' in the system. In HBT, we pay attention to the form and structure of the test cases in addition to the content.
The form and structure as suggested by the HBT test case architecture also enables existing test cases to be analyzed for effectiveness/adequacy. This can be done by “flowing the existing test cases” into the “mould of HBT test case architecture”.
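One way the architecture above might be represented is sketched below. The field names and example cases are assumptions for illustration, not the official HBT schema:

```python
# A sketch of representing the test case architecture described above.
# Field names and example data are assumed, not the HBT standard.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    quality_level: int   # L1..L9
    item: str            # feature/module the case belongs to
    test_type: str
    priority: int        # importance rank (1 = highest)
    conformance: bool    # True = positive (+), False = robustness (-)
    smoke: bool          # early-stage (smoke) vs late-stage evaluation
    automated: bool      # execution mode

cases = [
    TestCase("TC2", 4, "Login", "Functionality", 2, False, False, False),
    TestCase("TC1", 1, "Login", "Data validation", 1, True, True, True),
    TestCase("TC3", 2, "Search", "UI", 1, True, True, False),
]

# Organize by quality level, sub-ordered by item, ranked by priority.
organized = sorted(cases, key=lambda c: (c.quality_level, c.item, c.priority))
print([c.case_id for c in organized])  # ['TC1', 'TC3', 'TC2']
```

Existing test cases can be "flowed into" such a structure to expose gaps, which is the analysis use the text mentions.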
Discipline #5 : Tooling
Tooling and automation is not simply developing code; it requires clear analysis and design to ensure that the tooling/automation is flexible enough to keep up with changes to the system and that it delivers value. This discipline enables you to analyse the tooling needs in a rational manner, ensuring that the investment in tooling is not wasted and that the subsequent scripts do allow you to improve efficiency and effectiveness.
This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.
How to Analyse tooling needs
Tooling needs analysis, Automation complexity analysis
How to Good scripting
Separation of concerns, Minimal babysitting principle
Tools in D5 - Tooling
Tools | STEM Core Concepts | Description
How to Analyse tooling needs
Tooling needs analysis, Automation complexity analysis
This tool enables you to understand what parts of testing need the support of technology in terms of tooling/automation.
How to Good scripting
Separation of concerns, Minimal babysitting principle
A script, once developed, has to stay in sync with the application/system and hence requires continuous maintenance. Also, a script when run may encounter situations that cause it to stop or seek user guidance to continue. This tool enables you to develop good scripts by ensuring a clear separation of data and code, and by designing the "execution run flow" (i.e. what script needs to be executed in case this one fails) to ensure that the automated run is maximised (i.e. as many of the scripts as possible are indeed run).
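The two ideas just described, data/code separation and minimal babysitting, can be sketched as follows. The login feature and its data table are assumed examples:

```python
# Illustrative sketch (all data assumed) of:
# (1) separation of concerns - test data lives outside the script logic;
# (2) minimal babysitting - a failing step does not halt the whole run.

# Data kept separate from code, so maintenance touches only this table.
login_data = [
    {"user": "alice", "password": "right-pw", "expect_ok": True},
    {"user": "alice", "password": "wrong-pw", "expect_ok": False},
    {"user": "", "password": "", "expect_ok": False},
]

def login(user, password):
    # Stand-in for the real system under test.
    return user == "alice" and password == "right-pw"

def run_all(dataset):
    results = []
    for row in dataset:
        try:
            ok = login(row["user"], row["password"]) == row["expect_ok"]
        except Exception:
            ok = False      # record the failure and keep going
        results.append(ok)
    return results          # the run is maximised: every row executed

print(run_all(login_data))  # [True, True, True]
```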
Discipline #6 : Visibility
This discipline enables one to "quantify quality" to enable a goal-focused approach to management. The focus of this discipline is to set up a model for measuring quality and also devise measures that are purposeful and goal-focused.
This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.
How to Measure quality
Quality quantification model
How to Devise measures
Goal-Question-Metric, Metrics landscape
Tools in D6 - Visibility
Tools | STEM Core Concepts | Description
How to Measure quality Quality quantification model
This tool enables you to set up a model to measure the "intrinsic" quality, using the "cleanliness criteria" to give an objective picture of the system quality. This also allows you to come up with a "cleanliness index" to quantify quality.
How to Devise measures
Goal-Question-Metric, Metrics landscape
This technique ensures that you design measures that are goal-focused. Rather than setting up measures and then analyzing them, this tool helps you articulate a goal and then derive appropriate measures.
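A "cleanliness index" could, for instance, be a weighted summary of pass rates per quality level. The sketch below is a guess at such a scheme; the weighting approach, levels and numbers are assumptions, not the STEM quantification model itself:

```python
# Hypothetical sketch of a cleanliness index: weight each quality
# level's pass rate to get one quality number. The weights, levels
# and results below are assumptions, not the STEM model.

# (passed, total) test cases per quality level - assumed results
level_results = {"L1": (18, 20), "L4": (40, 50), "L9": (8, 10)}
level_weights = {"L1": 0.2, "L4": 0.5, "L9": 0.3}  # assumed importance

def cleanliness_index(results, weights):
    index = 0.0
    for level, (passed, total) in results.items():
        index += weights[level] * (passed / total)
    return round(index * 100, 1)   # express as a percentage

print(cleanliness_index(level_results, level_weights))  # 82.0
```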
Discipline #7 : Execution and reporting
This discipline enables one to ensure that the reporting of information during testing conveys information that enables purposeful actions to be taken.
This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.
How to Good defect reporting
Defect rating principle
How to Learn & Improve
Contextual awareness
Tools in D7 - Execution and reporting
Tools | STEM Core Concepts | Description
How to Good defect reporting
Defect rating principle
This tool helps you report the outcomes of testing, i.e. defects, in a clear manner so as to (1) enable a clear understanding of the problem, (2) enable clear resolution, and (3) provide learning opportunities for improvement.
How to Learn & Improve Contextual awareness
A-priori planning/design is useful to pave the way, but learning from the act of testing by understanding the context is essential to effective testing. This tool is about sensitizing you to this, so that the test artefacts are continually enhanced with learnings from testing.
Discipline #8 : Management
This discipline takes an "earned value approach" to management (i.e. goal focused). The focus is on using the cleanliness criteria and index as the basis for ascertaining where we are with respect to quality in comparison with where we should have been, and then ascertaining risks related to quality & release to enable rational & clear management.
This discipline consists of ONE tool, which uses certain STEM core concepts to ensure it is done in a scientific and disciplined manner.
How to Goal focused management
Quality quantification model, Gating principle
Tools in D8 - Management
Tools | STEM Core Concepts | Description
How to Goal focused management
Quality quantification model, Gating principle
The tool uses the cleanliness criteria and index to understand where you are with respect to the goal. Remember that we commenced with setting up cleanliness criteria and quality levels. This tool adopts an "earned value approach to quality" by enabling you to assess where you are in the quality levels and compare with where you should be, helping you clearly understand the gaps and enabling you to manage rationally/objectively.
Techniques Landscape
Black Box Techniques
Scenario design: Functional test - Decision table, Flowchart, State machine; NFT (LSPS) - Operational profiling
Data value generation: Boundary value analysis, Equivalence partitioning, Special value, Error based
Test case generation: Exhaustive, Single fault, At least once, Orthogonal array (pair-wise combination)
White Box Techniques
Control flow based: Cyclomatic complexity, Statement coverage, Decision coverage, Multiple condition coverage, Path coverage
Data flow based: Data flow (def-use)
Resource based: Resource leak
This is a guideline that lists the various test techniques to allow you to choose the appropriate ones.
The techniques are classified in two ways. The first categorisation is based on the type of information used to design: external information (black box) versus internal information (white box). The second categorisation is based on the test design outcome: (1) those useful for designing test scenarios, (2) those useful for creating the various test data, and (3) those useful for combining the test data optimally yet effectively.
A guideline that lists the set of test design techniques based on method of examination, design stage and the type of defect.
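Two of the data-value-generation techniques from the landscape above can be sketched on an assumed input, an "age" field specified as valid in the range 18..60:

```python
# Minimal sketch of boundary value analysis and equivalence
# partitioning, applied to an assumed "age" input valid in 18..60.
LOW, HIGH = 18, 60

def boundary_values(low, high):
    """Boundary value analysis: values at and around each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_partitions(low, high):
    """Equivalence partitioning: one representative per partition."""
    return {
        "below-valid": low - 5,       # invalid partition under the range
        "valid": (low + high) // 2,   # one value stands for the class
        "above-valid": high + 5,      # invalid partition over the range
    }

print(boundary_values(LOW, HIGH))         # [17, 18, 19, 59, 60, 61]
print(equivalence_partitions(LOW, HIGH))  # one value per partition
```

The values from such generators would then be combined across inputs by a test case generation technique (e.g. at-least-once or pair-wise combination).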
Landscaping
This technique, inspired by Mindmapping, enables one to ask meaningful questions in a systematic manner to understand the needs & expectations.
It is based on the simple principle:“Good questions matter more than the answers. Even if questions do not yield answers, it is fine, as it is even more important to know what you do not know.”
The premise is that understanding the SIXTEEN key information elements and their connections enables one to understand the expectations & the system. The act of seeking information results in questions that aid in understanding.
A technique to rapidly understand the system by examining the various elements and the connections between them.
Typical questions generated by Landscaping (1/2)
Marketplace What marketplace is my system addressing? Why am I building this application? What problem is it attempting to solve? What are the success factors?
Customer type Are there different categories of customers in each marketplace? How do I classify them? How are their needs different/unique?
End user (Actor) Who are the various types of end users (actors) for each type of customer? What is the typical/max. number of end-users of each type? Note: An end user is not necessarily a physical end user; a better word is 'actor'.
Requirement (Use case)
What does each end user want? What are the business use cases for each type of end user? How important is this to an end user - what is the ranking of a requirement/feature?
Attributes What attributes are key for a feature/requirement to be successful (for an end user of each type of customer)? How can I quantify the attribute, i.e. make it testable?
Feature What are the (technical) features that make up a requirement (use-case)? What is the ranking of these? What attributes are key for a successful feature implementation? How may a feature/requirement affect other feature(s)/requirement(s)?
Typical questions generated by Landscaping (2/2)
Deployment environment What does the deployment environment/architecture look like? What are the various HW/SW that make up the environment? Is my application co-located with other applications? What other software does my application connect/inter-operate with? What information do I have to migrate from existing system(s)? Volume, types etc.
Technology What technologies may be/are used in my application? Languages, components, services ...
Architecture What does the application structure look like? What is the application architecture?
Usage profile Who uses what? How many times does an end user use it per unit time? i.e. #/time. At what rate do they use a feature/requirement? Are there different modes of usage (end of day, end of month) and what is the profile of usage in each of these modes? What is the volume of data that the application should support?
Behavior conditions What are the conditions that govern the behavior of each requirement/feature? How is each condition met - what data (& value) drives each condition?
Viewpoints See the system from various end users’ point of view to identify the needs & expectations to set a clear baseline.
Good testing requires that the tester evaluate the system from the end user's angle, i.e. put oneself in the end-user's shoes. This is easier said than done.
Viewpoints is a technique that enables this.
This states that each type of user:
1. Has different expectations from the system
2. Uses different features due to differing needs
3. Values different attributes
4. Views the importance of a feature differently
5. Uses features at a different frequency/rate
6. Has different expectations on quality
Once the various types of users are identified, this technique is useful in digging deeper to get a clear handle on needs & expectations.
Reductionist principle
(Figure: Reductionist decomposition - the Customer's System is decomposed into Requirement #1, Requirement #2, ...; from the End User view, a Requirement is decomposed into Functionality and Attributes; from the Engineering view, Functionality is decomposed into Feature #1, Feature #2, ... and Attributes into Measure #1, Measure #2, ...; a Feature is decomposed into Business logic and Data.)
Manage complexity by decomposing the information to smaller elements.
Reductionism means reduction, simplification. The objective of this principle is to break down an aspect into smaller parts until it is understood clearly. The intent is to gain crystal-clear clarity to enable a job to be performed well.
This principle can be applied at various phases of evaluation to understand various aspects:
Phase | Application of this principle
Product understanding | Break down the system needs into use cases, then features. Break down the requirements into functional and non-functional aspects.
Test design | Break down the entity under test into business logic and data components to design functional test scenarios/cases.
Complexity assessment | Break down complexity into functional, structural, data and attribute complexity.
Effort estimation | Break down large activities into smaller fine-grained activities so that effort can be estimated precisely.
Reductionist principle (continued)
The principle is to break down anything into its smallest components. The intention is to gain a better understanding.
So, if you are trying to understand a product, decompose the product into requirements (aka use-cases). Subsequently decompose each use-case into its constituent features.
Decompose the given requirement/feature into functional and non-functional aspects. Decompose the functional aspect of a feature into business logic and data.
In the case of estimation, decompose the act of validation into large grained test life-cycle activities and then break each of these into smaller grained activities.
To understand the complexity, decompose it into functional behavior complexity, structural complexity (how complicated are the innards), attribute complexity (what aspects of non-functional behavior are challenging) and data complexity (size/volume and data inter-relationships).
Interaction matrix Understand the interrelations of the elements - requirements, features.
A system is not a mere collection of distinct features; it is the interplay of the various features that produces value. But this also has an important side-effect: the various features may affect each other in a negative fashion. A highly interacting set of features makes the system complex.
This technique allows us to understand the potential interactions among the features/requirements. Modifying a feature may therefore result in an unwarranted side effect. This technique helps to understand the interaction of the various features of the software, hypothesize the potential unwanted side-effects, and therefore formulate an effective strategy of evaluation. It is useful to identify the inter-relationships quickly at first, rather than elaborate the semantics of each interaction. The semantics of an interaction may be deferred to the point when detailed analysis of a change needs to be done.
Understanding the linkages is also useful to appreciate potential side-effects that may affect some of the key attributes. This is useful in understanding the system complexity, enabling effective strategy formulation and, later, optimisation of regression tests.
(Figure: a 4x4 interaction matrix of features F1 ... F4, with an X marking each pair of interacting features.)
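Once captured, such a matrix directly answers "what must be re-tested if feature X changes?". The sketch below uses assumed interaction data in the spirit of the figure:

```python
# Sketch of using an interaction matrix (assumed data) to hypothesize
# side-effects of changing a feature and scope regression tests.
interacts = {
    "F1": {"F2", "F4"},
    "F2": {"F1"},
    "F3": {"F4"},
    "F4": {"F1", "F3"},
}

def regression_scope(changed, matrix):
    """Features to re-test when `changed` is modified:
    the feature itself plus everything it interacts with."""
    return sorted({changed} | matrix.get(changed, set()))

print(regression_scope("F1", interacts))  # ['F1', 'F2', 'F4']
```

A fuller version might follow interactions transitively, but even this direct neighbourhood is enough to optimise a regression cycle.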
Attribute analysis A technique to identify attributes expected of the system and ensure that they are testable.
It is not sufficient that each feature is functionally clean; it is equally important that the associated attributes also be met. The challenging aspect of attributes is that they are typically fuzzy. Good testing implies that attributes be testable, which means each attribute must have a clear measure or metric. For example, if performance is one such attribute, it is necessary that the worst-case performance t of a feature satisfy t <= T, T being the expected performance metric.
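Making the performance attribute testable as t <= T can be sketched as follows. The operation under test and the threshold are assumptions for illustration:

```python
# Sketch of a testable performance attribute: worst-case response
# time t must satisfy t <= T. Operation and threshold are assumed.
import time

T_SECONDS = 0.5  # expected performance metric for the feature

def feature_under_test():
    time.sleep(0.01)  # stand-in for the real operation

start = time.perf_counter()
feature_under_test()
t = time.perf_counter() - start

meets_attribute = t <= T_SECONDS
print(meets_attribute)  # True when the bound holds
```

The point is that once T is stated, the fuzzy attribute "good performance" becomes a pass/fail check.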
Rather than commence with identifying attributes for the whole system, identify attributes for each requirement and then combine these to arrive at system-wide attributes. For each requirement, list the “critical-to-great-experience” attributes. If it is any easier to do this at the level of features, then do so. i.e. identify key attributes for each feature and then arrive at the attributes at the requirement level. Use a standard attribute list like ISO 9126 to ensure that no attributes are missed out. What we have now, is a list of attributes for each requirement.
Feature | Attribute | Metric
F1 | A1 | a1
F1 | A2 | b1
F1 | A3 | c
F2 | A2 | b2
F2 | A4 | d
F3 | A1 | a2
F3 | A2 | b3

Attribute | What
A1 | F1(a1), F3(a2)
A2 | F1(b1), F2(b2), F3(b3)
A3 | F1(c)
A4 | F2(d)
Once the attributes for each requirement or feature have been identified, now group the common attributes to formulate the system-wide attributes.
This enables better clarity as to what each attribute really means, ensuring that the attributes, or non-functional requirements, are indeed testable.
Attribute analysis (continued)
Attribute → Characteristic(s) → Measure(s) → Expected value(s)
1. Identify these based on users
2. Identify these based on usage patterns
3. Based on (2) derive technical measures
4. Now connect (3) to (2) ensuring that these reflect expectations that are testable
The benefits of application of this technique are:
1. We focus on the non-functional aspects of the system.
2. The non-functional requirements are indeed testable.
3. We are able to come up with good questions to extract/clarify non-functional requirements when they are not stated, or ill-stated.
It is quite possible that the attributes are descriptive and therefore hazy/fuzzy. It is important to ensure that every attribute is testable. As a first step, identify the key characteristic(s) of each attribute. For each characteristic, identify a possible measure, and then identify the value expected for that measure, so that we arrive at a number/metric that ensures clarity.
123
Value prioritisation
A typical system consists of multiple use cases (requirements) that are used by different types of users with differing frequencies. The business importance of each use case is different, and the same is true of the different user types. Since testing is about reducing business risk to acceptable levels, and accomplishing this with optimal effort/cost, we need to understand the business importance and criticality of users, use cases and the associated features. This technique enables a logical prioritisation of value so that test effort is targeted at the right aspects.
Application: Identify the various types of users. For each user type, identify the typical number of users. If the number of users for a user type is large, we may conclude that this user type is indeed important. However, just because the number of users for a given user type is low, we cannot necessarily conclude that this user type is not important; it is necessary to understand how critical this user is to successful deployment of the system, i.e. the impact if this user type's expectations are not met. Now combine the number of users for a user type with the business impact of this user on successful deployment, and arrive at the priority of the user type. Do this for all the user types.
In addition to user type prioritisation, it is necessary to understand the importance of what a user type does, i.e. which requirements (use cases/business flows) are more important. Here we apply the same logic as for each user type: understand the frequency of usage and the business impact of an incorrectly implemented requirement. Hence it is important to understand what types of users use the requirement and how many times they use it in a given span of time. The same caveat applies: low frequency of usage does not necessarily indicate a less important requirement, as that requirement may cause severe business loss if it does not work correctly, despite being used infrequently. To arrive at the prioritisation of a requirement, one can break the requirement down into its constituent technical features and perform a similar analysis, if it is easier to analyse from the lower-level technical features.
Applying this STEM core concept results in a rational prioritisation of features, requirements and user types.
Benefits: This allows us to develop a test strategy that focuses on the key aspects, utilising effort, time and cost effectively and efficiently. Understanding prioritisation allows us to set the priority of test scenarios/cases to:
‣ enable optimal regression
‣ enable choosing the key test cases to execute when time is constrained
‣ enable correct severity rating of defects, e.g. failure of important test cases could result in high-severity defects.
A technique to prioritise the elements to be validated to enable effective and efficient testing.
124
Value prioritisation (continued)
User Type  #Users  Bus. Criticality
UT1        n1      V V High
UT2        n2      High
UT3        n3      V High
Understand the business value of the features and their priorities. Effective testing is about reducing business risk to acceptable levels. This technique helps you rank the various end users, use cases and features.
Req./Feature  Usage freq.  Impact
R1(F1-F3)     n1           V V High
R2(F2-F4)     n2           High
R3(F4-F6)     n3           V High
Need Must-have, Could-have, Nice-to-have
Frequency Heavy, Moderate, Light
Loss outcome Huge, Moderate, Acceptable
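The prioritisation logic above — frequency combined with business impact, where low frequency alone must not demote a high-loss requirement — can be sketched as a simple scoring rule. The scales and the use of `max()` are illustrative assumptions, not part of HBT itself.

```python
# Illustrative value-prioritisation sketch: combine usage frequency with the
# business loss outcome. max() ensures a rarely used but high-loss requirement
# still ranks high. Names, scales and weights are invented for illustration.

IMPACT = {"Acceptable": 1, "Moderate": 2, "Huge": 3}
FREQUENCY = {"Light": 1, "Moderate": 2, "Heavy": 3}

def priority(freq, loss):
    # low frequency alone cannot demote a requirement with a huge loss outcome
    return max(FREQUENCY[freq], IMPACT[loss])

requirements = [
    ("R1", "Heavy", "Huge"),
    ("R2", "Light", "Huge"),        # infrequent, but severe loss if broken
    ("R3", "Moderate", "Acceptable"),
]
ranked = sorted(requirements, key=lambda r: priority(r[1], r[2]), reverse=True)
print([r[0] for r in ranked])       # R2 ranks alongside R1, above R3
```

The same rule can be applied one level down, to the technical features of each requirement, as the text suggests.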
125
Operational profiling A technique to identify the usage patterns and hence the load profile.
Understanding the rate and number of transactions probable on a real system is critical to ensuring that the system is designed well and later sized and deployed well. A good understanding of the business domain is a key enabler in arriving at the usage profile. Operational profiling is a technique that enables one to scientifically arrive at a real-life profile of usage; a good understanding of this concept alleviates the problem of lacking deep domain knowledge. This core concept consists of these key aspects:
1. Mode – represents a time period of usage, e.g. end of month, where the usage patterns are distinctive and different
2. Key operations (features/requirements) used
3. Types of end users associated with the key features/requirements
4. Number of end users for each type of user
5. Rate of arrival of transactions
In short, for a given mode, identify the end-user types and their key operations, then identify the number of users of each type, and then identify the rate of arrival of transactions. Employing this core concept allows us to think better and ask specific questions to understand the marketplace and the usage profile in typical and worst-case scenarios.
The operational profile is extremely useful for creating test scenarios for load, stress, performance, scalability and reliability tests. So, profiling consists of identifying the various actors, the use cases these actors use, the frequency (rate) at which they use them, and the number of operations they would perform in different time periods.
126
Operational profiling (continued)
[Chart: number of operations O1–O4 (0–200) across time periods t1–t4]
User type  #Users  Operation  t1   t2  t3  t4
UT1        n1      O1         50   20  30  20
UT1        n2      O2         25   0   15  10
UT2        n3      O3         100  50  15  0
UT2        n4      O4         0    35  35  50
[Diagram: user types UT1–UT3 driving operations O1–O8 on the software/system]
1. Identify the key operations of the system
2. Connect the user types & operations, i.e. which operations are used by which user types
3. For each user type, list the typical & maximum number of users
4. Identify modes of usage, e.g. different times of day/week/month/year
5. For each mode, approximate the number of operations in a given time period for each user type
6. Finally, approximate the rate of arrival of the operations.
NOTE: A user need not be a physical user; it could be another system.
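The profile table for a mode can be held as plain data and the load per time period derived from it, which is what the load/stress scenarios need. A minimal sketch, using the numbers from the table above (all illustrative):

```python
# Operational profile for one mode: per user type, the operation and its
# counts in each time period t1..t4 (numbers taken from the example table).

profile = [
    # (user_type, operation, counts per period t1..t4)
    ("UT1", "O1", [50, 20, 30, 20]),
    ("UT1", "O2", [25, 0, 15, 10]),
    ("UT2", "O3", [100, 50, 15, 0]),
    ("UT2", "O4", [0, 35, 35, 50]),
]

def load_per_period(profile, periods=4):
    """Total operations arriving in each time period (the load profile)."""
    totals = [0] * periods
    for _user, _op, counts in profile:
        for i, c in enumerate(counts):
            totals[i] += c
    return totals

print(load_per_period(profile))   # peak load is in t1
```

Dividing each total by the period length would give the arrival rate, the final step of the technique.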
127
GQM Goal-Question-Metric
A technique to ensure that the goal (cleanliness criteria) is indeed testable.
A technique that helps you set clear goals. Metrics may be viewed as milestone markers towards the goal. Collecting metrics is easy; the hard part is answering “how is this useful in helping me reach my goal?”
1. Identify goal(s) first
2. Come up with questions to understand the distance from the goal
3. To answer these questions objectively, identify objective measures
Vague cleanliness criteria are useless. This technique enables you to derive cleanliness criteria that are clear, by forcing you to identify:
1. What is cleanliness? (Goal) 2. How do you ascertain the cleanliness? (Question)3. Ensure that this is less subjective i.e. via objective measure (Metric)
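A GQM tree is just a small data structure, and the technique's core rule — every question must be answered by at least one objective metric, or the goal is not measurably testable — can be checked mechanically. The goal, questions and metrics below are invented for illustration:

```python
# Hedged GQM sketch: a goal, questions probing distance from the goal, and
# the objective metrics answering each question. Content is illustrative.

gqm = {
    "goal": "Release build is functionally clean",
    "questions": {
        "Q1: How many severe defects remain open?": ["open severity-1 count"],
        "Q2: How far are we through planned tests?": ["#TC executed / #TC designed"],
    },
}

def untestable_questions(gqm):
    """Questions with no metric: these make the goal subjective, not testable."""
    return [q for q, metrics in gqm["questions"].items() if not metrics]

print(untestable_questions(gqm))   # empty list: every question is measurable
```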
[Diagram: Goal decomposed into questions Q1, Q2, each answered by metrics M1–M4]
128
Negative thinking A technique to identify potential defect types based on “Aspects” of a system
The objective is to identify potential defects in the entity under test in a scientific manner by adopting a fault centric approach. The intent is to think ‘negatively’ on various aspects and thereby identify potential defects in the entity under test.
Any entity under test processes data according to certain business logic, is built using structural components, uses resources from the environment, and is ultimately used by certain classes of end users. To hypothesise potential defects in an entity under test, this generalisation can be applied in a scientific manner.
Aspect Generalized PDTs
Data
‣ Violation of type specification
‣ Incorrect format of data (data layout, fixed vs. variable length)
‣ Large volume of data
‣ High rate of data arrival
‣ Duplication of data that is meant to be unique
Business logic
‣ Missing conditions & values that govern the business logic
‣ Conflicting conditions
‣ Incorrect handling of erroneous paths
‣ Impact on attributes e.g. performance, scalability, reliability, security etc.
‣ Transaction-related issues i.e. multiple operations need to complete, else none should be performed
Aspect Generalized PDTs
Structure
‣ Consuming dynamic resources and not releasing them
‣ Errors/exceptions not handled well or ignored
‣ Synchronization issues, deadlock issues, race conditions
‣ Blocking leading to “hanging” when dependent code does not return
Environment – potential defects pertaining to environment may be:
‣ Improper configuration of settings in the environment
‣ Non-availability of resources
‣ Incorrect versions of dependent sub-systems/components
‣ Slow connections
Usage
‣ Wrong sequencing of usage
‣ Improper disconnects/aborts
‣ High rate of usage
‣ Large usage volume
‣ Unauthorized usage i.e. violation of access control
‣ Difficult to use i.e. not very intuitive
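The aspect-to-generalized-PDT table above lends itself to a simple checklist generator: pick the aspects an entity exhibits, and emit the hypotheses to disprove. A minimal sketch; the entity name, aspect selection and abbreviated PDT lists are illustrative.

```python
# Negative-thinking sketch: turn aspect -> generalized PDTs into a per-entity
# hypothesis checklist. Entries abbreviated from the table; entity is invented.

GENERALIZED_PDTS = {
    "Data": ["violation of type specification", "incorrect data format",
             "large volume of data", "high rate of data arrival",
             "duplication of unique data"],
    "Structure": ["resources consumed but not released", "unhandled error/exception",
                  "synchronization/deadlock/race condition",
                  "blocking on dependent code"],
    "Usage": ["wrong sequencing of usage", "improper disconnect/abort",
              "high rate of usage", "unauthorized usage"],
}

def hypothesize(entity, aspects):
    """Generate potential-defect hypotheses for the aspects an entity exhibits."""
    return [f"{entity}: {pdt}" for a in aspects for pdt in GENERALIZED_PDTS[a]]

checklist = hypothesize("login-service", ["Data", "Usage"])
print(len(checklist))   # 9 hypotheses to scientifically disprove
```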
129
Negative thinking (continued)
[Diagram: the FIVE aspects – Data, Business Logic, Structure, Environment, Usage – around the entity under test: it uses data, is built using structure, uses/lives in the environment, and is used by users]
This technique decomposes an entity into FIVE elemental aspects that are:‣Data‣Business logic‣Structure‣Environment‣Usage
The intent is to think ‘negatively’ on these FIVE aspects and thereby identify potential defects in the entity under test.
130
Defect centricity principle A principle to group similar defects into defect types
[Diagram: similar potential defects across system levels grouped into PDTs]
A principle to group similar potential defects into potential defect types (PDTs). The intent is to create a manageable list of PDTs.
136
EFF Model (Error-Fault-Failure)
A technique to identify potential defect types by seeing the system from different “Views”
Errors injected into the system irritate faults, causing them to propagate and result in failures. Failure is what the customer observes; high-impact failures are the result of severe faults. EFF enables failure-centric and error-injection-centric thinking to identify potential defects, complementing the fault-centric thinking.
Each “Aspect” can be viewed from THREE angles.
Error injection – What errors can we inject?
Fault proneness – What inherent faults can we “irritate”?
Failure – What failures may be caused?
ERROR irritates FAULT; FAULT propagates, resulting in FAILURE.
137
Orthogonality Principle A principle that clearly delineates quality levels, test types and test techniques.
[Diagram: a defect mapped to its stage/level, test type and technique]
This principle states that to uncover a defect optimally, you need to identify the earliest stage of detection (i.e. quality level), identify the specific type of test, and use the most appropriate test techniques (i.e. bait) to ensure that the scenarios & cases are adequate.
This allows us to understand the:
‣ earliest point of detection
‣ type of test needed &
‣ effective test technique
i.e. given a potential defect:
1. What is the earliest point of detection?
2. What type of test needs to be done?
3. What test techniques would be most suitable?
Identifying the levels, the corresponding test types and techniques is what constitutes a strategy.
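The strategy that this principle yields is essentially a lookup from each potential defect type to its earliest level, test type and technique. A minimal sketch; the PDTs, levels and techniques below are invented placeholders, not a prescribed HBT catalogue.

```python
# Orthogonality-principle sketch: each PDT maps to the earliest quality level,
# the test type and the most suitable technique. All entries are illustrative.

strategy = {
    "resource leak":        ("QL1", "endurance test", "white-box / code review"),
    "boundary mishandling": ("QL2", "functional test", "boundary value analysis"),
    "slow under load":      ("QL4", "load test", "operational profiling"),
}

def plan_for(pdt):
    level, test_type, technique = strategy[pdt]
    return f"detect at {level} via {test_type} using {technique}"

print(plan_for("boundary mishandling"))
```

Enumerating this table over all hypothesised PDTs is, per the text, what constitutes the test strategy.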
138
Quality growth principle
A principle to set up progressively improving levels of quality/cleanliness for an entity under test.
Staging quality growth via levels enables clarity of defect detection - “what to detect when”.
Reaching the “pinnacle of excellence” is like climbing the staircase of quality.
This also allows us to objectively measure quality.
[Diagram: a staircase of quality levels QL1–QL4; each stage uncovers its PDTs (PDT1–PDT10), with cleanliness growing stage by stage]
139
Process landscape A guideline that lists the various “process” models for test design and execution.
The process model employed must be based on the type of defect to be uncovered. Certain types of defects are best discovered using a disciplined approach, while some rely on the individual’s creativity at the time of testing. Some of these rely on pure domain experience, some are better uncovered by a careful analysis of the past history of issues, and some need a good understanding of the context of deployment and usage.
Process models
1. Disciplined/Structured – Design first
2. Ad-hoc/Random/Creative – On-the-fly design
3. Contextual – Context-based design
4. Historical – Past-issues-based design
5. Experiential – Domain-based design
Defect type  Process model(s)
DT1          1, 4
DT2          2, 3
DT3          4
140
Tooling needs analysis A technique to analyse the needs of tooling & automation.
Tooling needs can be in
1. Structure analysis
2. Installation
3. Setup/configuration
4. Data creation
5. Test execution
6. Outcome assessment
7. Behaviour probing
Test type  Scenarios
TT1        A.TS1, ...
TT2        B.TS1, ...
TT3        C.TS1, ...
Execution tooling needs
TT1 – Manually
TT2 – Not only manually
TT3 – Nice to automate
Guiding aspects to automation
1. Frequent basic tests
2. Regression oriented
3. Time consuming
4. Effort consuming
5. Requires high skills
A technique to analyse tooling needs in a disciplined manner:
1. First analyse what aspect of the test life-cycle needs tooling help.
2. Later analyse what scenarios cannot be executed manually at all.
3. Identify which of the scenarios that can be executed manually would be nice to automate, based on the suggested parameters, i.e. the guiding aspects to automation.
Tooling for automating testing costs money. It is therefore necessary to be sure of the purpose or the objective to be achieved.
This technique enables analysing the tooling needs as to what is to be automated and the reason/benefits.
141
Cycle scoping A technique to setup goal focused test cycles with clear scope for each cycle.
[Diagram/table: scope per cycle, e.g. C1 = {F1, F2} × {T1}, C2 = {F3, F4} × {T1, T2}, C4 = {F1…F4} × {T1, T2, T3}, mapped to quality levels QL1–QL3]
A test cycle is the point in time at which the build is validated. It takes multiple test cycles to validate a product. Each test cycle should have a clear scope. The scope of testing in a cycle is “what needs to be tested and what aspect of cleanliness needs to be evaluated”.
The scope of a cycle in HBT is a Cartesian product of the Features (or Entities) and the Types of tests to be executed.
Scope = {Features} x {Test types}, i.e. what features will be tested and what tests will be done is the scope of a cycle.
In short, the focus of each cycle is to uncover certain PDTs, enabling monotonic quality growth in line with the intended quality levels.
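The scope formula Scope = {Features} × {Test types} is literally a Cartesian product, which can be enumerated directly. Feature and test-type names below are placeholders:

```python
# Cycle-scoping sketch: a cycle's scope is the Cartesian product of the
# features in scope and the test types to be executed on them.
from itertools import product

def cycle_scope(features, test_types):
    return list(product(features, test_types))

scope = cycle_scope(["F1", "F2"], ["T1", "T2"])
print(scope)   # [('F1', 'T1'), ('F1', 'T2'), ('F2', 'T1'), ('F2', 'T2')]
```

Each (feature, test type) pair is one unit of work for the cycle, which also makes the cycle's size easy to compare and track across cycles.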
142
Defect centred activity breakdown
A technique to estimate test effort by identifying the various activities required to uncover the potential defects in the entity under test.
Estimate effort based on the PDTs that have to be uncovered in the various ‘elements’ of the software at different stages. Identify the PDTs to be uncovered, stage them, identify the tests, break down each test into its activities, estimate effort at the leaf level and then sum up.
[Diagram: cleanliness criteria → PDTs staged across quality levels QL1–QL4 and test types TT1–TT5; for each QL/TT/TS, the activities – design, document, automate, execute – applied to components, screens, features and flows]
143
Defect centred activity breakdown (continued)
For a given level, estimate effort based on #BasicElements, #TS, #Cycles, #Defects
[Diagram: activities – understand, design & documentation, review, automate, execute, log defects, manage – with their effort drivers: #Elements, #TS, #Cycles, #Defects, #Hrs/wk; mode of “doing”, common test cases – checklist, static/dynamic]
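The bottom-up summation described above can be sketched numerically: each activity's effort is a rate times its driver (#elements, #TS, #cycles, #defects), and the leaf estimates are summed. The rates (hours per unit) below are purely illustrative assumptions, not HBT-prescribed figures.

```python
# Defect-centred effort estimation sketch: leaf-level estimates per activity,
# summed bottom-up. All per-unit rates are invented for illustration.

def estimate_effort(n_elements, n_ts, n_cycles, n_defects,
                    hrs_understand_per_element=0.5,
                    hrs_design_per_ts=1.0,
                    hrs_execute_per_ts=0.25,
                    hrs_log_per_defect=0.2):
    understand = n_elements * hrs_understand_per_element
    design = n_ts * hrs_design_per_ts                 # design & document once
    execute = n_ts * hrs_execute_per_ts * n_cycles    # execution repeats per cycle
    log = n_defects * hrs_log_per_defect
    return understand + design + execute + log

print(estimate_effort(n_elements=20, n_ts=100, n_cycles=3, n_defects=50))  # 195.0
```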
144
Approximation principle A principle to aid in scientific approximation.
1. Identify the key parameters
2. Work out the formula
3. Understand which of these parameters are ‘sensitive’, i.e. a small variation can affect the outcome grossly
4. Check if the parameters can be broken down further until their values can be estimated correctly
5. Now estimate the value of the parameters
   5.1. Guess/hypothesise based on best judgment
   5.2. Test the hypothesis and correct it to a value closer to reality
6. Apply the formula and compute the value
7. Iterate based on the learning gleaned from this approximation cycle / estimated potential variation
The measure whose value is to be approximated is based on a set of parameters, each having a varying sensitivity to the outcome, with a formula that binds them. The value of each parameter needs to be hypothesised and, if sensitive, tested; then the formula is applied. Iterate based on learning and the estimated potential variation.
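Step 3 of the principle — finding the ‘sensitive’ parameters — can be sketched by wiggling each parameter slightly and observing how much the outcome moves. The binding formula, parameter names and thresholds below are all invented for illustration:

```python
# Approximation-principle sketch: flag parameters whose small variation moves
# the outcome grossly; those values should be tested, not merely guessed.

def outcome(params):
    # hypothetical binding formula: effort = elements * hours_each * rework_factor
    return params["elements"] * params["hours_each"] * params["rework_factor"]

def sensitive_params(params, wiggle=0.1, threshold=0.05):
    """Return parameters where a `wiggle` variation shifts the outcome > threshold."""
    base = outcome(params)
    flagged = []
    for name in params:
        varied = dict(params, **{name: params[name] * (1 + wiggle)})
        if abs(outcome(varied) - base) / base > threshold:
            flagged.append(name)
    return flagged

params = {"elements": 100, "hours_each": 2.0, "rework_factor": 1.3}
print(sensitive_params(params))   # all three move the outcome in this formula
```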
145
Box model
A technique to rapidly understand the intended functional behaviour of an entity under test by identifying the conditions, and then the data and business logic (condition sequencing).
[Diagram: a box with inputs I1, I2, outputs O1, O2 and a description of the business logic]
Given an entity to be tested, understand the intended behaviour rapidly to generate the behaviour model.
1. Identify the conditions that govern the behaviour first.
2. Then identify the data elements that drive the conditions.
3. Finally identify the sequencing of conditions as a flow to understand the business logic (or behaviour).
The focus is to extract the conditions and identify the data elements to enable construction of a behaviour model and also to discover unstated/missing behaviour.
146
Behaviour-Stimuli (BEST) Approach
A technique to design test scenarios and cases, ensuring sufficient yet optimal and purposeful test cases.
[Diagram: entity under test with inputs I1, I2 and outputs O1–O3; test scenario TS #1 contains test cases TC #1, TC #2, TC #3, …]
Testing is about injecting a variety of stimuli and assessing the behaviour by comparing the actual with the expected result. First identify the behaviours to be validated, then generate the stimuli. A behaviour is denoted by a test scenario, while test cases represent the stimuli. This hierarchical approach to test design enables clarity, coverage and optimality.
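The scenario/case hierarchy can be held as plain data: each behaviour (test scenario) carries the stimuli (test cases) that exercise it. A minimal sketch; the login behaviours and cases are invented for illustration.

```python
# BEST sketch: behaviours as test scenarios, each with its stimuli (test cases).

scenarios = {
    "TS1: valid login succeeds": ["TC1: correct user/password",
                                  "TC2: correct user, mixed-case password"],
    "TS2: invalid login rejected": ["TC3: wrong password",
                                    "TC4: unknown user",
                                    "TC5: empty credentials"],
}

def flatten(scenarios):
    """Expand scenario -> cases into executable (scenario, case) pairs."""
    return [(ts, tc) for ts, cases in scenarios.items() for tc in cases]

print(len(flatten(scenarios)))   # 5 test cases across 2 scenarios
```

Keeping the hierarchy explicit is what gives the clarity and coverage the technique promises: every case traces to exactly one behaviour.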
147
Input granularity principle
A principle to identify the data element(s) for an entity under test and their specification.
[Diagram: input granularity ranging from fine to coarse across test levels]
The notion of what an input is and therefore its specification is based on the level of testing. The input specification at a lower level is ‘fine’ whereas at higher levels, it is ‘coarse’. Fine implies basic data types, whereas ‘coarse’ implies complex/aggregate data types. Understanding this is key to generating test cases appropriate to the level of testing.
148
Complexity assessment A technique to understand an entity’s complexity to identity suitable test techniques.
Complexity
‣ Behavioural complexity – business logic complexity, data complexity
‣ Structural complexity – logic complexity, resource complexity
‣ Attribute complexity
Systems that are complex demand to be tested more carefully. Some systems are complex in their business logic, i.e. too many conditions and combinations, while some systems are structurally complex. In certain systems the attributes may be demanding, and therefore the complexity may lie in the attributes.
Complexity can be broken into:
1. Functional complexity
2. Structural complexity
3. Attribute complexity
If (1) is complex, black-box techniques are useful. If (2) is complex, white-box techniques are useful. If (3) is complex, a judicious mix of (1) and (2) is necessary.
149
Coverage evaluation A technique to assess test case adequacy.
[Diagram: three dimensions of coverage – test breadth, test depth, test porosity]
Adequacy of test cases is key to clean software. This principle helps in understanding the breadth, depth and porosity of test cases. Breadth relates to the various types of tests needed to uncover the different types of defects. Depth relates to the various levels of tests, ensuring that defects at all levels can be uncovered. Porosity is whether a test case is a clear combination of data or not. Additionally, it is necessary to understand the conformance and defect orientation of test cases.
Breadth Types of tests
Depth Quality levels
Porosity Test case “fine-ness”
150
Automation complexity analysis
A technique to analyse the complexity of tooling/automation
The complexity of a script and therefore the effort required to design and code the scripts depends on various parameters. A script consists of sections of code to setup the condition for test, drive the test, compare the outcome, log information and finally cleanup.
The complexity of the script therefore may be decomposed into individual section complexities and analysed.
Setup – Complexity depends on #steps, data, inter-relationships
Driver – Complexity depends on length of flow (#steps), error-recovery complexity
Oracle – Complexity depends on #comparisons, the type of comparison (coarse versus fine) and whether it is deterministic or non-deterministic
Log – Complexity depends on #log points and log information details
Cleanup – Complexity depends on #steps, data inter-relationships
151
Minimal babysitting principle A principle to ensure unattended automated test runs.
[Diagram: test scripts #1 … #N queued for an unattended run]
When automated tests are run, some of the scripts may fail and abort the entire test cycle. To utilise automation most effectively and increase test efficiencies, it is necessary to maximise the test run. i.e. as many scripts that can be run must be executed.
This principle states that test scripts must be designed in such a manner that ‘baby sitting’, i.e. manually restarting the test run, is minimal.
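A minimally-babysat run means one failing or crashing script must not abort the whole cycle. A minimal runner sketch; the scripts below are stand-in callables, not a real test-harness API:

```python
# Minimal-babysitting sketch: record each script's outcome and keep going,
# so a crash never aborts the unattended run.

def run_all(scripts):
    results = {}
    for name, script in scripts:
        try:
            results[name] = "PASS" if script() else "FAIL"
        except Exception as exc:          # never let one script kill the run
            results[name] = f"ERROR: {exc}"
    return results

def crashing():
    raise RuntimeError("env down")

scripts = [
    ("script_1", lambda: True),
    ("script_2", crashing),
    ("script_3", lambda: False),
]
print(run_all(scripts))   # script_2 errors, but script_3 still runs
```

In a real harness the same idea extends to per-script cleanup and re-setup, so a failed script leaves the environment usable for the next one.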
152
Separation of concerns principle
A principle to ensure delineation of code & data in automation, to facilitate robust and maintainable automation.
Data – test data; setup/config. information
Code – common code; specific code
A script consists of code and data that it uses to drive the system-under-test. The basic attribute of a good script is its ability to be flexible with minimal changes for adaptation.
Hence it is necessary that a script does not contain hard-coded data. The data in a script pertains to configuration/setup and the actual test data. This principle states that there must be a clean separation of the code and data aspects of the script.
153
Quality quantification model
A technique to “quantify quality” in alignment with the cleanliness criteria and quality levels.
Quantify software quality to allow for better decision making. Software is invisible, and quality is the invisible aspect of this invisible artefact. This technique enables you to set up an objective measurement system for measuring the quality of software.
1. Rate each cleanliness criterion.
2. Represent these ratings as a Kiviat chart.
3. The area under the chart for a cycle represents the “Quality Index”.
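The Quality Index as described is the area of the Kiviat (radar) polygon formed by the criterion ratings on equally spaced axes, which is straightforward to compute. A sketch under the assumption of a 0–10 rating scale; the ratings are illustrative.

```python
# Quality Index sketch: area of the Kiviat polygon whose vertices sit at each
# cleanliness-criterion rating, on equally spaced axes. Ratings illustrative.
import math

def quality_index(ratings):
    """Polygon area = sum of triangles between adjacent axes:
    1/2 * r_i * r_{i+1} * sin(2*pi/n)."""
    n = len(ratings)
    angle = 2 * math.pi / n
    return 0.5 * math.sin(angle) * sum(
        ratings[i] * ratings[(i + 1) % n] for i in range(n))

ratings = [8, 6, 7, 9, 5]          # five cleanliness criteria, rated 0-10
print(round(quality_index(ratings), 2))   # ≈ 113.18
```

Comparing the index cycle over cycle gives the monotonic-quality-growth signal that the quality growth principle asks for.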
154
Metrics landscape
A guideline to designing goal-oriented metrics to rationally assess quality, delivery risk and test effectiveness
[Diagram: metrics spanning quality, progress, risk and process]
To know where we are and how we are doing, it is necessary to have a beacon to light up the way and ensure good visibility. “Good goal-oriented metrics are that beacon”.
Example: Process
‣ Effectiveness: test breadth, depth, defect escapes, +:- ratio, coverage
‣ Efficiency: blockers
‣ Productivity: #TC executed/designed
155
Defect rating principle A principle to rate defect severity and priority.
Defects are rated by severity and priority. The severity of a defect is decided by the impact of the defect on the customer. The priority of a defect is decided by the risk posed to timely release.
Customer – “business risk” decides ‘Severity’: serious impact implies HIGH severity.
Dev team – “release risk” decides ‘Priority’: a blocker implies HIGH priority.
156
Contextual awareness
A principle to learn from context to enable better understanding and increase test effectiveness.
Good testing requires keen observation skills and a sharp ‘ear to the ground’. Observation of context and learning from it is key to better understanding and improvement of test cases.
“Familiarity breeds contempt” may be the adage, but getting familiar with the internal workings and the external behaviour goes a long way in significantly enhancing test effectiveness.
[Diagram: test cases before and after a test cycle, refined by observation of context]
157