IBM Research
© 2008 IBM Corporation | IBM Confidential
Tariq M. King, IBM China Research Lab, June 13, 2008
Leveraging Traceability Recovery in Test Planning and Optimization
Agenda
Motivation
Storyboard: Overview, Step by Step
Implementation: Use Case Capturer, Prototype Design
Evaluation Plan
Motivation

Businesses without mature processes struggle to maintain traceability between software artifacts → poor traceability makes testing more difficult
Furthermore, many practitioners still view testing as a separate phase that follows implementation → disjointed, program-based testing strategies
There is much active research in the area of automatically recovering traceability links between artifacts
Can we leverage traceability recovery in test planning and optimization?
Problem Overview

[Diagram: Typical Software Process Traceability Links — four artifact types connected by development and testing activities]
Requirements Artifacts: use cases, formal & informal specifications
Design Artifacts: architectures, object designs
Implementation Artifacts: source code, binaries
Test Artifacts: test cases, test logs
Activities: Software Design, Programming, Specification-Based Testing, Program-Based Testing
Problem Overview

Question: How do missing links affect testing? Negatively. In this case it is no longer possible to trace testing activities to requirements.

[Diagram: The same traceability diagram with links missing (marked "?"); only Programming Activities and Program-Based Testing Activities remain, so Requirements Artifacts can no longer be reached from Test Artifacts.]
Problem Overview

Approach: Recover links and optimize future testing effort. Traceability recovery could be leveraged for test case selection and regression testing.

[Diagram: The traceability diagram with the missing links recovered among Requirements, Design, Implementation, and Test Artifacts.]
Storyboard: Roles
Jane, Business Manager from the client-side who is interested in receiving a quality product that maximizes her business value, while avoiding unnecessary costs
Hugo, Chief Test Officer from the client-side who has devised a test strategy for Jane’s project but is uncertain as to whether the ongoing testing effort can be improved
Tariq, Test Architect from BIM Testing Consultants Inc. who has been contracted to assess the current testing effort and optimize any further testing activities
Alex, Test Lead from BIM Testing Consultants Inc. who is in charge of realizing an optimized test plan for the client by designing detailed test cases
Storyboard: High Level Overview
1. Acquire Project Test Plan
2. Analyze and Pre-Process Artifacts
3. Build Traceability Matrix Model
4. Identify "Risky" Requirements
5. Create Optimized Test Plan
1. Acquire Current Project Test Plan

Description: A project test plan describes the overall strategy for testing the final application and the products leading up to the completed application [1]. In this example, Hugo (Client-Side CTO) provides Tariq with a use-case-driven system test plan, based on decision support factors including criticality and risk estimates.

Use Case ID                  Risk    Criticality  # Tests
eCart-Checkout, eCart-Login  Medium  High         20
eCart-Add, eCart-Remove      Medium  Medium       15
eCart-Logout                 Low     Low          4
2. Analyze and Pre-Process Artifacts

Description: Many tools exist for analyzing artifacts, such as source code, for metrics. Artifacts that are in a standardized format are more amenable to automated techniques. In this example, Alex (Test Lead) uses home-grown and third-party tools to analyze and pre-process the following:

Artifact Name          Application    Extension
Use Cases              MS Word        .doc
Source Code            Eclipse / JDK  .java
Test Cases             JUnit [4]      .java
Test (Pass/Fail) Logs  JUnit [4]      .xml
Test (Coverage) Logs   EclEmma [5]    .xml
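As a rough illustration of the pre-processing step, the sketch below pulls pass/fail counts out of an XML test log. It assumes the Ant-style JUnit report format (a `testsuite` element with `tests`, `failures`, and `errors` attributes); the class name and sample log are hypothetical, not part of the original deck.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class JUnitLogSummary {

    /** Reads the tests/failures/errors counts from the root <testsuite> element. */
    public static int[] summarize(String xml) {
        try {
            Element root = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
                    .getDocumentElement();
            return new int[] {
                Integer.parseInt(root.getAttribute("tests")),
                Integer.parseInt(root.getAttribute("failures")),
                Integer.parseInt(root.getAttribute("errors"))
            };
        } catch (Exception e) {
            throw new RuntimeException("Could not parse test log", e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical fragment in the Ant/JUnit XML report format.
        String log = "<testsuite name=\"eCartCheckoutTests\" tests=\"20\" failures=\"3\" errors=\"1\"/>";
        int[] s = summarize(log);
        System.out.println("tests=" + s[0] + ", failures=" + s[1] + ", errors=" + s[2]);
    }
}
```

Coverage logs in the EclEmma XML output could be summarized the same way, reading per-class coverage attributes instead.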
3. Build Traceability Matrix Model

Description: A traceability matrix records the relationships between various software artifacts. In this example, Alex builds a traceability model to support the strategy proposed by Tariq (Test Architect):

1. Decompose use case flows into fine-grain requirements.
2. Apply the approach by Zhao et al. [3] to recover traceability links between source code and fine-grain requirements.
3. Build a traceability matrix that correlates use case flows with their implementation metrics, unit test results, and coverage.
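A minimal sketch of the matrix data structure behind step 3 could look as follows. The flow and unit names are hypothetical, and the actual link recovery (Zhao et al. [3]) is assumed to happen elsewhere; this only stores and queries the recovered links.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

/** Minimal traceability matrix: fine-grain requirement (use case flow) -> source units. */
public class TraceabilityMatrix {

    private final Map<String, Set<String>> flowToUnits = new LinkedHashMap<>();

    /** Records a recovered link between a use case flow and a source code unit. */
    public void link(String flowId, String unitName) {
        flowToUnits.computeIfAbsent(flowId, k -> new LinkedHashSet<>()).add(unitName);
    }

    /** Returns the units recovered for a flow (empty if none were linked). */
    public Set<String> unitsFor(String flowId) {
        return flowToUnits.getOrDefault(flowId, Collections.emptySet());
    }

    public static void main(String[] args) {
        TraceabilityMatrix m = new TraceabilityMatrix();
        // Hypothetical recovered links for the eCart example.
        m.link("eCart-Checkout/BasicFlow", "CheckoutService");
        m.link("eCart-Checkout/BasicFlow", "PaymentGateway");
        System.out.println(m.unitsFor("eCart-Checkout/BasicFlow"));
    }
}
```

In the full model, each unit entry would also carry its metrics, unit test results, and coverage, so that the risk weighting in step 4 can be computed per flow.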
4. Identify "Risky" Requirements (1)

Description: Test plans based on decision support (DS) use estimates for the risks of implementing business requirements. However, these estimates may be inaccurate due to over-emphasized or missing factors.

Source code metrics, unit test results, and coverage information can be mapped to requirements to provide concrete evidence to support or refute DS estimates. For example, requirements implemented in units that exhibit a high level of failures can be considered riskier than those implemented in low-failure units.
4. Identify "Risky" Requirements (2)

Description (cont'd): In this example, Tariq proposes the following method of assigning risk weights to the use case flow models:

1. Calculate the weight of each source code unit u, denoted w_u, according to the formula:
   w_u = Complexity(u) + FailureLevel(u) + Coverage(u)
   o Complexity := High (3) | Medium (2) | Low (1)
   o Failure Level := High (3) | Medium (2) | Low (1)
   o Coverage := Poor (3) | Average (2) | Good (1)

2. The weight of a flow event e, denoted w_e, is given by:
   w_e = Σ w_u (over units u in e) + |e|   (Note: the |e| term factors in interaction failures)
4. Identify "Risky" Requirements (3)

Description (cont'd):

3. Finally, the weight of a use case flow f, denoted w_f, is given by the sum of the weights of all its events: w_f = Σ w_e

Rationale (Hypothesis) for the Risk Calculation
o Highly complex units are likely to contain more defects (or more critical defects).
o If a large number of defects is found in a specific unit, then more defects are likely to be found there.
o Code not covered by unit tests poses some additional risk to further testing efforts.
5. Create Optimized Test Plan (1)

Description: Use case flows are then ranked and classified using the risk weights produced by the proposed method. In this example, Alex generates the following risk assessment using the data from Hugo's project:

Risk Assessment after Unit Testing
Use Case ID     Risk Weight  Classification
eCart-Checkout  25           High
eCart-Add       18           Medium
eCart-Remove    15           Medium
eCart-Login     9            Low
eCart-Logout    4            Low
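The classification step could be sketched as below. The slides do not state the cut-off values, so the thresholds here are illustrative assumptions chosen only to reproduce the example assessment (20+ → High, 10–19 → Medium, below 10 → Low).

```java
public class RiskClassifier {

    /** Maps a risk weight to a class. Thresholds are assumed, not taken from the deck. */
    public static String classify(int weight) {
        if (weight >= 20) return "High";
        if (weight >= 10) return "Medium";
        return "Low";
    }

    public static void main(String[] args) {
        System.out.println("eCart-Checkout (25): " + classify(25));  // High
        System.out.println("eCart-Login (9): " + classify(9));       // Low
    }
}
```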
5. Create Optimized Test Plan (2)

Description: Tariq compares the risks from Hugo's initial test plan with the risk assessment derived from unit test feedback (UTF). Discrepancies are identified, and the testing effort is then updated to reflect the new risk values:

Optimized Testing Effort
Use Case ID     Risk (DS)  Risk (UTF)  Testing Effort
eCart-Checkout  Medium     High        Increase
eCart-Add       Medium     Medium      Unchanged
eCart-Remove    Medium     Medium      Unchanged
eCart-Login     Medium     Low         Reduce
eCart-Logout    Low        Low         Unchanged
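The effort adjustment in the table reduces to a comparison of the two risk levels; a minimal sketch:

```java
public class EffortAdjuster {

    private static int level(String risk) {
        switch (risk) {
            case "High":   return 3;
            case "Medium": return 2;
            case "Low":    return 1;
            default: throw new IllegalArgumentException("Unknown risk: " + risk);
        }
    }

    /** Compares the decision-support estimate with the unit-test-feedback assessment. */
    public static String adjust(String dsRisk, String utfRisk) {
        int diff = level(utfRisk) - level(dsRisk);
        if (diff > 0) return "Increase";
        if (diff < 0) return "Reduce";
        return "Unchanged";
    }

    public static void main(String[] args) {
        System.out.println("eCart-Checkout: " + adjust("Medium", "High"));  // Increase
        System.out.println("eCart-Login: " + adjust("Medium", "Low"));      // Reduce
    }
}
```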
5. Create Optimized Test Plan (3)

Description (cont'd): Alex then designs a set of detailed test cases for an optimized test plan that adheres to the new testing effort. Members from both the client side and the service provider meet to review and approve the new test plan.

Test Plan Approval
"OK, it addresses business concerns."
"OK, it can be implemented by my testing team."
"OK, it is technically sound."
"OK, it follows my proposed test strategy."
Use Case Capturer (1) – Screenshot
Use Case Capturer (2) – XML Output
Top-Level Design
Detailed Design

Major Classes in the Algos Subsystem
WeightAlgorithm – computes weights for source units, and cumulative weights for use case flows
PathComparator, UseCaseComparator – used by the Java Collections library to sort use cases by their risk weights

Major Classes in the Models Subsystem
ReqParser, ClassParser, TestParser, CoverageParser
ReqMBuilder, ClassMBuilder, TestMBuilder
FlowEvent – a single use case flow event
FlowList – an entire use case flow
Evaluation Plan

Mutation on Decision Support
Modify the decision support risk values for an actual project to simulate bad estimates. The effectiveness of our approach will be judged by how many of the mutant risk values are "killed" after analysis.

Comparison of Multiple Project Results
Apply the approach to three (3) software engineering projects. Compare test plans based on decision support vs. our approach against the results of system testing, and manually determine whether our test plan showed any significant improvement.
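The mutation experiment above boils down to a kill-ratio computation; a sketch, assuming a mutant is "killed" whenever the traceability-based analysis re-derives a risk different from the mutated estimate (the project and risk values in main are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of the mutation-based evaluation on decision-support risk values. */
public class MutationEvaluation {

    /** Fraction of mutated risk values that the analysis "kills" (re-derives differently). */
    public static double killRatio(Map<String, String> mutatedRisks,
                                   Map<String, String> recoveredRisks) {
        if (mutatedRisks.isEmpty()) return 0.0;
        int killed = 0;
        for (Map.Entry<String, String> e : mutatedRisks.entrySet()) {
            if (!e.getValue().equals(recoveredRisks.get(e.getKey()))) {
                killed++;  // analysis disagreed with the bad estimate: mutant killed
            }
        }
        return (double) killed / mutatedRisks.size();
    }

    public static void main(String[] args) {
        Map<String, String> mutated = new LinkedHashMap<>();
        mutated.put("eCart-Checkout", "Low");   // deliberately bad estimate
        mutated.put("eCart-Logout", "Low");     // coincides with the analysis: survives
        Map<String, String> recovered = new LinkedHashMap<>();
        recovered.put("eCart-Checkout", "High");
        recovered.put("eCart-Logout", "Low");
        System.out.println("Kill ratio: " + killRatio(mutated, recovered));  // 0.5
    }
}
```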
References
[1] M. Lormans and A. van Deursen. Reconstructing requirements traceability in design and test using latent semantic indexing. Technical report, Delft University of Technology, April 2007.
[2] Y. Zhang, R. Witte, J. Rilling, and V. Haarslev. An Ontology-based Approach for the Recovery of Traceability Links. In 3rd Int. Workshop on Metamodels, Schemas, Grammars, and Ontologies for Reverse Engineering (ATEM 2006), Genoa, Italy, October 2006.
[3] W. Zhao, L. Zhang, Y. Liu, J. Luo, and J. Sun. Understanding how the requirements are implemented in source code. In APSEC '03: Proceedings of the Tenth Asia-Pacific Software Engineering Conference, page 68, Washington, DC, USA, 2003. IEEE Computer Society.
[4] E. Gamma and K. Beck. JUnit 3.8.1, 2005. http://www.junit.org/index.htm (June 2008)
[5] M. Doliner, G. Lukasik, and J. Thomerson. Cobertura 1.9, 2002. http://cobertura.sourceforge.net/ (June 2008)