SE300 SWE Practices (mercury.pr.erau.edu/~siewerts/se300/documents/lectures/lecture-week-12-b.pdf)
TRANSCRIPT
April 2, 2015 Sam Siewert
SE300
SWE Practices
Lecture 12 – Design and Prototype
Testing Strategies
(Design and Code Metrics)
When are We Done Testing?
Based on a Metric
– Code Coverage – Path, Statement
– Hours of Continuous Testing – Soak Time
– Number of Test Cases
– Test Vectors (Variation of Inputs to Functions and Units)
– Other?
Path Coverage Tools and Metrics
– E.g., function and path coverage of the Enigma example code using gcov
SE420, SQA (Software Quality Assurance)
Slide content copyright © 2014 by the McGraw-Hill Companies, Inc. All rights reserved.
Test Driven Development
[Flowchart: the TDD cycle – prepare for TDD; write/edit tests for the features to be implemented next; run the new tests (if they do not all fail, revise the tests); write/edit production code; run all tests (if any fail, return to the production code); refactor the code; if test coverage is not yet accomplished, add tests to increase coverage; if more features remain to implement, repeat the cycle, otherwise halt.]
Copyright {c} 2014 by the McGraw-Hill Companies, Inc. All rights Reserved.
18-4
Merits of Test Driven Development
• TDD helps the team understand and improve
requirements.
• TDD produces high-quality code.
• TDD improves program structure and
readability through refactoring.
• TDD makes it easier to debug.
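A minimal sketch of one TDD iteration in Java (hypothetical Counter class; JUnit 4 assumed):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1: write a test for the next feature and watch it fail.
    public class CounterTest {
        @Test
        public void incrementAddsOne() {
            Counter c = new Counter();
            c.increment();
            assertEquals(1, c.value()); // fails until Counter is implemented
        }
    }

    // Step 2: write just enough production code to make the test pass,
    // then rerun all tests and refactor.
    class Counter {
        private int value = 0;
        public void increment() { value++; }
        public int value() { return value; }
    }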
What is Software Quality Assurance?
• SQA is a set of activities to ensure that the software product and/or process conforms to established requirements and standards.
• The activities include:
– Verification: Are we building the product right?
– Validation: Are we building the right product?
– Technical reviews
– Testing: Attempts to uncover program errors.
Cost of Quality
• Cost of quality consists of
– Costs of preventing software failures
• Costs to implement an SQA framework (one time)
• Costs to perform SQA activities (ongoing)
• Costs to monitor and improve the framework (ongoing)
• Costs of needed equipment (machines and tools)
– Costs of failure
• Costs to analyze and remove failures
• Costs of lost productivity (developer and customer)
• Costs of liability, lawsuits, and increased insurance premiums
• Costs associated with damage to the company's reputation
Cost of Quality
Software engineering weighs what it costs to accomplish something: quality and cost trade off against each other.
[Chart: cost versus quality – as the quality target rises, prevention cost rises while failure cost falls.]
Costs of Quality
[Chart: relative error-correction cost by phase, on a scale from 1 to 100 – roughly 1 at requirements, rising through design, coding, development test, and system test to about 100 in field operation.]
Software Quality Attributes
• Reliability (adequacy: correctness, completeness, consistency; robustness)
• Testability and Maintainability (understandability: modularity, conciseness, preciseness, unambiguity, readability; measurability: assessability, quantifiability)
• Usability
• Efficiency
• Portability
• etc.
Quality Measurements and Metrics
• Software measurements are objective,
quantitative assessments of software attributes.
• Metrics are standard measurements.
• Software metrics are standard measurements
of software attributes.
• Software quality metrics are standard
measurements of software quality.
• Class discussion: why do we need software
metrics?
Software Quality Metrics
• Requirements metrics
• Design metrics
• Implementation metrics
• System metrics
• Object-oriented software quality metrics
Requirements Metrics
Requirements Unambiguity: Q1 = (# of uniquely interpreted requirements) / (# of requirements)
Requirements Completeness: Q2 = (# of unique functions) / (# of combinations of states and stimuli)
Requirements Metrics
Requirements Correctness: Q3 = (# of validated correct requirements) / (# of requirements)
Requirements Consistency: Q4 = (# of non-conflicting requirements) / (# of requirements)
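As an illustrative calculation (numbers assumed, not from the slides): if 95 of 100 requirements have a single agreed interpretation, Q1 = 95/100 = 0.95; if 4 of the 100 conflict with some other requirement, Q4 = 96/100 = 0.96. Values near 1 indicate a healthier specification.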
Design Metric Fan-In
• Number of incoming messages or control flows into a module.
• Measures the dependencies on the module.
• High fan-in signifies a “god module” or “god class.”
• The module may have been assigned too many responsibilities.
• It may indicate low cohesion.
Design Quality Metrics: Fan-In
[Figure: structure chart in which modules M1–M13 all send messages to DBMgr; DBMgr has fan-in 13.]
Design Quality Metrics Fan-Out
• Number of outgoing messages of a module.
• Measures the dependencies of this module on other modules.
• High fan-out means it will be difficult to reuse the module because the other modules must also be reused.
Quality Metrics: Fan-Out
[Figure: structure chart in which the GUI module sends messages to modules M1–M13; GUI has fan-out 13.]
Modularity
• Modularity is measured by cohesion and coupling metrics.
• Modularity = (a × Cohesion + b × Coupling) / (a + b), where a and b are weights on the cohesion and coupling scores.
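As an illustrative calculation (weights and scores assumed, with both scores on a 0–1 scale where higher is better, i.e., the coupling score rewards looser coupling): with a = 2, b = 1, Cohesion = 0.8, and Coupling = 0.6, Modularity = (2 × 0.8 + 1 × 0.6) / 3 ≈ 0.73.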
Cohesion
• Each module implements one and only one
functionality.
• Ranges from the lowest coincidental cohesion
to the highest functional cohesion.
• High cohesion enhances understanding, reuse
and maintainability.
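A minimal sketch of the contrast in Java (hypothetical classes):

    // Functional cohesion: the module does one thing.
    class InterestCalculator {
        static double monthlyInterest(double balance, double annualRate) {
            return balance * annualRate / 12.0;
        }
    }

    // Coincidental cohesion: unrelated utilities grouped by accident,
    // which hurts understanding, reuse, and maintenance.
    class MiscUtils {
        static double monthlyInterest(double b, double r) { return b * r / 12.0; }
        static String trimName(String s) { return s.trim(); }
        static void beep() { System.out.print('\u0007'); }
    }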
Coupling
• An interface mechanism used to relate
modules.
• It measures the degree of dependency.
• It ranges from low data coupling to high
content coupling.
• High coupling increases uncertainty about run-time effects and makes the modules difficult to test, reuse, maintain, and change.
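A minimal sketch of the two ends of the scale in Java (hypothetical classes; common coupling stands in for the tight end, since true content coupling cannot be written in Java):

    // Loose, data coupling: modules share only parameter values.
    class TaxCalculator {
        static double tax(double amount, double rate) { return amount * rate; }
    }

    // Tight, common coupling: modules communicate through shared mutable
    // state, so the result of Checkout.total() depends on who last wrote
    // Globals.rate – harder to test, reuse, and change safely.
    class Globals { static double rate = 0.08; }
    class Checkout {
        static double total(double amount) { return amount * (1 + Globals.rate); }
    }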
Module Design Complexity mdc
• The number of integration tests required to integrate a module with its subordinate modules.
• mdc = d + 1, where d is the number of decisions that condition calls to subordinate modules.
[Figure: structure chart with M0 at the root, children M1–M3, and grandchildren M4–M7; the example module has mdc = 4.]
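A minimal sketch in Java (hypothetical dispatcher module): three decisions guard the calls to subordinate modules, so d = 3 and mdc = 3 + 1 = 4.

    class Dispatcher {
        interface Handler { void handle(String request); }
        Handler queryModule = r -> {}, updateModule = r -> {}, adminModule = r -> {};

        // d = 3 decisions condition the subordinate calls: mdc = d + 1 = 4.
        void dispatch(String request) {
            if (request.startsWith("Q")) queryModule.handle(request);  // decision 1
            if (request.startsWith("U")) updateModule.handle(request); // decision 2
            if (request.startsWith("A")) adminModule.handle(request);  // decision 3
        }
    }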
Design Complexity S0
• Number of subtrees in the structure chart with
module M as the root.
• S0(leaf) = 1
• S0(M) = mdc(M) + S0(child_1 of M) + ... + S0(child_n of M)
[Figure: the same structure chart annotated with S0 values – each leaf M4–M7 has S0 = 1; the three middle modules each have S0 = 3 (mdc = 1 for the one with two leaf children, mdc = 2 for the other two); the root M0 has mdc = 2, giving S0 = 2 + 3 + 3 + 3 = 11.]
Integration Complexity
• Minimal number of integration test cases
required to integrate the modules of a structure
chart.
• Integration complexity S1 = S0 – n + 1, where
n is the number of modules in the structure
chart.
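Worked example, using the structure chart above: S0 = 11 and n = 8 modules (M0–M7), so S1 = 11 – 8 + 1 = 4 integration test cases.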
Implementation Metrics
• Defects/KLOC
• Pages of documentation/KLOC
• Number of comment lines/LOC
• Cyclomatic complexity
– equals the number of binary predicates + 1
– measures the difficulty of comprehending a function/module
– measures the number of test cases needed to cover all independent paths (basis paths)
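A minimal sketch in Java (hypothetical function): two binary predicates give cyclomatic complexity 2 + 1 = 3, so three basis paths must be covered.

    class Grader {
        // Two binary predicates, so cyclomatic complexity = 2 + 1 = 3.
        static String grade(int score) {
            String label = "fail";
            if (score >= 60) label = "pass";      // predicate 1
            if (score >= 90) label = "excellent"; // predicate 2
            return label;
        }
    }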
What is the cyclomatic complexity of this module? [The flow graph for this exercise is not reproduced in the transcript.]
Depth of Inheritance Tree
• Distance from a derived class to the root class in the inheritance hierarchy.
• Measures:
– the degree of reuse through inheritance
– the difficulty of predicting the behavior of a class
– costs associated with regression testing, due to the change impact of a parent class on its descendant classes
High DIT Means Hard to Predict Behavior
• All three classes in the hierarchy define print().
• At a call site typed to the base class, it is difficult to determine which print() is used:

public static void main(String[] args) {
    Shape p = …; // elided: p is assigned a Shape, Box, or Square
    p.print();   // which print()? chosen by dynamic dispatch
    …
}

[Class diagram: an inheritance hierarchy of Shape, Box, and Square, each defining its own print().]
[Figure: a deep inheritance chain ClassA through ClassE, with m() overridden at several levels. A change to a parent class requires retesting of its subclasses.]
Negative Tests - HA Design Criteria
5 9’s – Out of Service no more than 5 Minutes per Year [Up 99.999% of the Time]
Availability = MTTF / [MTTF + MTTR]
MTTF ≈ MTBF
E.g., 0.99999 ≈ 525,960 minutes / (525,960 + 5) minutes, where 365.25 × 24 × 60 = 525,960 minutes per year
MTBF = Mean Time Between Failures, MTTR = Mean Time to Recovery
Reference: "Big iron lessons, Part 2: Reliability and availability: What's the difference?" – apply RAS architecture lessons to the autonomic Self-CHOP roadmap.
Availability is the ratio of the time the system is available for use to the total time (time available plus time unavailable).
Some variation exists in the definitions of MTTF, MTBF, MTTR:
– MTBF = MTTF + MTTR [in the book]
– MTTF is simply the time to failure (observed in accelerated lifetime / stress tests)
– MTBF is often stated without making clear whether it includes MTTR or not
– Recovery time is typically small enough to be negligible, but if it is long, the difference matters more
– Just be clear on the definitions [they can vary]
Availability is often quoted as the % of time the system is available for use each year (or any other period of time).
[Timeline figure: the system alternates between available and recovery periods. MTTF is the available time from the end of one recovery to the next failure, MTTR is the recovery time after a failure, and MTBF spans from Failure #1 to Failure #2, so MTBF = MTTF + MTTR.]
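A minimal sketch of the formula in Java (illustrative values, matching the five 9's example above):

    class Availability {
        // Availability = MTTF / (MTTF + MTTR), per the formula above.
        static double availability(double mttfMinutes, double mttrMinutes) {
            return mttfMinutes / (mttfMinutes + mttrMinutes);
        }
        public static void main(String[] args) {
            double minutesPerYear = 365.25 * 24 * 60; // 525,960
            // Five 9's: at most 5 minutes of downtime per year.
            System.out.println(availability(minutesPerYear, 5.0)); // ≈ 0.99999
        }
    }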
Usefulness of Quality Metrics
1. Definition and use of indicators.
2. Directing valuable resources to critical areas.
3. Quantitative comparison of similar projects and systems.
4. Quantitative assessment of improvement, including process improvement.
5. Quantitative assessment of technology.