TRANSCRIPT
August 2014 © 2014 Carnegie Mellon University
(Mis-)Communicating with Data: A Fresh Look at Measurement for Software Development and Acquisition
Forrest Shull and Dave Zubrow
August 2014
Copyright 2014 Carnegie Mellon University and IEEE

This material is based upon work funded and supported by the Department of Defense under Contract No. FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Department of Defense.

NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT.

This material has been approved for public release and unlimited distribution. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the copyright license under the clause at 252.227-7013 and 252.227-7013 Alternate I.

This material was prepared for the exclusive use of IEEE Computer Society webinar series participants and may not be used for any other purpose without the written consent of [email protected].

DM-0001543
Goals
• Share experiences in the application of software measurement for large projects.
• Discuss some implications of the state of the practice for research.
• Present my observations on recent trends in research, in particular:
  - Semi-automated pattern detection (Technical Debt example)
  - QUELCE: early lifecycle program cost estimates
A little background…
The experiences I’ll discuss in this talk:
• Reflect a particular perspective on software measurement – its use in managing software development and acquisition
• Are drawn from several years’ engagement with customers, including large, government-sponsored, software-intensive systems
Measurement is a primary means of communicating program progress. For example:
• Requirements volatility
• Requirements traceability and flowdown
• Planned vs. actual completion dates
• Planned vs. actual technical reviews
• Open vs. closed defects over time
• Status by system component
• Schedule slippage
• Earned Value Management
• Etc., etc., etc.
Measurement is also of increasing importance for management and oversight
E.g., recent initiatives within the Defense Industrial Base… • Better Buying Power 2.0
• Performance of the Defense Acquisition System reports
• NDIA’s 2014 Acquisition Reform Initiative
Mis-communications…
1. Contractors who want to provide charts only.
Data in charts are notional.
Mis-communications…
2. Using a sophisticated measurement approach… but only reporting the most optimistic estimates.
(Chart annotations: "Expected completion date in this range" / "Very likely not the completion date!")
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair, US author
Data in charts are notional.
Mis-communications…
3. Reporting data that look good on the surface… but defer the hard choices.
photo by OneEighteen on Flickr http://www.fotopedia.com/items/flickr-2923039651
Defects are being closed at a healthy rate!
But where are they going?
Data in charts are notional.
Mis-communications…
4. Detailed plans that are met in practice – but still seem too good to be true.
“… and then a miracle occurs… ”
Data in charts are notional.
Mis-communications…
5. Changing measurement units or granularity mid-stream.
(Chart labels from the slide: People vs. Team; System Elements; Data vs. Cumulative Data by Period.)
Mis-communications…
Common themes:
• Choosing inappropriate metrics
• Realizing too late which metrics we really want
• Relying on metrics that have already been "massaged"
Implications:
• A need to better leverage past experience
• A need for (semi-)automated approaches that better package measurement techniques
Research Questions
Given these difficulties in the state of the practice, software measurement continues to be an area of ongoing research.
• How to improve the accuracy, fidelity, and insight associated with development metrics?
  - Especially with respect to tracking progress and risks
• How to appropriately leverage ever more available data?
Empirical Software Engineering (EMSE): What is it?
Field of research that
• Focuses on using studies of all kinds to accumulate knowledge about software engineering
• Uses methods such as experiments, case studies, surveys, and statistical analysis
  - In short: gathering observable, empirical, and measurable evidence
• Is based on applying the scientific method to software engineering
The legacy of EMSE has been to make SE more of a truly "engineering" field
• But we're not there yet.
A Researcher’s Perspective
In addressing such issues, Empirical Software Engineering has had successes
• Important and rigorous studies
• Increasing penetration in CS research
But it has been slow
• Practices become adopted or obsolete without us
• Recent trends are exacerbating the problem
• Are we losing the chance to impact practice?
Empirical software engineering version 2.0 needs to shorten the feedback loop:
• Good results
• When they can be most useful
Empirical SE v2.0: What is it?
EMSE 1.0 marked by:
• Case studies
• Experiments
• Literature studies
EMSE 2.0 is incorporating:
• A recognition of value propositions
• Better pattern detection in big and rich datasets
• Aggregating studies for more robust results
Proposed in a technical briefing at ICSE 2011 by Menzies & Shull
Goals
• Share experiences in the application of software measurement for large projects.
• Discuss some implications of the state of the practice for research.
• Present my observations on recent trends in research, in particular:
  - Semi-automated pattern detection (Technical Debt example)
  - QUELCE: early lifecycle program cost estimates
Semi-automated detection of Technical Debt
Metaphor first articulated by Ward Cunningham, 1992
Recognizes that developers make tradeoffs:
• Defer investing effort toward a long-term good (e.g., refactoring) in order to achieve short-term goals (e.g., on-time delivery)
• Such decisions can incur costs over time ("interest")
• Need to track such decisions explicitly and "pay them off" when it makes sense to do so
Potentially many types, e.g.:
• Design pattern decay ("grime")
• Architectural breakdown
• Deferred documentation
• Deferred testing
• Lack of conformance to good OO design principles (code smells)
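To make the "interest" idea concrete, here is a minimal, hypothetical sketch of how a team might record debt items and flag when accumulated interest overtakes the payoff cost. The record format and the linear interest model are illustrative assumptions, not part of the original talk.

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    """One consciously deferred task (hypothetical record format)."""
    name: str
    principal_hours: float              # effort to pay the debt off now
    interest_hours_per_release: float   # extra effort each release while unpaid
    releases_unpaid: int = 0

    def accrued_interest(self) -> float:
        # Simple linear interest model; real interest is rarely this tidy.
        return self.interest_hours_per_release * self.releases_unpaid

    def worth_paying_off(self) -> bool:
        # Heuristic: pay off once accrued interest exceeds the principal.
        return self.accrued_interest() > self.principal_hours

debt = DebtItem("Skipped refactoring of OrderProcessor",
                principal_hours=40, interest_hours_per_release=6, releases_unpaid=8)
print(debt.accrued_interest(), debt.worth_paying_off())  # 48.0 True
```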
A semi-automated process
Adapted from: Nico Zazworka, Victor Basili, Forrest Shull: "Tool Supported Detection and Judgment of Nonconformance in Process Execution", 3rd International Symposium on Empirical Software Engineering and Measurement (ESEM), 2009
1. Define non-conformance rules
2. Detect non-conformance (against the code repository, change/defect tracker, etc.)
3. Judge risks
4a. Implement project changes
4b. Adjust non-conformance rules
Case Study
IT development company
• Web-based applications in C# (using .NET and Visual Studio)
• Mature processes
Business goal:
• Reduce software maintenance costs
A hypothesized strategy, which is being applied:
• Have projects use a common reference architecture
• Refactor frequently
Large repositories that can be mined for data:
• Code and other documents under CM
• Changes and defects tracked in a tool
Reported in: Nico Zazworka, Michele A. Shaw, Forrest Shull, Carolyn Seaman: "Investigating the Impact of Design Debt on Software Quality," MTD 2011: 2nd Workshop on Managing Technical Debt at ICSE, Honolulu, Hawaii, USA, 2011.
Example metrics-based rule
If the team is refactoring as needed, we should see few instances of "god classes"
Marinescu, R. 2004. Detection Strategies: Metrics-Based Rules for Detecting Design Flaws. In Proceedings of the 20th IEEE international Conference on Software Maintenance (September 11 - 14, 2004). ICSM. IEEE Computer Society, Washington, DC, 350-359.
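As an illustration only (not the study's tooling), a detection strategy in Marinescu's style can be written as a simple predicate over class-level metrics. The thresholds below are commonly cited textbook values used here as assumptions, and the metric data are invented.

```python
# Illustrative sketch of a metrics-based "god class" detection rule,
# in the style of Marinescu's detection strategies.
FEW = 5            # assumed threshold for Access To Foreign Data (ATFD)
VERY_HIGH = 47     # assumed threshold for Weighted Methods per Class (WMC)
ONE_THIRD = 1 / 3  # threshold for Tight Class Cohesion (TCC)

def is_god_class(atfd: int, wmc: int, tcc: float) -> bool:
    """Flag a class that uses many foreign attributes, is very complex,
    and has low cohesion."""
    return atfd > FEW and wmc >= VERY_HIGH and tcc < ONE_THIRD

# Hypothetical metrics extracted by a static analysis tool:
classes = {
    "OrderProcessor": (12, 88, 0.10),
    "Invoice":        (2, 15, 0.60),
}
flagged = [name for name, (a, w, t) in classes.items() if is_god_class(a, w, t)]
print(flagged)  # ['OrderProcessor']
```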
Visualizing the Results
Supports the discussion: Is there a good reason why we had a non-standard architecture here?
If no, let’s fix. If yes, let’s find something else to worry about.
Source: Jan Schumacher, Nico Zazworka, Forrest Shull, Carolyn B. Seaman, Michele A. Shaw: Building empirical support for automated code smell detection. ESEM 2010
Closing the circle: Testing results
Assumption: maintenance effort can be estimated by how often a class has been changed
• Rationale: classes that have to be changed often, e.g., with each revision, are indicators of maintenance bottlenecks
=> God classes are 5-7 times more change prone
Revision                  1452    1457    1471    1472    1424    Likelihood
Changed god classes       0/4     1/4     1/4     2/4     2/4     0.300
Changed non-god classes   1/223   4/223   6/225   4/225   2/225   0.015

Example change likelihood for god classes and non-god classes
Reported in: Nico Zazworka, Michele A. Shaw, Forrest Shull, Carolyn Seaman: "Investigating the Impact of Design Debt on Software Quality," MTD 2011: 2nd Workshop on Managing Technical Debt at ICSE, Honolulu, Hawaii, USA, 2011.
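The likelihood column above can be reproduced directly from the table: it is the mean fraction of (non-)god classes changed per sampled revision. A short sketch using the table's own numbers:

```python
from statistics import mean

# Fractions of classes changed in each sampled revision (from the table above).
god     = [0/4, 1/4, 1/4, 2/4, 2/4]
non_god = [1/223, 4/223, 6/225, 4/225, 2/225]

god_likelihood = mean(god)          # 0.300
non_god_likelihood = mean(non_god)  # ~0.015
print(round(god_likelihood, 3), round(non_god_likelihood, 3))
# Ratio in this particular sample; the study's overall finding was 5-7x.
print(f"god classes ~{god_likelihood / non_god_likelihood:.0f}x more change prone here")
```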
Closing the circle: Testing results
Is the defect likelihood of god classes higher than for non-god classes?
Data: JIRA bugs are linked to Subversion change sets (= classes that were part of the bug fix)
=> God classes are 4-17 times more defect prone

Defect (JIRA issue)   J-166        J-161   J-377   J-396          J-228   Likelihood
Fix revisions         9097, 9098   8939    11990   12842, 12844   10269
God classes           1/3          0/1     0/8     3/8            0/3     0.1417
Non-god classes       0/94         1/94    1/156   0/157          1/101   0.0067

Example defect-fix likelihood for god classes and non-god classes
Reported in: Nico Zazworka, Michele A. Shaw, Forrest Shull, Carolyn Seaman: "Investigating the Impact of Design Debt on Software Quality," MTD 2011: 2nd Workshop on Managing Technical Debt at ICSE, Honolulu, Hawaii, USA, 2011.
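The bug-to-change-set linking described above is commonly done by scanning commit messages for issue keys. The sketch below shows that idea with a hypothetical log format, not the study's actual tooling.

```python
import re

# Hypothetical Subversion log entries: (revision, commit message, files touched).
svn_log = [
    (9097, "J-166: fix rounding bug in OrderProcessor", ["OrderProcessor.cs", "Invoice.cs"]),
    (9098, "follow-up for J-166", ["OrderProcessor.cs"]),
    (8939, "J-161 null check in ReportView", ["ReportView.cs"]),
]

ISSUE_KEY = re.compile(r"\bJ-\d+\b")  # assumed issue-key format

def link_issues_to_changesets(log):
    """Map each JIRA issue key found in a commit message to the revisions
    (and files) of its fix, mirroring the linking used in the study."""
    links = {}
    for revision, message, files in log:
        for key in ISSUE_KEY.findall(message):
            links.setdefault(key, []).append((revision, files))
    return links

print(link_issues_to_changesets(svn_log))
```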
Goals
• Share experiences in the application of software measurement for large projects.
• Discuss some implications of the state of the practice for research.
• Present my observations on recent trends in research, in particular:
  - Semi-automated pattern detection (Technical Debt example)
  - QUELCE: early lifecycle program cost estimates
Improving the Estimating Process
Estimating often looks like the picture below.* When we prepare the estimate, we think about:
• Scope & size: the number (and size) of deliverables we have to produce + the number of services that must be developed and readied + the number of requirements we must address
• Complexity: factors that drive change, and the need for inventing new things
• Composition: project structure, communications, and productivity
We make many assumptions to prepare an early estimate.
* Graphic from David Herron, David Consulting Group
Motivation for QUELCE
Known-unknowns
• Each decision, and especially each of our assumptions, has the potential to be wrong or at least subject to change. Thus each represents a "known-unknown."
Capture the probability of change and the change effects
• We can get data about the probability of change from past experience and reference data.
Adjusting an estimate by probability of change
• We should be able to estimate the probability of change and cascade effects to develop a formal probabilistic method that adds a range to the estimate.
Include assessment of risk
• The scenarios devised to assess change effects tell us which risk-based scenarios have the potential to require formal mitigation strategies, because they will create risks that exceed the boundaries set by contingency budgets.
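As a toy illustration of "adjusting an estimate by probability of change" (my sketch, not QUELCE itself): sample which assumptions flip, apply their cost impacts, and report a range instead of a point. The assumptions, probabilities, and multipliers are invented.

```python
import random

# Hypothetical assumptions: (name, probability the assumption fails,
# cost multiplier applied if it does).
assumptions = [
    ("scope stays stable",      0.30, 1.25),
    ("key technology matures",  0.20, 1.40),
    ("funding arrives on time", 0.15, 1.10),
]
BASE_ESTIMATE = 100.0  # e.g., person-months

def one_scenario(rng):
    cost = BASE_ESTIMATE
    for _name, p_fail, multiplier in assumptions:
        if rng.random() < p_fail:
            cost *= multiplier
    return cost

rng = random.Random(42)
samples = sorted(one_scenario(rng) for _ in range(10_000))
p10, p50, p90 = (samples[int(len(samples) * q)] for q in (0.10, 0.50, 0.90))
print(f"estimate range: P10={p10:.0f}, P50={p50:.0f}, P90={p90:.0f}")
```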
Information Flow for Early Lifecycle Estimation (diagram summary):
• Program artifacts feed the model: Mission / CONOPS and Capability Based Analysis; KPP selection, systems design, and sustainment issues; production quantity, acquisition management, scope definition/responsibility, and contract award.
• These, along with the Technology Development Strategy, operational capability trade-offs, system characteristics trade-offs, the proposed material solution & analysis of alternatives, and information from analogous programs/systems, inform expert judgments about program execution change drivers.
• Driver states and probabilities feed probabilistic modeling (BBN) and Monte Carlo simulation, producing program execution scenarios with conditional probabilities of drivers/states.
• Combined with plans, specifications, and assessments, and with estimation methods (analogy, parametric, engineering, CERs), these scenarios yield cost estimates.
Quantifying the Uncertainty of Cost Estimation Inputs and Resulting Estimates
Novel aspects of the approach:
• Explicit identification of domain-specific program change drivers
• Unique application of Dependency Structure Matrix techniques for cost estimation
• BBN modeling of a larger number of program change drivers for estimation than previous research
• Scenario modeling of alternate program executions to assess the influence of various underlying assumptions
• Monte Carlo simulation applied to estimation input parameters rather than output values
The method in five steps:
1. Identify change drivers & states
2. Reduce cause-and-effect relationships via Dependency Structure Matrix techniques
3. Assign conditional probabilities to BBN model
4. Calculate cost factor distributions for program execution scenarios
5. Monte Carlo simulation to compute cost distribution
Step 1: Identify Change Drivers and States
Change Driver: Scope Definition
  Nominal state: Stable
  Alternative states: users added; additional (foreign) customer; additional deliverable (e.g., training & manuals); production downsized; scope reduction (funding reduction)
Change Driver: Mission / CONOPS
  Nominal state: As defined
  Alternative states: new condition; new mission; new echelon; program becomes Joint
Change Driver: Capability Definition
  Nominal state: Stable
  Alternative states: addition; subtraction; variance; trade-offs (performance vs. affordability, etc.)
Change Driver: Funding Schedule
  Nominal state: Established
  Alternative states: funding delays tie up resources (e.g., operational test); FFRDC ceiling issue; funding change for end of year; funding spread out; obligated vs. allocated funds shifted
Change Driver: Advocacy Change
  Nominal state: Stable
  Alternative states: joint service program loses participant; senator did not get re-elected; change in senior Pentagon staff; advocate requires change in mission scope; service owner different than CONOPS users
Change Driver: Closing Technical Gaps (CBA)
  Nominal state: Selected trade studies are sufficient
  Alternative states: technology does not achieve satisfactory performance; technology is too expensive; selected solution cannot achieve desired outcome; technology not performing as expected; new technology not testing well
(Additional drivers elided on the original slide.)
Domain-Specific Program Change Drivers Identified
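For the later steps, each driver and its states must be machine-readable. A minimal sketch (my encoding, not QUELCE's) using a few of the drivers above:

```python
# Minimal encoding of change drivers and their states (illustrative only).
# State 0 is the nominal state; later steps attach probabilities to these.
change_drivers = {
    "Scope Definition": ["Stable", "Users added", "Additional (foreign) customer",
                         "Additional deliverable", "Production downsized", "Scope reduction"],
    "Mission / CONOPS": ["As defined", "New condition", "New mission",
                         "New echelon", "Program becomes Joint"],
    "Funding Schedule": ["Established", "Funding delays tie up resources",
                         "FFRDC ceiling issue", "Funding spread out"],
}

for driver, states in change_drivers.items():
    print(f"{driver}: nominal={states[0]!r}, {len(states) - 1} alternative states")
```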
Step 2: Reduce Cause and Effect Relationships via Dependency Structure Matrix Techniques
Change Drivers - Cause & Effects Matrix
(The original slide shows a 34x34 cause-and-effect matrix over the change drivers: Mission / CONOPS; Change in Strategic Vision; Capability Definition; Advocacy Change; Closing Technical Gaps (CBA); Building Technical Capability & Capacity (CBA); Interoperability; Systems Design; Interdependency; Functional Measures; Scope Definition; Functional Solution Criteria; Funding Schedule; Acquisition Management; Program Mgt - Contractor Relations; Project Social / Dev Env; Prog Mgt Structure; Manning at program office; Scope Responsibility; Standards/Certifications; Supply Chain Vulnerabilities; Information sharing; PO Process Performance; Sustainment Issues; Contract Award; Production Quantity; Data Ownership; Industry Company Assessment; Cost Estimate; Test & Evaluation; Contractor Performance; Size; Project Challenge; Product Challenge. Rows are causes and columns are effects; cell values weight the strength of each effect, and row/column totals identify the most influential drivers.)
Capturing interrelationships among change drivers and reducing the complexity of the network
Step 2: Reduce Cause and Effect Relationships via Dependency Structure Matrix Techniques
Reduction of the complexity of the network from 57 to 29 Program Change Drivers
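A rough sketch of the kind of matrix analysis involved (assumed logic for illustration; real DSM techniques such as partitioning and tearing are richer than this): represent the cause-and-effect weights as a matrix, then prune drivers whose total influence falls below a threshold.

```python
import numpy as np

# Toy cause-and-effect matrix: rows are causes, columns are effects,
# values weight how strongly one driver affects another (invented data).
drivers = ["Mission/CONOPS", "Systems Design", "Funding Schedule", "Contract Award"]
dsm = np.array([
    [0, 3, 0, 1],
    [0, 0, 2, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])

influence = dsm.sum(axis=1) + dsm.sum(axis=0)  # total out- plus in-influence
THRESHOLD = 4  # assumed pruning threshold
kept = [d for d, score in zip(drivers, influence) if score >= THRESHOLD]
print(kept)  # ['Mission/CONOPS', 'Systems Design', 'Funding Schedule']
```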
Step 3: Assign Conditional Probabilities to BBN Model
(BBN diagram nodes: Product Challenge, Size Growth, Mission & CONOPS, Capability Analysis, Acquisition Strategy, Tech. Dev. Strategy, Production Quantity, Contract, KPPs, System Design, Logistics & Support, Project Challenge)
(Two further slides show the BBN model with conditional probability tables assigned to each node.)
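To make Step 3 concrete, here is a minimal sketch of a BBN with hand-assigned conditional probabilities, using the open-source pgmpy library (BayesianNetwork in recent versions). The structure, states, and numbers are all hypothetical stand-ins for elicited expert judgments.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Tiny stand-in network: a mission change can disturb the system design,
# which in turn drives the "product challenge" cost factor.
model = BayesianNetwork([("Mission", "SystemDesign"),
                         ("SystemDesign", "ProductChallenge")])

# P(Mission): 0 = nominal, 1 = changed (assumed prior).
cpd_mission = TabularCPD("Mission", 2, [[0.8], [0.2]])
# P(SystemDesign | Mission): design is far more likely to churn if the mission changes.
cpd_design = TabularCPD("SystemDesign", 2,
                        [[0.9, 0.3],   # stable
                         [0.1, 0.7]],  # reworked
                        evidence=["Mission"], evidence_card=[2])
# P(ProductChallenge | SystemDesign) on a 3-point scale (low/medium/high).
cpd_challenge = TabularCPD("ProductChallenge", 3,
                           [[0.6, 0.1],
                            [0.3, 0.4],
                            [0.1, 0.5]],
                           evidence=["SystemDesign"], evidence_card=[2])
model.add_cpds(cpd_mission, cpd_design, cpd_challenge)
assert model.check_model()

# A "scenario" is a set of evidence, e.g., mission known to have changed:
print(VariableElimination(model).query(["ProductChallenge"], evidence={"Mission": 1}))
```

Step 4's program execution scenarios then amount to querying such a model under different evidence settings and reading off the resulting cost-factor distributions.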
Step 4: Calculate Cost Factor Distributions for Program Execution Scenarios
An example scenario with 4 drivers in nominal state
The BBN model enables computation of the effects of different program execution scenarios on cost model factors
Step 5: Connecting BBNs to Cost Estimation Models
Understand and analyze cost model input factors. Empirical analysis from a repository is used as the basis to map the scale (XL ... EH) of the original cost model input factors to the scale (1...5) of the BBN output factors.
(Mapping tables on the slide: Product Challenge factors cover the COCOMO scale factors PREC, FLEX, and RESL and the effort multipliers RCPX, PDIF, and RUSE; Project Challenge factors cover the scale factors TEAM and PMAT and the effort multipliers PERS, PREX, FCIL, and SCED. Each parameter's XL-EH ratings map onto values 1-5.)
Group similar input factors based on empirical analysis in task 3.
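A sketch of what such a mapping might look like in code. The level-to-rating assignments here are invented; the slide's exact table did not survive extraction, and the real mapping was derived empirically per parameter.

```python
# Hypothetical mapping from BBN output levels (1..5) to COCOMO II rating
# symbols for a few parameters; some parameters use only a subset of levels.
BBN_TO_COCOMO = {
    "PREC": {1: "XL", 3: "N", 5: "EH"},
    "RESL": {1: "VL", 2: "L", 3: "N", 4: "H", 5: "VH"},
    "SCED": {1: "VL", 3: "N", 5: "VH"},
}

def to_cocomo_rating(parameter: str, bbn_level: int) -> str:
    """Map a BBN output level to the nearest defined COCOMO rating."""
    table = BBN_TO_COCOMO[parameter]
    nearest = min(table, key=lambda lvl: abs(lvl - bbn_level))
    return table[nearest]

print(to_cocomo_rating("RESL", 4))  # 'H'
```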
Step 5: Monte Carlo Simulation to Compute Cost Distribution
Monte Carlo simulation using the program change factor distributions uses uncertainty on the input side to determine the cost estimate distribution.
(Diagram: BBN output levels are mapped to COCOMO values, which feed the simulation.)
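A self-contained sketch of that final step, under assumptions: a COCOMO II-style effort equation with illustrative constants, and invented factor distributions standing in for the BBN outputs (QUELCE's actual parameterization follows the calibrated model). Sample effort-multiplier values from the distributions and accumulate an effort distribution.

```python
import random
import statistics

# Assumed COCOMO II-style effort equation: PM = A * size^E * product(effort multipliers).
A, E = 2.94, 1.10   # illustrative constants
SIZE_KSLOC = 120.0

# BBN-derived distributions over multiplier values for two factors
# (values and probabilities are invented for illustration).
factor_distributions = {
    "RCPX": [(0.87, 0.2), (1.00, 0.5), (1.21, 0.3)],
    "SCED": [(1.00, 0.6), (1.14, 0.4)],
}

def sample_multiplier(dist, rng):
    r, acc = rng.random(), 0.0
    for value, prob in dist:
        acc += prob
        if r <= acc:
            return value
    return dist[-1][0]

rng = random.Random(7)
efforts = []
for _ in range(10_000):
    em = 1.0
    for dist in factor_distributions.values():
        em *= sample_multiplier(dist, rng)
    efforts.append(A * SIZE_KSLOC ** E * em)

efforts.sort()
print(f"median={statistics.median(efforts):.0f} PM, "
      f"P10={efforts[1000]:.0f}, P90={efforts[9000]:.0f}")
```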
For More Information
QUELCE Technical Report: http://www.sei.cmu.edu/library/abstracts/reports/11tr026.cfm
SEI Blog (http://blog.sei.cmu.edu):
• "Improving the Accuracy of Early Cost Estimates for Software-Reliant Systems, First in a Two-Part Series"
• "A New Approach for Developing Cost Estimates in Software Reliant Systems, Second in a Two-Part Series"
• "Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE): An Update"
Journal of Software Technology (http://journal.thedacs.com/issue/64/207):
• "An Innovative Approach to Quantifying Uncertainty in Early Lifecycle Cost Estimation"
Lessons Learned
EMSE v2.0 combines the strengths of:
• Top-down prioritization of goals and factors worth investigating, because analysis unguided by domain knowledge can yield results that are
  - uninteresting,
  - un-actionable,
  - or just plain weird
• Bottom-up examination of large datasets to find important patterns
  - Enables analyses to scale up to a size and speed that could never be achieved manually
  - Analysis is now automatable, repeatable, auditable
Automation helps keep measurement programs in place and make them actionable
• All teams have intuition ("hypotheses") about important factors, but they are often missing the tools to test them.
Contact Information
Presenters / Points of Contact
Forrest Shull: [email protected]
Dave Zubrow: [email protected]
U.S. Mail:
Software Engineering Institute
Customer Relations
4500 Fifth Avenue
Pittsburgh, PA 15213-2612, USA
Web: www.sei.cmu.edu | www.sei.cmu.edu/contact.cfm
Customer Relations Email: [email protected]
SEI Phone: +1 412-268-5800
SEI Fax: +1 412-268-6257