TRANSCRIPT
This material is approved for public release. Distribution is limited by the Software Engineering Institute to attendees.
Sponsored by the U.S. Department of Defense. © 2005 by Carnegie Mellon University. Version 1.0, page 1
Pittsburgh, PA 15213-3890
Bridging the Gap Between CMMI and Six Sigma Training: An Overview and Case Study of Performance-Driven Process Analysis
Jeannine Siviy, SEI; Dave Hallowell, Six Sigma Advantage
© 2005 by Carnegie Mellon University Version 1.0 page 2
Objectives
During this tutorial, we will describe
• motivation and design principles for an analysis course that serves both CMMI and Six Sigma
• a problem-solving methodology and its relationship to CMMI, plus a selection of its steps and analytical tools
• a cost and schedule variance reduction case study

At the completion of this tutorial, you should be able to explain
• considerations for building your own training courses (if you need to do so)
• an overview of the DMAIC methodology
• performance-driven process improvement
• process variation
• Y-to-x flowdown (or "critical to quality" flowdown)
• performance-driven subprocess selection
• how to examine and improve the quality of your data set
• baselining and "the basic tools"
• an example of a process performance model
© 2005 by Carnegie Mellon University Version 1.0 page 3
Outline
Context: The Value Proposition
• our (your) multi-initiative reality
• Six Sigma as a strategic enabler
• measurement & analysis as an integrating platform
Approach to building integrated training
A roadmap for performance-driven improvement
Roadmap connections to CMMI
Roadmap execution: a case study overview
Roadmap execution: demystifying steps and methods
Summary
© 2005 by Carnegie Mellon University Version 1.0 page 4
What Drives Process Improvement?
Performance issues: product, project—and, eventually, process issues
Regulations and mandates
• Sarbanes-Oxley
• "Level 3" requirements to win contracts

Business issues and "burning platforms"
• lost market share or contracts
• continuous cost and cycle time improvement
• capitalizing on new opportunities

There is compliance-driven improvement, and there is performance-driven improvement.
© 2005 by Carnegie Mellon University Version 1.0 page 5
Many Solutions
What solutions is your organization implementing? How do they support your organization's mission?

CMMI®, EIA 731, TSP(SM), ISO 12207, Scorecard, EIA 632, ISO 9000, ITIL, COBIT, PSM, GQIM, RUP, Agile, Section 804
© 2005 by Carnegie Mellon University Version 1.0 page 6
Implementation Considerations
Many organizations are implementing one or more models, standards, or technologies simultaneously.

Selection and development considerations include:
• What is the goal?
• What model(s) or references should be used?
• Should they be implemented in parallel or sequentially?
• Can they be used "off-the-shelf" or is tailoring needed?
• What needs to be created internally?
Integrated process solutions that are seamless and transparent to the engineer in the field significantly contribute to an organization’s success.
© 2005 by Carnegie Mellon University Version 1.0 page 7
CMMI Staged and Six Sigma

[Figure: the five CMMI staged maturity levels, annotated with Six Sigma touch points.]
• Maturity levels: 1 Initial (process unpredictable and poorly controlled), 2 Managed (process characterized for projects and is often reactive), 3 Defined (process characterized for the organization and is proactive), 4 Quantitatively Managed (process measured and controlled), 5 Optimizing (focus on process improvement).
• Six Sigma annotations: 6σ "drilldown" drives local (but threaded) improvements; 6σ may drive toward and accelerate the CMMI solution; infrastructure in place and defined processes feed 6σ; organization-wide 6σ improvements and control; correlation between key process areas & 6σ methods; 6σ used within CMM efforts; 6σ philosophy & method focus.

Six Sigma is enterprise wide. Six Sigma addresses product and process.
Six Sigma focuses on "critical to quality" factors.
© 2005 by Carnegie Mellon University Version 1.0 page 8
Six Sigma and CMMI Continuous

[Figure: capability profile (0-5) across the CMMI process areas: MA, QPM, CAR, OPP, DAR, PP, PMC, SAM, IPM, RSKM, REQM, RD, TS, PI, VER, VAL, CM, PPQA, OPF, OPD, OT, OID.]

One possible approach:
• Achieve high capability in the PAs that build Six Sigma skills: MA, QPM, CAR, OPP (the foundational PAs).
• Use that capability to help prioritize the remaining PAs.

[Vickroy 03]

Remaining PAs are ordered by business factors, improvement opportunity, etc., which are better understood using the foundational capabilities. CMMI Staged groupings and DMAIC vs. DMADV are also factors that may drive the remaining order.
© 2005 by Carnegie Mellon University Version 1.0 page 9
A Different Tack: Six Sigma as Transition Enabler
The SEI conducted a research project to explore the feasibility of Six Sigma as a transition enabler for software and systems engineering best practices.

Hypothesis
• Six Sigma, used in combination with other software, systems, and IT improvement practices, results in
  - better selection of improvement practices and projects
  - accelerated implementation of selected improvements
  - more effective implementation
  - more valid measurements of results and success from use of the technology
Achieving process improvement… better, faster, cheaper.
© 2005 by Carnegie Mellon University Version 1.0 page 10
Primary Conclusions
Six Sigma is feasible as an enabler of the adoption of software, systems, and IT improvement models and practices (a.k.a. "improvement technologies").

The CMMI community is more advanced in its joint use of CMMI & Six Sigma than originally presumed.

Note that, for the organizations studied, Six Sigma adoption & deployment
• was frequently decided upon at the enterprise level, with software, systems, and IT organizations following suit
• was driven by senior management's previous experience and/or a burning business platform
• was consistently comprehensive.
[IR&D 04]
© 2005 by Carnegie Mellon University Version 1.0 page 11
Selected Supporting Findings 1
Six Sigma helps integrate multiple improvement approaches to create a seamless, single solution.

Rollouts of process improvement by Six Sigma adopters are mission-focused, flexible, and adaptive to changing organizational and technical situations.

Six Sigma is frequently used as a mechanism to help sustain, and sometimes improve, performance in the midst of reorganizations and organizational acquisitions.

Six Sigma adopters have a high comfort level with a variety of measurement and analysis methods.
© 2005 by Carnegie Mellon University Version 1.0 page 12
Selected Supporting Findings 2
Six Sigma can accelerate the transition of CMMI.
• moving from CMMI Maturity Level 3 to 5 in 9 months, or from SW-CMM Level 1 to 5 in 3 years (the typical move taking 12-18 months per level)
• underlying reasons are strategic and tactical

When Six Sigma is used in an enabling, accelerating, or integrating capacity for improvement technologies, adopters report quantitative performance benefits using measures they know are meaningful for their organizations and clients. For instance,
• ROI of 3:1 and higher, reduced security risk, and better cost containment
[Hayes 95]
© 2005 by Carnegie Mellon University Version 1.0 page 13
CMMI-Specific Findings
Six Sigma is effectively used at all maturity levels.
Participants assert that the frameworks and toolkits of Six Sigma exemplify what CMMI high maturity requires.
Case study organizations do not explicitly use Six Sigma to drive decisions about CMMI representation, domain, variant, and process-area implementation order. However, participants agree that this is possible and practical.
CMMI-based organizational assets enable Six Sigma project-based learnings to be shared across software and systems organizations, enabling a more effective institutionalization of Six Sigma.
© 2005 by Carnegie Mellon University Version 1.0 page 14
IT-Specific Findings
High IT performers (development, deployment, and operations) are realizing the same benefits of integrated process solutions and measurable results.
• However, they are using the technologies and practices specific to their domain (ITIL, COBIT, and sometimes CMMI).

CMMI-specific findings apply to IT organizations who have chosen to use CMMI.
© 2005 by Carnegie Mellon University Version 1.0 page 15
Interpretations & Moving Ahead
There are many potential benefits from thoughtful and strategic integration of multiple technologies and practices.

Such integration is currently "state of the art."
Six Sigma and other technologies can play a role in evolving this to "state of the practice."

How do process improvement groups design and roll out technologies, integrated or otherwise?
• How does training, particularly "integrated training," support their efforts?
© 2005 by Carnegie Mellon University Version 1.0 page 16
A High-Level Implementation Process

[Figure: implementation flow across three levels.]
• SEI (or other institution): develop technology → transition technology.
• Organization's process improvement groups (SEPGs, Six Sigma practitioners, et al.): establish business drivers → select technology → implement/integrate technology → implement solution → measure impact (business results, level rating).
• Project team: execute project life-cycle phases and steps → measure results (project results).
© 2005 by Carnegie Mellon University Version 1.0 page 17
Effective Transition Planning
Features of effective transition planning include:
• precision about the problem, clarity about the solution
• transition goals and a strategy to achieve them
• definition of all adopters and stakeholders and deliberate design of interactions among them
• a complete set of transition mechanisms: a whole product
• risk management
• either a documented plan or extraordinary leadership throughout the transition

[Forrester], [Schon], [Gruber]

"Transition" is indicated by each of the following: maturation, introduction, adoption, implementation, dissemination, rollout, deployment, or fielding.
© 2005 by Carnegie Mellon University Version 1.0 page 18
The “Whole Product” Concept*
andPolicies
T
Installationand
Additional
CoreCoreProductProduct
Standards
Training
Debugging
Software
Etc.Introduction
Support
ororTechnologyTechnology
[Moore]
Economies of scale are needed in training.
A holistic, “connected”approach is needed in training.
Leaving students to their own devices to make connections can be risky and/or time-consuming.
© 2005 by Carnegie Mellon University Version 1.0 page 19
Training Challenges
Many technologies have their own training.
• It's not practical to send everyone to all training courses.
• Yet it's also not practical to custom-build all training.

Cross training (i.e., CMMI & Six Sigma)
• At a strategic level: how to increase awareness so that experts in one technology can make judicious decisions about adoption and implementation of another technology.
• At a tactical level: how to balance the expertise.

Who and how many should be trained? For instance,
• Train the whole organization in internal process standards and possibly basic Six Sigma concepts.
• Train fewer in Six Sigma BB, CMMI, measurement and analysis, and other specialty areas.
© 2005 by Carnegie Mellon University Version 1.0 page 20
Benchmarking
Integrated training solutions underway (spanning strategic and tactical emphasis):
• DFSS training that includes awareness sessions of relevant technologies
  - SEI's Product Line Practices, ATAM, CMMI engineering PAs
• DFSS training that leverages ATAM
• DMAIC training that references PSP-based instrumented processes
Our approach uses measurement & analysis as an integrator.
© 2005 by Carnegie Mellon University Version 1.0 page 21
Outline
Context: The Value Proposition
Approach to building integrated training
A roadmap for performance-driven improvement
Roadmap connections to CMMI
Roadmap execution: a case study overview
Roadmap execution: demystifying steps and methods
Summary
© 2005 by Carnegie Mellon University Version 1.0 page 22
Intent of New Analysis Courses
Our task is to build new courses that
• focus on analysis, but more than just SPC
• focus on building skills → hands-on practice
• support CMMI
• appeal to many roles: process improvement personnel, measurement personnel, project team members, CMMI appraisers (maybe), Six Sigma practitioners, and so on
• resonate with organizations at any maturity level
© 2005 by Carnegie Mellon University Version 1.0 page 23
A Base Architecture: Connecting All the Improvement Models

[Figure: the "WV" model of problem solving. Teams move back and forth between two levels of thought, "Experience: Gathering & Discovering" and "Distilling & Understanding," starting from a fuzzy sense of a problem or opportunity. Along the way they alternate among language statements and observations, plans to gather data, distilled findings, and numbers: focusing on key aspects, establishing actual capability, examining variation over time, monitoring results, and arriving at solutions & standards. The type of data shifts from language to language + numbers to numbers as product/process improvement progresses. Sample data models accompany each stage.] [Kawakita], [Shiba]
© 2005 by Carnegie Mellon University Version 1.0 page 24
Design Strategy
Our goal for the courses
• When you return home, you will have the skills to tackle process improvement projects in a way that is informed by your data.

Doing this requires
• a structured, but easily tailored, approach
• confidence and skills to use basic statistical methods
  - sufficient awareness of other methods to know when to get help
• experience using a real statistics package and Excel
© 2005 by Carnegie Mellon University Version 1.0 page 25
Considerations
• Leverage other technologies and initiatives.
  - reuse demonstrated frameworks and toolkits
  - build explicit connections to models
  - return to common roots; don't reinvent the wheel
  - define "certification" boundaries and options
• Use a project process (incl. design phases and piloting).
  - Assemble a cross-organizational development team.
  - Use Gagne's model for instructional design.
  - Use Kirkpatrick's four-level evaluation model.
  - Design for fit with existing measurement courses.
• Design for extensibility: case study approach
  - allows easy swap-in of other domains and technologies
  - allows easy updates as core technologies evolve
• Couple with an annual Measurement Practices Workshop (future).
© 2005 by Carnegie Mellon University Version 1.0 page 26
Certificates and Certifications
Six Sigma Practitioner Certification
• SEI Partners who provide Six Sigma training and certification can leverage courses as
  - adjunct, domain-specific Black Belt training
  - domain-specific Yellow Belt training

SEI Certificate Programs
• analyst (future)
© 2005 by Carnegie Mellon University Version 1.0 page 27
SEMA M&A Curriculum
© 2005 by Carnegie Mellon University Version 1.0 page 28
Focus of Each Analysis Course
Process Measurement & Analysis I
• Reduce defects, waste, and cycle time by correcting special cause variation and repairing and/or improving processes.

Process Measurement & Analysis II
• Prevent defects and ensure performance by using data for early detection/correction of issues and by optimizing front-end planning, requirements, and design processes.
© 2005 by Carnegie Mellon University Version 1.0 page 29
Design Highlights: Course Outlines
Process Measurement & Analysis I
• Introduce DMAIC flowchart
• Call Center Case: DMAIC process
• Defect Containment Case: data stratification
• Cost & Schedule Case: variance reduction

Process Measurement & Analysis II
• Project Simulation: organization and project baseline
• Defect Containment Case: optimize inspection, improve design processes
• Cost & Schedule Case: optimize estimating, improve requirements processes
• Project Process Optimization Case: model simultaneous improvements in multiple, interrelated processes via simulation

Today's illustrations are drawn from the Cost & Schedule cases.
© 2005 by Carnegie Mellon University Version 1.0 page 30
Analysis 1 Case Studies
This course uses case studies to help you build skills.
• Call Center Case focus
  - DMAIC approach and associated guidance questions on basic analytical methods
• Defect Containment Case focus
  - understanding variation, basic analytical methods, and SPC charts
• Cost & Schedule Case focus
  - tailoring the DMAIC approach, understanding variation, excluding data properly, presenting data analysis so that it is practical and useful; CMMI links
© 2005 by Carnegie Mellon University Version 1.0 page 31
Analysis 1: Reinforcing & Expanding Skills

[Figure: across the three cases (Call Center, Defect Containment, Cost and Schedule), coverage of the DMAIC approach, analytical skills, and CMMI connections increases in topic detail and added information, with reinforcement and more independent practice as the course progresses.]
© 2005 by Carnegie Mellon University Version 1.0 page 32
Analysis 1: Balancing Lecture & Practice

[Figure: for each case (Case 1: Call Center, Case 2: Defect Containment, Case 3: Cost & Schedule Variance), the course mixes lectures, guided exercises, and independent & group exercises.]
© 2005 by Carnegie Mellon University Version 1.0 page 33
Outline
Context: The Value Proposition
Approach to building integrated training
A roadmap for performance-driven improvement
Roadmap connections to CMMI
Roadmap execution: a case study overview
Roadmap execution: demystifying steps and methods
Summary
© 2005 by Carnegie Mellon University Version 1.0 page 34
A General Purpose Problem-Solving Methodology: DMAIC
Define → Measure → Analyze → Improve → Control
The problem or goal statement (Y) enters at Define.

• An improvement journey to achieve goals and resolve problems by discovering and understanding relationships between process inputs and outputs, such as Y = f(defect profile, yield) = f(review rate, method, complexity, …). A small sketch of exploring such a relationship appears after the bullets below.
• Refine problem & goal statements.
• Define project scope & boundaries.
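To make the Y = f(x) idea concrete, here is a minimal, illustrative sketch of fitting an output against candidate process inputs with ordinary least squares. The variable names and numbers are assumptions for illustration, not data from the tutorial.

```python
import numpy as np

# Illustrative (made-up) observations: one output Y and two candidate inputs x.
review_rate = np.array([150.0, 220, 310, 400, 180, 260, 350, 120])  # LOC reviewed per hour
complexity  = np.array([12.0, 18, 25, 30, 15, 22, 28, 10])          # average module complexity
defects     = np.array([3.1, 4.8, 7.2, 9.5, 3.9, 5.9, 8.1, 2.4])    # Y: defects per KLOC

# Least-squares fit of Y = b0 + b1*review_rate + b2*complexity
X = np.column_stack([np.ones(len(defects)), review_rate, complexity])
coef, *_ = np.linalg.lstsq(X, defects, rcond=None)
print("b0, b1, b2 =", np.round(coef, 4))
```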
© 2005 by Carnegie Mellon University Version 1.0 page 35
DMAIC: Relevance and Robustness
Codified method in use across many industries
Domain specificity
• Added aspects of GQ(I)M, PSM, and CMMI to make the "domain independent" methodologies more relevant to the software and systems engineering domains
• Presenting domain-specific case studies

Can be used to
• discover an ill-defined problem or opportunity
• gather and evaluate information to refine the problem or goal statement
• gather and evaluate more information to reach a proposed solution
• implement the solution
• gather data to monitor success
© 2005 by Carnegie Mellon University Version 1.0 page 36
DMAIC Roadmap
Define → Measure → Analyze → Improve → Control

Steps: Define project scope → Establish formal project → Identify needed data → Obtain data set → Evaluate data quality → Summarize & baseline data → Explore data → Characterize process & problem → Identify possible solutions → Select solution → Implement (pilot as needed) → Evaluate → Define control method → Implement → Document. "Update improvement project scope & scale" is a feedback loop, and a phase exit review closes each phase.
© 2005 by Carnegie Mellon University Version 1.0 page 37
Define Guidance Questions
(Roadmap steps in this phase: Define project scope; Establish formal project.)

Define project scope:
• What is the current problem to be solved?
• What are the goals, improvement targets, & success criteria?
• What is the business case, potential savings, or benefit that will be realized when the problem is solved?
• Who are the stakeholders? The customers?
• What are the relevant processes, and who owns them?

Establish formal project:
• Have stakeholders agreed to the project charter or contract?
• What is the project plan, including the resource plan and progress tracking?
• How will project progress be communicated?
© 2005 by Carnegie Mellon University Version 1.0 page 38
Measure Guidance Questions
(Roadmap steps in this phase: Identify needed data; Obtain data set; Evaluate data quality; Summarize & baseline data.)

Identify needed data:
• What are the process outputs and performance measures?
• What are the process inputs?
• What info is needed to understand relationships between inputs and outputs? Among inputs?
• What information is needed to monitor the progress of this improvement project?

Obtain data set:
• Is the needed measurement infrastructure in place?
• Are the data being collected and stored?

Evaluate data quality:
• Does the measurement system yield accurate, precise, and reproducible data?
• Are urgently needed improvements revealed?
• Has the risk of proceeding in the absence of 100% valid data been articulated?

Summarize & baseline data:
• What does the data look like upon initial assessment? Is it what we expected?
• What is the overall performance of the process?
• Do we have measures for all significant factors, as best we know them?
• Are there data to be added to the process map?
• Are any urgently needed improvements revealed?
• What assumptions have been made about the process and data?
© 2005 by Carnegie Mellon University Version 1.0 page 39
Analyze Guidance Questions
(Roadmap steps in this phase: Explore data; Characterize process & problem.)

• What do the data look like?
• What is driving the variation?
• What is the new baseline?
• What are the risks and assumptions associated with the revised data set and baseline?
• Should the improvement goal be updated?
• Is additional data exploration, data decomposition, and/or process decomposition needed? Is additional data needed?
• Can I take action? Are there evident improvements and corrections to make?
• Have I updated the project tracking and communication mechanisms?
• Are there any hypotheses that need to be tested?
• What causal factors are driving or limiting the capability of this process?
• What process map updates are needed?
• Are there any immediate issues to address? Any urgent and obvious needs for problem containment?
© 2005 by Carnegie Mellon University Version 1.0 page 40
Improve Guidance Questions
(Roadmap steps in this phase: Identify possible solutions; Select solution; Implement (pilot as needed); Evaluate.)

Identify possible solutions:
• What type of improvement is needed?
• What are solution alternatives to address urgent issues and root causes of identified problems?
• What are the process factors to be adjusted?
• What is the viability of each potential solution?
• What is the projected impact or effect of each viable solution?

Select solution:
• What are the relative impacts and benefits?
• What are relevant technical and logistical factors?
• What are potential risks, issues, and unintended consequences?

Implement (pilot as needed):
• What is the action plan with roles, responsibilities, timeline, and estimated benefit?
• Is piloting needed prior to widespread implementation?

Evaluate:
• Did the solution yield the desired impact?
• Has the goal been achieved?
• If piloted, are adjustments needed to the solution prior to widespread rollout? Is additional piloting needed?
• How will baselines, dashboards, and other analyses change?
© 2005 by Carnegie Mellon University Version 1.0 page 41
Control Guidance Questions
(Roadmap steps in this phase: Define control method; Implement; Document.)

Define control method:
• Should data be compared to a range? If so, which range?
• Does procedural adherence need to be monitored?

Implement:
• What updates are needed in the measurement infrastructure?
• What process documentation needs to be updated?
• What new processes or procedures need to be established?
• Who is the process or measurement owner who will be taking responsibility for maintaining the control scheme?

Document:
• Have we documented improvement projects for verification, sustainment, and organizational learning?
• What are the realized benefits?
• Is the project documented or archived in the organization asset library?
• Have documentation and responsibility been transferred to the process or measurement owner?
© 2005 by Carnegie Mellon University Version 1.0 page 42
Toolkit
[Figure: the analytical toolkit, mapped to the roadmap. Most of the table did not survive extraction; the legible portion lists the "basic tools": Histogram, Scatter Plot, Run Chart, Flow Chart, Brainstorming, Pareto Chart.]
© 2005 by Carnegie Mellon University Version 1.0 page 43
Exercise / Discussion
In groups of 3:
• Answer the "Define Project Scope" questions for a problem that each of you faces in your organization.
  - What is the current problem to be solved?
  - What are the goals, improvement targets, & success criteria?
  - What is the business case, potential savings, or benefit that will be realized when the problem is solved?
  - Who are the stakeholders? The customers?
  - What are the relevant processes, and who owns them?
• Discuss the "fit" of the DMAIC roadmap in your organization.
  - How does it fit with your defined processes?
  - How might it help you define new processes?
  - If you are a Six Sigma organization, how does our roadmap fit with your internal roadmap?
Debrief: Voluntary Sharing
TAKE A FEW NOTES. YOU WILL NEED THEM LATER.
© 2005 by Carnegie Mellon University Version 1.0 page 44
Copies of Slides
Later today: case study and practice sessions
If you would like copies of our slides, add your name to our signup sheet, or send an email to Debra Morrison, [email protected].
• Put “SEPG Tutorial” in the subject line.
© 2005 by Carnegie Mellon University Version 1.0 page 45
Outline
Context: The Value Proposition
Approach to building integrated training
A roadmap for performance-driven improvement
Roadmap connections to CMMI
Roadmap execution: a case study overview
Roadmap execution: demystifying steps and methods
Summary
© 2005 by Carnegie Mellon University Version 1.0 page 46
CMMI-SE/SW/IPPD/A - Continuous

Process Management
• Organizational Process Focus
• Organizational Process Definition
• Organizational Training
• Organizational Process Performance
• Organizational Innovation and Deployment

Project Management
• Project Planning
• Project Monitoring and Control
• Supplier Agreement Mgmt.
• Integrated Project Mgmt.
• Risk Management
• Quantitative Project Mgmt.

Engineering
• Requirements Management
• Requirements Development
• Technical Solution
• Product Integration
• Verification
• Validation

Support
• Configuration Mgmt.
• Process and Product Quality Assurance
• Measurement & Analysis
• Decision Analysis and Resolution
• Causal Analysis and Resolution

IPPD
• Organizational Environment for Integration
• Integrated Team

Acquisition
• Supplier Selection and Monitoring
• Integrated Supplier Management
• Quantitative Supplier Management
© 2005 by Carnegie Mellon University Version 1.0 page 47
Capability Evolution of Measurement via Generic Practices
• GP 2.8 Monitor and control the process: Monitor and control the process against the plan for performing the process and take appropriate corrective action.
• GP 3.2 Collect improvement information: Collect work products, measures, measurement results, and improvement information derived from planning and performing the process to support the future use and improvement of the organization's processes and process assets.
• GP 4.1 Establish quality objectives: Establish and maintain quantitative objectives for the process about quality and process performance based on customer needs and business objectives.
• GP 4.2 Stabilize subprocess performance: Stabilize the performance of one or more subprocesses to determine the ability of the process to achieve the established quantitative quality and process performance objectives.
• GP 5.1 Ensure continuous process improvement: Ensure continuous improvement of the process in fulfilling the relevant business objectives of the organization.
• GP 5.2 Correct common cause of problems: Identify and correct the root causes of defects and other problems in the process.
© 2005 by Carnegie Mellon University Version 1.0 page 48
DMAIC Relationship to CMMI PAs
Overall
• DMAIC: a problem-solving approach
• CMMI: a process & measurement deployment approach

PAs that align with DMAIC include the following:
• MA, GPs
• QPM, CAR, OID (either "continuous" or high-maturity view)

A DMAIC project may leverage these existing processes:
• PP, PMC, IPM
• OPP for organization-level execution, mgmt, oversight

PAs through which DMAIC may be incorporated into organizational process definition include the following:
• OPF, OPD
[Penn & Siviy 05]
© 2005 by Carnegie Mellon University Version 1.0 page 49
Relationship to CMMI PAs
PAs "eligible" for DMAIC-based improvement
• all

PAs with links to the analytical toolkit include
• Decision Analysis & Resolution
  - e.g., concept selection methods, such as Pugh's
• Risk Management
  - e.g., Failure Modes & Effects Analysis (FMEA)
• Technical Solution
  - e.g., Design FMEA, Pugh's
[Penn & Siviy 04]
© 2005 by Carnegie Mellon University Version 1.0 page 50
Relationship to CMMI Goals & Practices
“Define” Roadmap Steps
• Define project scope → Align process improvements with business objectives
  - Organization Process Focus (SG 1)
  - Organization Process Performance (SG 1)
  - GP 4.1, GP 5.1
• Establish formal project → Establish improvement projects
  - Organization Process Focus (SG 1)
  - Organization Innovation and Deployment (SG 1)
  - Implied by GP 4.1, GP 5.1
© 2005 by Carnegie Mellon University Version 1.0 page 51
Relationship to CMMI Goals & Practices
"Measure" and "Analyze" Roadmap Steps
• Define data and establish repositories
  - Measurement and Analysis (SG 1)
  - Organization Process Definition (SG 1)
  - Organization Process Performance (SG 1)
  - Causal Analysis and Resolution (SG 2)
  - Quantitative Project Management (SG 2)
  - GP 2.2, GP 3.2, GP 5.1
• Baseline data
  - Organizational Process Performance (SG 1)
• Analyze data
  - Measurement and Analysis (SG 2)
  - Organization Process Performance (SG 1)
  - Causal Analysis and Resolution (SG 1)
  - GP 2.8, GP 5.2
© 2005 by Carnegie Mellon University Version 1.0 page 52
Relationship to CMMI Goals & Practices
“Improve” and “Control” Roadmap Steps
• Identify improvement alternatives
  - Decision Analysis and Resolution (SG 1)
  - Organization Innovation and Deployment (SG 1)
  - Organization Process Performance (SG 1)
  - GP 5.1
• Control processes
  - Measurement and Analysis (SG 2)
  - Organization Process Performance (SG 1)
  - Organization Innovation and Deployment (SG 2)
  - Causal Analysis and Resolution (SG 2)
  - Quantitative Project Management (SG 2)
  - GP 2.8, GP 4.2
© 2005 by Carnegie Mellon University Version 1.0 page 53
Discussion / Exercise
Our lists are not exhaustive.

In your group of 3
• Are there other dimensions via which DMAIC and CMMI are connected?
• Are there other goals which link to DMAIC?
  - Do any specific practices come to mind?
• Are there other analytical methods from the Six Sigma toolkit that are frequently used within a process area?
• How are CMMI and DMAIC different? Synergistic?
Debrief: Voluntary sharing
© 2005 by Carnegie Mellon University Version 1.0 page 54
Outline
Context: The Value Proposition
Approach to building integrated training
A roadmap for performance-driven improvement
Roadmap connections to CMMI
Roadmap execution: a case study overview
Roadmap execution: demystifying steps and methods
Summary
© 2005 by Carnegie Mellon University Version 1.0 page 55
Pre-Case Discussion
What is the cost and schedule performance of one of the projects in your organization? Of your organization's projects as a whole?

Does your answer reflect
• total variance from the initial estimate? Or variance from the most recent re-estimates?
• all projects? Or just certain types of projects?
• "level of effort" or maintenance work?
We are going to “run through” a high level view of the cost-schedule variance reduction case from the course.
Keep track of any methods or approaches that you don’t understand. We will ask in ~1 hour.
© 2005 by Carnegie Mellon University Version 1.0 page 56
Organizational Context
Motivation for project
• improve customer satisfaction
  - indicated by field defects and effort and schedule variance

Project portfolio
• both development and maintenance
• size and complexity vary
• schedules from <1 month to >18 months

CMMI implementation
• assessed at SW-CMM Level 3 five years ago
• began transition to CMMI four years ago
• working toward high maturity
• striving to implement new process areas to add value, not just achieve compliance
© 2005 by Carnegie Mellon University Version 1.0 page 57
Starting Points
Partial list of work that is already complete
• measurement infrastructure for project and product metrics
• measurement infrastructure akin to GP 2.8 and GP 3.2 for processes implemented per SW-CMM
• SPC pilots within selected organizational units
• initial data rollups for management
• initial benefits calculations (ROI and other indicators)
© 2005 by Carnegie Mellon University Version 1.0 page 58
Excerpt of CMMI Plans
What "M&A" type efforts lie ahead, to fulfill CMMI-related goals?
• ensure compliance to the MA process area
• organizational and project baselines (OPP and QPM SPs)
• model building at the project level (QPM SPs)
• relating process behavior and performance to product and service quality (OPP and QPM SPs)
• quantitatively manage key processes and subprocesses (QPM SPs)

Problematic CMMI language and practices
• "process performance model" (OPP SPs)
• subprocess selection (QPM SPs)

If you are working on Level 2 & 3 PAs, don't tune out…
• Remember our research project findings: the lessons from this case can be applied to build a foundation for high maturity.
© 2005 by Carnegie Mellon University Version 1.0 page 59
Documented M&A Process

[Figure: the organization's documented measurement & analysis flow, annotated with DMAIC phases (D, M, A, I, C) and CMMI process areas (MA, OPP, QPM, CAR): Select business goal → Gather data → Analyze data → Prioritize issues → Identify possible causes (brainstorm) → Perform causal analysis → Prioritize actual causes → Identify potential solutions → Develop action plan → Implement improvement. Inputs include business objectives, specs, identified performance thresholds, project performance, measures quality, and SPI implementation. The first iteration produces a snapshot (baseline) and surfaces issues (data validity, data quality, performance variance); when there are no "issues," the flow establishes capability, models, etc. and starts subprocess selection. The improvement goal is drafted (SMART) or a focus area identified, then refined from the first iteration toward a final goal as data are gathered and analyzed.]
© 2005 by Carnegie Mellon University Version 1.0 page 60
Customer Satisfaction Goal Structure

[Figure: goal decomposition, consonant with GQIM, CMMI business-goal alignment, Six Sigma CTQs, and Y-to-x flowdown. "Achieve customer satisfaction" (success indicators from the customer survey: the Y's) decomposes into "deliver high quality products and services" and {other goals}. Supporting subgoals: "inspect & test product, monitor field performance" (analysis indicators, the y's: plot, plot, plot defect and reliability data as trends, distributions, control charts (c-charts), and scatter plots) and "plan and manage project cost and schedule" (plot, plot, plot cost and schedule data as trends, distributions, control charts (x, mR), and scatter plots), with other factors also contributing. SPI task plans carry progress indicators.]
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 61
Cost/Schedule Measurement System
Monthly effort, schedule (earned-value based)
• variance: (current actual – most recent estimate)
  - compared to variance specification (+/- 20%)
  - text entry for out-of-specification explanations
• data manually transferred from project files to monthly report
• cost in terms of effort hours
• schedule performance derived from effort hours

Completed project data: a special data collection
• variance: (original plan – final actual)
  - categorized by internal and external causes
  - categorized by life-cycle phase
• cost in terms of effort + purchase costs
• schedule in terms of calendar days
• each replan recorded
D M A I C
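A minimal pandas sketch of the monthly variance calculation described above, using the case's definition (current actual minus most recent estimate) and the ±20% specification. Column names and values are illustrative assumptions, not the organization's actual report layout.

```python
import pandas as pd

SPEC = 20.0  # +/- 20% variance specification

# Illustrative monthly records (assumed field names).
df = pd.DataFrame({
    "project": ["P1", "P1", "P2", "P2"],
    "month": ["oct-01", "nov-01", "oct-01", "nov-01"],
    "actual_hours": [420, 510, 95, 130],
    "latest_estimate_hours": [400, 480, 120, 115],
})

# variance = (current actual - most recent estimate), as a percent of the estimate
df["pct_effort_var"] = ((df["actual_hours"] - df["latest_estimate_hours"])
                        / df["latest_estimate_hours"] * 100)
df["out_of_spec"] = df["pct_effort_var"].abs() > SPEC
print(df[["project", "month", "pct_effort_var", "out_of_spec"]])
```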
© 2005 by Carnegie Mellon University Version 1.0 page 62
What is Monthly Average & Variability? Cost/Schedule Performance Baseline
All org units, all projects, October to March.

[Figure: month-by-month box plots (10th/25th/median/75th/90th percentiles plus the mean) of % schedule variance (scale -100 to 100; range about 60%) and % cost variance (scale extending to -8000), October 2001 through March 2002.]

• Reminder: this is (current actual – most recent estimate).
• Averages are within spec, and close to 0.
• Need to examine extreme values, especially for cost.
• Even if extreme values are outliers, it looks like we need to investigate variability.
D M A I C
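The monthly baseline charts above summarize each month with its mean and 10th/25th/50th/75th/90th percentiles. A small sketch of computing those summaries, on illustrative data:

```python
import numpy as np

# Hypothetical % schedule-variance samples per month (illustrative values).
monthly = {
    "oct-01": [-12, 3, 18, -40, 7, 0, 22, -5],
    "nov-01": [5, -8, 22, -15, 1, 9, -3, 14],
}

for month, values in monthly.items():
    v = np.array(values, dtype=float)
    p10, p25, p50, p75, p90 = np.percentile(v, [10, 25, 50, 75, 90])
    print(f"{month}: mean={v.mean():6.1f}  "
          f"p10={p10:.1f} p25={p25:.1f} median={p50:.1f} p75={p75:.1f} p90={p90:.1f}")
```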
© 2005 by Carnegie Mellon University Version 1.0 page 63
Disposition of Extreme Values
Extreme values investigated
• removed all documentation projects
• removed a small project that was re-baselined 26 times

Immediate improvement opportunity
• establish data validation guidelines
D M A I C
We’ll talk more about measurement system evaluation and outliers this afternoon
© 2005 by Carnegie Mellon University Version 1.0 page 64
What is Monthly Average & Variability? Cost/Schedule Performance Baseline, Outliers Removed

[Figure: month-by-month box plots of % cost variance (scaled to match the schedule variance chart; some values fall below -100) and % schedule variance, October 2001 through March 2002.]

• Even with outliers removed, variance has more variability than expected.
• There is month-to-month consistency.

This and the following slides are consonant with the Six Sigma toolkit and CMMI MA, QPM, OPP, CAR.
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 65
Overall Monthly Performance Stats
Cost/Schedule Performance Baseline, Outliers Removed

In-process effort/cost data
• all life-cycle phases, all projects, Oct – June

Reminder: variance = current actual – most recent estimate

Observations
• averages running close to target
• more data than expected outside of spec
• high variability

                     cost    schedule
average              -2      -7.6
standard deviation   25      19.2
n                    770     770
% outside spec       17%     17%
D M A I C
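A sketch of how the table above can be reproduced for any variance sample: average, standard deviation, n, and percent outside the ±20% spec. The sample data are illustrative.

```python
import numpy as np

def baseline_summary(values, spec=20.0):
    """Average, standard deviation, n, and % outside the +/- spec limits."""
    v = np.asarray(values, dtype=float)
    return {
        "average": round(v.mean(), 1),
        "standard deviation": round(v.std(ddof=1), 1),
        "n": int(v.size),
        "% outside spec": round(100.0 * np.mean(np.abs(v) > spec), 1),
    }

print(baseline_summary([-2, 31, -18, 4, -25, 12, 0, -7]))  # illustrative cost variances
```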
© 2005 by Carnegie Mellon University Version 1.0 page 66
How Do Projects Typically End? Cost/Schedule Performance Baseline, Outliers Removed

[Figure: distributions of final % effort variance and % schedule variance plotted against LSL, Target, and USL.]

                          % effort variance    % sched variance
avg                       -2%                  13%
std dev                   33%                  36%
median                    2%                   7%
min to max                -95% to 50%          -128% to 71%
capability (spec +/-20%)  43.8% outside spec   39% outside spec

Reminder: this is the total cumulative variance.
• (initial plan – final actual)
• customer-driven changes are included

Some extreme values are still present
• there are no valid reasons to remove or segment the data

Large % of data outside specification
• process not capable
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 67
Cost/Schedule Data Quality
Accuracy
• re-baselining project estimates biases the data and promotes the perception that performance is better than it actually is

Repeatability
• different estimating methods across the life cycle, across projects
• unclear definition of "project"

Completeness
• Which project types are "in" and which are "out"?
• sparse data – some parts of the organization are better represented
• completed project data missing from organization data

Sampling homogeneity
• all monthly data were being rolled together, regardless of which part of the life cycle they represented
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 68
Initial Baseline Summary (All Data)
Existing customer satisfaction information was positive.

Cost/schedule data were frequently and consistently "out of spec," in both the monthly and final data.
• But the average looked great!

There were few field defects, and the inspection process seemed to be effective.

Many measurement infrastructure improvements were identified
• customer surveys
• improved operational definitions
• improved automation
  - to improve quality and minimize the quantity of missing data
• add completed project data into the central repository
• group data by life-cycle phase or project % complete for reporting
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 69
Why Pursue Cost & Schedule Variation Reduction?
Why should we care about effort & schedule variance reduction?
• our customers are pretty happy
• our project managers seem to have things under control

What are the business drivers? What is the benefit for the effort that will likely be invested?
• competitive advantage
• funding increasingly harder to obtain
• credibility
• domino effect of projects running late
• more small projects anticipated
• fewer "level of effort" projects anticipated
• operational cost-based budget

Why does YOUR organization care about cost & schedule performance?
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 70
Goal Realignment
While the re-alignment may not change the specific improvement activities, it will positively affect “change management” efforts.
D M A I C
[Figure: realigned goal tree. Top-level goal: position the organization to win a larger percent of available business. Supporting goal: achieve/maintain customer satisfaction, via "deliver high quality products, services" {other subgoals?}, which decomposes into "deliver projects within cost and schedule thresholds" (supported by reducing variability of cost and schedule performance) and "deliver products with zero (near-zero?) field defects." Possible meta-goals: increase capacity; reduce the percent of effort spent in rework.]
© 2005 by Carnegie Mellon University Version 1.0 page 71
Next Steps
Cost and schedule variance and measurement quality selected as primary improvement opportunities.

Dashboard established to monitor unintended consequences while organizational focus shifts to cost & schedule improvements.
© 2005 by Carnegie Mellon University Version 1.0 page 72
Segmenting the Data: Are There Differences by Org Unit?
Schedule variance, all projects, Oct 01 to Jun 02, charted by organizational unit.

[Figure: box plots of % schedule variance by organizational unit (branches A, B, F, K, N, T), with an "All Pairs, Tukey-Kramer, 0.05" comparison-circles panel for groups within the organization.]

Reading the comparison circles (a visual indicator of significant difference):
• Circle size is influenced by sample size.
• Concentric circles indicate no difference.
• Separate circles indicate a difference.
• Overlapping circles are somewhere in between.

Are there statistically significant group-to-group differences? NO.
D M A I C
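The "All Pairs, Tukey-Kramer" comparison can be reproduced with statsmodels' Tukey HSD routine, which applies the Kramer adjustment when group sizes differ. The data below are illustrative; with the case's data, no pair would be reported as significantly different.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical % schedule-variance observations tagged by organizational unit.
variance = np.array([-12, 5, 30, -8, 14, -22, 3, 9, -15, 27, 1, -5], dtype=float)
unit = np.array(["A", "A", "A", "B", "B", "B", "F", "F", "F", "K", "K", "K"])

result = pairwise_tukeyhsd(endog=variance, groups=unit, alpha=0.05)
print(result)  # 'reject = False' for a pair means no significant difference
```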
© 2005 by Carnegie Mellon University Version 1.0 page 73
Variance Causal Analysis
Brainstorming workshop
• process and measurement points of contact met for 2 days to review data and brainstorm about sources of variation

Transformed the original brainstorm list
• initial experiential assessment of frequency and impact of each cause code
• refined operational definitions and regrouped the brainstorm list
• tagged causes to historical data
• refined again

Process decomposition
• decomposed the process into four main subprocesses
• mapped cause codes to the process
• identified cause codes that are resolved in-process
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 74
Cause Code Taxonomy
(Starting from the transformed brainstorm list described on the previous slide.)

The final list included such things as
• missed requirements
• underestimated task
• over-commitment of personnel
• skills mismatch
• tools unavailable
• EV method problem
• planned work not performed
• external

Coded historical data by cause codes.
Evaluated for frequency and severity of impact.
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 75
Co-Optimized Pareto Analysis

[Table: for each impact rank from the Pareto analysis (1 = highest through 5), the top cause codes are shown for effort and schedule variance overall and for two organizational slices. Recurring top causes include underestimated task, tools, EV problems, under-planned rework, missed requirements, skills mismatch, planned work not performed, asset availability, and unexpected departure of personnel; the exact cell-by-cell layout did not survive extraction.]
D M A I C
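A minimal sketch of the underlying Pareto computation: total impact per cause code, sorted largest first, with the cumulative share. The cause codes come from the case's taxonomy; the numbers are illustrative.

```python
import pandas as pd

records = pd.DataFrame({
    "cause": ["underestimated task", "tools", "EV method problem",
              "underestimated task", "missed requirements",
              "underestimated task", "tools", "missed requirements"],
    "schedule_var_pct": [25, 10, 8, 30, 22, 18, 5, 12],  # illustrative impacts
})

# Pareto view: total impact per cause, largest first, with cumulative percent.
pareto = (records.groupby("cause")["schedule_var_pct"].sum()
          .sort_values(ascending=False).to_frame("total_impact"))
pareto["cum_pct"] = 100 * pareto["total_impact"].cumsum() / pareto["total_impact"].sum()
print(pareto)
```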
© 2005 by Carnegie Mellon University Version 1.0 page 76
SMART* Schedule Variance Goal
Reduce the total variance by decreasing the variance of the top 3 internal causes by 50% in 1 year.
Reduce the impact of external causes by 50%.

Indicators (defined via the GQIM indicator template):
• trend for each cause independently
• trend for total variance
D M A I C
Goal statements address what needs to be changed, the scope of the organization, the current and target measure, and the timeline:
To [increase/decrease] primary metric [where?] [from existing level] to [target] by [when?]

*Make your goals SMART:
• Specific
• Measurable
• Attainable or Agreed-upon
• Relevant or Realistic
• Timely or Timebound
© 2005 by Carnegie Mellon University Version 1.0 page 77
Schedule Variance Subprocess Selection

Cause code: Underestimated tasks
Process: Project management
Subprocesses: Planning
• establish requirements
• define project process
• perform detailed planning
Requirements Management

As subprocesses are explored, process mapping and input/output analysis may be used with (or based on) ETVX diagrams. (CMMI friendly; Six Sigma friendly.)
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 78
Schedule Variance Root Causes
Root causes of common cause variation
• inexperience in the estimation process
• flawed resource allocation
• estimator inexperience in the product (system)
• requirements not understood

Root causes of special cause variation
• too much multitasking
• budget issues
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 79
Management by Fact (MBF)

Problem / Goal Statement: Reduce the total schedule variance by decreasing the variance of the top 3 internal causes by 50% in 1 year.

[Figure: the four-quadrant MBF summary, annotated with DMAIC phases and CMMI PAs (MA, OPP, CAR, CM, OID).]
• Baseline charts: total variance with mean comparison; variance for the top 3 causes (underestimated tasks, EV method problem, missed requirements), plotted as percent schedule variance by month.
• Prioritization & root cause: 1. inexperience, 2. resource allocation, 3. requirements not understood, 4. …
• Countermeasures: first, gather real-time data and verify the "data archaeology"; then 1. …, 2. …
• Impact, capability: in total, these countermeasures will remove 15% of typical variance (as possible, list the impact of each countermeasure).
D M A I C
© 2005 by Carnegie Mellon University Version 1.0 page 80
From Organizational to Project View*: Variability Across the Life Cycle
It was hypothesized that there is more cost & schedule variability early in the project than later.
• relative to the most current estimate

[Figure: project cost index by month for one project, with wider limits for projects in the planning phase and narrower limits for projects in the execution phase.]
*As a reminder: For those working on Level 2 and 3, the project view can be addressed as part of MA and the GPs for respective process areas. This can build a foundation for higher maturity.
© 2005 by Carnegie Mellon University Version 1.0 page 81
Within-Project Variance Results
Three groupings (practical and statistical basis):
• <20% complete (planning)
• 20-80% complete (majority of effort; routine execution)
• >80% complete (converging on completion)

More data anomalies to resolve
• When *exactly* do projects start and stop?
• Are data from on-hold projects included? (No.)
• Should level-of-effort data be included? (Yes, but separately.)
© 2005 by Carnegie Mellon University Version 1.0 page 82
EV Estimate-At-Completion Model
A transformation from the traditional control chart view to the traditional PM view.

[Figure: earned value chart in hours, October 2001 through April 2004, showing BCWS, BCWP, ACWP, the baseline and calculated estimate at completion (EAC), the EAC line with 10% and 20% upper and lower bounds (internal spec limits), and process UCL/LCL for effort and schedule.]
Limits for schedule and cost variance for each grouping projected to a range around the estimate-at-completion values.
Future enhancements: sensitivity analysis and prediction model
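A minimal earned-value sketch using one common estimate-at-completion formula (EAC = BAC / CPI) plus the ±10% and ±20% bands shown on the chart. This is a generic illustration under those assumptions, not necessarily the model the case organization implemented.

```python
def eac_with_bands(bac, bcwp, acwp, bands=(0.10, 0.20)):
    """CPI-based estimate at completion with symmetric percentage bands."""
    cpi = bcwp / acwp            # cost performance index (earned / actual)
    eac = bac / cpi              # projected hours at completion
    ranges = {b: (eac * (1 - b), eac * (1 + b)) for b in bands}
    return eac, ranges

eac, ranges = eac_with_bands(bac=16000, bcwp=6000, acwp=6900)  # illustrative hours
print(f"EAC = {eac:.0f} hours")
for b, (lo, hi) in ranges.items():
    print(f"  +/-{int(b * 100)}% band: {lo:.0f} to {hi:.0f} hours")
```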
© 2005 by Carnegie Mellon University Version 1.0 page 83
DMAIC Summary for Full Case
Multiple D-M-A iterations
• iteration 1: problem identification & project selection
  - reduce cost and schedule variance
  - improve data quality
• subsequent iterations:
  - "peel the onion" to better understand problems
  - establish specific quantitative goals

Improvements instituted
• measurement infrastructure expansion, automation
• cost and schedule variance cause code taxonomy
• estimating (training, minor process adjustments)
• adoption of "management by fact" (MBF) format
• homogeneous sampling for cost and schedule data

Monitoring & control mechanisms
• organization: dashboards with charts for cost, schedule, defects, data quality, and customer satisfaction
• projects: cost/schedule prediction model
© 2005 by Carnegie Mellon University Version 1.0 page 84
Methodology/Model Connections
WV model (the 'base architecture'):
• DMAIC used for project identification

CMMI
• Process areas* used: MA, OPP, QPM, OID, CAR
• Process areas touched: PP, PMC, RD, REQM
• Terms addressed: baseline, process performance model

Measurement best practices
• indicator template is a key component of the measurement plan

Six Sigma
• problem-solving approach influenced the design and definition of measurement & analysis processes
• used MBF as an organizational innovation
• indicator templates added as a domain-specific tool to the Six Sigma toolkit
*See Addenda for list of CMMI Process Areas
© 2005 by Carnegie Mellon University Version 1.0 page 85
Skills-Building for the Full Case

In-class skills-building (practice) sessions
• baselining, using box plots, distributions, other charts
• hypothesis testing, means comparison tests
• prioritizing causes for action
• failure modes & effects analysis

In-class discussions and other exercises
• what drives cost & schedule variance
• risks of using historical data
• small sample sizes and homogeneous sampling
• corrective action guidance (as part of the indicator template)
© 2005 by Carnegie Mellon University Version 1.0 page 87
Copies of Slides
After lunch, we will
• decompose various elements of the case
• demystify selected roadmap steps and methods
• practice

If you would like copies of our slides, add your name to our signup sheet, or send an email to Debra Morrison, [email protected].
• Put “SEPG Tutorial” in the subject line.
© 2005 by Carnegie Mellon University Version 1.0 page 88
Outline
Context: The Value Proposition
Approach to building integrated training
A roadmap for performance-driven improvement
Roadmap connections to CMMI
Roadmap execution: a case study overview
Roadmap execution: demystifying steps and methods
Summary
© 2005 by Carnegie Mellon University Version 1.0 page 89
Our Approach for the Afternoon
We excerpted/shortened several lectures from the course:
• Statistical thinking
• "Define": business benefits
• "Measure":
  - distilling data: segmentation, input/output analysis, Y to x
  - baselining
  - measurement system analysis
• "Analyze":
  - exploring the data
  - characterizing the process
  - root cause analysis

We've interspersed the lectures with exercises, discussions, and examples.
• The exercises are based on the cost-schedule case just shown, or on the DMAIC project scope you defined earlier today.
• Examples are drawn from any of the 3 cases in the course.
© 2005 by Carnegie Mellon University Version 1.0 page 90
Evolution of Six Sigma Perspective

[Figure: the evolving perspective on quality. Fitness to a standard (internal view; compliance) → fitness for use (customer view; performance) → fitness to cost and speed (lean efficiency; rapid delivery) → fitness to latent requirement (customer intimacy; innovation; competitive advantage).]
© 2005 by Carnegie Mellon University Version 1.0 page 91
Six Sigma and ‘Lean’ - Natural Partners
Six Sigma
• Customer Focus:
• Defects
• Business Growth
• Performance
• Thinking Statistically
• Process ‘Control’
• Systematic Development of Infrastructure:
• ‘Belts,’ ‘Champions,’Leadership Engagement
‘Lean’
• Speed
• Flow
• Removing Constraints
• Delays
• Value-add analysis
• Reducing Waste
• Reducing Complexity
• Increasing Flexibility
Powerful in Concert
DMAIC: Follow the Defects Lean: Follow the Time
© 2005 by Carnegie Mellon University Version 1.0 page 92
Lean Origins and Evolution
Seminal work at Toyota laid the 'Lean' foundations:
• Understanding and reducing 'work in process' (WIP)
  - Highly visible in manufacturing: materials waiting, scrap, inventory
  - Present, but often less visible, in software and IT: decisions waiting, rework, multiple projects underway
• Making process flow, timing, and costs visible, so dynamics and constraints can be understood and exploited.

Extension to other processes has been natural from the beginning, with a history of success in services and emerging success stories in software and IT.
© 2005 by Carnegie Mellon University Version 1.0 page 93
Statistical Thinking: A Paradigm

Everything is a process.
All processes have inherent variability.
Data are used to understand variation and to drive decisions to improve the processes.

[ASQ 00], [ASA 01]

[Figure: a distribution whose spread is due to common cause variation, a special-cause point outside that spread, the original mean, and the new mean after improvement (spread due to common cause variation will re-establish itself).]
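One basic way to separate common-cause spread from special-cause signals is an individuals-and-moving-range (XmR) chart. A minimal sketch with illustrative data:

```python
import numpy as np

x = np.array([12.0, 9.5, 11.2, 10.8, 13.1, 9.9, 25.0, 10.4, 11.7, 10.1])  # illustrative

mr = np.abs(np.diff(x))          # moving ranges between consecutive points
center = x.mean()
sigma_hat = mr.mean() / 1.128    # d2 constant for subgroups of size 2
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

print(f"center={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
for i, value in enumerate(x):
    if not lcl <= value <= ucl:
        print(f"point {i} ({value}) looks like special cause variation")
```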
© 2005 by Carnegie Mellon University Version 1.0 page 94
In Other Words…

[Figure: three distributions plotted against LSL, Target, and USL. A process that is off target produces defects; the remedy is to center the process. A process with excessive variation also produces defects; the remedy is to reduce the spread.]
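Centering and spread can be quantified with the usual capability indices. A small illustrative sketch, using the case's ±20% spec as the limits; values below roughly 1 indicate a process that is not capable.

```python
import numpy as np

def capability(values, lsl, usl):
    """Cp compares spread to spec width; Cpk also penalizes being off target."""
    v = np.asarray(values, dtype=float)
    mu, sigma = v.mean(), v.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

cp, cpk = capability([-2, 31, -18, 4, -25, 12, 0, -7], lsl=-20, usl=20)  # illustrative
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}")
```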
© 2005 by Carnegie Mellon University Version 1.0 page 95
Types of Data
(Information content increases from attribute to variables data.)

Attribute (a.k.a. nominal, categorical, digital)
• Placing observations into categories.
• Examples: defect counts by type, job titles.

Ordinal
• Attribute data with > or < relationships among the categories.
• Examples: satisfaction ratings (unsatisfied, neutral, delighted); risk estimates (low, med, high); CMM maturity levels.

Variables (a.k.a. measures, continuous, analog)
• Interval: assignment of observations to points on a scale, enabling determination of interval sizes and differences. Example: deg. F, C.
• Ratio: interval data with a true zero. Examples: time, cost, code size.
© 2005 by Carnegie Mellon University Version 1.0 page 96
Terms & Definitions
A population consists of the totality of observations with which we are concerned.

A sample is a set of observations (e.g., x1, x2, x3) selected from a population.

A statistic is a function that summarizes observations, e.g., average = (x1 + x2 + x3) / 3.

A sampling distribution maps values of a sample statistic (x) vs. their probability of occurrence, P(X = x).
© 2005 by Carnegie Mellon University Version 1.0 page 97
Descriptive Statistics

Measures of Central Tendency: location, middle, the balance point
• Mean (the average) = (sum of n measured values) / n; the center of gravity by value
• Median = the midpoint by count
• Mode = the most frequently observed point

Measures of Dispersion: spread, variation, distance from central tendency
• Variance = the average squared distance from the population mean: sigma^2 = [sum over i of (x_i - mean)^2] / n
• Standard Deviation = the square root of the variance; in the units of the original measures; an indicator of the spread of points from the mean
• Range = maximum - minimum
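Below is a minimal sketch, not part of the original course material, showing these descriptive statistics computed in Python with the standard library; the staff-hour values are hypothetical.

import statistics

staff_hours = [13, 22, 16, 20, 16, 18, 27, 25, 30, 33, 40]   # hypothetical sample

mean   = statistics.mean(staff_hours)         # central tendency: the balance point
median = statistics.median(staff_hours)       # the midpoint by count
mode   = statistics.mode(staff_hours)         # the most frequently observed value
var    = statistics.pvariance(staff_hours)    # population form (divide by n), as defined above
stdev  = statistics.pstdev(staff_hours)       # spread, in the original units
rng    = max(staff_hours) - min(staff_hours)  # maximum - minimum

print(mean, median, mode, var, stdev, rng)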
© 2005 by Carnegie Mellon University Version 1.0 page 98
Inferential vs. Descriptive Stats
Example All of last quarter’s calls to the support hotline
StatisticsMean, median, mode, variance (�2), standard deviation (�), range, etc.
Issues Measurement system accuracy and repeatability
Do we have all the data ?Yes
DescriptiveStatistics
Greek letters (e.g. ���)usually designate descriptive stats
Surveys returned from about 120 customers
Estimates of mean, median, mode, variance (s2), standard deviation(s), etc.
Measurement system accuracy and repeatability
Sampling bias
Choice of statistic
No, a sample
InferentialStatistics
Roman letters (e.g. s, y)usually designate inferential (sample) stats
© 2005 by Carnegie Mellon University Version 1.0 page 99
Define Guidance Questions
[Roadmap: Define → Measure → Analyze → Improve → Control]
• What is the current problem to be solved?
• What are the goals, improvement targets, & success criteria?
• What is the business case, potential savings, or benefit that will be realized when the problem is solved?
• Who are the stakeholders? The customers?
• What are the relevant processes and who owns them?
• Have stakeholders agreed to the project charter or contract?
• What is the project plan, including the resource plan and progress tracking?
• How will the project progress be communicated?
Related CMMI process areas: OPP, OPF, OID, GPs
© 2005 by Carnegie Mellon University Version 1.0 page 100
Reconciling Different "Voices"
Voice of the Business (VOB)
• term to describe the stated and unstated needs or requirements of the business/shareholders

Voice of the Customer (VOC)
• term to describe the stated and unstated needs or requirements of the customer

Voice of the Process (VOP)
• term to describe the performance and capability of the organization's processes

Understanding the relationship between these voices
• enables appropriate selection of processes to begin study
• further refines project scope and enables the establishment of a formal project
• lays the foundation for work to be done in "measure, analyze, improve"
[isixsigma.com]
© 2005 by Carnegie Mellon University Version 1.0 page 101
Evaluating Business Benefits
Things to consider:
• What is the business motivation for this project?
- How big is the problem?
- What are the implications?
- What is the projected payback or net value?
• Why now?
• What would be the consequences if we delayed or failed to execute?
• How does this project fit with others?
- relative priority
- leverage/support re: other initiatives
© 2005 by Carnegie Mellon University Version 1.0 page 102
Measuring Benefits
$$ saved
Return on Investment
Return on Equity
Cash Flow Rate of Return
Cost of Poor Quality
© 2005 by Carnegie Mellon University Version 1.0 page 103
Cost of Poor Quality (COPQ)
[Iceberg figure: Costs of Poor Quality (COPQ). Some costs are measured (in some businesses); many are hidden. Costs shown include rework, write-offs, lost sales/customers, long cycle times, requirements errors, time value of money, cash misapplication, lost customer loyalty, contract penalties, audit cost, employee attrition, support costs, and working on the wrong thing.]
© 2005 by Carnegie Mellon University Version 1.0 page 104
Evolving the COPQ Indicator
Identification: based on a financial metric
Classification: e.g., Y = unit cost per call = rep cost + storage/monitoring cost + supervisor cost + ??
• hard vs. soft savings
• appraisal cost vs. cost avoidance
Validation: confidence grows from moderate to high as the project moves through Define, Analyze, Improve, Control, and Realization:
• initial (I) COPQ
• pre-implementation cost-benefit analysis
• post-implementation cost-benefit analysis
• verified (longer term) cost-benefit analysis
© 2005 by Carnegie Mellon University Version 1.0 page 105
Validating Business Costs & Benefits
While you can estimate some non-value-added costs, it is important to validate costs with your organization’s accounting department early in the project.
This will:• confirm the business case• document the project’s “before” picture as a baseline• document the project’s net financial gains
(‘after’ – project costs – “before”)
© 2005 by Carnegie Mellon University Version 1.0 page 106
Example: Call Center Case
[Flowdown figure]
Voice of the Business calls for improved Return on Equity.
What are ways to accomplish this? Why?
• Reduce time-to-market
• Reduce product development costs
• Reduce working capital requirements (Days Sales Outstanding + WIP)
• Reduce rework costs
• Increase market share, account rate
• Optimize customer satisfaction
How? One path: reduce call center costs.
© 2005 by Carnegie Mellon University Version 1.0 page 107
Call Center Project Scope
Problem Statement (from benchmarking and high-level baselining): Competitors are growing their levels of satisfaction with support customers, and they are growing their businesses while reducing support costs per call. Our support costs per call have been level or rising over the past 18 months, and our customer satisfaction ratings are at or below average. Unless we stop or, better, reverse this trend, we are likely to see compounded business erosion over the next 18 months.
Business Case: Increasing our new business growth from 1% to 4% (or better) would increase our gross revenues by about $3mm. If we can do this without increasing our support costs per call, we should be able to realize a net gain of at least $2mm.
Goal Statement:Increase the call center’s industry-measured Customer Satisfaction rating from its current level (75%) to the target level (85%) without increasing support costs, by end of Q4.
© 2005 by Carnegie Mellon University Version 1.0 page 108
Measure Guidance Questions
[Roadmap: Define → Measure → Analyze → Improve → Control]
• What are the process outputs and performance measures?
• What are the process inputs?
• What info is needed to understand relationships between inputs and outputs? Among inputs?
• What information is needed to monitor the progress of this improvement project?
• Is the needed measurement infrastructure in place?
• Are the data being collected and stored?
• Does the measurement system yield accurate, precise, and reproducible data?
• Are urgently needed improvements revealed?
• Has the risk of proceeding in the absence of 100% valid data been articulated?
• What does the data look like upon initial assessment? Is it what we expected?
• What is the overall performance of the process?
• Do we have measures for all significant factors, as best we know them?
• Are there data to be added to the process map?
• Are any urgently needed improvements revealed?
• What assumptions have been made about the process and data?
Related CMMI process areas: MA, OPD, OPP, CAR, GPs
© 2005 by Carnegie Mellon University Version 1.0 page 109
Distilling Data
[Figure: raw data are distilled from events, to patterns, to structure]
Measure: graphical analysis (time series, histograms, Pareto charts) answers What? When? Where? How? How much?
Analyze: causal analysis (cause & effect diagrams, statistical tests and models, transfer functions and optimization) answers Why? What's really going on? What's driving the patterns?
© 2005 by Carnegie Mellon University Version 1.0 page 110
“Funneling” to the Critical x’s
Define: establish project, process, and requirements scope; this identifies the Y(s) and the "search space" for x's (charter, Y; process maps; all the x's, typically 20+)
Measure: Y-to-x tree, input/output factor assessment, initial cause & effect, and FMEA narrow the initial focus to roughly 10-15 x's
Analyze: graphical analysis, (simple) stats, and data-driven cause & effect narrow the focus to roughly 8-10 x's
Improve: modeling and piloting confirm roughly 4-8 critical x's
Control: the control plan covers the 3-6 critical x's to control
© 2005 by Carnegie Mellon University Version 1.0 page 111
Methods to Narrow Our Focus
What are the process outputs (y’s) that drive performance?What are key process inputs (x’s) that drive outputs (process performance) and overall performance?
Techniques to address these questions• segmentation / stratification• input and output analysis• Y to x trees• cause & effect diagrams • cause & effect matrices• failure modes & effects analysis
Using these techniques yields a list of relevant, hypothesized, process factors to measure and evaluate.
© 2005 by Carnegie Mellon University Version 1.0 page 112
Natural Segmentation
Description
A logical reasoning about which data groupings have different performance, often verified by basic descriptive statistics.

Procedure
• Consider what factors, groupings, segments, and situations may be driving the mean performance and the variation in Y.
• Draw a vertical tree diagram (with Y at the root), continually reconsidering this question to a degree of detail that makes sense.
• Calculate basic descriptive statistics, where available and appropriate, to identify areas worthy of real focus.
© 2005 by Carnegie Mellon University Version 1.0 page 113
Segmentation vs. Stratification
Segmentation
• grouping the data according to one of the data elements (e.g., day of week, call type, region, etc.)
• gives discrete categories
• in general we focus on the largest, most expensive, best/worst; guides "where to look"

Stratification
• grouping the data according to the value range of one of the data elements (e.g., all records for days with "high" volume vs. all records for days with "low" volume)
• choice of ranges is a matter of judgment
• enables comparison of attributes associated with the "high" and "low" groups; what's different about these groups?
• guides diagnosis
© 2005 by Carnegie Mellon University Version 1.0 page 114
Input / Output Analysis
[SIPOC diagram: Supplier → inputs → Process (Step 1, Step 2, …) → outputs → Customer]
Assess the Inputs:
• Controllable: can be changed to see effect on key outputs (also called “knob” variables)
• Critical: statistically shown to have impact on key outputs
• Noise: impact key outputs, but difficult to control
© 2005 by Carnegie Mellon University Version 1.0 page 115
Controlled and Uncontrolled Factors
Controlled factors are within the project team’s scope of authority and are accessed during the course of this project.
Studying their influence may inform:
• cause-and-effect work during Analyze
• solution work during Improve
• monitor and control work during Control
Uncontrolled factors are factors we do not or cannot control.
We need to acknowledge their presence and, if necessary, characterize their influence on Y.
A robust process is insensitive to the influence of uncontrollable factors.
© 2005 by Carnegie Mellon University Version 1.0 page 116
Example: Development Process Map
[Process map of the development process, including inspection and rework steps. Inputs include requirements, estimate, concept design, code, detailed design, test cases, executable code, functional code, test plan and technique, and operational profiles; resources, code standards, a LOC counter, and interruptions appear as noise and control-knob factors. Data collected include defects, fix time, defect injection phase, phase duration, design review defects, and complexity. Legend: critical inputs, noise, standard procedure, control knobs.]
© 2005 by Carnegie Mellon University Version 1.0 page 117
Lean Methods and Tools
Process Mapping
• (Customer) Value-Add / Non-Value-Add / Business Value-Add
• Cycle time and throughput analysis (see "Little's Law")
Process Dynamics / Constraints Analysis• Characterizing bottlenecks, queues• Identifying throughput, capacity, and quality constraints
© 2005 by Carnegie Mellon University Version 1.0 page 118
Mapping Variation: Value Map
Identify the process to map.
Identify the boundaries.
Create input-process-output for the critical processes.
Create the process map.
Color code each step identifying value.• green = value added• red = non value added• yellow = non value added but necessary
Identify hand-off points, queues, storage, and rework loops in the process.
Quantitatively measure the map (throughput, cycle time, and cost).
Validate map with process owners.
© 2005 by Carnegie Mellon University Version 1.0 page 119
Value Mapping: Example
[Flowchart: a request (need identified) goes through initial assessment*, validation (color-coded red/yellow/green), and selection of a method/path; decision points ask whether the request is accepted, whether additional guidance is needed, whether the request is validated, and whether the right decision was made; paths include providing additional guidance, gathering more information, preliminary feedback, assessment coordination, and forwarding to the board; a 5% rework loop is shown.]
*Initial Assessment will:
• Determine Impact Assessment
• Identify Stakeholder
• Coordinate with Product/Process Owner
• Perform Impact Analysis
© 2005 by Carnegie Mellon University Version 1.0 page 120
Flowdown (Y-to-x) Tree -1
Description• Y to x is a hypothesis tool
- Depicts hypothesized causal relationships between customer-critical performance measures and process factors
- represents portions of a transfer function over time
Procedure• Draw a vertical tree diagram to depict the causal relationship.
- Use information from process mapping, natural segmentation as inputs.
• Identify x's as uncontrollable vs. controllable and measurable.• Select y’s and x's for initial data collection and evaluation.
© 2005 by Carnegie Mellon University Version 1.0 page 121
Flowdown (Y-to-x) Tree -2
Considerations• Flowdown trees are very useful when Y is hard to measure
directly or hard to influence.• Cause & effect diagrams are another means of diagramming
hypothesized causal relationships.• Subsequent “measure” and “analyze” tasks will help
determine the strength and the nature of each important x–Y relationship.
• The initial selection of y’s and x's for data collection may be based on logic and data availability. As more about the process is understood, quantitative causal relationships will drive selections.
© 2005 by Carnegie Mellon University Version 1.0 page 122
Tool Connections – "Finding X"
[Figure: a chain of tools that funnels from Y's to prioritized x's]
1. SIPOC: identify suppliers, inputs, process, outputs, & customers; discover Y's. (The example shows a delivery note / goods issue and invoice-posting process, with each input classified as controlled or uncontrolled: sales orders, inventory, customer instructions, packing list, manual and automatic collection of invoices, A/P vouchers for memo billing, operations, release of SBDC sessions, billing employees, the SAP system, computer, and electricity; outputs include inventory reduction, cost of sales booked, backlog reduction, invoice numbers, blocked invoice report, and GL updates, with the DSO clock starting when invoices post.)
2. Y-to-x tree: discover x's.
3. Input/Output analysis: classify x's as controlled/uncontrolled; clarify the controllable x's.
4. Cause and effects matrix (process steps and inputs scored against requirements): prioritize steps and x's.
5. Refined process map: understand the prioritized process steps (deep dive into process steps).
6. FMEA (function or process step, potential failure mode, potential failure effects, severity, potential causes, occurrence, detection provisions, RPN): begin to understand the x's impact on Y's; further prioritize x's.
The result: process understanding.
© 2005 by Carnegie Mellon University Version 1.0 page 123
Y to y Example: Cost-Schedule Case
Success Indicators (Customer Survey), the Y's: achieve customer satisfaction, delivered product quality, {other goals}
Analysis Indicators, the y's: in-process quality; cost and schedule variance; other factors
Progress Indicators: SPI, task plans
Plot, plot, plot defect and reliability data: trends, distributions, control charts (c-charts), scatter plots
Plot, plot, plot cost and schedule data: trends, distributions, control charts (x, mr), scatter plots
© 2005 by Carnegie Mellon University Version 1.0 page 124
Example: Cost-Schedule Case Segmentation by Organizational Unit
[Chart: % schedule variance, all projects, Oct 01 to Jun 02, charted by organizational unit (branches A, B, F, K, N, T), with an All Pairs Tukey-Kramer (0.05) comparison circle for each group within the organization]
Visual indicator of significant difference:
• circle size is influenced by sample size
• concentric circles indicate no difference
• separate circles indicate difference
• overlapping circles are somewhere in between
Are there statistically significant group-to-group differences? NO
© 2005 by Carnegie Mellon University Version 1.0 page 125
Example: Call Center Customer Sat.
[Y-to-x tree: Customer Sat is driven by wait time, call time, transfers, service span (days to close), and callbacks. Lower-level factors include input queue depth, call volume, help demand, phone vs. web-based service preference, client comfort with the new web-based system, right and wrong escalation %, staff experience, staff training, access to information, suitability of first-call service, staff available attention, issues not surfaced during the first call, staffing vs. call volume, staff attention required per call, call type, awareness, web access readiness, quality of supported products and services, and staff on duty. Shaded factors were identified for first-round data collection.]
Guiding questions: Does wait time impact C-Sat? Do volume & staffing ratio impact wait time? Does days-to-close impact C-Sat?
© 2005 by Carnegie Mellon University Version 1.0 page 126
Example: Call Center Support Cost
[Y-to-x tree: Support cost is driven by call volume, service time (per call), and knowledge recovery time (within & between calls). Lower-level factors include help demand, phone vs. web-based service preference, client comfort with the new web-based system, transfers, staff experience, staff training, suitability of first-call service, staff attention required, issues not surfaced during the first call, staffing vs. call volume, call type, awareness, web access readiness, quality of supported products and services, callbacks, availability of the right information, unexpected follow-ups, docs/FAQ quality, and system search and access quality. Shaded factors were identified for first-round data collection.]
© 2005 by Carnegie Mellon University Version 1.0 page 127
Exercise: Segmentation & I/O Analysis
1. Earlier today, you answered the "define project scope" questions for a problem you are facing.
2. Identify a key performance measure (a 'big Y').
3. Since that measure cannot be changed directly, begin to build a 'Y to x' tree to identify factors (x's), groups, situations, or other categorical reasons that may be driving performance or variation.
4. For some key x's, classify whether they would be 'controllable' or 'uncontrolled' in the scope of the project at hand.
Think ahead about:
- availability of data to study x-Y relationships
- quality of the measurement system for x data
© 2005 by Carnegie Mellon University Version 1.0 page 128
Measure Guidance Questions
[Roadmap: Define → Measure → Analyze → Improve → Control; the Measure guidance questions shown on page 109 are repeated here]
© 2005 by Carnegie Mellon University Version 1.0 page 129
Measurement System Analysis (MSA)
Purpose• Understand the data source and the reliability of the
process that created it
Indicators of data quality and reliability• Validity• Integrity• Accuracy• Repeatability• Reproducibility• Stability• credibility
© 2005 by Carnegie Mellon University Version 1.0 page 130
MSA Tools & MethodsSometimes, a simple “eyeball” test reveals problems.
More frequently, a methodical approach is warranted.
Useful tools and methods include• process mapping• indicator templates• operational definitions• initial evaluation/exploration assessment using statistical tools• checklists
Use common sense, basic tools, and good powers of observation.
© 2005 by Carnegie Mellon University Version 1.0 page 131
MSA Practical TipsFrequently encountered problems include the following:• wrong data• missing data• skewed or biased data
Map the data collection process.• Know the assumptions associated with the data.
Look at indicators as well as raw measures.• ratios of bad data still equal bad data
Data systems to focus on include the following:• manually collected or transferred data • categorical data• startup of automated systems
© 2005 by Carnegie Mellon University Version 1.0 page 132
Exercise: What if I Skip MSA?
What if…
• all 0's in the inspection database are really missing data?
• "unhappy" customers are not surveyed?
• Delphi estimates are done only by experienced engineers?
• a program adjusts the definition of "line of code" and doesn't mention it?
• inspection data doesn't include time and defects prior to the inspection meeting?
• most effort data are tagged to the first work breakdown structure item on the system dropdown menu?
• the data logger goes down for system maintenance in the first month of every fiscal year?
• a "logic error" to one engineer is a "___" to another?
Which are issues of validity? Bias? Integrity? Accuracy? How might they affect your conclusions and decisions?
© 2005 by Carnegie Mellon University Version 1.0 page 133
Measure Guidance Questions
[Roadmap: Define → Measure → Analyze → Improve → Control; the Measure guidance questions shown on page 109 are repeated here]
Related CMMI process areas: MA, OPP, GPs
© 2005 by Carnegie Mellon University Version 1.0 page 134
Baselining
What is baselining?
• establishing a snapshot of performance and/or the characteristics of a process

Why baseline performance?
• provides a basis by which to measure improvement

How is it done?
• map the process of interest, including scope (process boundaries) and timeframe
• gather data, sampling appropriately
• summarize the data using basic tools
© 2005 by Carnegie Mellon University Version 1.0 page 135
The 7 Basic Tools
Description
• Fundamental data plotting and diagramming tools
- cause & effect diagram
- histogram
- scatter plot
- run chart
- flow chart
- brainstorming
- Pareto chart
• The list varies with source. Alternatives include the following:
- statistical process control charts
- descriptive statistics (mean, median, etc.)
- check sheets
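For illustration only, here is a small Python sketch of one of the basic tools, a Pareto chart built with matplotlib from hypothetical defect-type counts: bars ordered by frequency plus a cumulative-percentage line.

import matplotlib.pyplot as plt

counts = {"logic": 85, "interface": 60, "data": 35, "docs": 20, "other": 10}  # hypothetical
labels, values = zip(*sorted(counts.items(), key=lambda kv: kv[1], reverse=True))
total = sum(values)
cum_pct = [100 * sum(values[:i + 1]) / total for i in range(len(values))]

fig, ax1 = plt.subplots()
ax1.bar(labels, values)                  # ordered bars: the Pareto ranking
ax2 = ax1.twinx()
ax2.plot(labels, cum_pct, marker="o")    # cumulative percentage line
ax2.set_ylim(0, 100)
plt.show()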
© 2005 by Carnegie Mellon University Version 1.0 page 136
7 Basic Tools: Cause & Effect
Example fishbone diagram [Westfall]. Effect: software does not meet required reliability. Cause categories: Methods, Environment, Management, People. Causes include: minimum application experience, no test specialists, no formal inspection process, no formal defect tracking mechanism, test beds do not match user configuration, no risk management, inadequate test resources, unrealistic completion date.
Traditional diagram: a process or problem (variation) at the head, with major causes (Cause A-D), subcauses (e.g., Subcause A1, B1, C1, C2, D1, D2), and sub-subcauses (e.g., Subcause A1.1, D2.1) as branches.
© 2005 by Carnegie Mellon University Version 1.0 page 137
7 Basic Tools: Chart Examples
[Histogram: number of days vs. product-service staff hours (34 to 56)]
[Scatter plot: development effort (person months) vs. size (thousands of executable source lines)]
© 2005 by Carnegie Mellon University Version 1.0 page 138
7 Basic Tools: Chart Examples
[Pareto chart: defects removed by type (quantity vs. defect type code), distinguishing defects removed in test from defects injected in design]
[Run chart: mean time to repair (minutes) across products 1-10]
© 2005 by Carnegie Mellon University Version 1.0 page 139
7 Basic Tools: Chart Examples
[Box & whisker plot for assessment data: CMMI benchmark vs. SEI Level 3, scored 0-100 for process areas including Requirements Development, Technical Solution, Product Integration, Verification, Validation, Organizational Process Focus, Organizational Process Definition, Organizational Training, Integrated Project Management, Risk Management, and Decision Analysis & Resolution]
Legend: x = mean; the box and whiskers mark the 10th, 25th, 50th (median), 75th, and 90th percentiles.
© 2005 by Carnegie Mellon University Version 1.0 page 140
7 Basic Tools: Chart Examples
[SPC chart (individuals, moving range): individual values by subgroup number, with a center line (CL), upper and lower control limits (UCL, LCL), and zones A, B, and C on either side of the center line]
© 2005 by Carnegie Mellon University Version 1.0 page 141
Example: Cost/Schedule Monthly Performance Baseline
All org units, all projects, October to March
[Charts: % schedule variance (roughly -100% to +100%, range about 60%) and % cost variance (with extreme values reaching several thousand percent) by month, Oct-01 through Mar-02]
• Reminder: this is (current actual – most recent estimate).
• Averages are within spec and close to 0.
• We need to examine extreme values, especially for cost.
• Even if the extreme values are outliers, it looks like we need to investigate variability.
© 2005 by Carnegie Mellon University Version 1.0 page 142
Exercise: OutliersWhat is an outlier?• a data point which does not appear to follow the characteristic
distribution of the rest of the data• an observation that lies an abnormal distance from other
values in a random sample from a population
Consider this cost variance data: 13, 22, 16, 20, 16, 18, 27, 25, 30, 333, 40
- average = 50.9, standard deviation = 93.9
If "333" is a typo and should have been "33":
- corrected average = 23.6, corrected standard deviation = 8.3
But, what if it’s a real value?
In groups of 3 • share your approach for deciding if and when to remove
extreme values from data sets
??
[Frost 03], [stats-online]
© 2005 by Carnegie Mellon University Version 1.0 page 143
Removing OutliersThere is not a widely-accepted automated approach to removing outliers.
Approaches• Visual
- examine distributions, trend charts, SPC charts, scatter plots, box plots
- couple with knowledge of data and process• Quantitative methods
- interquartile range- Grubbs’ test
Time is running short… So, we shall leave it to the student to look up the quantitative methods using the listed references as “homework.” (We will display 1 slide on interquartile range during break)
[Frost 03], [stats-online]
© 2005 by Carnegie Mellon University Version 1.0 page 144
When Not to Remove Outliers
When you don't understand the process
Because you “don’t like the data points” or they make your analysis more complicated.
Because the IQR or Grubbs' test "says so"
When they indicate a “second population”• identify the distinguishing factor and separate the data
When you have very few data points
Innocent until proven guilty
© 2005 by Carnegie Mellon University Version 1.0 page 145
Copies of Slides
After break: Exploring & Characterizing Data
If you would like copies of our slides,add your name to our signup sheet, orsend an email to Debra Morrison, [email protected].
• Put “SEPG Tutorial” in the subject line.
© 2005 by Carnegie Mellon University Version 1.0 page 146
Interquartile Range: Example
Data (sorted): 13, 16, 16, 18, 20, 22, 25, 27, 30, 40, 333

Procedure
1. Determine the 1st and 3rd quartiles of the data set: Q1, Q3 (here Q1 = 16, Q3 = 30)
2. Calculate the difference, the interquartile range or IQR (30 - 16 = 14)
3. Lower outlier boundary = Q1 - 1.5*IQR (16 - 1.5*14 = -5)
4. Upper outlier boundary = Q3 + 1.5*IQR (30 + 1.5*14 = 51)

Example adapted from "Metrics, Measurements, & Mathematical Mayhem," Alison Frost, Raytheon, SEPG 2003
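The same procedure can be scripted. The following Python sketch applies it to the cost variance data from the earlier exercise; with this data it reproduces Q1 = 16 and Q3 = 30 (quartile conventions vary slightly between tools).

import statistics

data = [13, 16, 16, 18, 20, 22, 25, 27, 30, 40, 333]
q1, _, q3 = statistics.quantiles(data, n=4)   # 1st and 3rd quartiles
iqr = q3 - q1
lower = q1 - 1.5 * iqr                        # lower outlier boundary
upper = q3 + 1.5 * iqr                        # upper outlier boundary
outliers = [x for x in data if x < lower or x > upper]
print(lower, upper, outliers)                 # 333 falls above the upper boundary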
© 2005 by Carnegie Mellon University Version 1.0 page 147
Analyze Guidance Questions
[Roadmap: Define → Measure → Analyze → Improve → Control]
• What do the data look like?
• What is driving the variation?
• What is the new baseline?
• What are the risks and assumptions associated with the revised data set and baseline?
• Are there any hypotheses that need to be tested?
• What causal factors are driving or limiting the capability of this process?
• What process map updates are needed?
• Are there any immediate issues to address? Any urgent and obvious needs for problem containment?
• Should the improvement goal be updated?
• Is additional data exploration, data decomposition, and/or process decomposition needed? Is additional data needed?
• Can I take action? Are there evident improvements and corrections to make?
• Have I updated the project tracking and communication mechanisms?
Related CMMI process areas: MA, OPP, CAR, GPs
© 2005 by Carnegie Mellon University Version 1.0 page 148
Exploring the Data
Considerations when answering the guidance questions
• What should the data look like? And, does it?
- first principles, heuristics, or relationships
- mental model of the process (refer to that black box)
- what we expect, in terms of cause & effect
• Are there yet-unexplained patterns or variation? If so,
- conduct more Y to x analysis
- plot, plot, plot using the basic tools
• Are there hypothesized x's that can be removed from the list?

The objective
• Go into "characterize" with as close as possible to the "right list" of Y's, y's, and x's to study and characterize in a detailed way.
• (At the end of "characterize," the list is further narrowed and prioritized.)
© 2005 by Carnegie Mellon University Version 1.0 page 149
‘Lean’ Formulas for Data Exploration
Lead Time =Work ‘Units’ in Process
Average Completion Rate
Little’s Law
Process Efficiency =Value-Added Process Time
Total Process Time
Cycle Time Performance Ratio =
Actual TimeTheoretical Time
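As a quick illustration, the following Python sketch evaluates the three formulas with hypothetical numbers.

wip = 30                       # work 'units' in process (e.g., open change requests)
completion_rate = 6            # average units completed per week

lead_time = wip / completion_rate                            # Little's Law: 5 weeks

value_added_time = 12          # hours of value-added work
total_process_time = 96        # total elapsed process hours
process_efficiency = value_added_time / total_process_time   # 0.125

actual_time = 96
theoretical_time = 40
cycle_time_ratio = actual_time / theoretical_time            # 2.4

print(lead_time, process_efficiency, cycle_time_ratio)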
© 2005 by Carnegie Mellon University Version 1.0 page 150
A Typical Exploration Question: Are There Multiple Populations?
Multimodal distributions point to multiple processes.
When there are multiple populations:
• Do we understand the causes based on work done in "measure"?
• Do we need to further explore Y to x relationships?
• Do we need to segment or stratify the data for further analysis?
© 2005 by Carnegie Mellon University Version 1.0 page 151
Characterizing Your Process
What causal factors are driving or limiting the capability of this process?
• Which x's are or are not significant?
• Can plausible changes in x's deliver targeted/desired changes in y's and Y's?
• Do we need to find more x's?
• Do we need to refine goals?
To support the ability to answer these questions, are there any hypotheses that need to be tested?• How do we test? (tests for significant difference, correlations,
experiments)
What is the stability and capability of the process?• What are assignable causes for “special cause” variation?• What are root causes for “common cause” variation?
© 2005 by Carnegie Mellon University Version 1.0 page 152
Testing for Differences
Comparing a process or product to "specification"
• Is the process on aim?
• Is the variability satisfactory?

Comparing two processes, products, or populations
• Are the means (or medians) the same?
• Is the variation the same?

Approaches to determining if means/medians are the same
• one-way analysis of variance (ANOVA)
• means comparison tests
• confidence interval for the delta, B - A
© 2005 by Carnegie Mellon University Version 1.0 page 153
More Advanced Tools that Support Characterization
The choice of tool depends on whether X and Y are attribute or variable data:
• Attribute X, attribute Y: Pareto, chi-square, contingency table
• Variable X, attribute Y: logistic regression
• Attribute X, variable Y: analysis of variance (ANOVA), F-test
• Variable X, variable Y: scatter plot, correlation, linear and non-linear regression, time series
© 2005 by Carnegie Mellon University Version 1.0 page 154
X vs. Y Correlation
[Scatter plots: defects vs. count of if-then constructs, and defects vs. McCabe complexity]
Correlation may indicate, but does not prove, causation.
Pearson correlation of if-then and Defects = 0.930, P-Value = 0.000
Pearson correlation of Complexity (McCabe) and Defects = 0.837, P-Value = 0.000
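A hedged sketch of how such correlations might be computed (the module-level data here are hypothetical, not the case-study values):

from scipy import stats

if_then_counts = [12, 30, 7, 45, 22, 18, 60, 9]      # count of if-then constructs per module
complexity     = [10, 25, 6, 38, 20, 15, 52, 8]      # McCabe complexity per module
defects        = [3, 9, 1, 14, 6, 5, 19, 2]          # defects per module

r1, p1 = stats.pearsonr(if_then_counts, defects)     # correlation and p-value
r2, p2 = stats.pearsonr(complexity, defects)
print(r1, p1)
print(r2, p2)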
© 2005 by Carnegie Mellon University Version 1.0 page 155
Variability & Capability
[Histogram of product-service staff-hours with control limits (LCL = 36.08, CL = 45.06, UCL = 54.04) and specification limits (LSL = 30, Target = 40, USL = 50)]
Voice of the process vs. voice of the customer: process capability (what the process actually does) is compared against the specification to decide whether this is a capable process.
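One common way to quantify this comparison is with the Cp/Cpk capability indices. The slide makes the comparison graphically rather than naming these indices, so the following Python sketch (hypothetical data) is an illustrative assumption rather than the course's method.

import statistics

staff_hours = [38, 41, 45, 47, 50, 44, 43, 48, 52, 46, 42, 49]   # hypothetical observations
lsl, usl = 30.0, 50.0                  # customer specification limits

mean = statistics.mean(staff_hours)
sigma = statistics.stdev(staff_hours)  # sample standard deviation

cp  = (usl - lsl) / (6 * sigma)                        # potential capability (spread only)
cpk = min(usl - mean, mean - lsl) / (3 * sigma)        # capability accounting for centering
print(round(cp, 2), round(cpk, 2))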
© 2005 by Carnegie Mellon University Version 1.0 page 156
Example: Final Project Cost & Schedule
Cost/Schedule Performance Baseline, Outliers Removed
• Reminder: this is the total cumulative variance (initial plan – final actual); customer-driven changes are included.
• Some extreme values are still present; there are no valid reasons to remove or segment the data.
• A large % of the data is outside specification; the process is not capable.
[Histograms of % effort variance and % schedule variance, each marked with LSL, target, and USL]

                        % effort variance    % schedule variance
  avg                         -2%                   13%
  std dev                     33%                   36%
  median                       2%                    7%
  min to max             -95% to 50%          -128% to 71%
  capability notes       43.8% outside          39% outside
  (spec = +/- 20%)           spec                  spec
© 2005 by Carnegie Mellon University Version 1.0 page 157
Analyze Guidance Questions
[Roadmap: Define → Measure → Analyze → Improve → Control; the Analyze guidance questions shown on page 148 are repeated here]
Related CMMI process areas: CAR, GPs
© 2005 by Carnegie Mellon University Version 1.0 page 158
“Real” Process Characteristics
Real processes exist or occur only in execution/operation.1
Exactly stable conditions never exist with real processes.1
Real processes are never entirely free of perturbations or anomalies.1
Real processes are subject to entropy, requiring continual repair of its effects.
Variations in the "system of causes" result in process variation.
Process variation fluctuates over time.
1 Deming, W.Edwards. Out of the Crisis. Cambridge, Mass.: Massachusetts Institute of Technology, Center for Advanced Engineering, 1986.
© 2005 by Carnegie Mellon University Version 1.0 page 159
Process Analysis
Must take “real” process behavior into consideration before making statistical inferences about its performance.
• What is the normal or inherent process variation?• What differentiates inherent from anomalous variation?• What is causing the anomalous variation?• Why is the anomalous variation occurring?
Methods and tools are needed to measure and analyze process behavior so that inductive inferences about the process performance can be supported.
© 2005 by Carnegie Mellon University Version 1.0 page 160
Process Performance
Process performance is behavior that can be described or measured using attributes of• process operation or execution• resultant products or services
Process performance measures answer this question: "How is the process performing with respect to quality, quantity, effort (cost), and time?"
Process performance analysis answers this question:“Why is the process behaving as it is?”
© 2005 by Carnegie Mellon University Version 1.0 page 161
Process Variation
Shewhart’s notion of dividing variation into two types:
1. Common cause variation
• variation in process performance due to normal or inherent interaction among process components (people, machines, material, environment, and methods)
2. Assignable cause (special) variation
• variation in process performance due to events that are not part of the normal process
• represents sudden or persistent abnormal changes to one or more of the process components
© 2005 by Carnegie Mellon University Version 1.0 page 162
Controlled (Predictable) Variation
[Figure: distributions of process behavior measured at times t1, t2, t3, and t4]
All measurements
• have the same central tendency and dispersion
• fall within the same limits
© 2005 by Carnegie Mellon University Version 1.0 page 163
Uncontrolled (Unpredictable) Variation
[Figure: distributions of process behavior measured at times t1, t2, t3, and t4]
Not all measurements
• have the same central tendency and dispersion
• fall within the same limits
© 2005 by Carnegie Mellon University Version 1.0 page 164
Flagging Variation
Control charts
• a diagnostic mechanism to determine if a process is stable and to flag variation that requires causal analysis
• when at desired performance, they are also a control mechanism

Other popular methods that flag variation and launch causal analysis
• distributions
• boxplots (also relative to time)

[Control chart sketch: individual values plotted against event time or sequence, with a mean or center line (CL), upper and lower control limits (UCL = CL + 3σ, LCL = CL - 3σ), and specification limits]
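As an illustration of the CL ± 3-sigma idea, the sketch below computes individuals (XmR) chart limits, estimating sigma from the average moving range via the conventional 2.66 constant; the observations are hypothetical.

values = [12, 15, 11, 14, 13, 18, 12, 16, 14, 13, 15, 11]   # hypothetical observations

center_line = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = center_line + 2.66 * mr_bar     # upper control limit
lcl = center_line - 2.66 * mr_bar     # lower control limit

out_of_control = [x for x in values if x > ucl or x < lcl]   # points to flag for causal analysis
print(center_line, lcl, ucl, out_of_control)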
© 2005 by Carnegie Mellon University Version 1.0 page 165
Patterns
Control-chart patterns that warrant causal analysis include: rapid shift in level, unstable mixture, stratification, trends, cycles, and bunching or grouping.
© 2005 by Carnegie Mellon University Version 1.0 page 166
Root Cause Terms
Problem
• any deviation from the standard, expected, or desired
• that which is outside the accepted tolerance, norm, benchmark
Symptom or direct cause• observable indicator, cue, or event that flags the existence of a
problem
Theory or hypothesis• unproven assertion about reasons for the problems & symptoms
Cause• event, factor, or circumstance that has been demonstrated to
produce the problem or deviation
Root cause• the cause that can be turned on & off to produce the problem,
which is not itself an interim symptom resulting from another cause
© 2005 by Carnegie Mellon University Version 1.0 page 167
Finding Root Cause: Why Care?
Applying corrective action or improvement to something other than the root cause is unlikely to result in sustained improvement.

A brief scenario:
• A project is half complete and has overrun cost by 25%.
• The project team replans and negotiates an adjusted cost with the customer. No other actions are taken.
What would you expect to happen if the root cause is• constant rotation of team members on/off project• increased cost of purchased materials
- with all purchases now complete
What is the possible impact for future projects? For the business?
??
© 2005 by Carnegie Mellon University Version 1.0 page 168
Diagnosing Root Cause
Key steps
• generate and organize hypotheses
- additional data exploration and characterization as needed
• select hypotheses to test
• test and evaluate

Generating and organizing lists of hypotheses
• 5 whys (just what it says: keep asking "why")
• brainstorming
• PSM Performance Analysis Model
• diagrams: cause & effect, affinity, tree, interrelationship
© 2005 by Carnegie Mellon University Version 1.0 page 169
Diagnosing Root Cause
Testing hypotheses
• historical review
• additional data collection
• process decomposition
• experiments

Evaluating tests
• 7 basic tools
• capability analysis
• means-comparison tests
• diagramming techniques
• failure modes and effects analysis
© 2005 by Carnegie Mellon University Version 1.0 page 170
Root Cause Tips
Play detective. Suspect everything!
Evaluate paired comparisons• characteristics of a “good data point” vs.• characteristics of a “bad data point”
Or stratify the data into “good” and “bad” subsets and evaluate.
For sporadic problems, troubleshoot in real time if possible.• memories fade quickly
Frequently used indicators for symptoms• frequency• severity or impact
If you can turn the problem on and off, you probably have found root cause!
© 2005 by Carnegie Mellon University Version 1.0 page 171
Outline
Context: The Value Proposition
Approach to building integrated training
A roadmap for performance-driven improvement
Roadmap connections to CMMI
The roadmap in practice: case example
The roadmap in practice: mini lectures & practice
Summary
• Illustrative summary
• Key points
© 2005 by Carnegie Mellon University Version 1.0 page 172
Composite Project Illustration
[Figure: the DMAIC phases annotated with a composite project example]
Define: problem and goal statement (Y): maximum latent defects released; minimum mean time between failure in the field
• problem & goal statements, define boundaries, process maps, "management by fact"
Competing perspectives voiced in the figure:
• "9 Faults/KSLOC? Are these frequent? Are they severe? Is this Ada, C++, Pascal, Smalltalk?"
• "As a user, I care about how often and when the software fails!"
• "I think I like KSLOC! They are easy to count. I can identify bad KSLOC and I know how to fix KSLOC. But how do KSLOC relate to my customer's satisfaction? I need confidence that fixing KSLOC makes my customer happy!"
Measure, Analyze, Improve, Control:
• Discovery: paretos, histograms, distributions, cause & effect
• Understanding: root cause, critical factors
• Improvement: adjust critical factors, redesign
• Performance: on target, with desired variation
Y = f(defect profile, yield) = f(review rate, method, complexity, …)
© 2005 by Carnegie Mellon University Version 1.0 page 173
Engineering Support Processes
[Process architecture figure [LMCO 02]: defined processes grouped into engineering development life cycle processes, program management & control processes, organizational processes, and engineering support processes. Processes shown include Requirements Definition & Architecture Development; Segment/Element Acquisitions; Verify & Validate System of System; System of System Transition to Operations; Support to Operations Development; Program & Project Planning; Monitor, Control Effort; Risk Management; Quantitative Management; Supplier/Subcontractor Coordination; Define and Improve SE Processes; Ensure Product Quality; Configuration Mgmt, Cntl; Manage Product Evolution; Manage SE Support Environment; Knowledge Mgmt; Understand Customer Needs; System Analysis; Decision Analysis.]
© 2005 by Carnegie Mellon University Version 1.0 page 174
Project Boundaries
[Process map figure (the development process map shown earlier): inspection and rework steps with critical inputs (requirements, estimate, concept design, code, executable code, test plan and technique, operational profiles), noise and control-knob factors (resources, code standards, LOC counter, interruptions), standard procedures, and the data collected at each phase (defects, fix time, defect injection phase, phase duration, design review defects, complexity)]
© 2005 by Carnegie Mellon University Version 1.0 page 175
Drilldown to Inspection Process
[Inspection subprocess map]
• Inputs: artifacts to inspect, artifact size, reviewers, data repository
• Control knobs and procedure elements: review rate, checklists, inspection method and procedure, proficiency, taxonomy interpretations
• Outputs: defect log, record of plan, direct cause, root cause, corrective action
• Legend: critical inputs, noise, standard procedure, control knobs
What would you list? What are the sources of variation? The control knobs?
Data feed the DMAIC project process.
© 2005 by Carnegie Mellon University Version 1.0 page 176
DMAI Iterations
Collecting basic data leads to
• injecting fewer defects
• detecting defects earlier
• removing them efficiently
• process stability

Refining processes: inspections
Improving using: cause & effect matrix, Pareto analysis

[Charts: initial vs. post-improvement defect density (defects/KLOC, 0-120) across PSP-based design review, PSP-based code review, code, compile, and test]
© 2005 by Carnegie Mellon University Version 1.0 page 177
Rayleigh Distribution as Control Mechanism
[Chart: error discovery data and Rayleigh-fitted histograms, estimated vs. actual errors by phase (Preliminary Design, Detailed Design, Code & Unit Test, Integration, System Test, Deployment)]
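A minimal sketch of the idea, with hypothetical numbers: scale a Rayleigh curve by the total expected defects to get an expected discovery profile per phase, then compare it with actuals.

import math

def rayleigh_pdf(t, sigma):
    # Rayleigh probability density function
    return (t / sigma**2) * math.exp(-t**2 / (2 * sigma**2))

phases = ["Prelim Design", "Detailed Design", "Code & Unit Test",
          "Integration", "System Test", "Deployment"]
total_defects = 30.0      # total expected errors (hypothetical)
sigma = 2.5               # controls where the discovery curve peaks

estimated = [total_defects * rayleigh_pdf(t, sigma) for t in range(1, len(phases) + 1)]
actual = [3.0, 7.5, 9.5, 5.0, 2.5, 1.0]    # hypothetical actual discoveries

for phase, est, act in zip(phases, estimated, actual):
    print(f"{phase:18s} estimated={est:5.2f} actual={act:5.2f}")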
© 2005 by Carnegie Mellon University Version 1.0 page 178
Forecasting Defects and Repair Costs by Phase
Pre-release and post-release defect counts can drive further models to forecast defects and their repair costs over time, by development phase.

[Defect Analysis Scorecard (To-Be Prediction), © 2002 Six Sigma Advantage: a spreadsheet model. The user enters code size and size units (e.g., 1,000 FP), total defects, defect source (insertion) percentages by phase (e.g., Reqts 22%, Design 35%, Code 43%), phase and defect containment effectiveness, hours to fix one defect for each insertion and found-in phase, and a loaded hourly labor rate (e.g., $100). The model forecasts defects found by phase, total fix hours, loaded defect fix costs, cost per released defect, total containment effectiveness (e.g., 90.7%), cost per FP (e.g., $384.13), and rework cost as a percentage of project cost (e.g., 29%: is that high?).]
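The following Python sketch is a simplified, hypothetical version of this kind of model, not the scorecard's actual formulas: it propagates inserted defects through phases using containment effectiveness and prices the fixes at a loaded labor rate.

total_defects = 1050
insertion = {"Reqts": 0.22, "Design": 0.35, "Code": 0.43}      # where defects originate (hypothetical)
containment = {"Reqts": 0.55, "Design": 0.60, "Code": 0.58, "Test": 0.90}  # fraction caught per phase
fix_hours = {"Reqts": 1.2, "Design": 0.9, "Code": 1.1, "Test": 6.0}        # hours per defect found in phase
rate = 100.0                                                   # loaded $/hour

phases = ["Reqts", "Design", "Code", "Test"]
escaped = 0.0
cost = 0.0
for phase in phases:
    inserted = total_defects * insertion.get(phase, 0.0)
    at_large = escaped + inserted                   # defects available to find in this phase
    found = at_large * containment[phase]
    cost += found * fix_hours[phase] * rate
    escaped = at_large - found                      # defects escaping to the next phase

print(round(cost, 0), round(escaped, 1))            # total fix cost and released (post-release) defects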
© 2005 by Carnegie Mellon University Version 1.0 page 179
From Single Process to Systemic View: Cause-Effect Model Using Bayesian Modeling
[Bayesian network figure [Stoddard 02]: project milestones (project initiation, system requirements baselined, system requirements allocated, contract book baselined & approved, design readiness, system test readiness, ready for field test, ready for controlled introduction, volume deployment) linked to evidence nodes such as historical HW FFR, SW reuse, SW requirements/architecture/design/code assessments, SW process maturity, SW domain expertise, SWIFT results, SW testing results, CASRE results, HW simulation and test results, ALT results, field test results, and product FFR]
© 2005 by Carnegie Mellon University Version 1.0 page 180
Beyond Control Charts: Prediction Models
[Stoddard 02]; *CASRE = Computer Aided Software Reliability Estimation
Actual field defects = f(CASRE predicted defects)
CASRE predicted defects = f(weekly arrival rate of SW failures, weekly test intensity measures)
$3M/year savings from premature SW releases
[Chart: bivariate fit of actual field defects by predicted field defects (0-60), with fit line and mean]
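As a hedged illustration of such a prediction model, the sketch below fits a simple linear relationship between hypothetical CASRE-predicted defects and actual field defects.

import numpy as np

predicted = np.array([5, 12, 18, 25, 33, 41, 50, 58], dtype=float)   # hypothetical CASRE predictions
actual    = np.array([7, 10, 20, 23, 35, 44, 48, 60], dtype=float)   # hypothetical field defects observed

slope, intercept = np.polyfit(predicted, actual, 1)   # bivariate linear fit
forecast = slope * 45 + intercept                     # forecast for a new predicted value of 45
print(round(slope, 2), round(intercept, 2), round(forecast, 1))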
© 2005 by Carnegie Mellon University Version 1.0 page 181
Summary – Key Points
Value Proposition
• It is a multi-initiative world.
• Measurement & analysis is a common root and an integrating platform.

Approaches to Integration
• Training courses are a mechanism to make connections.

DMAIC & CMMI relationships
• DMAIC and CMMI are different and synergistic.
• Relationships exist at the PA, goal, practice, and toolkit level.

DMAIC Execution
• Performance-driven improvement
• Enables CMMI subprocess selection to be informed by business priorities, baselines, problem statements, and causal analysis
• Propels you to high maturity and/or high capability
© 2005 by Carnegie Mellon University Version 1.0 page 182
A Base Architecture: Connecting All the Improvement Models
[Figure [Kawakita], [Shiba]: teams move back and forth between experience (gathering & discovering) and distilling & understanding, planning to gather and then distilling at each step. The type of data evolves from language statements and observations (a fuzzy sense of a problem or opportunity), to language plus numbers (focus on key aspects, sample data models), to numbers (actual capability, variation over time, monitored results), leading to solutions & standards and product/process improvement progress.]
© 2005 by Carnegie Mellon University Version 1.0 page 183
Contact Information
If you would like copies of our slides
• Add your name to our signup sheet, or
• Send an email to Debra Morrison, [email protected].
- Put "SEPG Tutorial" in the subject line.
For questions or other information:Jeannine SiviySoftware Engineering [email protected]
Dave Hallowell Six Sigma [email protected]
Dave ZubrowSoftware Engineering [email protected]
© 2005 by Carnegie Mellon University Version 1.0 page 184
References
[ASA 01] American Statistical Association, Quality & Productivity Section. Enabling Broad Application of Statistical Thinking. http://web.utk.edu/~asaqp/thinking.html, 2001.
[ASQ 00] ASQ Statistics Division. Improving Performance Through Statistical Thinking. Milwaukee: ASQ Quality Press, 2000.
[BPD] Process Maturity / Capability Maturity. http://www.betterproductdesign.net/maturity.htm. A resource site for the Good Design Practice program, a joint initiative between the Institute for Manufacturing and the Engineering Design Centre at the University of Cambridge and the Department of Industrial Design Engineering at the Royal College of Art (RCA) in London.
[Forrester] Forrester, Eileen. Transition Basics.
[Frost 03] Frost, Alison. Metrics, Measurements and Mathematical Mayhem. SEPG 2003.
[Gruber] Gruber, William H. and Donald G. Marquis, eds. Factors in the Transfer of Technology, 1965.
[Kawakita] Kawakita, Jiro. The Original KJ Method. Kawakita Research Institute.
[Moore] Moore, Geoffrey. Crossing the Chasm: Marketing and Selling Technology Products to Mainstream Customers. Harper Business, 1991.
[Penn&Siviy 05] Penn, M. Lynn and Jeannine Siviy. Relationships Between CMMI and Six Sigma (CMU/SEI-2005-TN-005), publication pending.
[Schon] Schon, Donald A. Technology and Change: The New Heraclitus, 1967.
[Shiba] Shiba, Shoji, et al. New American TQM: Four Practical Revolutions in Management. Productivity Press, 1993.
[stats online] Definitions from the electronic statistics textbook, http://www.statsoft.com/textbook/stathome.html, and the engineering statistics handbook, http://www.itl.nist.gov/div898/handbook/prc/section1/prc16.htm.
All URLs are subject to change.