
NASA Systems Engineering Handbook

SP-6105 June 1995

Contents

Foreword
Foreword to the September 1992 Draft
Preface
Acknowledgements

Introduction
    Purpose
    Scope and Depth

Fundamentals of Systems Engineering
    Systems, Supersystems, and Subsystems
    Definition of Systems Engineering
    Objective of Systems Engineering
    Disciplines Related to Systems Engineering
    The Doctrine of Successive Refinement

The Project Cycle for Major NASA Systems
    Pre-Phase A - Advanced Studies
    Phase A - Preliminary Analysis
    Phase B - Definition
    Phase C - Design
    Phase D - Development
    Phase E - Operations
    Role of Systems Engineering in the Project Life Cycle
        The "Vee" Chart
        The NASA Program/Project Life Cycle Process Flow
    Funding: The Budget Cycle

Management Issues in Systems Engineering
    Harmony of Goals, Work Products, and Organizations
    Managing the Systems Engineering Process: The Systems Engineering Management Plan
        Role of the SEMP
        Contents of the SEMP
        Development of the SEMP
        Managing the Systems Engineering Process: Summary
    The Work Breakdown Structure
        Role of the WBS
        Techniques for Developing the WBS
        Common Errors in Developing a WBS
    Scheduling
        Role of Scheduling
        Network Schedule Data and Graphical Formats
        Establishing a Network Schedule
        Reporting Techniques
        Resource Leveling
    Budgeting and Resource Planning
    Risk Management
        Types of Risks
        Risk Identification and Characterization Techniques
        Risk Analysis Techniques
        Risk Mitigation and Tracking Techniques
        Risk Management: Summary
    Configuration Management
        Baseline Evolution
        Techniques of Configuration Management
        Data Management
    Reviews, Audits, and Control Gates
        Purpose and Definitions
        General Principles for Reviews
        Major Control Gates
        Interim Reviews
    Status Reporting and Assessment
        Cost and Schedule Control Measures
        Technical Performance Measures
        Systems Engineering Process Metrics

Systems Analysis and Modeling Issues
    The Trade Study Process
        Controlling the Trade Study Process
        Using Models
        Selecting the Selection Rule
        Trade Study Process: Summary
    Cost Definition and Modeling
        Life-Cycle Cost and Other Cost Measures
        Controlling Life-Cycle Costs
        Cost Estimating
    Effectiveness Definition and Modeling
        Strategies for Measuring System Effectiveness
        NASA System Effectiveness Measures
        Availability and Logistics Supportability Modeling
    Probabilistic Treatment of Cost and Effectiveness
        Sources of Uncertainty in Models
        Modeling Techniques for Handling Uncertainty

Integrating Engineering Specialties Into the Systems Engineering Process
    Role of the Engineering Specialties
    Reliability
        Role of the Reliability Engineer
        Designing Reliable Space-Based Systems
        Reliability Analysis Tools and Techniques
    Quality Assurance
        Role of the Quality Assurance Engineer
        Quality Assurance Tools and Techniques
    Maintainability
        Role of the Maintainability Engineer
        The System Maintenance Concept and Maintenance Plan
        Designing Maintainable Space-Based Systems
        Maintainability Analysis Tools and Techniques
    Integrated Logistics Support
        ILS Elements
        Planning for ILS
        ILS Tools and Techniques: The Logistics Support Analysis
        Continuous Acquisition and Life-Cycle Support
    Verification
        Verification Process Overview
        Verification Program Planning
        Qualification Verification
        Acceptance Verification
        Preparation for Deployment Verification
        Operational and Disposal Verification
    Producibility
        Role of the Production Engineer
        Producibility Tools and Techniques
    Social Acceptability
        Environmental Impact
        Nuclear Safety Launch Approval
        Planetary Protection

Appendix A - Acronyms
Appendix B - Systems Engineering Templates and Examples
    Appendix B.1 - A Sample SEMP Outline
    Appendix B.2 - A "Tailored" WBS for an Airborne Telescope
    Appendix B.3 - Characterization, Mission Success, and SRM&QA Cost Guidelines for Class A-D Payloads
    Appendix B.4 - A Sample Risk Management Plan Outline
    Appendix B.5 - An Example of a Critical Items List
    Appendix B.6 - A Sample Configuration Management Plan Outline
    Appendix B.7 - Techniques of Functional Analysis
        B.7.1 Functional Flow Block Diagrams
        B.7.2 N2 Diagrams
        B.7.3 Time Line Analysis
    Appendix B.8 - The Effect of Changes in ORU MTBF on Space Station Freedom Operations
    Appendix B.9 - An Example of a Verification Requirements Matrix
    Appendix B.10 - A Sample Master Verification Plan Outline
Appendix C - Use of the Metric System
    C.1 NASA Policy
    C.2 Definition of Units
        C.2.1 SI Prefixes
        C.2.2 Base SI Units
        C.2.3 Supplementary SI Units
        C.2.4 Derived SI Units with Special Names
        C.2.5 Units in Use with SI
    C.3 Conversion Factors

Bibliography
Index

List of Figures

Figure 1 - The Enveloping Surface of Non-dominated Designs
Figure 2 - Estimates of Outcomes to be Obtained from Several Design Concepts, Including Uncertainty
Figure 3 - The Doctrine of Successive Refinement
Figure 4 - A Quantitative Objective Function, Dependent on Life-Cycle Cost and All Aspects of Effectiveness
Figure 5 - The NASA Program/Project Cycle
Figure 6 - Overruns are Very Likely if Phases A and B are Underfunded
Figure 7 - Overview of the Technical Aspect of the NASA Project Cycle
Figure 8 - The NASA Program/Project Life Cycle Process Flow
Figure 9 - Typical NASA Budget Cycle
Figure 10 - The Relationship Between a System, a Product Breakdown Structure, and a Work Breakdown Structure
Figure 11 - Examples of WBS Development Errors
Figure 12 - Activity-on-Arrow and Precedence Diagrams for Network Schedules
Figure 13 - An Example of a Gantt Chart
Figure 14 - An Example of an Unleveled Resource Profile
Figure 15 - Risk Management Structure Diagram
Figure 16 - Characterizing Risks by Likelihood and Severity
Figure 17 - Evolution of the Technical Baseline
Figure 18 - Configuration Management Structure Diagram
Figure 19 - Contract Change Control Process
Figure 20 - Planning and Status Reporting Feedback Loop
Figure 21 - Cost and Schedule Variances
Figure 22 - Three TPM Assessment Methods
Figure 23 - The Trade Study Process
Figure 24 - Results of Design Concepts with Different Risk Patterns
Figure 25 - Life-Cycle Cost Components
Figure 26 - System Effectiveness Components (Generic)
Figure 27 - Roles of Availability and Logistics Supportability Models
Figure 28 - A Monte Carlo Simulation with Three Uncertain Inputs
Figure 29 - Basic Reliability Block Diagrams
Figure 30 - A Simple Fault Tree
Figure 31a - Logistics Support Analysis Process Flow (Phases A and B)
Figure 31b - Logistics Support Analysis Process Flow (Phases C/D and E)
Figure 32a - Verification Process Flow (Phases A/B and C)
Figure 32b - Verification Process Flow (Phase D)
Figure B-1 - Stratospheric Observatory for Infrared Astronomy (SOFIA) Product Breakdown Structure
Figure B-2 - SOFIA Project WBS (Level 3)
Figure B-3 - SOFIA Observatory System WBS (Level 4)
Figure B-4 - SOFIA Airborne Facility WBS (Level 5)
Figure B-5 - SOFIA Telescope Element WBS (Level 6)
Figure B-6 - Development of Functional Flow Block Diagrams
Figure B-7 - N2 Chart Definition
Figure B-8 - N2 Chart Key Features
Figure B-9 - Flight Mission Time Lines
Figure B-10 - Sample Maintenance Time Line Sheet
Figure B-11 - Effect of MTBF on Operations Cost
Figure B-12 - Effect of MTBF on Crew Time
Figure B-13 - Effect of MTBF on Upmass
Figure B-14 - Effect of MTBF on Number of Crew (Available Crew Time Maintained)
Figure B-15 - Effect of MTBF on Number of STS Flights (Available Upmass Maintained)
Figure B-16 - Effect of MTBF on Five-year Operations Cost (Maintaining vs. Not Maintaining Available Upmass and Crew Time)

List of Sidebars

Selected Systems Engineering Reading
A Hierarchical System Terminology
The Technical Sophistication Required to do Systems Engineering Depends on the Project
Systems Engineering per EIA/IS-632
Cost, Effectiveness, Cost-Effectiveness
The System Engineer's Dilemma
As an Example of the Process of Successive Refinement, Consider the Choice of Altitude for a Space Station such as Alpha
Simple Interfaces are Preferred
Pre-Phase A - Advanced Studies
Phase A - Preliminary Analysis
Phase B - Definition
A Credible, Feasible Design
Phase C - Design
Phase D - Development
Phase E - Operations
Integrated Product Development Teams
SEMP Lessons Learned from DoD Experience
Critical Path and Float Calculation
Desirable Features in Gantt Charts
Assessing the Effect of Schedule Slippage
Risk
Probabilistic Risk Assessment Pitfalls
An Example of a Decision Tree for a Robotic Precursor Mission to Mars
Configuration Control Board Conduct
Project Termination
Computing the Estimate at Completion
Examples of High-Level TPMs for Planetary Spacecraft and Launch Vehicles
An Example of the Risk Management Method for Tracking Spacecraft Mass
Systems Analysis
Functional Analysis Techniques
An Example of a Trade Tree for a Mars Rover
Trade Study Reports
The Analytic Hierarchy Process
Multi-Attribute Utility Theory
Calculating Present Discounted Value
Statistical Cost Estimating Relationships: Example and Pitfalls
Learning Curve Theory
An Example of a Cost Spreader Function: The Beta Curve
Practical Pitfalls in Using Effectiveness Measures in Trade Studies
Measures of Availability
Logistics Supportability Models: Two Examples
The Cost S-Curve
Reliability Relationships
Lunar Excursion Module (LEM) Reliability
The Bathtub Curve
Maintenance Levels for Space Station Alpha
Maintainability Lessons Learned from HST Repair (STS-61)
MIL-STD 1388-1A/2B
Can NASA Benefit from CALS?
Analyses and Models
Verification Reports
Test Readiness Reviews
Software IV&V
What is NEPA?
Prefixes for SI Units


Foreword

In an attempt to demonstrate the potential dangers of relying on purely "cookbook" logical thinking, the mathematician/philosopher Carl Hempel posed a paradox. If we want to prove the hypothesis "All ravens are black," we can look for many ravens and determine if they all meet our criteria. Hempel suggested that changing the hypothesis to its logical contrapositive (a rewording with identical meaning) would be easier. The new hypothesis becomes: "All nonblack things are nonravens." This transformation, supported by the laws of logical thinking, makes it much easier to test, but unfortunately is ridiculous. Hempel's raven paradox points out the importance of common sense and proper background exploration, even to subjects as intricate as systems engineering.

In 1989, when the initial work on the NASA Systems Engineering Handbook was started, there were many who were concerned about the dangers of a document that purported to teach a generic NASA approach to systems engineering. Like Hempel's raven, there were concerns over the potential of producing a "cookbook" which offered the illusion of logic while ignoring experience and common sense. From the tremendous response to the initial (September 1992) draft of the handbook (in terms of both requests for copies and praise for the product), it seems early concerns were largely unfounded and that there is a strong need for this handbook.

The document you are holding represents what was deemed best in the original draft and updates information as necessary in light of recommendations and changes within NASA. This handbook represents some of the best thinking from across NASA. Many experts influenced its outcome, and consideration was given to each idea and criticism. It truly represents a NASA-wide product and one which furnishes a good overview of NASA systems engineering.

The handbook is intended to be an educational guide written from a NASA perspective. Individuals who take systems engineering courses are the primary audience for this work. Working professionals who require a guidebook to NASA systems engineering represent a secondary audience.

It was discovered during the review of the draft document that interest in this work goes far beyond NASA. Requests for translating this work have come from international sources, and we have been told that the draft handbook is being used in university courses on the subject. All of this may help explain why copies of the original draft handbook have been in short supply.

The main purposes of the NASA Systems Engineering Handbook are to provide: 1) useful information to system engineers and project managers, 2) a generic description of NASA systems engineering which can be supported by center-specific documents, 3) a common language and perspective on the systems engineering process, and 4) a reference work which is consistent with NMI 7120.4/NHB 7120.5. The handbook approaches systems engineering from a systems perspective, proceeding from mission needs and conceptual studies through operations and disposal.

While it would be impossible to thank all of the people directly involved, it is essential to note the efforts of Dr. Robert Shishko of the Jet Propulsion Laboratory. Bob was largely responsible for ensuring the completion of this effort. His technical expertise and nonstop determination were critical factors in the success of this project.

Mihaly Csikszentmihalyi defined an optimal experience as one where there is "a sense of exhilaration, a deep sense of enjoyment that is long cherished and becomes a landmark in memory of what life should be like." I am not quite sure if the experience which produced this handbook can be described exactly this way, yet the sentiment seems reasonably close.

—Dr. Edward J. Hoffman
Program Manager, NASA Headquarters

Spring 1995


Foreword to the September 1992 Draft

When NASA began to sponsor agency-wide classes in systems engineering, it was to a doubting audience. Top management was quick to express concern. As a former Deputy Administrator stated, "How can you teach an agency-wide systems engineering class when we cannot even agree on how to define it?" Good question, and one I must admit caused us considerable concern at that time. The same doubt continued up until the publication of this handbook.

The initial systems engineering education conference was held in January 1989 at the Johnson Space Center. A number of representatives from other Centers attended this meeting, and it was decided then that we needed to form a working group to support the development of appropriate and tailored systems engineering courses. At this meeting the representatives from Marshall Space Flight Center (MSFC) expressed a strong desire to document their own historic systems engineering process before any more of the key players left the Center. Other Centers also expressed a desire, if not as urgent as MSFC's, to document their processes.

It was thought that the best way to reflect the totality of the NASA systems engineering process, and to aid in developing the needed training, was to prepare a top-level (Level 0) document that would contain a broad definition of systems engineering, a broad process outline, and typical tools and procedures. In general, we wanted a top-level overview of NASA systems engineering. To this document would be appended each Center's unique systems engineering manual. The group was well aware of the diversity each Center may have, but agreed that this approach would be quite acceptable.

The next step, and the most difficult in this arduous process, was to find someone to head this yet-to-be-formed working group. Fortunately for NASA, Donna [Pivirotto] Shirley of the Jet Propulsion Laboratory stepped up to the challenge. Today, through her efforts, those of the working group, and the skilled and dedicated authors, we have a unique and possibly a historic document.

During the development of the manual we decided to put in much more than may be appropriate for a Level 0 document, with the idea that we could always refine the document later. It was more important to capture the knowledge when we could in order to better position ourselves for later dissemination. If there is any criticism, it may be the level of detail contained in the manual, but this detail is necessary for young engineers. The present document does appear to serve as a good instructional guide, although it does go well beyond its original intent.

As such, this present document is to be considered a next-to-final draft. Your comments, corrections, and suggestions are welcomed, valued, and appreciated. Please send your remarks directly to Robert Shishko, NASA Systems Engineering Working Group, NASA/Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109-8099.

—Francis T. Hoban
Program Manager, NASA Headquarters


Preface

This handbook was written to bring the fundamental concepts and techniques of systems engineering to NASA personnel in a way that recognizes the nature of NASA systems and the NASA environment. The authors readily acknowledge that this goal will not be easily realized. One reason is that not everyone agrees on what systems engineering is, nor on how to do it. There are legitimate differences of opinion on basic definitions, content, and techniques. Systems engineering itself is a broad subject, with many different aspects. This initial handbook does not (and cannot) cover all of them.

The content and style of this handbook show a teaching orientation. This handbook was meant to accompany formal NASA training courses on systems engineering, not to be a stand-alone, comprehensive view of NASA systems engineering. Systems engineering, in the authors' opinions, cannot be learned simply by starting at a well-defined beginning and proceeding seamlessly from one topic to another. Rather, it is a field that draws from many engineering disciplines and other intellectual domains. The boundaries are not always clear, and there are many interesting intellectual offshoots. Consequently, this handbook was designed to be a top-level overview of systems engineering as a discipline; brevity of exposition and the provision of pointers to other books and documents for details were considered important guidelines.

The material for this handbook was drawn from many different sources, including field center systems engineering handbooks, NASA management instructions (NMIs) and NASA handbooks (NHBs), field center briefings on systems engineering processes, non-NASA systems engineering textbooks and guides, and three independent systems engineering courses taught to NASA audiences. The handbook uses this material to provide only top-level information and suggestions for good systems engineering practices; it is not intended in any way to be a directive.

By design, the handbook covers some topics that are also taught in Project Management/Program Control (PM/PC) courses, reflecting the unavoidable connectedness of these three domains. The material on the NASA project life cycle is drawn from the work of the NASA-wide Systems Engineering Working Group (SEWG), which met periodically in 1991 and 1992, and its successor, the Systems Engineering Process Improvement Task (SEPIT) team, which met in 1993 and 1994. This handbook's project life cycle is identical to that promulgated in the SEPIT report, NASA Systems Engineering Process for Programs and Projects, JSC-49040. The SEPIT project life cycle is intentionally consistent with that in NMI 7120.4/NHB 7120.5 (Management of Major System Programs and Projects), but provides more detail on its systems engineering aspects.

This handbook consists of five core chapters: (1) systems engineering's intellectual process, (2) the NASA project life cycle, (3) management issues in systems engineering, (4) systems analysis and modeling issues, and (5) engineering specialty integration. These core chapters are supplemented by appendices, which can be expanded to accommodate any number of templates and examples to illustrate topics in the core chapters. The handbook makes extensive use of sidebars to define, refine, illustrate, and extend concepts in the core chapters without diverting the reader from the main argument. There are no footnotes; sidebars are used instead. The structure of the handbook also allows for additional sections and chapters to be added at a later date.

Finally, the handbook should be considered only a starting point. Both NASA as a systems engineering organization, and systems engineering as a discipline, are undergoing rapid evolution. Over the next five years, many changes will no doubt occur, and some are already in progress. NASA, for instance, is moving toward implementation of the standards in the International Organization for Standardization (ISO) 9000 family, which will affect many aspects of systems engineering. In systems engineering as a discipline, efforts are underway to merge existing systems engineering standards into a common American National Standard on the Engineering of Systems, and then ultimately into an international standard. These factors should be kept in mind when using this handbook.


Acknowledgements

This work was conducted under the overall direction of Mr. Frank Hoban, and then Dr. Ed Hoffman, NASA Headquarters/Code FT, under the NASA Program/Project Management Initiative (PPMI). Dr. Shahid Habib, NASA Headquarters/Code QW, provided additional financial support to the NASA-wide Systems Engineering Working Group and Systems Engineering Process Improvement Task team. Many individuals, both in and out of NASA, contributed material, reviewed various drafts, or otherwise provided valuable input to this handbook. Unfortunately, the lists below cannot include everyone who shared their ideas and experience.

Principal Contributors:
Dr. Robert Shishko, NASA/Jet Propulsion Laboratory
Mr. Robert G. Chamberlain, P.E., NASA/Jet Propulsion Laboratory

Other Contributors:
Mr. Robert Aster, NASA/Jet Propulsion Laboratory
Ms. Beth Bain, Lockheed Missiles and Space Company
Mr. Guy Beutelschies, NASA/Jet Propulsion Laboratory
Dr. Vincent Bilardo, NASA/Ames Research Center
Ms. Renee I. Cox, NASA/Marshall Space Flight Center
Dr. Kevin Forsberg, Center for Systems Management
Dr. Walter E. Hammond, Sverdrup Technology, Inc.
Mr. Patrick McDuffee, NASA/Marshall Space Flight Center
Mr. Harold Mooz, Center for Systems Management
Ms. Mary Beth Murrill, NASA/Jet Propulsion Laboratory
Dr. Les Pieniazek, Lockheed Engineering and Sciences Co.
Mr. Lou Polaski, Center for Systems Management
Mr. Neil Rainwater, NASA/Marshall Space Flight Center
Mr. Tom Rowell, NASA/Marshall Space Flight Center
Ms. Donna Shirley, NASA/Jet Propulsion Laboratory
Mr. Mark Sluka, Lockheed Engineering and Sciences Co.
Mr. Ron Wade, Center for Systems Management

SEPIT team members (not already mentioned):
Mr. Randy Fleming, Lockheed Missiles and Space Co.
Dr. Frank Fogle, NASA/Marshall Space Flight Center
Mr. Tony Fragomeni, NASA/Goddard Space Flight Center
Dr. Shahid Habib, NASA Headquarters/Code QW
Mr. Henning Krome, NASA/Marshall Space Flight Center
Mr. Ray Lugo, NASA/Kennedy Space Center
Mr. William C. Morgan, NASA/Johnson Space Center
Dr. Mike Ryschkewitsch, NASA/Goddard Space Flight Center
Mr. Gerry Sadler, NASA/Lewis Research Center
Mr. Dick Smart, Sverdrup Technology, Inc.
Dr. James Wade, NASA/Johnson Space Center
Mr. Milam Walters, NASA/Langley Research Center
Mr. Don Woodruff, NASA/Marshall Space Flight Center

Other sources:
Mr. Dave Austin, NASA Headquarters/Code DSS
Mr. Phillip R. Barela, NASA/Jet Propulsion Laboratory
Mr. J.W. Bott, NASA/Jet Propulsion Laboratory
Dr. Steven L. Cornford, NASA/Jet Propulsion Laboratory
Ms. Sandra Dawson, NASA/Jet Propulsion Laboratory
Dr. James W. Doane, NASA/Jet Propulsion Laboratory
Mr. William Edminston, NASA/Jet Propulsion Laboratory
Mr. Charles C. Gonzales, NASA/Jet Propulsion Laboratory
Dr. Jairus Hihn, NASA/Jet Propulsion Laboratory
Dr. Ed Jorgenson, NASA/Jet Propulsion Laboratory
Mr. Richard V. Morris, NASA/Jet Propulsion Laboratory
Mr. Tom Weber, Rockwell International/Space Systems

Reviewers:
Mr. Robert C. Baumann, NASA/Goddard Space Flight Center
Mr. Chris Carl, NASA/Jet Propulsion Laboratory
Dr. David S.C. Chu, Assistant Secretary of Defense/Program Analysis and Evaluation
Mr. M.J. Cork, NASA/Jet Propulsion Laboratory
Mr. James R. French, JRF Engineering Services
Mr. John L. Gasery, Jr., NASA/Stennis Space Center
Mr. Frederick D. Gregory, NASA Headquarters/Code Q
Mr. Don Hedgepeth, NASA/Langley Research Center
Mr. Jim Hines, Rockwell International/Space Systems
Mr. Keith L. Hudkins, NASA Headquarters/Code MZ
Dr. Jerry Lake, Defense Systems Management College
Mr. T.J. Lee, NASA/Marshall Space Flight Center
Mr. Jim Lloyd, NASA Headquarters/Code QS
Mr. Woody Lovelace, NASA/Langley Research Center
Dr. Brian Mar, Department of Civil Engineering, University of Washington
Dr. Ralph F. Miles, Jr., Center for Safety and System Management, University of Southern California
Dr. Richard A. Montgomery, Montgomery and Associates
Mr. Bernard G. Morais, Synergistic Applications, Inc.
Mr. Ron Moyer, NASA Headquarters/Code QW
Mr. Rudy Naranjo, NASA Headquarters/Code MZ
Mr. Raymond L. Nieder, NASA/Johnson Space Center
Mr. James E. Pearson, United Technologies Research Center
Mr. Leo Perez, NASA Headquarters/Code QP
Mr. David Pine, NASA Headquarters/Code B
Mr. Glen D. Ritter, NASA/Marshall Space Flight Center
Dr. Arnold Ruskin, NASA/Jet Propulsion Laboratory and University of California at Los Angeles
Mr. John G. Shirlaw, Massachusetts Institute of Technology
Mr. Richard E. Snyder, NASA/Langley Research Center
Mr. Don Sova, NASA Headquarters/Code AE
Mr. Marvin A. Stibich, USAF/Aeronautical Systems Center
Mr. Lanny Taliaferro, NASA/Marshall Space Flight Center

Editorial and graphics support:
Mr. Stephen Brewster, NASA/Jet Propulsion Laboratory
Mr. Randy Cassingham, NASA/Jet Propulsion Laboratory
Mr. John Matlock, NASA/Jet Propulsion Laboratory

—Dr. Robert Shishko
Task Manager, NASA/Jet Propulsion Laboratory


1 Introduction

1.1 Purpose

This handbook is intended to provide information on systems engineering that will be useful to NASA system engineers, especially new ones. Its primary objective is to provide a generic description of systems engineering as it should be applied throughout NASA. Field centers' handbooks are encouraged to provide center-specific details of implementation.

For NASA system engineers to choose to keep a copy of this handbook at their elbows, it must provide answers that cannot be easily found elsewhere. Consequently, it provides NASA-relevant perspectives and NASA-particular data. NASA management instructions (NMIs) are referenced when applicable.

This handbook's secondary objective is to serve as a useful companion to all of the various courses in systems engineering that are being offered under NASA's auspices.

1.2 Scope and Depth

The subject matter of systems engineering is very broad. The coverage in this handbook is limited to general concepts and generic descriptions of processes, tools, and techniques. It provides information on good systems engineering practices, and pitfalls to avoid. There are many textbooks that can be consulted for in-depth tutorials.

This handbook describes systems engineering as it should be applied to the development of major NASA systems. Systems engineering deals both with the system being developed (the product system) and the system that does the developing (the producing system). Consequently, the handbook's scope properly includes systems engineering functions regardless of whether they are performed by an in-house systems engineering organization, a program/project office, or a system contractor.

While many of the producing system's design features may be implied by the nature of the tools and techniques of systems engineering, it does not follow that institutional procedures for their application must be uniform from one NASA field center to another.

Selected Systems Engineering Reading

See the Bibliography for full reference data and further reading suggestions.

Fundamentals of Systems Engineering
Systems Engineering and Analysis (2nd ed.), B.S. Blanchard and W.J. Fabrycky
Systems Engineering, Andrew P. Sage
An Introduction to Systems Engineering, J.E. Armstrong and Andrew P. Sage

Management Issues in Systems Engineering
Systems Engineering, EIA/IS-632
IEEE Trial-Use Standard for Application and Management of the Systems Engineering Process, IEEE Std 1220-1994
Systems Engineering Management Guide, Defense Systems Management College
System Engineering Management, B.S. Blanchard
Systems Engineering Methods, Harold Chestnut
Systems Concepts, Ralph Miles, Jr. (editor)
Successful Systems Engineering for Engineers and Managers, Norman B. Reilly

Systems Analysis and Modeling
Systems Engineering Tools, Harold Chestnut
Systems Analysis for Engineers and Managers, R. de Neufville and J.H. Stafford
Cost Considerations in Systems Analysis, Gene H. Fisher

Space Systems Design and Operations
Space Vehicle Design, Michael D. Griffin and James R. French
Space Mission Analysis and Design (2nd ed.), Wiley J. Larson and James R. Wertz (editors)
Design of Geosynchronous Spacecraft, Brij N. Agrawal
Spacecraft Systems Engineering, Peter W. Fortescue and John P.W. Stark (editors)
Cost-Effective Space Mission Operations, Daryl Boden and Wiley J. Larson (editors)
Reducing Space Mission Cost, Wiley J. Larson and James R. Wertz (editors)


2 Fundamentals of Systems Engineering

2.1 Systems, Supersystems, and Subsystems

A system is a set of interrelated components which interact with one another in an organized fashion toward a common purpose. The components of a system may be quite diverse, consisting of persons, organizations, procedures, software, equipment, and/or facilities. The purpose of a system may be as humble as distributing electrical power within a spacecraft or as grand as exploring the surface of Mars.

A Hierarchical System Terminology

The following hierarchical sequence of terms for successively finer resolution was adopted by the NASA-wide Systems Engineering Working Group (SEWG) and its successor, the Systems Engineering Process Improvement Task (SEPIT) team:

System
Segment
Element
Subsystem
Assembly
Subassembly
Part

Particular projects may need a different sequence of layers; an instrument may not need as many layers, while a broad initiative may need to distinguish more layers. Projects should establish their own terminology. The word system is also used within NASA generically, as defined in the text. In this handbook, "system" is generally used in its generic form.
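As a purely illustrative sketch (not part of the handbook), the hierarchy above can be captured in a few lines of Python. The spacecraft decomposition used here is hypothetical; a real project would substitute its own level names and product tree:

from dataclasses import dataclass, field

# SEWG/SEPIT terminology, from coarsest to finest resolution.
LEVELS = ["System", "Segment", "Element", "Subsystem",
          "Assembly", "Subassembly", "Part"]

@dataclass
class Node:
    """One item in a product hierarchy, tagged with its resolution level."""
    name: str
    level: str
    children: list = field(default_factory=list)

    def add(self, name: str, level: str) -> "Node":
        # A child must sit exactly one level finer than its parent.
        assert LEVELS.index(level) == LEVELS.index(self.level) + 1, \
            f"{level!r} is not one level below {self.level!r}"
        child = Node(name, level)
        self.children.append(child)
        return child

def show(node: Node, indent: int = 0) -> None:
    """Print the hierarchy, one indent step per level of resolution."""
    print("  " * indent + f"{node.level}: {node.name}")
    for child in node.children:
        show(child, indent + 1)

# Hypothetical decomposition of a spacecraft, for illustration only.
sc = Node("Mars Orbiter", "System")
flight = sc.add("Flight Segment", "Segment")
bus = flight.add("Spacecraft Bus", "Element")
power = bus.add("Power Subsystem", "Subsystem")
array = power.add("Solar Array", "Assembly")
panel = array.add("Array Panel", "Subassembly")
panel.add("Solar Cell", "Part")
show(sc)

A project needing fewer (or more) layers would simply edit LEVELS, which is exactly the tailoring of terminology the sidebar recommends.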

Every system exists in the context of a broader supersystem, i.e., a collection of related systems. It is in that context that the system must be judged. Thus, managers in the supersystem set system policies, establish system objectives, determine system constraints, and define what costs are relevant. They often have oversight authority over system design and operations decisions.

Most NASA systems are sufficiently complex that their components are subsystems, which must function in a coordinated way for the system to accomplish its goals. From the point of view of systems engineering, each subsystem is a system in its own right; that is, policies, requirements, objectives, and which costs are relevant are established at the next level up in the hierarchy. Spacecraft systems often have such subsystems as propulsion, attitude control, telecommunications, and power. In a large project, the subsystems are likely to be called "systems". The word system is also used within NASA generically, as defined in the first paragraph above. In this handbook, "system" is generally used in its generic form.

The NASA management instruction for the acquisition of "major" systems (NMI 7120.4) defines a program as "a related series of undertakings that continue over a period of time (normally years), which are designed to pursue, or are in support of, a focused scientific or technical goal, and which are characterized by: design, development, and operations of systems." Programs are managed by NASA Headquarters, and may encompass several projects.

In the NASA context, a project encompasses the design, development, and operation of one or more systems, and is generally managed by a NASA field center.

Headquarters' management concerns include not only the engineering of the systems, but all of the other activities required to achieve the desired end. These other activities include explaining the value of programs and projects to Congress and enlisting international cooperation. The term mission is often used for a program or

The Technical Sophistication Required to do Systems Engineering Depends on the Project

• The system's goals may be simple and easy to identify and measure, or they may be technically complicated, requiring a great deal of insight about the environment or technology within or with which the system must operate.

• The system may have a single goal, or multiple goals. There are techniques available for determining the relative values of multiple goals, but sometimes goals are truly incommensurate and unquantifiable.

• The system may have users representing factions with conflicting objectives. When there are conflicting objectives, negotiated compromises will be required.

• Alternative system design concepts may be abundant, or they may require creative genius to develop.

• A "back-of-the-envelope" computation may be satisfactory for prediction of how well the alternative design concepts would do in achievement of the goals, or credibility may depend upon construction and testing of hardware or software models.

• The desired ends usually include an optimization objective, such as "minimize life-cycle cost" or "maximize the value of returned data", so selection of the best design may not be an easy task.


project's purpose; its connotations of fervor make it particularly suitable for such political activities, where the emotional content of the term is a desirable factor. In everyday conversation, the terms "project," "mission," and "system" are often used interchangeably; while imprecise, this rarely causes difficulty.

2.2 Definition of Systems Engineering

Systems engineering is a robust approach to the design, creation, and operation of systems. In simple terms, the approach consists of identification and quantification of system goals, creation of alternative system design concepts, performance of design trades, selection and

Systems Engineering per EIA/IS-632

Systems engineering is "an interdisciplinary approach encompassing the entire technical effort to evolve and verify an integrated and life-cycle balanced set of system people, product, and process solutions that satisfy customer needs. Systems engineering encompasses (a) the technical efforts related to the development, manufacturing, verification, deployment, operations, support, and disposal of, and user training for, system products and processes; (b) the definition and management of the system configuration; (c) the translation of the system definition into work breakdown structures; and (d) development of information for management decision making."

implementation of the best design, verification that the design is properly built and integrated, and post-implementation assessment of how well the system meets (or met) the goals. The approach is usually applied repeatedly and recursively, with several increases in the resolution of the system baselines (which contain requirements, design details, verification procedures and standards, cost and performance estimates, and so on).

Systems engineering is performed in concert with system management. A major part of the system engineer's role is to provide information that the system manager can use to make the right decisions. This includes identification of alternative design concepts and characterization of those concepts in ways that will help the system managers first discover their preferences, then be able to apply them astutely. An important aspect of this role is the creation of system models that facilitate assessment of the alternatives in various dimensions such as cost, performance, and risk.

Application of this approach includes performance of some delegated management duties, such as maintaining control of the developing configuration and overseeing the integration of subsystems.

2.3 Objective of Systems Engineering

The objective of systems engineering is to see to it that the system is designed, built, and operated so that it accomplishes its purpose in the most cost-effective way possible, considering performance, cost, schedule, and risk.

A cost-effective system must provide a particular kind of balance between effectiveness and cost: the system must provide the most effectiveness for the resources expended or, equivalently, it must be the least expensive for the effectiveness it provides. This condition is a weak one because there are usually many designs that meet the condition.

Cost

The cost of a system is the foregone value of the resources needed to design, build, and operate it. Because resources come in many forms—work performed by NASA personnel and contractors, materials, energy, and the use of facilities and equipment such as wind tunnels, factories, offices, and computers—it is often convenient to express these values in common terms by using monetary units (such as dollars).

Effectiveness

The effectiveness of a system is a quantitative measure of the degree to which the system's purpose is achieved. Effectiveness measures are usually very dependent upon system performance. For example, launch vehicle effectiveness depends on the probability of successfully injecting a payload onto a usable trajectory. The associated system performance attributes include the mass that can be put into a specified nominal orbit, the trade between injected mass and launch velocity, and launch availability.

Cost-Effectiveness

The cost-effectiveness of a system combines both the cost and the effectiveness of the system in the context of its objectives. While it may be necessary to measure either or both of those in terms of several numbers, it is sometimes possible to combine the components into a meaningful, single-valued objective function for use in design optimization. Even without knowing how to trade effectiveness for cost, designs that have lower cost and higher effectiveness are always preferred.


Think of each possible design as a point in the tradeoff space between effectiveness and cost. A graph plotting the maximum achievable effectiveness of designs available with current technology as a function of cost would in general yield a curved line such as the one shown in Figure 1. (In the figure, all the dimensions of effectiveness are represented by the ordinate and all the dimensions of cost by the abscissa.) In other words, the curved line represents the envelope of the currently available technology in terms of cost-effectiveness.

Points above the line cannot be achieved with currently available technology—that is, they do not represent feasible designs. (Some of those points may be feasible in the future when further technological advances have been made.) Points inside the envelope are feasible, but are dominated by designs whose combined cost and effectiveness lie on the envelope. Designs represented by points on the envelope are called cost-effective (or efficient or non-dominated) solutions.
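
For a finite set of candidate designs, the non-dominated set can be identified with a simple dominance filter. The following sketch is illustrative only (the handbook prescribes no particular tooling, and the designs and numbers are invented); it assumes each design has already been reduced to a single cost number and a single effectiveness number:

    # Illustrative sketch: identify the non-dominated (cost-effective)
    # designs among candidates summarized as (cost, effectiveness) pairs.
    # All names and numbers are hypothetical.

    def non_dominated(designs):
        """Design A dominates design B when A costs no more, is at least
        as effective, and is strictly better in at least one of the two."""
        efficient = []
        for a in designs:
            dominated = any(
                b["cost"] <= a["cost"]
                and b["effectiveness"] >= a["effectiveness"]
                and (b["cost"] < a["cost"] or b["effectiveness"] > a["effectiveness"])
                for b in designs
            )
            if not dominated:
                efficient.append(a)
        return efficient

    candidates = [
        {"name": "A", "cost": 10.0, "effectiveness": 0.70},
        {"name": "B", "cost": 12.0, "effectiveness": 0.65},  # dominated by A
        {"name": "C", "cost": 15.0, "effectiveness": 0.85},
    ]
    print([d["name"] for d in non_dominated(candidates)])  # prints ['A', 'C']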

Design trade studies, an important part of the systems engineering process, often attempt to find designs that provide a better combination of the various dimensions of cost and effectiveness. When the starting point for a design trade study is inside the envelope, there are alternatives that reduce costs without decreasing any aspect of effectiveness, or increase some aspect of effectiveness

Figure 1 -- The Enveloping Surface of Non-dominated Designs.

without decreasing others and without increasing costs. Then, the system manager's or system engineer's decision is easy. Other than in the sizing of subsystems, such "win-win" design trades are uncommon, but by no means rare. When, however, the alternatives in a design trade study require trading cost for effectiveness, or even one dimension of effectiveness for another at the same cost, the decisions become harder.

Figure 2 -- Estimates of Outcomes to be Obtained from Several Design Concepts Including Uncertainty.

The process of finding the most cost-effective design is further complicated by uncertainty, which is shown in Figure 2 as a modification of Figure 1. Exactly what outcomes will be realized by a particular system design cannot be known in advance with certainty, so the projected cost and effectiveness of a design are better described by a probability distribution than by a point. This distribution can be thought of as a cloud which is thickest at the most likely value and thinner farther away from the most likely point, as is shown for design concept A in the figure. Distributions resulting from designs which have little uncertainty are dense and highly compact, as is shown for concept B. Distributions associated with risky designs may have significant probabilities of producing highly undesirable outcomes, as is suggested by the presence of an additional low effectiveness/high cost cloud for concept C. (Of course, the envelope of such clouds cannot be a sharp line such as is shown in the figures, but must itself be rather fuzzy. The line can now be thought of as representing the envelope at some fixed confidence level—that is, a probability of x of achieving that effectiveness.)
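
Such outcome "clouds" are straightforward to approximate numerically. The sketch below is purely illustrative (the normal distributions and every number in it are assumptions, not handbook data); it samples one concept's cost and effectiveness outcomes and estimates the probability of an undesirable result:

    # Illustrative Monte Carlo sketch of a design concept's projected
    # cost/effectiveness "cloud". Distributions and thresholds are
    # hypothetical choices made only for this example.
    import random

    def sample_outcomes(mean_cost, cost_sigma, mean_eff, eff_sigma,
                        n=10000, seed=1):
        rng = random.Random(seed)
        return [(rng.gauss(mean_cost, cost_sigma), rng.gauss(mean_eff, eff_sigma))
                for _ in range(n)]

    # A compact cloud, like concept B in Figure 2 (little uncertainty).
    outcomes = sample_outcomes(mean_cost=100.0, cost_sigma=5.0,
                               mean_eff=0.80, eff_sigma=0.02)

    # Estimate the probability of an undesirable corner of the cloud:
    # over a cost threshold and under an effectiveness threshold.
    bad = sum(1 for c, e in outcomes if c > 110.0 and e < 0.78)
    print("P(bad outcome) ~", bad / len(outcomes))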

Both effectiveness and cost may require several descriptors. Even the Echo balloons obtained scientific data on the electromagnetic environment and atmospheric drag, in addition to their primary mission as communications satellites. Furthermore, Echo was the first satellite visible to the naked eye, an unquantified—but not unrecognized—aspect of its effectiveness. Costs, the expenditure of limited resources, may be measured in the several dimensions of funding, personnel, use of facilities, and so on. Schedule may appear as an attribute of effectiveness or cost, or as a constraint.


Sputnik, for example, drew much of its effectiveness from the fact that it was a "first"; a mission to Mars that misses its launch window has to wait about two years for another opportunity—a clear schedule constraint. Risk results from uncertainties in realized effectiveness, costs, timeliness, and budgets.

Sometimes, the systems that provide the highest ratio of effectiveness to cost are the most desirable.

The System Engineer's Dilemma

At each cost-effective solution:

• To reduce cost at constant risk, performance must be reduced.

• To reduce risk at constant cost, performance must be reduced.

• To reduce cost at constant performance, higher risks must be accepted.

• To reduce risk at constant performance, higher costs must be accepted.

In this context, time in the schedule is often a critical resource, so that schedule behaves like a kind of cost.

However, this ratio is likely to be meaningless or, worse, misleading. To be useful and meaningful, that ratio must be uniquely determined and independent of the system cost. Further, there must be but a single measure of effectiveness and a single measure of cost. If the numerical values of those metrics are obscured by probability distributions, the ratios become uncertain as well; then any usefulness the simple, single ratio of two numbers might have had disappears.

In some contexts, it is appropriate to seek the most effectiveness possible within a fixed budget; in other contexts, it is more appropriate to seek the least cost possible with specified effectiveness. In these cases, there is the question of what level of effectiveness to specify or of what level of costs to fix. In practice, these may be mandated in the form of performance or cost requirements; it then becomes appropriate to ask whether a slight relaxation of requirements could produce a significantly cheaper system or whether a few more resources could produce a significantly more effective system.
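
A small numerical example (all figures invented for illustration) shows how a ranking by the effectiveness-to-cost ratio can disagree with selection under a fixed budget:

    # Hypothetical designs: name -> (cost in $M, effectiveness in
    # arbitrary units). The numbers are invented for illustration.
    designs = {"small": (2.0, 0.30), "medium": (10.0, 1.00), "large": (40.0, 2.50)}

    # Ranking by ratio favors the small, cheap design...
    by_ratio = max(designs, key=lambda d: designs[d][1] / designs[d][0])
    print(by_ratio)  # 'small' -- best ratio, but little total effectiveness

    # ...while maximizing effectiveness within a fixed budget does not.
    budget = 12.0
    affordable = {d: ce for d, ce in designs.items() if ce[0] <= budget}
    print(max(affordable, key=lambda d: affordable[d][1]))  # 'medium'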

Usually, the system manager must choose among designs that differ in terms of numerous attributes. A variety of methods have been developed that can be used to help managers uncover their preferences between attributes and to quantify their subjective assessments of relative value. When this can be done, trades between attributes can be assessed quantitatively. Often, however, the attributes seem to be truly incommensurate; managers must make their decisions in spite of this multiplicity.

2.4 Disciplines Related to Systems Engineering

The definition of systems engineering given in Section 2.2 could apply to the design task facing a bridge designer, a radio engineer, or even a committee chair. The systems engineering process can be a part of all of these. It cannot be the whole of the job—the bridge designer must know the properties of concrete and steel, the radio engineer must apply Maxwell's equations, and a committee chair must understand the personalities of the members of the committee. In fact, the optimization of systems requires collaboration with experts in a variety of disciplines, some of which are compared to systems engineering in the remainder of this section.

The role of systems engineering differs from that of system management in that engineering is an analytical, advisory, and planning function, while management is the decision-making function. Very often, the distinction is irrelevant, as the same individuals may perform both roles. When no factors enter the decision-making process other than those that are covered by the analyses, system management may delegate some of the management responsibility to the systems engineering function.

Systems engineering differs from what might be called design engineering in that systems engineering deals with the relationships of the thing being designed to its supersystem (environment) and subsystems, rather than with the internal details of how it is to accomplish its objectives. The systems viewpoint is broad, rather than deep: it encompasses the system functionally from end to end and temporally from conception to disposal.

System engineers must also rely on contributions from the specialty engineering disciplines, in addition to the traditional design disciplines, for functional expertise and specialized analytic methods. These specialty engineering areas typically include reliability, maintainability, logistics, test, production, transportation, human factors, quality assurance, and safety engineering. Specialty engineers contribute throughout the systems engineering process; part of the system engineer's job is to see that these functions are coherently integrated into the project at the right times and that they address the relevant issues. One of the objectives for Chapter 6 is to develop an understanding of how these specialty engineers contribute to the objective of systems engineering.

In both systems analysis and systems engineering, the amounts and kinds of resources to be made available for the creation of the system are assumed to be among the decisions to be made.


Systems engineering concentrates on the creation of hardware and software architectures and on the development and management of the interfaces between subsystems, while relying on systems analysis to construct the mathematical models and analyze the data to evaluate alternative designs and to perform the actual design trade studies. Systems analysis often requires the use of tools from operations research, economics, or other decision sciences, and systems analysis curricula generally include extensive study of such topics as probability, statistics, decision theory, queueing theory, game theory, linear and non-linear programming, and so on. In practice, many system engineers' academic background is richer in the engineering disciplines than in the decision sciences. As a consequence, the system engineer is often a consumer of systems analysis products, rather than a producer of them. One of the major objectives for Chapter 5 is to develop an understanding and appreciation of the state of that art.

Operations research and operations engineering confine their attention to systems whose components are assumed to be more or less immutable. That is, it is assumed that the resources with which the system operates cannot be changed, but that the way in which they are used is amenable to optimization. Operations research techniques often provide powerful tools for the optimization of system designs.

Within NASA, terms such as mission analysis and engineering are often used to describe all study and design efforts that relate to determination of what the project's mission should be and how it should be carried out. Sometimes the scope is limited to the study of future projects. Sometimes the charters of organizations with such names include monitoring the capabilities of systems, ensuring that important considerations have not been overlooked, and overseeing trades between major systems—thereby encompassing operations research, systems analysis, and systems engineering activities.

Total quality management (TQM) is the application of systems engineering to the work environment. That is, part of the total quality management paradigm is the realization that an operating organization is a particular kind of system and should be engineered as one. A variety of specialized tools have been developed for this application area; many of them can be recognized as established systems engineering tools, but with different names. The injunction to focus on the satisfaction of customer needs, for example, is even expressed in similar terms. The use of statistical process control is akin to the use of technical performance and earned value measurements. Another method, quality function deployment (QFD), is a technique of requirements analysis often used in systems engineering.

The systems approach is common to all of these related fields. Essential to the systems approach is the recognition that a system exists, that it is embedded in a supersystem on which it has an impact, that it may contain subsystems, and that the system's objectives must be understood, and preferably explicitly identified.

2.5 The Doctrine of Successive Refinement

The realization of a system over its life cycle results from a succession of decisions among alternative courses of action. If the alternatives are precisely enough defined and thoroughly enough understood to be well differentiated in the cost-effectiveness space, then the system manager can make choices among them with confidence.

The systems engineering process can be thought of as the pursuit of definition and understanding of design alternatives to support those decisions, coupled with the overseeing of their implementation. To obtain assessments that are crisp enough to facilitate good decisions, it is often necessary to delve more deeply into the space of possible designs than has yet been done, as is illustrated in Figure 3.

It should be realized, however, that this spiral represents neither the project life cycle, which encompasses the system from inception through disposal, nor the product development process by which the system design is developed and implemented, which occurs in Phases C and D (see Chapter 3) of the project life cycle. Rather, as the intellectual process of systems engineering, it is inevitably reflected in both of them.



Figure 3 is really a double helix—each "create concepts" step at the level of design engineering initiates a

As an Example of the Process of Successive Refinement, Consider the Choice of Altitude for a Space Station such as Alpha

• The first issue is selection of the general location. Alternatives include Earth orbit, one of the Earth-Moon Lagrange points, or a solar orbit. At the current state of technology, cost and risk considerations made selection of Earth orbit an easy choice for Alpha.

• Having chosen Earth orbit, it is necessary to select an orbit region. Alternatives include low Earth orbit (LEO), high Earth orbit, and geosynchronous orbit; orbital inclination and eccentricity must also be chosen. One of many criteria considered in choosing LEO for Alpha was the design complexity associated with passage through the Van Allen radiation belts.

• System design choices proceed to the selection of an altitude maintenance strategy—rules that implicitly determine when, where, and why to reboost, such as "maintain altitude such that there are always at least TBD days to reentry," "collision avoidance maneuvers shall always increase the altitude," "reboost only after resupply flights that have brought fuel," "rotate the crew every TBD days."

• A next step is to write altitude specifications. These choices might consist of replacing the TBDs (values to be determined) in the altitude strategy with explicit numbers.

• Monthly operations plans are eventually part of the complete system design. These would include scheduled reboost burns based on predictions of the accumulated effect of drag and the details of on-board microgravity experiments.

• Actual firing decisions are based on determinations of the orbit which results from the momentum actually added by previous firings, the atmospheric density variations actually encountered, and so on.

Note that decisions at every step require that the capabilities offered by available technology be considered—often at levels of design that are more detailed than seems necessary at first.

capabilities definition spiral moving in the opposite direction. The concepts can never be created from whole cloth. Rather, they result from the synthesis of potential capabilities offered by the continually changing state of technology. This process of design concept development by the integration of lower-level elements is a part of the systems engineering process. In fact, there is always a danger that the top-down process cannot keep up with the bottom-up process.

There is often an early need to resolve the issues (such as the system architecture) enough so that the system can be modeled with sufficient realism to do reliable trade studies.

When resources are expended toward the implementation of one of several design options, the resources required to complete the implementation of that design decrease (of course), while there is usually little or no change in the resources that would be required by unselected alternatives. Selected alternatives thereby become even more attractive than those that were not selected.

Consequently, it is reasonable to expect the system to be defined with increasingly better resolution as time passes. This tendency is formalized at some point (in Phase B) by defining a baseline system definition. Usually, the goals, objectives, and constraints are baselined as the requirements portion of the baseline. The entire baseline is then subjected to configuration control in an attempt to ensure that successive changes are indeed improvements.

As the system is realized, its particulars become clearer—but also harder to change. As stated above, the purpose of systems engineering is to make sure that the development process happens in a way that leads to the most cost-effective final system. The basic idea is that before those decisions that are hard to undo are made, the alternatives should be carefully assessed.

The systems engineering process is applied again and again as the system is developed. As the system is realized, the issues addressed evolve and the particulars of the activity change.

Most of the major system decisions (goals, architecture, acceptable life-cycle cost, etc.) are made during the early phases of the project, so the turns of the spiral (that is, the successive refinements) do not correspond precisely to the phases of the system life cycle. Much of the system architecture can be "seen" even at the outset, so the turns of the spiral do not correspond exactly to development of the architectural hierarchy, either. Rather, they correspond to the successively greater resolution by which the system is defined.

Each of the steps in the systems engineering process is discussed below.


Recognize Need/Opportunity. This step is shown in Figure 3 only once, as it is not really part of the spiral but its first cause. It could be argued that recognition of the need or opportunity for a new system is an entrepreneurial activity, rather than an engineering one.

The end result of this step is the discovery and delineation of the system's goals, which generally express the desires and requirements of the eventual users of the system. In the NASA context, the system's goals should also represent the long-term interests of the taxpaying public.

Identify and Quantify Goals. Before it is possible to compare the cost-effectiveness of alternative system design concepts, the mission to be performed by the system must be delineated. The goals that are developed should cover all relevant aspects of effectiveness, cost, schedule, and risk, and should be traceable to the goals of the supersystem. To make it easier to choose among alternatives, the goals should be stated in quantifiable, verifiable terms, insofar as that is possible and meaningful to do.

It is also desirable to assess the constraints that may apply. Some constraints are imposed by the state of technology at the time of creating or modifying system design concepts. Others may appear to be inviolate, but can be changed by higher levels of management. The assumptions and other relevant information that underlie constraints should always be recorded so that it is possible to estimate the benefits that could be obtained from their relaxation.

At each turn of the spiral, higher-level goals are analyzed. The analysis should identify the subordinate enabling goals in a way that makes them traceable to the next higher level. As the systems engineering process continues, these are documented as functional requirements (what must be done to achieve the next-higher-level goals) and as performance requirements (quantitative descriptions of how well the functional requirements must be done). A clear operations concept often helps to focus the requirements analysis so that both functional and performance requirements are ultimately related to the original need or opportunity. In later turns of the spiral, further elaborations may become documented as detailed functional and performance specifications.

Create Alternative Design Concepts. Once it is understood what the system is to accomplish, it is possible to devise a variety of ways that those goals can be met. Sometimes, that comes about as a consequence of considering alternative functional allocations and integrating available subsystem design options. Ideally, as wide a range of plausible alternatives as is consistent with the design organization's charter should be defined, keeping in mind the current stage in the process of successive refinement. When the bottom-up process is operating, a problem for the system engineer is that the designers tend to become fond of the designs they create, and so lose their objectivity; the system engineer often must remain an "outsider" in order to preserve objectivity.

On the first turn of the spiral in Figure 3, the subject is often general approaches or strategies, sometimes architectural concepts. On the next, it is likely to be functional design, then detailed design, and so on.

The reason for avoiding a premature focus on a single design is to permit discovery of the truly best design. Part of the system engineer's job is to ensure that the design concepts to be compared take into account all interface requirements. "Did you include the cabling?" is a characteristic question. When possible, each design concept should be described in terms of controllable design parameters so that each represents as wide a class of designs as is reasonable. In doing so, the system engineer should keep in mind that the potentials for change may include organizational structure, schedules, procedures, and any of the other things that make up a system. When possible, constraints should also be described by parameters.

Owen Morris, former Manager of the Apollo Spacecraft Program and Manager of Space Shuttle Systems and Engineering, has pointed out that it is often useful to define design reference missions which stress all of the system's capabilities to a significant extent and which all designs will have to be able to accomplish. The purpose of such missions is to keep the design space open. Consequently, it can be very dangerous to write them into the system specifications, as they can have just the opposite effect.

Do Trade Studies. Trade studies begin with an assessment of how well each of the design alternatives meets the system goals (effectiveness, cost, schedule, and risk, both quantified and otherwise). The ability to perform these studies is enhanced by the development of system models that relate the design parameters to those assessments—but it does not depend upon them.

Controlled modification and development of design concepts, together with such system models, often permits the use of formal optimization techniques to find regions of the design space that warrant further investigation—those that are closer to the optimum surface indicated in Figure 1.
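
As a toy illustration of such a parameter search, the sketch below sweeps one controllable design parameter through a coarse grid to locate the region worth refining. The model functions and every constant are invented; a real trade study would substitute validated system models:

    # Illustrative grid search over a single design parameter.
    # Both models below are hypothetical stand-ins.

    def effectiveness(diameter_m):
        # Toy model: returns grow with aperture but saturate.
        return 1.0 - 2.0 ** (-diameter_m / 2.0)

    def cost(diameter_m):
        # Toy model: cost grows faster than linearly with aperture.
        return 5.0 + 1.5 * diameter_m ** 1.8

    def net_value(d, value_per_unit_effectiveness=60.0):
        return value_per_unit_effectiveness * effectiveness(d) - cost(d)

    grid = [0.5 * i for i in range(1, 21)]  # diameters from 0.5 m to 10 m
    best = max(grid, key=net_value)
    print("best diameter ~", best, "m; net value ~", round(net_value(best), 1))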

Whether system models are used or not, the design concepts are developed, modified, reassessed, and compared against competing alternatives in a closed-loop process that seeks the best choices for further development. System and subsystem sizes are often determined during the trade studies.


The end result is the determination of bounds on the relative cost-effectiveness of the design alternatives, measured in terms of the quantified system goals. (Only bounds, rather than final values, are possible because determination of the final details of the design is intentionally deferred. The bounds, in turn, may be derived from the probability density functions.) Increasing detail associated with the continually improving resolution reduces the spread between upper and lower bounds as the process proceeds.

Select Concept. Selection among the alternative design concepts is a task for the system manager, who must take into account the subjective factors that the system engineer was unable to quantify, in addition to the estimates of how well the alternatives meet the quantified goals (and any effectiveness, cost, schedule, risk, or other constraints).

When it is possible, it is usually well worth the trouble to develop a mathematical expression, called an objective function, that expresses the values of combinations of possible outcomes as a single measure of cost-effectiveness, as is illustrated in Figure 4, even if both cost and effectiveness must be described by more than one measure. When achievement of the goals can be quantitatively expressed by such an objective function, designs can be compared in terms of their value. Risks associated with design concepts can cause these evaluations to be somewhat nebulous (because they are uncertain and are best described by probability distributions). In this illustration, the risks are relatively high for design concept A. There is little risk in either effectiveness or cost for concept B.

Figure 4 -- A Quantitative Objective Function, Dependent on Life-Cycle Cost and All Aspects of Effectiveness.

The risk of an expensive failure is high for concept C, as is shown by the cloud of probability near the x-axis with a high cost and essentially no effectiveness. Schedule factors may affect the effectiveness values, the cost values, and the risk distributions.
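
A minimal numeric sketch of such an objective function follows; the "expected net value" form and all of the sample outcomes are assumptions made for illustration, not the handbook's prescription:

    # Illustrative objective function: expected net value, i.e., the
    # dollar value assigned to effectiveness minus cost, averaged over
    # each concept's sampled outcome distribution. Numbers are invented.

    def expected_net_value(outcomes, value_per_unit_effectiveness=100.0):
        # outcomes: list of (cost, effectiveness) samples for one concept
        return sum(value_per_unit_effectiveness * e - c
                   for c, e in outcomes) / len(outcomes)

    concept_b = [(52.0, 0.80), (50.0, 0.79), (48.0, 0.81)]  # low risk
    concept_c = [(45.0, 0.90), (47.0, 0.88), (90.0, 0.05)]  # failure mode
    print("B:", round(expected_net_value(concept_b), 1))  # B: 30.0
    print("C:", round(expected_net_value(concept_c), 1))  # C: 0.3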

The mission success criteria for systems differ significantly. In some cases, effectiveness goals may be much more important than all others. Other projects may demand low costs, have an immutable schedule, or require minimization of some kinds of risks. Rarely (if ever) is it possible to produce a combined quantitative measure that relates all of the important factors, even if it is expressed as a vector with several components. Even when that can be done, it is essential that the underlying factors and relationships be thoroughly revealed to and understood by the system manager. The system manager must weigh the importance of the unquantifiable factors along with the quantitative data provided by the system engineer.

Technical reviews of the data and analyses are an important part of the decision support packages prepared for the system manager. The decisions that are made are generally entered into the configuration management system as changes to (or elaborations of) the system baseline. The supporting trade studies are archived for future use. An essential feature of the systems engineering process is that trade studies are performed before decisions are made. They can then be baselined with much more confidence.

At this point in the systems engineering process, there is a logical branch point.

Simple Interfaces are Preferred

According to Morris, NASA's former Acting Administrator George Low, in a 1971 paper titled "What Made Apollo a Success," noted that only 100 wires were needed to link the Apollo spacecraft to the Saturn launch vehicle. He emphasized the point that a single person could fully understand the interface and cope with all the effects of a change on either side of the interface.

For those issues for which the process of successive refinement has proceeded far enough, the next step is to implement the decisions at that level of resolution (that is, unwind the recursive process). For those issues that are still insufficiently resolved, the next step is to refine the development further.

Increase the Resolution of the Design. One of the first issues to be addressed is how the system should be subdivided into subsystems. (Once that has been done, the focus changes and the subsystems become systems—from the point of view of a system engineer. The partitioning process stops when the subsystems are simple enough to be managed holistically.)


As noted by Morris, "the division of program activities to minimize the number and complexity of interfaces has a strong influence on the overall program cost and the ability of the program to meet schedules."

Charles Leising and Arnold Ruskin have (separately) pointed out that partitioning is more art than science, but that there are guidelines available: To make interfaces clean and simple, similar functions, designs, and technologies should be grouped. Each portion of work should be verifiable. Pieces should map conveniently onto the organizational structure. Some of the functions that are needed throughout the design (such as electrical power) or throughout the organization (such as purchasing) can be centralized. Standardization—of such things as parts lists or reporting formats—is often desirable. The accounting system should follow (not lead) the system architecture. In terms of breadth, partitioning should be done essentially all at once. As with system design choices, alternative partitioning plans should be considered and compared before implementation.

If a requirements-driven design paradigm is used for the development of the system architecture, it must be applied with care, for the use of "shells" creates a tendency for the requirements to be treated as inviolable constraints rather than as agents of the objectives. A goal, objective, or desire should never be made a requirement until its costs are understood and the buyer is willing to pay for it. The capability to compute the effects of lower-level decisions on the quantified goals should be maintained throughout the partitioning process. That is, there should be a goals flowdown embedded in the requirements allocation process.
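
One way to keep such a flowdown computable is to record, for every allocated requirement, a link to the parent it serves, so that the effect of a lower-level decision can be traced back to a quantified goal. The sketch below is a minimal illustration; the requirement identifiers, texts, and structure are all invented:

    # Illustrative goals-flowdown structure: each requirement keeps a
    # link to its parent so any node can be traced to its top-level goal.
    # All identifiers and requirement texts are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        identifier: str
        text: str
        parent: "Requirement | None" = None
        children: list = field(default_factory=list)

        def allocate(self, identifier, text):
            child = Requirement(identifier, text, parent=self)
            self.children.append(child)
            return child

        def trace_to_goal(self):
            # Walk upward until the top-level goal is reached.
            node = self
            while node.parent is not None:
                node = node.parent
            return node

    goal = Requirement("G-1", "Return >= 95% of collected science data")
    comm = goal.allocate("SYS-12", "Downlink capacity >= 150 Mbps")
    amp = comm.allocate("SUB-47", "Transmitter RF output >= 35 W")
    print(amp.trace_to_goal().identifier)  # prints G-1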

The process continues with creation of a variety of alternative design concepts at the next level of resolution, construction of models that permit prediction of how well those alternatives will satisfy the quantified goals, and so on. It is imperative that plans for subsequent integration be laid throughout the partitioning. Integration plans include verification and validation activities as a matter of course.

Implement the Selected Design Decisions. When the process of successive refinement has proceeded far enough, the next step is to reverse the partitioning process. When applied to the system architecture, this "unwinding" of the process is called system integration. Conceptual system integration takes place in all phases of the project life cycle. That is, when a design approach has been selected, the approach is verified by "unwinding the process" to test whether the concept at each physical level meets the expectations and requirements. Physical integration is accomplished during Phase D. At the finer levels of resolution, pieces must be tested, assembled and/or integrated, and tested again. The system engineer's role includes the performance of the delegated management duties, such as configuration control and overseeing the integration, verification, and validation process.

The purpose of verification of subsystem integration is to ensure that the subsystems conform to what was designed and interface with each other as expected in all respects that are important: mechanical connections, effects on center of mass and products of inertia, electromagnetic interference, connector impedance and voltage, power consumption, data flow, and so on. Validation consists of ensuring that the interfaced subsystems achieve their intended results. While validation is even more important than verification, it is usually much more difficult to accomplish.

Perform the Mission. Eventually, the system is called upon to meet the need or seize the opportunity for which it was designed and built.

The system engineer continues to perform a variety of supporting functions, depending on the nature and duration of the mission. On a large project such as Space Station Alpha, some of these continuing functions include the validation of system effectiveness at the operational site, overseeing the maintenance of configuration and logistics documentation, overseeing sustaining engineering activities, compiling development and operations "lessons learned" documents, and, with the help of the specialty engineering disciplines, identifying product improvement opportunities. On smaller systems, such as a Spacelab payload, only the last two may be needed.


3 The Project Life Cycle for Major NASA Systems

One of the fundamental concepts used within NASA for the management of major systems is the program/project life cycle, which consists of a categorization of everything that should be done to accomplish a project into distinct phases, separated by control gates. Phase boundaries are defined so that they provide more-or-less natural points for go/no-go decisions. Decisions to proceed may be qualified by liens that must be removed within a reasonable time. A project that fails to pass a control gate and has enough resources may be allowed to "go back to the drawing board"—or it may be terminated.

All systems start with the recognition of a need or the discovery of an opportunity and proceed through various stages of development to a final disposition. While the most dramatic impacts of the analysis and optimization activities associated with systems engineering are obtained in the early stages, decisions that affect millions of dollars of value or cost continue to be amenable to the systems approach even as the end of the system lifetime approaches.

Decomposing the project life cycle into phases organizes the entire process into more manageable pieces. The project life cycle should provide managers with incremental visibility into the progress being made at points in time that fit with the management and budgetary environments. NASA documents governing the acquisition of major systems (NMI 7120.4 and NHB 7120.5) define the phases of the project life cycle as:

• Pre-Phase A—Advanced Studies ("find a suitable project")

• Phase A—Preliminary Analysis ("make sure the project is worthwhile")

• Phase B—Definition ("define the project and establish a preliminary design")

• Phase C—Design ("complete the system design")

• Phase D—Development ("build, integrate, and verify the system, and prepare for operations")

• Phase E—Operations ("operate the system and dispose of it properly").

Phase A efforts are conducted by NASA field centers; such efforts may rely, however, on pre-Phase A in-house and contracted advanced studies. The majority of Phase B efforts are normally accomplished by industry under NASA contract, but NASA field centers typically conduct parallel in-house studies in order to validate the contracted effort and remain an informed buyer. NASA usually chooses to contract with industry for Phases C and D, and often does so for Phase E. Phase C is nominally combined with Phase D, but when large production quantities are planned, these are treated separately.

Alternatives to the project phases described above can easily be found in industry and elsewhere in government. In general, the engineering development life cycle is dependent on the technical nature of what's being developed, and the project life cycle may need to be tailored accordingly. Barry W. Boehm described how several contemporary software development processes work; in some of these processes, the development and construction activities proceed in parallel, so that attempting to separate the associated phases on a time line is undesirable. Boehm describes a spiral, which reflects the doctrine of successive refinement depicted in Figure 3, but Boehm's spiral describes the software product development process in particular. His discussion applies as well to the development of hardware products as it does to software. Other examples of alternative processes are the rapid prototyping and rapid development approaches. Selection of a product development process paradigm must be a case-dependent decision, based on the system engineer's judgment and experience.

Sometimes, it is appropriate to perform some long-lead-time activities ahead of the time they would nominally be done. Long-lead-time activities might consist of technology developments, prototype construction and testing, or even fabrication of difficult components. Doing things out of their usual sequence increases risk in that those activities could wind up having been either unnecessary or improperly specified. On the other hand, overall risk can sometimes be reduced by removal of such activities from the critical path.

Figure 5 (foldout, next page) details the resulting management and major systems engineering products and control gates that characterize the phases in NMI 7120.4 and NHB 7120.5. Sections 3.1 to 3.6 contain narrative descriptions of the purposes, major activities, products, and control gates of the NASA project life cycle phases. Section 3.7 provides a more concentrated discussion of the role of systems engineering in the process. Section 3.8 describes the NASA budget cycle within which program/project managers and system engineers must operate.

3.1 Pre-Phase A—Advanced Studies

The purpose of this activity, which is usually performed more or less continually by "Advanced Projects" groups, is to uncover, invent, create, concoct, and/or devise


Pre-Phase A—Advanced Studies

Purpose: To produce a broad spectrum of ideas and alternatives for missions from which new programs/projects can be selected.

Major Activities and their Products:
Identify missions consistent with charter
Identify and involve users
Perform preliminary evaluations of possible missions
Prepare program/project proposals, which include:

• Mission justification and objectives
• Possible operations concepts
• Possible system architectures
• Cost, schedule, and risk estimates.

Develop master plans for existing program areas
Information Baselined:
(nothing)
Control Gates:
Mission Concept Review
Informal proposal reviews

a broad spectrum of ideas and alternatives for missions from which new projects (programs) can be selected. Typically, this activity consists of loosely structured examinations of new ideas, usually without central control and mostly oriented toward small studies. Its major product is a stream of suggested projects, based on the identification of needs and the discovery of opportunities that are potentially consistent with NASA's mission, capabilities, priorities, and resources.

In the NASA environment, demands for new systems derive from several sources. A major one is the opportunity to solve terrestrial problems that may be addressed by putting instruments and other devices into space. Two examples are weather prediction and communications by satellite. General improvements in technology for use in space will continue to open new possibilities. Such opportunities are rapidly perceived as needs once the magnitude of their value is understood.

Technological progress makes possible missions that were previously impossible. Manned trips to the moon and the taking of high resolution pictures of planets and other objects in the universe illustrate past responses to this kind of opportunity. New opportunities will continue to become available as our technological capabilities grow.

Scientific progress also generates needs for NASA systems. As our understanding of the universe around us continues to grow, we are able to ask new and more precise questions. The ability to answer these questions often depends upon the changing state of technology.

Advanced studies may extend for several years, and may be a sequence of papers that are only loosely connected. These studies typically focus on establishing mission goals and formulating top-level system requirements and operations concepts.

Conceptual designs are often offered to demonstrate feasibility and support programmatic estimates. The emphasis is on establishing feasibility and desirability rather than optimality. Analyses and designs are accordingly limited in both depth and number of options.

3.2 Phase A—Preliminary Analysis

The purpose of this phase is to further examine the feasibility and desirability of a suggested new major system before seeking significant funding. According to NHB 7120.5, the major products of this phase are a formal Mission Needs Statement (MNS) and one or more credible, feasible designs and operations concepts. John Hodge describes this phase as "a structured version of the previous phase."

Phase A—Preliminary Analysis

Purpose: To determine the feasibility and desirability of a suggested new major system and its compatibility with NASA's strategic plans.

Major Activities and their Products:
Prepare Mission Needs Statement
Develop top-level requirements
Develop corresponding evaluation criteria/metrics
Identify alternative operations and logistics concepts
Identify project constraints and system boundaries
Consider alternative design concepts, including: feasibility and risk studies, cost and schedule estimates, and advanced technology requirements
Demonstrate that credible, feasible design(s) exist
Acquire systems engineering tools and models
Initiate environmental impact studies
Prepare Project Definition Plan for Phase B
Information Baselined:
(nothing)
Control Gates:
Mission Definition Review
Preliminary Non-Advocate Review
Preliminary Program/Project Approval Review

In Phase A, a larger team, often associated with an ad hoc program or project office, readdresses the mission concept to ensure that the project justification and practicality are sufficient to warrant a place in NASA's budget. The team's effort focuses on analyzing mission requirements and establishing a mission architecture.


Activities become formal, and the emphasis shifts toward establishing optimality rather than feasibility. The effort proceeds in greater depth and considers many alternatives. Goals and objectives are solidified, and the project develops more definition in the system requirements, top-level system architecture, and operations concept. Conceptual designs are developed and exhibit more engineering detail than in advanced studies. Technical risks are identified in more detail and technology development needs become focused.

The Mission Needs Statement is not shown in the sidebar as being baselined, as it is not under configuration control by the project. It may be under configuration control at the program level, as may the program requirements documents and the Preliminary Program Plan.

3.3 Phase B—Definition

The purpose of this phase is to establish an initial project baseline, which (according to NHB 7120.5) includes "a formal flowdown of the project-level performance requirements to a complete set of system and subsystem design specifications for both flight and ground elements" and "corresponding preliminary designs." The technical requirements should be sufficiently detailed to establish firm schedule and cost estimates for the project.

Actually, "the" Phase B baseline consists of acollection of evolving baselines covering technical andbusiness aspects of the project: system (and subsystem)requirements and specifications, designs, verificationand operations plans, and so on in the technical portionof the baseline, and schedules, cost projections, andmanagement plans in the business portion.Establishment of baselines implies the implementationof configuration management procedures. (See Section4.7.)

Phase B—Definition

Purpose: To define the project in enough detail to establish an initial baseline capable of meeting mission needs.

Major Activities and their Products:
Prepare a Systems Engineering Management Plan
Prepare a Risk Management Plan
Initiate configuration management
Prepare engineering specialty program plans
Develop system-level cost-effectiveness model
Restate mission needs as functional requirements
Identify science payloads
Establish the initial system requirements and verification requirements matrix
Perform and archive trade studies
Select a baseline design solution and a concept of operations
Define internal and external interface requirements
(Repeat the process of successive refinement to get "design-to" specifications and drawings, verification plans, and interface documents to lower levels as appropriate)
Define the work breakdown structure
Define verification approach and policies
Identify integrated logistics support requirements
Establish technical resource estimates and firm life-cycle cost estimates
Develop statement(s) of work
Initiate advanced technology developments
Revise and publish a Project Plan
Reaffirm the Mission Needs Statement
Prepare a Program Commitment Agreement
Information Baselined:
System requirements and verification requirements matrix
System architecture and work breakdown structure
Concept of operations
"Design-to" specifications at all levels
Project plans, including schedule, resources, acquisition strategies, and risk management
Control Gates:
Non-Advocate Review
Program/Project Approval Review
System Requirements Review(s)
System Definition Review
System-level Preliminary Design Review
Lower-level Preliminary Design Reviews
Safety review(s)


A Credible, Feasible Design

A feasible system design is one that can be implemented as designed and can then accomplish the system's goals within the constraints imposed by the fiscal and operating environment. To be credible, a design must not depend on the occurrence of unforeseen breakthroughs in the state of the art. While a credible design may assume likely improvements in the state of the art, it is nonetheless riskier than one that does not.

Early in Phase B, the effort focuses on allocating functions to particular items of hardware, software, personnel, etc. System functional and performance requirements, along with architectures and designs, become firm as system trades and subsystem trades iterate back and forth in the effort to seek out more cost-effective designs. (Trade studies should precede—rather than follow—system design decisions. Chamberlain, Fox, and Duquette describe a decentralized process for ensuring that such trades lead efficiently to an optimum system design.) Major products to this point include an accepted "functional" baseline and preliminary "design-to" baseline for the system and its major end items. The effort also produces various engineering and management plans to prepare for managing the project's downstream processes, such as verification and operations, and for implementing engineering specialty programs.

Along the way to these products, projects are subjected to a Non-Advocate Review, or NAR. This activity seeks to assess the state of project definition in terms of its clarity of objectives and the thoroughness of technical and management plans, technical documentation, alternatives explored, and trade studies performed. The NAR also seeks to evaluate the cost and schedule estimates, and the contingency reserve in these estimates. The timing of this review is often driven by the Federal budget cycle, which requires at least 16 months between NASA's budget preparation for submission to the President's Office of Management and Budget and the Congressional funding for a new project start. (See Section 3.8.) There is thus a natural tension between the desire to have maturity in the project at the time of the NAR and the desire to progress efficiently to final design and development.

Later in Phase B, the effort shifts to establishing a functionally complete design solution (i.e., a "design-to" baseline) that meets mission goals and objectives. Trade studies continue. Interfaces among the major end items are defined.

Engineering test items may be developed and used to derive data for further design work, and project risks are reduced by successful technology developments and demonstrations. Phase B culminates in a series of preliminary design reviews (PDRs), containing the system-level PDR and PDRs for lower-level end items as appropriate. The PDRs reflect the successive refinement of requirements into designs. Design issues uncovered in the PDRs should be resolved so that final design can begin with unambiguous "design-to" specifications. From this point on, almost all changes to the baseline are expected to represent successive refinements, not fundamental changes. Prior to baselining, the system architecture, preliminary design, and operations concept must have been validated by enough technical analysis and design work to establish a credible, feasible design at a lower level of detail than was sufficient for Phase A.

3.4 Phase C—Design

The purpose of this phase is to establish a complete design ("build-to" baseline) that is ready to fabricate (or code), integrate, and verify. Trade studies continue. Engineering test units more closely resembling actual hardware are built and tested so as to establish confidence that the design will function in the expected environments. Engineering specialty analysis results are integrated into the design, and the manufacturing process and controls are defined and validated. Configuration management continues to track and control design changes as detailed interfaces are defined. At each step in the successive refinement of the final design, corresponding integration and verification activities are planned in greater detail. During this phase, technical parameters, schedules, and budgets are closely tracked to ensure that undesirable trends (such as an unexpected growth in spacecraft mass or increase in its cost) are recognized early enough to take corrective action. (See Section 4.9.)

Phase C culminates in a series of critical design reviews (CDRs) containing the system-level CDR and CDRs corresponding to the different levels of the system hierarchy. The CDR is held prior to the start of fabrication/production of end items for hardware and prior to the start of coding of deliverable software products. Typically, the sequence of CDRs reflects the integration process that will occur in the next phase—that is, from lower-level CDRs to the system-level CDR. Projects, however, should tailor the sequencing of the reviews to meet their individual needs. The final product of this phase is a "build-to" baseline in sufficient detail that actual production can proceed.


Phase C—Design

Purpose: To complete the detailed design of the system (and its associated subsystems, including its operations systems).

Major Activities and their Products:
Add remaining lower-level design specifications to the system architecture
Refine requirements documents
Refine verification plans
Prepare interface documents
(Repeat the process of successive refinement to get "build-to" specifications and drawings, verification plans, and interface documents at all levels)
Augment baselined documents to reflect the growing maturity of the system: system architecture, verification requirements matrix, work breakdown structure, project plans
Monitor project progress against project plans
Develop the system integration plan and the system operation plan
Perform and archive trade studies
Complete manufacturing plan
Develop the end-to-end information system design
Refine Integrated Logistics Support Plan
Identify opportunities for pre-planned product improvement
Confirm science payload selection
Information Baselined:
All remaining lower-level requirements and designs, including traceability to higher levels
"Build-to" specifications at all levels
Control Gates:
Subsystem (and lower level) Critical Design Reviews
System-level Critical Design Review

3.5 Phase D—Development

The purpose of this phase is to build and verify the system designed in the previous phase, deploy it, and prepare for operations. Activities include fabrication of hardware and coding of software, integration, and verification of the system. Other activities include the initial training of operating personnel and implementation of the Integrated Logistics Support Plan. For flight projects, the focus of activities then shifts to pre-launch integration and launch. For large flight projects, there may be an extended period of orbit insertion, assembly, and initial shake-down operations. The major product is a system that has been shown to be capable of accomplishing the purpose for which it was created.

Phase D—Development

Purpose: To build the subsystems (including the operations system) and integrate them to create the system, meanwhile developing confidence that it will be able to meet the system requirements, then to deploy the system and ensure that it is ready for operations.

Major Activities and their Products:
Fabricate (or code) the parts (i.e., the lowest-level items in the system architecture)
Integrate those items according to the integration plan and perform verifications, yielding verified components and subsystems
(Repeat the process of successive integration to get a verified system)
Develop verification procedures at all levels
Perform system qualification verification(s)
Perform system acceptance verification(s)
Monitor project progress against project plans
Archive documentation for verifications performed
Audit "as-built" configurations
Document Lessons Learned
Prepare operator's manuals
Prepare maintenance manuals
Train initial system operators and maintainers
Finalize and implement Integrated Logistics Support Plan
Integrate with launch vehicle(s) and launch, perform orbit insertion, etc., to achieve a deployed system
Perform operational verification(s)
Information Baselined:
"As-built" and "as-deployed" configuration data
Integrated Logistics Support Plan
Command sequences for end-to-end command and telemetry validation and ground data processing
Operator's manuals
Maintenance manuals
Control Gates:
Test Readiness Reviews (at all levels)
System Acceptance Review
System functional and physical configuration audits
Flight Readiness Review(s)
Operational Readiness Review
Safety reviews

3.6 Phase E—Operations

The purpose of this phase is to meet the initially identified need or to grasp the initially identified opportunity. The products of the phase are the results of the mission. This phase encompasses evolution of the system only insofar as that evolution does not involve major changes to the system architecture; changes of that scope constitute new "needs," and the project life cycle starts over.


Phase E—Operations

Purpose: To actually meet the initially identified need or to grasp the opportunity, then to dispose of the system in a responsible manner.

Major Activities and their Products:
Train replacement operators and maintainers
Conduct the mission(s)
Maintain and upgrade the system
Dispose of the system and supporting processes
Document Lessons Learned

Information Baselined:
Mission outcomes, such as:
• Engineering data on system, subsystem, and materials performance
• Science data returned
• High resolution photos from orbit
• Accomplishment records ("firsts")
• Discovery of the Van Allen belts
• Discovery of volcanoes on Io
Operations and maintenance logs
Problem/failure reports

Control Gates:
Regular system operations readiness reviews
System upgrade reviews
Safety reviews
Decommissioning Review


Phase E encompasses the problem of dealing with the system when it has completed its mission; the time at which this occurs depends on many factors. For a flight system with a short mission duration, such as a Spacelab payload, disposal may require little more than de-integration of the hardware and its return to its owner. On large flight projects of long duration, disposal may proceed according to long-established plans, or may begin as a result of unplanned events, such as accidents. Alternatively, technological advances may make it uneconomic to continue operating the system either in its current configuration or an improved one.

In addition to uncertainty as to when this part of the phase begins, the activities associated with safely decommissioning and disposing of a system may be long and complex. Consequently, the costs and risks associated with different designs should be considered during the project's earlier phases.

3.7 Role of Systems Engineering in the Project Life Cycle

This section presents two "idealized" descriptions of the systems engineering activities within the project life cycle. The first is the Forsberg and Mooz "vee" chart, which is taught at the NASA program/project management course. The second is the NASA program/project life cycle process flow developed by the NASA-wide Systems Engineering Process Improvement Task team in 1993/94.

3.7.1 The "Vee" Chart

Forsberg and Mooz describe what they call "the technical aspect of the project cycle" by a vee-shaped chart, starting with user needs on the upper left and ending with a user-validated system on the upper right. Figure 7 provides a summary level overview of those activities. On the left side of the vee, decomposition and definition activities resolve the system architecture, creating the details of the design. Integration and verification flow up and to the right as successively higher levels of subsystems are verified, culminating at the system level. This summary chart follows the basic outline of the vee chart developed by NASA as part of the Software Management and Assurance Program. ("CIs" in the figure refer to the hardware and software configuration items, which are controlled by the configuration management system.)

Decomposition and Definition. Although not shown in Figure 7, each box in the vee represents a number of parallel boxes, suggesting that there may be many subsystems that make up the system at that level of decomposition. For the top left box, the various parallel boxes represent the alternative design concepts that are initially evaluated.

As product development progresses, a series of baselines is progressively established, each of which is put under formal configuration management at the time it is approved. Among the fundamental purposes of configuration management is to prevent requirements from "creeping."

The left side of the core of the vee is similar to the so-called "waterfall" or "requirements-driven design" model of the product development process. The control gates define significant decision points in the process. Work should not progress beyond a decision point until the project manager is ready to publish and control the documents containing the decisions that have been agreed upon at that point.

However, there is no prohibition against doing detailed work early in the process. In fact, detailed hardware and/or software models may be required at the very earliest stages to clarify user needs or to establish credibility for the claim of feasibility. Early application of involved technical and support disciplines is an essential part of this process; this is in fact implementation of concurrent engineering.

At each level of the vee, systems engineering activities include off-core processes: system design, advanced technology development, trade studies, risk management, and specialty engineering analysis and modeling. This is shown as an orthogonal process in Figure 7(b). These activities are performed at each level and may be repeated many times within a phase. While many kinds of studies and decisions are associated with the off-core activities, only decisions at the core level are put under configuration management at the various control gates. Off-core activities, analyses, and models are used to substantiate the core decisions and to ensure that the risks have been mitigated or determined to be acceptable. The off-core work is not formally controlled, but the analyses, data, and results should be archived to facilitate replication at the appropriate times and levels of detail to support introduction into the baseline.

There can, and should, be sufficient iteration downward to establish feasibility and to identify and quantify risks. Upward iteration with the requirements statements (and with the intermediate products as well) is permitted, but should be kept to a minimum unless the user is still generating (or changing) requirements. In software projects, upward confirmation of solutions with the users is often necessary because user requirements cannot be adequately defined at the inception of the project. Even for software projects, however, iteration with user requirements should be stopped at the PDR, or cost and schedule are likely to get out of control.

Modification of user requirements after PDR should be held for the next model or release of the product. If significant changes to user requirements are made after PDR, the project should be stopped and restarted with a new vee, reinitiating the entire process. The repeat of the process may be quicker because of the lessons learned the first time through, but all of the steps must be redone.

Time and project maturity flow from left to right on the vee. Once a control gate is passed, backward iteration is not possible. Iteration with the user requirements, for example, is possible only vertically, as is illustrated on the vee.


Integration and Verification. Ascending the right side of the vee is the process of integration and verification. At each level, there is a direct correspondence between activities on the left and right sides of the vee. This is deliberate. The method of verification must be determined as the requirements are developed and documented at each level. This minimizes the chances that requirements are specified in a way that cannot be measured or verified.

Even at the highest levels, as user requirements are translated into system requirements, the system verification approach, which will prove that the system does what is required, must be determined. The technical demands of the verification process, represented as an orthogonal process in Figure 7(c), can drive cost and schedule, and may in fact be a discriminator between alternative concepts. For example, if engineering models are to be used for verification or validation, they must be specified and costed, their characteristics must be defined, and their development time must be incorporated into the schedule from the beginning.

Incremental Development. If the user requirements are too vague to permit final definition at PDR, one approach is to develop the project in predetermined incremental releases. The first release is focused on meeting a minimum set of user requirements, with subsequent releases providing added functionality and performance. This is a common approach in software development.

The incremental development approach is easy to describe in terms of the vee chart: all increments have a common heritage down to the first PDR. The balance of the product development process has a series of displaced and overlapping vees, one for each release.

3.7.2 The NASA Program/Project Life Cycle Process Flow

Another idealized description of the technical activities that occur during the NASA project life cycle is illustrated in Figure 8 (foldout). In the figure, the NASA project life cycle is partitioned into ten process flow blocks, which are called stages in this handbook. The stages reflect the changing nature of the work that needs to be performed as the system matures. These stages are related both temporally and logically. Successive stages mark increasing system refinement and maturity, and require the products of previous stages as inputs. A transition to a new stage entails a major shift in the nature or extent of technical activities. Control gates assess the wisdom of progressing from one stage to another. (See Section 4.8.3 for success criteria for specific reviews.) From the perspective of the system engineer, who must oversee and monitor the technical progress on the system, Figure 8 provides a more complete description of the actual work needed through the NASA project life cycle.

In practice, the stages do not always occur sequentially. Unfolding events may invalidate or modify goals and assumptions. This may necessitate revisiting or modifying the results of a previous stage. The end items comprising the system often have different development schedules and constraints. This is especially evident in Phases C and D, where some subsystems may be in final design while others are in fabrication and integration.

The products of the technical activities support the systems engineering effort (e.g., requirements and specifications, trade studies, specialty engineering analyses, verification results), and serve as inputs to the various control gates. For a detailed systems engineering product database, database dictionary, and maturity guidelines, see JSC-49040, NASA Systems Engineering Process for Programs and Projects.

Several topics suggested by Figures 7 and 8 merit special emphasis. These are concurrent engineering, technology insertion, and the distinction between verification and validation.

Concurrent Engineering. If the project passes early control gates prematurely, it is likely to result in a need for significant iteration of requirements and designs late in the development process. One way this can happen is by failing to involve the appropriate technical experts at early stages, thereby resulting in the acceptance of requirements that cannot be met and the selection of design concepts that cannot be built, tested, maintained, and/or operated.

Concurrent engineering is the simultaneous consideration of product and process downstream requirements by multidisciplinary teams. Specialty engineers from all disciplines (reliability, maintainability, human factors, safety, logistics, etc.) whose expertise will eventually be represented in the product have important contributions throughout the system life cycle. The system engineer is responsible for ensuring that these personnel are part of the project team at each stage. In large projects, many integrated product development teams (PDTs) may be required. Each of these, in turn, would be represented on a PDT for the next higher level in the project. In small projects, however, a small team is often sufficient as long as the system engineer can augment it as needed with experts in the required technical and business disciplines.

The informational requirements of doing concurrent engineering are demanding. One way concurrent engineering experts believe it can be made less burdensome is by an automated environment. In such an environment, systems engineering, design, and analysis tools can easily exchange data, computing environments are interoperable, and product data are readily accessible and accurate. For more on the characteristics of automated environments, see, for example, Carter and Baker, Concurrent Engineering, 1992.

Integrated Product Development Teams

The detailed evaluation of product and process feasibility and the identification of significant uncertainties (system risks) must be done by experts from a variety of disciplines. An approach that has been found effective is to establish teams for the development of the product with representatives from all of the disciplines and processes that will eventually be involved. These integrated product development teams often have multidisciplinary (technical and business) members. Technical personnel are needed to ensure that issues such as producibility, verifiability, deployability, supportability, trainability, operability, and disposability are all considered in the design. In addition, business (e.g., procurement) representatives are added to the team as the need arises. Continuity of support from these specialty discipline organizations throughout the system life cycle is highly desirable, though team composition and leadership can be expected to change as the system progresses from phase to phase.

Technology Insertion. Projects are sometimes initiated with known technology shortfalls, or with areas for which new technology will result in substantial product improvement. Technology development can be done in parallel with the project evolution and inserted as late as the PDR. A parallel approach that is not dependent on the development of new technology must be carried unless high risk is acceptable. The technology development activity should be managed by the project manager and system engineer as a critical activity.

Verification vs. Validation. The distinction between verification and validation is significant: verification consists of proof of compliance with specifications, and may be determined by test, analysis, demonstration, inspection, etc. (see Section 6.6). Validation consists of proof that the system accomplishes (or, more weakly, can accomplish) its purpose. It is usually much more difficult (and much more important) to validate a system than to verify it. Strictly speaking, validation can be accomplished only at the system level, while verification must be accomplished throughout the entire system architectural hierarchy.

3.8 Funding: The Budget Cycle

NASA operates with annual funding from Congress. This funding results, however, from a three-year rolling process of budget formulation, budget enactment, and finally, budget execution. A highly simplified representation of the typical budget cycle is shown in Figure 9.

NASA starts developing its budget each January with economic forecasts and general guidelines being provided by the Executive Branch's Office of Management and Budget (OMB). In early May, NASA conducts its Program Operating Plan (POP) and Institutional Operating Plan (IOP) exercises in preparation for submittal of a preliminary NASA budget to the OMB. A final NASA budget is submitted to the OMB in September for incorporation into the President's budget transmittal to Congress, which generally occurs in January. This proposed budget is then subjected to Congressional review and approval, culminating in the passage of bills authorizing NASA to obligate funds in accordance with Congressional stipulations and appropriating those funds. The Congressional process generally lasts through the summer. In recent years, however, final bills have often been delayed past the start of the fiscal year on October 1. In those years, NASA has operated on continuing resolutions by Congress.

With annual funding, there is an implicit funding control gate at the beginning of every fiscal year. While these gates place planning requirements on the project and can make significant replanning necessary, they are not part of an orderly systems engineering process. Rather, they constitute one of the sources of uncertainty that affect project risks and should be considered in project planning.


4 Management Issues in Systems Engineering

This chapter provides more specific information on the systems engineering products and approaches used in the project life cycle just described. These products and approaches are the system engineer's contribution to project management, and are designed to foster structured ways of managing a complex set of activities.

4.1 Harmony of Goals, Work Products, and Organizations

When applied to a system, the doctrine of successive refinement is a "divide-and-conquer" strategy. Complex systems are successively divided into pieces that are less complex, until they are simple enough to be conquered. This decomposition results in several structures for describing the product system and the producing system ("the system that produces the system"). These structures play important roles in systems engineering and project management. Many of the remaining sections in this chapter are devoted to describing some of these key structures.

Structures that describe the product system include, but are not limited to, the requirements tree, system architecture, and certain symbolic information such as system drawings, schematics, and databases. The structures that describe the producing system include the project's work breakdown, schedules, cost accounts, and organization. These structures provide different perspectives on their common raison d'être: the desired product system. Creating a fundamental harmony among these structures is essential for successful systems engineering and project management; this harmony needs to be established in some cases by one-to-one correspondence between two structures, and in other cases by traceable links across several structures. It is useful, at this point, to give some illustrations of this key principle.

System requirements serve two purposes in the systems engineering process. First, they represent a hierarchical description of the buyer's desired product system as understood by the product development team (PDT). The interaction between the buyer and system engineer to develop these requirements is one way the "voice of the buyer" is heard. Determining the right requirements—that is, only those that the informed buyer is willing to pay for—is an important part of the system engineer's job. Second, system requirements also communicate to the design engineers what to design and build (or code). As these requirements are allocated, they become inexorably linked to the system architecture and product breakdown, which consists of the hierarchy of system, segments, elements, subsystems, etc. (See the sidebar on system terminology on page 3.)

The Work Breakdown Structure (WBS) is also a tree-like structure that contains the pieces of work necessary to complete the project. Each task in the WBS should be traceable to one or more of the system requirements. Schedules, which are structured as networks, describe the time-phased activities that result in the product system in the WBS. The cost account structure needs to be directly linked to the work in the WBS and the schedules by which that work is done. (See Sections 4.3 through 4.5.)

The project's organization structure describes the clusters of personnel assigned to perform the work. These organizational structures are usually trees. Sometimes they are represented as a matrix of two interlaced trees, one for line responsibilities, the other for project responsibilities. In any case, the organizational structure should allow identification of responsibility for each WBS task.

Project documentation is the product of particular WBS tasks. There are two fundamental categories of project documentation: baselines and archives. Each category contains information about both the product system and the producing system. The baseline, once established, contains information describing the current state of the product system and producing system resulting from all decisions that have been made. It is usually organized as a collection of hierarchical tree structures, and should exhibit a significant amount of cross-reference linking. The archives contain all of the rest of the project's information that is worth remembering, even if only temporarily. The archives should contain all assumptions, data, and supporting analyses that are relevant to past, present, and future decisions. Inevitably, the structure (and control) of the archives is much looser than that of the baseline, though cross references should be maintained where feasible. (See Section 4.7.)

The structure of reviews (and their associated control gates) reflects the time-phased activities associated with the realization of the product system from its product breakdown. The status reporting and assessment structure provides information on the progress of those same activities. On the financial side, the status reporting and assessment structure should be directly linked to the WBS, schedules, and cost accounts. On the technical side, it should be linked to the product breakdown and/or requirements tree. (See Sections 4.8 and 4.9.)


4.2 Managing the Systems Engineering Process: The Systems Engineering Management Plan

Systems engineering management is a technical function and discipline that ensures that systems engineering and all other technical functions are properly applied.

Each project should be managed in accordance with a project life cycle that is carefully tailored to the project's risks. While the project manager concentrates on managing the overall project life cycle, the project-level or lead system engineer concentrates on managing its technical aspect (see Figure 7 or 8). This requires that the system engineer perform, or cause to be performed, the necessary multiple layers of decomposition, definition, integration, verification, and validation of the system, while orchestrating and incorporating the appropriate concurrent engineering. Each one of these systems engineering functions requires application of technical analysis skills and techniques.

The techniques used in systems engineering management include work breakdown structures, network scheduling, risk management, requirements traceability and reviews, baselines, configuration management, data management, specialty engineering program planning, definition and readiness reviews, audits, design certification, and status reporting and assessment.

The Project Plan defines how the project will be managed to achieve its goals and objectives within defined programmatic constraints. The Systems Engineering Management Plan (SEMP) is the subordinate document that defines to all project participants how the project will be technically managed within the constraints established by the Project Plan. The SEMP communicates to all participants how they must respond to pre-established management practices. For instance, the SEMP should describe the means for both internal and external (to the project) interface control. The SEMP also communicates how the systems engineering management techniques noted above should be applied.

4.2.1 Role of the SEMP

The SEMP is the rule book that describes to all participants how the project will be technically managed. The responsible NASA field center should have a SEMP to describe how it will conduct its technical management, and each contractor should have a SEMP to describe how it will manage in accordance with both its contract and NASA's technical management practices. Since the SEMP is project- and contract-unique, it must be updated for each significant programmatic change or it will become outmoded and unused, and the project could slide into an uncontrolled state. The NASA field center should have its SEMP developed before attempting to prepare an initial cost estimate, since activities that incur cost, such as technical risk reduction, need to be identified and described beforehand. The contractor should have its SEMP developed during the proposal process (prior to costing and pricing) because the SEMP describes the technical content of the project, the potentially costly risk management activities, and the verification and validation techniques to be used, all of which must be included in the preparation of project cost estimates.

The project SEMP is the senior technical management document for the project; all other technical control documents, such as the Interface Control Plan, Change Control Plan, Make-or-Buy Control Plan, Design Review Plan, and Technical Audit Plan, depend on the SEMP and must comply with it. The SEMP should be comprehensive and describe how a fully integrated engineering effort will be managed and conducted.

4.2.2 Contents of the SEMP

Since the SEMP describes the project's technical management approach, which is driven by the type of project, the phase in the project life cycle, and the technical development risks, it must be specifically written for each project to address these situations and issues. While the specific content of the SEMP is tailored to the project, the recommended content is listed below.

Part I—Technical Project Planning and Control. This section should identify organizational responsibilities and authority for systems engineering management, including control of contracted engineering; levels of control established for performance and design requirements, and the control method used; technical progress assurance methods; plans and schedules for design and technical program/project reviews; and control of documentation.

This section should describe:

• The role of the project office
• The role of the user
• The role of the Contracting Officer Technical Representative (COTR)
• The role of systems engineering
• The role of design engineering
• The role of specialty engineering
• Applicable standards
• Applicable procedures and training
• Baseline control process


• Change control process
• Interface control process
• Control of contracted (or subcontracted) engineering
• Data control process
• Make-or-buy control process
• Parts, materials, and process control
• Quality control
• Safety control
• Contamination control
• Electromagnetic interference and electromagnetic compatibility (EMI/EMC)
• Technical performance measurement process
• Control gates
• Internal technical reviews
• Integration control
• Verification control
• Validation control.

Part II—Systems Engineering Process. This section should contain a detailed description of the process to be used, including the specific tailoring of the process to the requirements of the system and project; the procedures to be used in implementing the process; in-house documentation; the trade study methodology; the types of mathematical and/or simulation models to be used for system cost-effectiveness evaluations; and the generation of specifications.

This section should describe the:

• System decomposition process
• System decomposition format
• System definition process
• System analysis and design process
• Requirements allocation process
• Trade study process
• System integration process
• System verification process
• System qualification process
• System acceptance process
• System validation process
• Risk management process
• Life-cycle cost management process
• Specification and drawing structure
• Configuration management process
• Data management process
• Use of mathematical models
• Use of simulations
• Tools to be used.

Part III—Engineering Specialty Integration. This section of the SEMP should describe the integration and coordination of the efforts of the specialty engineering disciplines into the systems engineering process during each iteration of that process. Where there is potential for overlap of specialty efforts, the SEMP should define the relative responsibilities and authorities of each.

This section should contain, as needed, the project's approach to:

• Concurrent engineering
• The activity phasing of specialty disciplines
• The participation of specialty disciplines
• The involvement of specialty disciplines
• The role and responsibility of specialty disciplines
• The participation of specialty disciplines in system decomposition and definition
• The role of specialty disciplines in verification and validation
• Reliability
• Maintainability
• Quality assurance
• Integrated logistics
• Human engineering
• Safety
• Producibility
• Survivability/vulnerability
• Environmental assessment
• Launch approval.

4.2.3 Development of the SEMP

The SEMP must be developed concurrently with the Project Plan. In developing the SEMP, the technical approach to the project, and hence the technical aspect of the project life cycle, are developed. This becomes the keel of the project that ultimately determines the project's length and cost. The development of the programmatic and technical management approaches requires that the key project personnel develop an understanding of the work to be performed and the relationships among the various parts of that work. (See Sections 4.3 and 4.4 on Work Breakdown Structures and network schedules, respectively.)

The SEMP's development requires contributions from knowledgeable programmatic and technical experts from all areas of the project that can significantly influence the project's outcome. The involvement of recognized experts is needed to establish a SEMP that is credible to the project manager and to secure the full commitment of the project team.


4.2.4 Managing the Systems Engineering Process: Summary

The systems engineering organization, and specifically the project-level system engineer, is responsible for managing the project through the technical aspect of the project life cycle. This responsibility includes management of the decomposition and definition sequence, and management of the integration, verification, and validation sequence. Attendant with this management is the requirement to control the technical baselines of the project. Typically, these baselines are the "functional," "design-to," "build-to" (or "code-to"), "as-built" (or "as-coded"), and "as-deployed" baselines. Systems engineering must ensure an efficient and logical progression through these baselines.

Systems engineering is responsible for system decomposition and design until the "design-to" specifications of all lower-level configuration items have been produced. Design engineering is then responsible for developing the "build-to" and "code-to" documentation that complies with the approved "design-to" baseline. Systems engineering audits the design and coding process and the design engineering solutions for compliance with all higher level baselines. In performing this responsibility, systems engineering must ensure and document requirements traceability.

Systems engineering is also responsible for the overall management of the integration, verification, and validation process. In this role, systems engineering conducts Test Readiness Reviews and ensures that only verified configuration items are integrated into the next higher assembly for further verification. Verification is continued to the system level, after which system validation is conducted to prove compliance with user requirements.

SEMP Lessons Learned from DoD Experience

• A well-managed project requires a coordinated Systems Engineering Management Plan that is used through the project cycle.
• A SEMP is a living document that must be updated as the project changes and kept consistent with the Project Plan.
• A meaningful SEMP must be the product of experts from all areas of the project.
• Projects with little or insufficient systems engineering discipline generally have major problems.
• Weak systems engineering, or systems engineering placed too low in the organization, cannot perform the functions as required.
• The systems engineering effort must be skillfully managed and well communicated to all project participants.
• The systems engineering effort must be responsive to both the customer and the contractor interests.

Systems engineering also ensures that concurrent engineering is properly applied through the project life cycle by involving the required specialty engineering disciplines. The SEMP is the guiding document for these activities.

4.3 The Work Breakdown Structure

A Work Breakdown Structure (WBS) is a hierarchical breakdown of the work necessary to complete a project. The WBS should be a product-based, hierarchical division of deliverable items and associated services. As such, it should contain the project's Product Breakdown Structure (PBS), with the specified prime product(s) at the top, and the systems, segments, subsystems, etc. at successive lower levels. At the lowest level are products such as hardware items, software items, and information items (documents, databases, etc.) for which there is a cognizant engineer or manager. Branch points in the hierarchy should show how the PBS elements are to be integrated. The WBS is built from the PBS by adding, at each branch point of the PBS, any necessary service elements such as management, systems engineering, integration and verification (I&V), and integrated logistics support (ILS). If several WBS elements require similar equipment or software, then a higher level WBS element might be defined to perform a block buy or a development activity (e.g., "System Support Equipment"). Figure 10 shows the relationship between a system, a PBS, and a WBS.

A project WBS should be carried down to the cost account level appropriate to the risks to be managed. The appropriate level of detail for a cost account is determined by management's desire to have visibility into costs, balanced against the cost of planning and reporting. Contractors may have a Contract WBS (CWBS), which is appropriate to the contractor's needs to control costs. A summary CWBS, consisting of the upper levels of the full CWBS, is usually included in the project WBS to report costs to the contracting organization.

WBS elements should be identified by title and by a numbering system that performs the following functions (a brief sketch follows the list):

• Identifies the level of the WBS element
• Identifies the higher level element into which the WBS element will be integrated
• Shows the cost account number of the element.
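To make this concrete, the sketch below shows one common convention, in which the identifier itself encodes both the element's level and its parent. It is an illustration only, not the handbook's prescribed scheme; all element names, numbers, and cost account labels are hypothetical.

```python
# A minimal sketch (hypothetical names and numbers) of a WBS numbering
# scheme in which the identifier encodes level and parentage, and each
# element carries its cost account number.

from dataclasses import dataclass

@dataclass
class WBSElement:
    number: str        # hierarchical identifier, e.g., "1.2.3"
    title: str
    cost_account: str  # cost account number for this element

    @property
    def level(self) -> int:
        # The level is implied by the depth of the identifier.
        return len(self.number.split("."))

    @property
    def parent_number(self) -> str:
        # The higher-level element into which this one is integrated.
        parts = self.number.split(".")
        return ".".join(parts[:-1]) if len(parts) > 1 else ""

telescope = WBSElement("1.2", "Telescope Assembly", "CA-120")
optics = WBSElement("1.2.3", "Optics Subsystem", "CA-123")

assert optics.level == 3
assert optics.parent_number == telescope.number  # integrates into 1.2
```

Under such a convention, the level is simply the depth of the identifier and the parent is recovered by dropping the last field, so the first two functions of the numbering system come for free.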


A WBS should also have a companion WBS dictionary that contains each element's title, identification number, objective, description, and any dependencies (e.g., receivables) on other WBS elements. This dictionary provides a structured project description that is valuable for orienting project members and other interested parties. It fully describes the products and/or services expected from each WBS element.

Figure 10 -- The Relationship Between a System, a Product Breakdown Structure, and a Work Breakdown Structure.

This section provides some techniques for developing a WBS, and points out some mistakes to avoid. Appendix B.2 provides an example of a WBS for an airborne telescope that follows the principles of product-based WBS development.

4.3.1 Role of the WBS

A product-based WBS is the organizing structure for:

• Project and technical planning and scheduling
• Cost estimation and budget formulation. (In particular, costs collected in a product-based WBS can be compared to historical data. This is identified as a primary objective by DoD standards for WBSs.)
• Defining the scope of statements of work and specifications for contract efforts
• Project status reporting, including schedule, cost, workforce, technical performance, and integrated cost/schedule data, such as Earned Value and estimated cost at completion (sketched below)
• Plans, such as the SEMP, and other documentation products, such as specifications and drawings.
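The Earned Value and estimate-at-completion measures mentioned above can be illustrated with the standard cost/schedule control relationships. The sketch below uses the conventional BCWS/BCWP/ACWP terminology with hypothetical numbers; it is a general illustration, not a computation taken from the handbook, which treats status measurement in Section 4.9.

```python
# A minimal sketch of standard earned value measures (hypothetical numbers).
# BCWS: budgeted cost of work scheduled; BCWP: budgeted cost of work
# performed (earned value); ACWP: actual cost of work performed;
# BAC: budget at completion.

bcws, bcwp, acwp, bac = 100.0, 90.0, 110.0, 1000.0

schedule_variance = bcwp - bcws   # negative: behind schedule
cost_variance = bcwp - acwp       # negative: over cost
cpi = bcwp / acwp                 # cost performance index
spi = bcwp / bcws                 # schedule performance index

# One common estimate at completion extrapolates at the current CPI.
eac = bac / cpi

print(f"SV={schedule_variance:+.1f}, CV={cost_variance:+.1f}, "
      f"CPI={cpi:.2f}, SPI={spi:.2f}, EAC={eac:.0f}")
```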

It provides a logical outline and vocabulary that describes the entire project, and integrates information in a consistent way. If there is a schedule slip in one element of a WBS, an observer can determine which other WBS elements are most likely to be affected. Cost impacts are more accurately estimated. If there is a design change in one element of the WBS, an observer can determine which other WBS elements will most likely be affected, and these elements can be consulted for potential adverse impacts.

4.3.2 Techniques for Developing the WBS

Developing a successful project WBS is likely to require several iterations through the project life cycle, since it is not always obvious at the outset what the full extent of the work may be. Prior to developing a preliminary WBS, there should be some development of the system architecture to the point where a preliminary PBS can be created.

The PBS and associated WBS can then be developed level by level from the top down. In this approach, a project-level system engineer finalizes the PBS at the project level, and provides a draft PBS for the next lower level. The WBS is then derived by adding appropriate services such as management and systems engineering to that lower level. This process is repeated recursively until a WBS exists down to the desired cost account level.

An alternative approach is to define all levels of a complete PBS in one design activity, and then develop the complete WBS. When this approach is taken, it is necessary to take great care to develop the PBS so that all products are included, and all assembly/integration and verification branches are correct. The involvement of people who will be responsible for the lower level WBS elements is recommended.

A WBS for a Multiple Delivery Project. There are several terms for projects that provide multiple deliveries, such as rapid development, rapid prototyping, and incremental delivery. Such projects should also have a product-based WBS, but there will be one extra level in the WBS hierarchy, immediately under the final prime product(s), which identifies each delivery. At any one point in time there will be both active and inactive elements in the WBS.

A WBS for an Operational Facility. A WBS for managing an operational facility such as a flight operations center is analogous to a WBS for developing a system. The difference is that the products in the PBS are not necessarily completed once and then integrated, but are produced on a routine basis. A PBS for an operational facility might consist largely of information products or service products provided to external customers. However, the general concept of a hierarchical breakdown of products and/or services would still apply.

The rules that apply to a development WBS also apply to a WBS for an operational facility. The techniques for developing a WBS for an operational facility are the same, except that services such as maintenance and user support are added to the PBS, and services such as systems engineering, integration, and verification may not be needed.

4.3.3 Common Errors in Developing a WBS

There are three common errors found in WBSs:


• Error 1: The WBS describes functions, not products. This makes the project manager the only one formally responsible for products.

• Error 2: The WBS has branch points that are not consistent with how the WBS elements will be integrated. For instance, in a flight operations system with a distributed architecture, there is typically software associated with hardware items that will be integrated and verified at lower levels of a WBS. It would then be inappropriate to separate hardware and software as if they were separate systems to be integrated at the system level. This would make it difficult to assign accountability for integration and to identify the costs of integrating and testing components of a system.

• Error 3: The WBS is inconsistent with the PBS. This makes it possible that the PBS will not be fully implemented, and generally complicates the management process.

Some examples of these errors are shown in Figure 11. Each one prevents the WBS from successfully performing its roles in project planning and organizing. These errors are avoided by using the WBS development techniques described above.

4.4 Scheduling

Products described in the WBS are the result of activities that take time to complete. An orderly and efficient systems engineering process requires that these activities take place in a way that respects the underlying time precedence relationships among them. This is accomplished by creating a network schedule, which explicitly takes into account the dependencies of each activity on other activities and receivables from outside sources. This section discusses the role of scheduling and the techniques for building a complete network schedule.

4.4.1 Role of Scheduling

Scheduling is an essential component of planning and managing the activities of a project. The process of creating a network schedule can lead to a much better understanding of what needs to be done, how long it will take, and how each element of the project WBS might affect other elements. A complete network schedule can be used to calculate how long it will take to complete a project, which activities determine that duration (i.e., critical path activities), and how much spare time (i.e., float) exists for all the other activities of the project. (See the sidebar on critical path and float calculation.) An understanding of the project's schedule is a prerequisite for accurate project budgeting.

Keeping track of schedule progress is an essential part of controlling the project, because cost and technical problems often show up first as schedule problems. Because network schedules show how each activity affects other activities, they are essential for predicting the consequences of schedule slips or accelerations of an activity on the entire project. Network scheduling systems also help managers accurately assess the impact of both technical and resource changes on the cost and schedule of a project.

4.4.2 Network Schedule Data and Graphical Formats

Network schedule data consist of:

• Activities
• Dependencies between activities (e.g., where an activity depends upon another activity for a receivable)
• Products or milestones that occur as a result of one or more activities
• Duration of each activity.

A work flow diagram (WFD) is a graphical display of the first three data items above. A network schedule contains all four data items. When creating a network schedule, graphical formats of these data are very useful. Two general types of graphical formats, shown in Figure 12, are used. One has activities-on-arrows, with products and dependencies at the beginning and end of the arrow. This is the typical format of the Program Evaluation and Review Technique (PERT) chart. The second, called a precedence diagram, has boxes that represent activities; dependencies are then shown by arrows. Due to its simpler visual format and reduced requirements on computer resources, the precedence diagram has become more common in recent years.

The precedence diagram format allows for simple depiction of the following logical relationships:

• Activity B begins when Activity A begins (Start-Start, or SS)
• Activity B begins only after Activity A ends (Finish-Start, or FS)
• Activity B ends when Activity A ends (Finish-Finish, or FF).


Each of these three activity relationships may be modified by attaching a lag (+ or -) to the relationship, as shown in Figure 12.

It is possible to summarize a number of low-level activities in a precedence diagram with a single activity. This is commonly referred to as hammocking. One takes the initial low-level activity and attaches a summary activity to it using the first relationship described above. The summary activity is then attached to the final low-level activity using the third relationship described above. Unless one is hammocking, the most common relationship used in precedence diagrams is the second one mentioned above. The activity-on-arrow format can represent the identical time-precedence logic as a precedence diagram by creating artificial events and activities as needed.
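As a concrete illustration of the three relationships, the sketch below (hypothetical, not from the handbook) computes the earliest start and finish of a successor activity B relative to its predecessor A, with an optional lag:

```python
# A minimal sketch of how SS, FS, and FF relationships, each with an
# optional lag, constrain the earliest times of a successor activity B.

def earliest_b(relation: str, a_start: float, a_finish: float,
               b_duration: float, lag: float = 0.0) -> tuple[float, float]:
    """Return (earliest start, earliest finish) of B given A's times."""
    if relation == "SS":    # B begins when A begins (plus lag)
        b_start = a_start + lag
    elif relation == "FS":  # B begins only after A ends (plus lag)
        b_start = a_finish + lag
    elif relation == "FF":  # B ends when A ends (plus lag)
        b_start = a_finish + lag - b_duration
    else:
        raise ValueError(relation)
    return b_start, b_start + b_duration

# A runs from day 0 to day 5; B takes 3 days.
print(earliest_b("FS", 0, 5, 3))         # (5, 8): the usual case
print(earliest_b("SS", 0, 5, 3, lag=2))  # (2, 5)
print(earliest_b("FF", 0, 5, 3))         # (2, 5): B finishes with A
```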

4.4.3 Establishing a Network Schedule

Scheduling begins with project-level schedule objectives for delivering the products described in the upper levels of the WBS. To develop network schedules that are consistent with the project's objectives, the following six steps are applied to each cost account at the lowest available level of the WBS.

Step 1: Identify activities and dependencies needed to complete each WBS element. Enough activities should be identified to show exact schedule dependencies between activities and other WBS elements. It is not uncommon to have about 100 activities identified for the first year of a WBS element that will require 10 work-years per year. Typically, there is more schedule detail for the current year, and much less detail for subsequent years. Each year, schedules are updated with additional detail for the current year. This first step is most easily accomplished by:

• Ensuring that the cost account WBS is extended downward to describe all significant products, including documents, reports, hardware, and software items
• For each product, listing the steps required for its generation and drawing the process as a work flow diagram
• Indicating the dependencies among the products, and any integration and verification steps within the work package.

Critical Path and Float Calculation

The critical path is the sequence of activities that will take the longest to accomplish. Activities that are not on the critical path have a certain amount of time that they can be delayed until they, too, are on a critical path. This time is called float. There are two types of float: path float and free float. Path float arises when a sequence of activities collectively has float; if there is a delay in an activity in this sequence, then the path float for all subsequent activities is reduced by that amount. Free float exists when a delay in an activity will have no effect on any other activity. For example, if activity A can be finished in 2 days, activity B requires 5 days, and activity C requires completion of both A and B, then A would have 3 days of free float.

Float is valuable. Path float should be conserved where possible, so that a reserve exists for future activities. Conservation is much less important for free float.

To determine the critical path, there is first a "forward pass" in which the earliest start time of each activity is calculated. The time when the last activity can be completed becomes the end point for that schedule. Then there is a "backward pass" in which the latest possible start time of each activity is calculated, assuming that the last activity ends at the end point previously calculated. Float is the time difference between the earliest start time and the latest start time of an activity. Whenever this difference is zero, that activity is on a critical path.
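The forward and backward passes described in the sidebar are simple enough to sketch directly. The fragment below is illustrative only (activity C's duration is an assumed value the sidebar does not give) and reproduces the sidebar's example, in which activity A has 3 days of float:

```python
# A minimal sketch of the forward/backward pass from the sidebar, using
# finish-start dependencies only. C requires completion of both A and B.

durations = {"A": 2, "B": 5, "C": 4}          # C's duration is assumed
predecessors = {"A": [], "B": [], "C": ["A", "B"]}
order = ["A", "B", "C"]                        # a topological order

# Forward pass: earliest start/finish of each activity.
es, ef = {}, {}
for t in order:
    es[t] = max((ef[p] for p in predecessors[t]), default=0)
    ef[t] = es[t] + durations[t]
end = max(ef.values())

# Backward pass: latest start/finish, holding the project end fixed.
successors = {t: [s for s in order if t in predecessors[s]] for t in order}
ls, lf = {}, {}
for t in reversed(order):
    lf[t] = min((ls[s] for s in successors[t]), default=end)
    ls[t] = lf[t] - durations[t]

for t in order:
    total_float = ls[t] - es[t]
    print(t, "float =", total_float, "(critical)" if total_float == 0 else "")
# A has 3 days of float, as in the sidebar; B and C are on the critical path.
```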

Step 2: Identify and negotiate external dependencies. External dependencies are any receivables from outside of the cost account, and any deliverables that go outside of the cost account. Informal negotiations should occur to ensure that there is agreement with respect to the content, format, and labeling of products that move across cost account boundaries. This step is designed to ensure that lower level schedules can be integrated.

Step 3: Estimate durations of all activities. Assumptions behind these estimates (workforce, availability of facilities, etc.) should be written down for future reference.

Step 4: Enter the schedule data for the WBS element into a suitable computer program to obtain a network schedule and an estimate of the critical path for that element. (There are many commercially available software packages for this function.) This step enables the cognizant engineer, team leader, and/or system engineer to review the schedule logic. It is not unusual at this point for some iteration of Steps 1 to 4 to be required in order to obtain a satisfactory schedule. Often, too, reserve is added to critical path activities, frequently in the form of a dummy activity, to ensure that schedule commitments can be met for this WBS element.

Step 5: Integrate schedules of lower level WBS elements, using suitable software, so that all dependencies between WBS elements are correctly included in a project network. It is important to include the impacts of holidays, weekends, etc. by this point. The critical path for the project is discovered at this step in the process.

Step 6: Review the workforce level and funding profile over time, and make a final set of adjustments to logic and durations so that workforce levels and funding levels are reasonable. Adjustments to the logic and the durations of activities may be needed to converge to the schedule targets established at the project level. This may include adding more activities to some WBS element, deleting redundant activities, increasing the workforce for some activities that are on the critical path, or finding ways to do more activities in parallel rather than in series. If necessary, the project level targets may need to be adjusted, or the scope of the project may need to be reviewed. Again, it is good practice to have some schedule reserve, or float, as part of a risk mitigation strategy.

The product of these last steps is a feasible baseline schedule for each WBS element that is consistent with the activities of all other WBS elements, and the sum of all these schedules is consistent with both the technical scope and the schedule goals for the project. There should be enough float in this integrated master schedule so that schedule and associated cost risk are acceptable to the project and to the project's customer. Even when this is done, the durations of many WBS elements will have been underestimated, or work on some WBS elements will not start as early as originally assumed due to late arrival of receivables. Consequently, replanning is almost always needed to meet the project's goals.

4.4.4 Reporting Techniques

Summary data about a schedule is usually described in Gantt charts. A good example of a Gantt chart is shown in Figure 13. (See the sidebar on desirable Gantt chart features.) Another type of output format is a table that shows the float and recent changes in float of key activities. For example, a project manager may wish to know precisely how much schedule reserve has been consumed by critical path activities, and whether reserves are being consumed or are being preserved in the latest reporting period. This table provides information on the rate of change of schedule reserve.

Desirable Features in Gantt Charts

The Gantt chart shown in Figure 13 illustrates the following desirable features:

• A heading that describes the WBS element, the responsible manager, the date of the baseline used, and the date that status was reported
• A milestone section in the main body (lines 1 and 2)
• An activity section in the main body. Activity data shown include:
  a. WBS elements (lines 3, 5, 8, 12, 16, and 20)
  b. Activities (indented from WBS elements)
  c. Current plan (shown as thick bars)
  d. Baseline plan (same as current plan, or if different, represented by thin bars under the thick bars)
  e. Status line at the appropriate date
  f. Slack for each activity (dashed lines above the current plan bars)
  g. Schedule slips from the baseline (dashed lines below the milestone on line 12)
• A note section, where the symbols in the main body can be explained.

This Gantt chart shows only 23 lines, which is a summary of the activities currently being worked for this WBS element. It is appropriate to tailor the amount of detail reported to those items most pertinent at the time of status reporting.

4.4.5 Resource Leveling

Good scheduling systems provide capabilities to show resource requirements over time, and to make adjustments so that the schedule is feasible with respect to resource constraints over time. Resources may include workforce level, funding profiles, important facilities, etc. Figure 14 shows an example of an unleveled resource profile. The objective is to move the start dates of tasks that have float to points where the resource profile is feasible. If that is not sufficient, then the assumed task durations for resource-intensive activities should be reexamined and the resource levels changed accordingly.

Figure 14 -- An Example of an Unleveled Resource Profile.
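As an illustration of the basic idea, the sketch below (hypothetical tasks and staffing figures, not from the handbook) greedily delays tasks within their float whenever doing so lowers the peak staffing demand. Production scheduling tools use considerably richer heuristics, but the mechanism of trading float for a flatter profile is the same.

```python
# A minimal resource-leveling sketch: delay tasks within their float,
# one day at a time, while doing so reduces peak staffing demand.
# All task data here are hypothetical.

tasks = {  # name: (earliest start, duration, float in days, staff needed)
    "design": (0, 4, 0, 3),
    "tooling": (0, 3, 5, 2),
    "test prep": (2, 3, 4, 2),
}

def profile(starts):
    """Daily staffing load implied by a set of start dates."""
    horizon = max(s + tasks[t][1] for t, s in starts.items())
    load = [0] * horizon
    for t, s in starts.items():
        for day in range(s, s + tasks[t][1]):
            load[day] += tasks[t][3]
    return load

starts = {t: v[0] for t, v in tasks.items()}
for t, (es, dur, flt, _) in tasks.items():
    while starts[t] - es < flt:                 # stay within float
        trial = dict(starts, **{t: starts[t] + 1})
        if max(profile(trial)) < max(profile(starts)):
            starts = trial                      # delay lowers the peak
        else:
            break

print(starts, "peak staff =", max(profile(starts)))
# Delaying "test prep" within its float drops the peak from 7 to 5 here.
```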

4.5 Budgeting and Resource Planning

Budgeting and resource planning involves the establishment of a reasonable project baseline budget, and the capability to analyze changes to that baseline resulting from technical and/or schedule changes. The project's WBS, baseline schedule, and budget should be viewed by the system engineer as mutually dependent, reflecting the technical content, time, and cost of meeting the project's goals and objectives.

The budgeting process needs to take into account whether a fixed cost cap or cost profile exists. When no such cap or profile exists, a baseline budget is developed from the WBS and network schedule. This specifically involves combining the project's workforce and other resource needs with the appropriate workforce rates and other financial and programmatic factors to obtain cost element estimates. These elements of cost include the following (a simple roll-up sketch follows the list):


• Direct labor costs
• Overhead costs
• Other direct costs (travel, data processing, etc.)
• Subcontract costs
• Material costs
• General and administrative costs
• Cost of money (i.e., interest payments, if applicable)
• Fee (if applicable)
• Contingency.
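The sketch below shows one way these elements might roll up into a total estimate. The burden structure shown (overhead applied to direct labor, G&A applied to the cost subtotal, fee and contingency on top) is an assumed illustration, not the handbook's prescription; actual rate structures vary by organization and contract, and cost of money is omitted for brevity.

```python
# A minimal cost roll-up sketch with hypothetical figures (all in $K) and
# an assumed burden structure; real rate structures vary by contract.

direct_labor = 400.0        # direct labor
overhead_rate = 0.60        # overhead, applied to direct labor
other_direct = 50.0         # travel, data processing, etc.
subcontracts = 120.0
materials = 80.0
ga_rate = 0.10              # general and administrative, on the subtotal
fee_rate = 0.08             # fee, if applicable
contingency_rate = 0.15     # contingency reserve

subtotal = (direct_labor * (1 + overhead_rate)
            + other_direct + subcontracts + materials)
with_ga = subtotal * (1 + ga_rate)
fee = with_ga * fee_rate
contingency = with_ga * contingency_rate

total = with_ga + fee + contingency
print(f"subtotal={subtotal:.0f}, total with fee and contingency={total:.0f}")
```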

When there is a cost cap or a fixed cost profile, there are additional logic gates that must be satisfied before the system engineer can complete the budgeting and planning process. A determination needs to be made whether the WBS and network schedule are feasible with respect to mandated cost caps and/or cost profiles. If not, the system engineer needs to recommend the best approaches for either stretching out a project (usually at an increase in the total cost), or descoping the project's goals and objectives, requirements, design, and/or implementation approach. (See the sidebar on schedule slippage.)

Whether a cost cap or fixed cost profile exists, it is important to control costs after they have been baselined. An important aspect of cost control is project cost and schedule status reporting and assessment, methods for which are discussed in Section 4.9.1 of this handbook. Another is cost and schedule risk planning, such as developing risk avoidance and work-around strategies. At the project level, budgeting and resource planning must also ensure that an adequate level of contingency funds is included to deal with unforeseen events. Some risk management methods are discussed in Section 4.6.

Assessing the Effect of Schedule Slippage

Certain elements of cost, called fixed costs, are mainly time related, while others, called variable costs, are mainly product related. If a project's schedule is slipped, then the fixed costs of completing it increase. The variable costs remain the same in total (excluding inflation adjustments), but are deferred downstream, as in the figure below.

To quickly assess the effect of a simple schedule slippage:

• Convert baseline budget plan from nominal (real-year) dollars to constant dollars
• Divide baseline budget plan into fixed and variable costs
• Enter schedule slip implementation
• Compute new variable costs including any work-free disruption costs
• Repeat last two steps until an acceptable implementation is achieved
• Compute new fixed costs
• Sum new fixed and variable costs
• Convert from constant dollars to nominal (real-year) dollars.
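These steps can be sketched in a few lines of code. The inflation index, fixed/variable split, and yearly figures below are hypothetical, and a one-year slip is modeled simply as one added year of fixed ("standing army") cost with variable costs deferred downstream:

# Sketch of the sidebar's quick slippage assessment (illustrative numbers).
inflation = [1.00, 1.03, 1.06, 1.09]          # hypothetical index per year
baseline_ry = [100.0, 120.0, 90.0, 40.0]      # real-year $M per year
fixed_share = 0.4                             # hypothetical fixed/variable split

# Step 1: convert the baseline plan to constant dollars.
constant = [b / f for b, f in zip(baseline_ry, inflation)]

# Step 2: split into fixed (time-related) and variable (product-related) costs.
fixed = [fixed_share * c for c in constant]
variable = [(1 - fixed_share) * c for c in constant]

# Steps 3-5: slip the schedule one year; variable costs defer downstream,
# fixed costs continue through the added year.
slip_years = 1
new_variable = [0.0] * slip_years + variable
annual_fixed = sum(fixed) / len(fixed)
new_fixed = [annual_fixed] * (len(fixed) + slip_years)

# Steps 6-8: sum and convert back to real-year dollars (extend the index).
new_constant = [f + v for f, v in zip(new_fixed, new_variable)]
index = inflation + [inflation[-1] * 1.03 ** (i + 1) for i in range(slip_years)]
new_ry = [c * f for c, f in zip(new_constant, index)]
print(f"Baseline total: {sum(baseline_ry):.0f}; slipped total: {sum(new_ry):.0f} (real-year $M)")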

Some risk management methods are discussed in Section 4.6.

4.6 Risk Management

Risk management comprises purposeful thought to the sources, magnitude, and mitigation of risk, and actions directed toward its balanced reduction. As such, risk management is an integral part of project management, and contributes directly to the objectives of systems engineering.

NASA policy objectives with regard to project risks are expressed in NMI 8070.4A, Risk Management Policy. These are to:


Figure 15 -- Risk Management Structure Diagram.

• Provide a disciplined and documented approach to risk management throughout the project life cycle
• Support management decision making by providing integrated risk assessments (i.e., taking into account cost, schedule, performance, and safety concerns)
• Communicate to NASA management the significance of assessed risk levels and the decisions made with respect to them.

There are a number of actions the system engineer can take to effect these objectives. Principal among them is planning and completing a well-conceived risk management program. Such a program encompasses several related activities during the systems engineering process. The structure of these activities is shown in Figure 15.

Risk

The term risk has different meanings depending on the context. Sometimes it simply indicates the degree of variability in the outcome or result of a particular action. In the context of risk management during the systems engineering process, the term denotes a combination of both the likelihood of various outcomes and their distinct consequences. The focus, moreover, is generally on undesired or unfavorable outcomes such as the risk of a technical failure, or the risk of exceeding a cost target.

The first is planning the risk management program, which should be documented in a risk management program plan. That plan, which elaborates on the SEMP, contains:

• The project's overall risk policy and objectives
• The programmatic aspects of the risk management activities (i.e., responsibilities, resources, schedules and milestones, etc.)
• A description of the methodologies, processes, and tools to be used for risk identification and characterization, risk analysis, and risk mitigation and tracking
• A description of the role of risk management with respect to reliability analyses, formal reviews, and status reporting and assessment
• Documentation requirements for each risk management product and action.

The level of risk management activities should be consistent with the project's overall risk policy established in conjunction with its NASA Headquarters program office. At present, formal guidelines for the classification of projects with respect to overall risk policy do not exist; such guidelines exist only for NASA payloads. These are promulgated in NMI 8010.1A, Classification of NASA Payloads, Attachment A, which is reproduced as Appendix B.3.

With the addition of data tables containing the results of the risk management activities, the risk management program plan grows into the project's Risk Management Plan (RMP). These data tables should contain the project's identified significant risks. For each such risk, these data tables should also contain the relevant characterization and analysis results, and descriptions of the related mitigation and tracking plans (including any descope options and/or required technology developments). A sample RMP outline is shown as Appendix B.4.

The technical portion of risk management begins with the process of identifying and characterizing the project's risks. The objective of this step is to understand what uncertainties the project faces, and which among them should be given greater attention. This is accomplished by categorizing (in a consistent manner) uncertainties by their likelihood of occurrence (e.g., high, medium, or low), and separately, according to the severity of their consequences. This categorization forms the basis for ranking uncertainties by their relative riskiness. Uncertainties with both high likelihood and severely adverse consequences are ranked higher than those without these characteristics, as Figure 16 suggests. The primary methods used in this process are qualitative; hence in systems engineering literature, this step is sometimes called qualitative risk assessment. The output of this step is a list of significant risks (by phase) to be given specific management attention.
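A minimal sketch of this kind of qualitative ranking follows; the risk entries and the ordering scheme are invented for illustration, not drawn from any NASA project:

# Minimal sketch of qualitative risk ranking by likelihood and severity.
# Risk names and the scoring scheme are illustrative only.
LEVELS = {"low": 1, "medium": 2, "high": 3}

risks = [
    ("Cryocooler life falls short", "medium", "high"),
    ("Launch vehicle manifest slip", "high", "medium"),
    ("Ground software staffing gap", "low", "medium"),
]

# Rank by the pair (likelihood, severity); high/high sorts to the top.
ranked = sorted(risks, key=lambda r: (LEVELS[r[1]], LEVELS[r[2]]), reverse=True)
for name, likelihood, severity in ranked:
    print(f"{name}: likelihood={likelihood}, severity={severity}")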

In some projects, qualitative methods are adequate for making risk management decisions; in others, these methods are not precise enough to understand the magnitude of the problem, or to allocate scarce risk reduction resources. Risk analysis is the process of quantifying both the likelihood of occurrence and consequences of potential future events (or "states of nature" in some texts). The system engineer needs to decide whether risk identification and characterization are adequate, or whether the increased precision of risk analysis is needed for some uncertainties. In making that determination, the system engineer needs to balance the (usually) higher cost of risk analysis against the value of the additional information.

Risk mitigation is the formulation, selection, and execution of strategies designed to economically reduce risk. When a specific risk is believed to be intolerable, risk analysis and mitigation are often performed iteratively, so that the effects of alternative mitigation strategies can be actively explored before one is chosen. Tracking the effectivity of these strategies is closely allied with risk mitigation. Risk mitigation is often a challenge because efforts and expenditures to reduce one type of risk may increase another type. (Some have called this the systems engineering equivalent of the Heisenberg Uncertainty Principle in quantum mechanics.) The ability (or necessity) to trade one type of risk for another means that the project manager and the system engineer need to understand the system-wide effects of various strategies in order to make a rational allocation of resources.

Several techniques have been developed for each of these risk management activities. The principal ones, which are shown in Table 1, are discussed in Sections 4.6.2 through 4.6.4. The system engineer needs to choose the techniques that best fit the unique requirements of each project.

A risk management program is needed throughout the project life cycle. In keeping with the doctrine of successive refinement, its focus, however, moves from the "big picture" in the early phases of the project life cycle (Phases A and B) to more specific issues during design and development (Phases C and D). During operations (Phase E), the focus changes again. A good risk management program is always forward-looking. In other words, a risk management program should address the project's on-going risk issues and future uncertainties. As such, it is a natural part of concurrent engineering. The RMP should be updated throughout the project life cycle.

4.6.1 Types of Risks

There are several ways to describe the various types of risk a project manager/system engineer faces. Traditionally, project managers and system engineers have attempted to divide risks into three or four broad categories — namely, cost, schedule, technical, and, sometimes, safety (and/or hazard) risks. More recently, others have entered the lexicon, including the categories of organizational, management, acquisition, supportability, political, and programmatic risks. These newer categories reflect the expanded set of concerns of project managers and system engineers who must operate in the current NASA environment. Some of these newer categories also represent supersets of other categories. For example, the Defense Systems Management College (DSMC) Systems Engineering Management Guide wraps "funding, schedule, contract relations, and political risks" into the broader category of programmatic risks. While these terms are useful in informal discussions, there appears to be no formal taxonomy free of ambiguities. One reason, mentioned above, is that often one type of risk can be exchanged for another. A second reason is that some of these categories move together, as for example, cost risk and political risk (e.g., the risk of project cancellation).

Another way some have categorized risk is by the degree of mathematical predictability in its underlying uncertainty. The distinction has been made between an uncertainty that has a known probability distribution, with known or estimated parameters, and one in which the underlying probability distribution is either not known, or its parameters cannot be objectively quantified.

An example of the first kind of uncertainty occurs in the unpredictability of the spares upmass requirement for alternative Space Station Alpha designs. While the requirement is stochastic in any particular logistics cycle, the probability distribution can be estimated for each design from reliability theory and empirical data. Examples of the second kind of uncertainty occur in trying to predict whether a Shuttle accident will make resupply of Alpha impossible for a period of time greater than x months, or whether life on Mars exists.

Modern subjectivist (also known as Bayesian) probability theory holds that the probability of an event is the degree of belief that a person has that it will occur, given his/her state of information. As that information improves (e.g., through the acquisition of data or experience), the subjectivist's estimate of a probability should converge to that estimated as if the probability distribution were known. In the examples of the previous paragraph, the only difference is the probability estimator's perceived state of information. Consequently, subjectivists find the distinction between the two kinds of uncertainty of little or no practical significance. The implication of the subjectivist's view for risk management is that, even with little or no data, the system engineer's subjective probability estimates form a valid basis for risk decision making.
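The convergence the subjectivists describe can be illustrated with a simple Bayesian update; the prior parameters and test counts below are hypothetical:

# Sketch of the subjectivist view: a Beta prior over a failure probability
# is updated as test data accumulate, converging toward the observed rate.
# Prior parameters and test results are hypothetical.
prior_a, prior_b = 1, 9          # prior belief: failure probability near 0.1
failures, successes = 2, 48      # subsequent test experience

post_a = prior_a + failures
post_b = prior_b + successes
print(f"Prior mean:     {prior_a / (prior_a + prior_b):.3f}")
print(f"Posterior mean: {post_a / (post_a + post_b):.3f}")  # pulled toward 2/50 = 0.04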

4.6.2 Risk Identification and Characterization Techniques

A variety of techniques are available for risk identification and characterization. The thoroughness with which this step is accomplished is an important determinant of the risk management program's success.

Expert Interviews. When properly conducted, expert interviews can be a major source of insight and information on the project's risks in the expert's area of knowledge. One key to a successful interview is identifying an expert who is close enough to a risk issue to understand it thoroughly, and at the same time, able (and willing) to step back and take an objective view of the probabilities and consequences. A second key to success is advance preparation on the part of the interviewer. This means having a list of risk issues to be covered in the interview, developing a working knowledge of these issues as they apply to the project, and developing methods for capturing the information acquired during the interview.

Initial interviews may yield only qualitative information, which should be verified in follow-up rounds. Expert interviews are also used to solicit quantitative data and information for those risk issues that qualitatively rank high. These interviews are often the major source of inputs to risk analysis models built using the techniques described in Section 4.6.3.

Independent Assessment. This technique can take several forms. In one form, it can be a review of project documentation, such as Statements of Work, acquisition plans, verification plans, manufacturing plans, and the SEMP. In another form, it can be an evaluation of the WBS for completeness and consistency with the project's schedules. In a third form, an independent assessment can be an independent cost (and/or schedule) estimate from an outside organization.

Risk Templates. This technique consists of examining and then applying a series of previously developed risk templates to a current project. Each template generally covers a particular risk issue, and then describes methods for avoiding or reducing that risk. The most widely recognized series of templates appears in DoD 4245.7-M, Transition from Development to Production ...Solving the Risk Equation. Many of the risks and risk responses described are based on lessons learned from DoD programs, but are general enough to be useful to NASA projects. As a general caution, risk templates cannot provide an exhaustive list of risk issues for every project, but they are a useful input to risk identification.


Lessons Learned. A review of the lessons learned files, data, and reports from previous similar projects can produce insights and information for risk identification on a new project. For technical risk identification, as an example, it makes sense to examine previous projects of similar function, architecture, or technological approach. The lessons learned from the Infrared Astronomical Satellite (IRAS) project might be useful to the Space Infrared Telescope Facility (SIRTF) project, even though the latter's degree of complexity is significantly greater. The key to applying this technique is in recognizing what aspects are analogous in two projects, and what data are relevant to the new project. Even if the documented lessons learned from previous projects are not applicable at the system level, there may be valuable data applicable at the subsystem or component level.

FMECAs, FMEAs, Digraphs, and Fault Trees. Failure Modes, Effects, and Criticality Analysis (FMECA), Failure Modes and Effects Analysis (FMEA), digraphs, and fault trees are specialized techniques for safety (and/or hazard) risk identification and characterization. These techniques focus on the hardware components that make up the system. According to MIL-STD-1629A, FMECA is "an ongoing procedure by which each potential failure in a system is analyzed to determine the results or effects thereof on the system, and to classify each potential failure mode according to its severity." Failures are generally classified into four severity categories:

• Category I—Catastrophic failure (possible death or system loss)
• Category II—Critical failure (possible major injury or system damage)
• Category III—Major failure (possible minor injury or mission effectiveness degradation)
• Category IV—Minor failure (requires system maintenance, but does not pose a hazard to personnel or mission effectiveness).

A complete FMECA also includes an estimate of the probability of each potential failure. These probabilities are usually based, at first, on subjective judgment or experience factors from similar kinds of hardware components, but may be refined from reliability data as the system development progresses. An FMEA is similar to an FMECA, but typically there is less emphasis on the severity classification portion of the analysis.
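As a small sketch of how FMECA-style records might be ranked once severity categories and subjective probabilities are assigned (the failure modes and numbers are invented for illustration):

# Sketch of a FMECA-style worksheet: each failure mode carries a severity
# category (I is worst) and a subjective probability; sorting by severity,
# then probability, highlights where attention belongs. Entries are invented.
modes = [
    {"mode": "Valve fails closed", "category": "I", "prob": 1e-4},
    {"mode": "Sensor drift", "category": "III", "prob": 5e-3},
    {"mode": "Seal leak", "category": "II", "prob": 1e-3},
]

order = {"I": 0, "II": 1, "III": 2, "IV": 3}
for m in sorted(modes, key=lambda m: (order[m["category"]], -m["prob"])):
    print(f'Category {m["category"]}: {m["mode"]} (p = {m["prob"]:.0e})')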

Digraph analysis is an aid in determining fault tolerance, propagation, and reliability in large, interconnected systems. Digraphs exhibit a network structure and resemble a schematic diagram. The digraph technique permits the integration of data from a number of individual FMECAs/FMEAs, and can be translated into fault trees, described in Section 6.2, if quantitative probability estimates are needed.

4.6.3 Risk Analysis Techniques

The tools and techniques of risk analysis rely heavily on the concepts and "laws" (actually, axioms and theorems) of probability. The system engineer needs to be familiar with these in order to appreciate the full power and limitations of these techniques. The products of risk analyses are generally quantitative probability and consequence estimates for various outcomes, more detailed understanding of the dominant risks, and improved capability for allocating risk reduction resources.

Decision Analysis. Decision analysis is one technique to help the individual decision maker deal with a complex set of uncertainties. Using the divide-and-conquer approach common to much of systems engineering, a complex uncertainty is decomposed into simpler ones, which are then treated separately. The decomposition continues until it reaches a level at which either hard information can be brought to bear, or intuition can function effectively. The decomposition can be graphically represented as a decision tree. The branch points, called nodes, in a decision tree represent either decision points or chance events. Endpoints of the tree are the potential outcomes. (See the sidebar on a decision tree example for Mars exploration.)

In most applications of decision analysis, these outcomes are generally assigned dollar values. From the probabilities assigned at each chance node and the dollar value of each outcome, the distribution of dollar values (i.e., consequences) can be derived for each set of decisions. Even large complex decision trees can be represented in currently available decision analysis software. This software can also calculate a variety of risk measures.
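A minimal sketch of the expected-value calculation behind such a tree appears below; the options, probabilities, and dollar outcomes are invented, and a real analysis would also model the decision maker's risk aversion rather than assume risk neutrality:

# Sketch of a two-option decision evaluated by expected dollar value.
# Probabilities and dollar outcomes are invented for illustration.
options = {
    "extra qualification testing": [
        (0.95, -2.0),    # test passes: only the test cost (-$2M)
        (0.05, -12.0),   # test uncovers a flaw: test cost plus redesign
    ],
    "fly as-is": [
        (0.85, 0.0),     # no failure in flight
        (0.15, -60.0),   # in-flight failure and loss of mission value
    ],
}

expected = {name: sum(p * v for p, v in branches) for name, branches in options.items()}
best = max(expected, key=expected.get)
for name, ev in expected.items():
    print(f"{name}: expected value = {ev:+.2f} $M")
print(f"Preferred (risk-neutral) decision: {best}")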

In brief, decision analysis is a technique that allows:

• A systematic enumeration of uncertainties and encoding of their probabilities and outcomes
• An explicit characterization of the decision maker's attitude toward risk, expressed in terms of his/her risk aversion
• A calculation of the value of "perfect information," thus setting a normative upper bound on information-gathering expenditures
• Sensitivity testing on probability estimates and outcome dollar values.


Probabilistic Risk Assessment (PRA). A PRA seeks to measure the risk inherent in a system's design and operation by quantifying both the likelihood of various possible accident sequences and their consequences. A typical PRA application is to determine the risk associated with a specific nuclear power plant. Within NASA, PRAs are used to demonstrate, for example, the relative safety of launching spacecraft containing RTGs (Radioisotope Thermoelectric Generators).

The search for accident sequences is facilitated by event trees, which depict initiating events and combinations of system successes and failures, and fault trees, which depict ways in which the system failures represented in an event tree can occur. When integrated, an event tree and its associated fault tree(s) can be used to calculate the probability of each accident sequence.

Probabilistic Risk Assessment Pitfalls

Risk is generally defined in a probabilistic risk assessment (PRA) as the expected value of a consequence function — that is:

R = Σs Ps Cs

where Ps is the probability of outcome s, and Cs is the consequence of outcome s. To attach probabilities to outcomes, event trees and fault trees are developed. These techniques have been used since 1953, but by the late 1970s, they were under attack by PRA practitioners. The reasons include the following:

• Fault trees are limiting because a complete set of failures is not definable.
• Common cause failures could not be captured properly. An example of a common cause failure is one where all the valves in a system have a defect so that their failures are not truly independent.
• PRA results are sometimes sensitive to simple changes in event tree assumptions.
• Stated criteria for accepting different kinds of risks are often inconsistent, and therefore not appropriate for allocating risk reduction resources.
• Many risk-related decisions are driven by perceptions, not necessarily objective risk as defined by the above equation. Perceptions of consequences tend to grow faster than the consequences themselves — that is, several small accidents are not perceived as strongly as one large one, even if fatalities are identical.
• There are difficulties in dealing with incommensurables, as for example, lives vs. dollars.

The structure and mathematics of these trees are similar to those for decision trees. The consequences of each accident sequence are generally measured both in terms of direct economic losses and in public health effects. (See sidebar on PRA pitfalls.)
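A toy calculation of the sidebar's risk measure R = Σs Ps Cs, with each accident sequence probability formed as the product of its event tree branch probabilities, might look as follows (all events and numbers are invented):

# Sketch of the PRA risk measure over accident sequences. Each sequence
# probability is the product of its event tree branch probabilities;
# all initiating events and numbers are invented for illustration.
sequences = [
    # (initiating event, mitigation fails?, probability, consequence in $M)
    ("tank leak", False, 1e-3 * 0.99, 5.0),      # leak occurs, isolation works
    ("tank leak", True,  1e-3 * 0.01, 400.0),    # leak occurs, isolation fails
    ("power bus short", True, 5e-5, 250.0),
]

R = sum(p * c for _, _, p, c in sequences)
print(f"Expected consequence R = {R:.3f} $M per mission")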

Doing a PRA is itself a major effort, requiring a number of specialized skills other than those provided by reliability engineers and human factors engineers. PRAs also require large amounts of system design data at the component level, and operational procedures data. For additional information on PRAs, the system engineer can reference the PRA Procedures Guide (1983) by the American Nuclear Society and the Institute of Electrical and Electronics Engineers (IEEE).

Probabilistic Network Schedules. Probabilistic network schedules, such as PERT (Program Evaluation and Review Technique), permit the duration of each activity to be treated as a random variable. By supplying PERT with the minimum, maximum, and most likely duration for each activity, a probability distribution can be computed for project completion time. This can then be used to determine, for example, the chances that a project (or any set of tasks in the network) will be completed by a given date. In this probabilistic setting, however, a unique critical path may not exist. Some practitioners have also cited difficulties in obtaining meaningful input data for probabilistic network schedules. A simpler alternative to a full probabilistic network schedule is to perform a Monte Carlo simulation of activity durations along the project's critical path. (See Section 5.4.2.)
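A minimal sketch of that simpler Monte Carlo alternative, drawing each critical-path activity's duration from a triangular distribution (the activities and durations are hypothetical):

# Sketch of a Monte Carlo simulation of critical-path duration, with each
# activity's duration drawn from a triangular (min, most likely, max)
# distribution. Activities and durations (months) are hypothetical.
import random

activities = [(3, 4, 7), (5, 6, 10), (2, 3, 5), (8, 10, 15)]  # serial path

def sample_completion() -> float:
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)

trials = sorted(sample_completion() for _ in range(10_000))
print(f"Median completion: {trials[len(trials) // 2]:.1f} months")
print(f"90th percentile:   {trials[int(0.9 * len(trials))]:.1f} months")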

Probabilistic Cost and Effectiveness Models. These models offer a probabilistic view of a project's cost and effectiveness outcomes. (Recall Figure 2.) This approach explicitly recognizes that single point values for these variables do not adequately represent the risk conditions inherent in a project. These kinds of models are discussed more completely in Section 5.4.

4.6.4 Risk Mitigation and Tracking Techniques

Risk identification and characterization and risk analysis provide a list of significant project risks that require further management attention and/or action. Because risk mitigation actions are generally not costless, the system engineer, in making recommendations to the project manager, must balance the cost (in resources and time) of such actions against their value to the project. Four responses to a specific risk are usually available: (1) deliberately do nothing, and accept the risk, (2) share the risk with a co-participant, (3) take preventive action to avoid or reduce the risk, and (4) plan for contingent action.

The first response is to accept a specific risk consciously. (This response can be accompanied by further risk information gathering and assessments.) Second, a risk can sometimes be shared with a co-participant — that is, with an international partner or a contractor. In this situation, the goal is to reduce NASA's risk independent of what happens to total risk, which may go up or down. There are many ways to share risks, particularly cost risks, with contractors. These include various incentive contracts and warranties. The third and fourth responses require that additional specific planning and actions be undertaken.

Typical technical risk mitigation actions include additional (and usually costly) testing of subsystems and systems, designing in redundancy, and building a full engineering model. Typical cost risk mitigation actions include using off-the-shelf hardware and, according to Figure 6, providing sufficient funding during Phases A and B. Major supportability risk mitigation actions include providing sufficient initial spares to meet the system's availability goal and a robust resupply capability (when transportation is a significant factor). For those risks that cannot be mitigated by a design or management approach, the system engineer should recommend the establishment of reasonable financial and schedule contingencies, and technical margins.

Whatever strategy is selected for a specific risk, it and its underlying rationale should be documented in a risk mitigation plan, and its effectivity should be tracked through the project life cycle, as required by NMI 8070.4A. The techniques for choosing a (preferred) risk mitigation strategy are discussed in Chapter 5, which deals with the larger role of trade studies and system modeling in general. Some techniques for planning and tracking are briefly mentioned here.

Watchlists and Milestones. A watchlist is a compilation of specific risks, their projected consequences, and early indicators of the start of the problem. The risks on the watchlist are those that were selected for management attention as a result of completed risk management activities. A typical watchlist also shows for each specific risk a triggering event or missed milestone (for example, a delay in the delivery of long lead items), the related area of impact (production schedule), and the risk mitigation strategy to be used in response. The watchlist is periodically reevaluated and items are added, modified, or deleted as appropriate. Should the triggering event occur, the projected consequences should be updated and the risk mitigation strategy revised as needed.
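One plausible way to represent such a watchlist in a simple tool (the field names and the single entry below are invented for illustration):

# Sketch of a watchlist entry structure; fields follow the elements named
# above (risk, trigger, area of impact, mitigation strategy). Content invented.
watchlist = [
    {
        "risk": "Late delivery of long lead items",
        "trigger": "Vendor misses 60-day-ahead ship milestone",
        "impact": "Production schedule",
        "mitigation": "Activate second-source procurement",
    },
]

def check_triggers(entries, fired_triggers):
    """Return watchlist entries whose triggering event has occurred."""
    return [e for e in entries if e["trigger"] in fired_triggers]

for entry in check_triggers(watchlist, {"Vendor misses 60-day-ahead ship milestone"}):
    print(f'Trigger fired: {entry["risk"]} -> {entry["mitigation"]}')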

Contingency Planning, Descope Planning, and Parallel Development. These techniques are generally used in conjunction with a watchlist. The focus is on developing credible hedges and work-arounds, which are activated upon a triggering event. To be credible, hedges often require that additional resources be expended, which provide a return only if the triggering event occurs. In this sense, these techniques and resources act as a form of project insurance. (The term contingency here should not be confused with the use within NASA of the same term for project-held reserves.)

Critical Items/Issues Lists. A Critical Items/Issues List (CIL) is similar to a watchlist, and has been extensively used on the Shuttle program to track items with significant system safety consequences. An example is shown as Appendix B.5.

C/SCS and TPM Tracking. Two very important risk tracking techniques — cost and schedule control systems (C/SCS) and Technical Performance Measure (TPM) tracking — are discussed in Sections 4.9.1 and 4.9.2, respectively.

4.6.5 Risk Management: Summary

Uncertainty is a fact of life in systems engineering. To deal with it effectively, the risk manager needs a disciplined approach. In a project setting, a good-practice approach includes efforts to:

• Plan, document, and complete a risk management program
• Identify and characterize risks for each phase of the project; high risks, those for which the combined effects of likelihood and consequences are significant, should be given specific management attention. Reviews conducted throughout the project life cycle should help to force out risk issues.
• Apply qualitative and quantitative techniques to understand the dominant risks and to improve the allocation of risk reduction resources; this may include the development of project-specific risk analysis models such as decision trees and PRAs.
• Formulate and execute a strategy to handle each risk, including establishment, where appropriate, of reasonable financial and schedule contingencies and technical margins
• Track the effectivity of each risk mitigation strategy.

Good risk management requires a team effort — that is, system engineers and managers at all levels of the project need to be involved. However, risk management responsibilities must be assigned to specific individuals. Successful risk management practices often evolve into institutional policy.

4.7 Configuration Management

Configuration management is the discipline of identifying and formalizing the functional and physical characteristics of a configuration item at discrete points in the product evolution for the purpose of maintaining the integrity of the product system and controlling changes to the baseline. The baseline for a project contains all of the technical requirements and related cost and schedule requirements that are sufficiently mature to be accepted and placed under change control by the NASA project manager. The project baseline consists of two parts: the technical baseline and the business baseline. The system engineer is responsible for managing the technical baseline and ensuring that it is consistent with the costs and schedules in the business baseline. Typically, the project control office manages the business baseline.

Configuration management requires the formal agreement of both the buyer and the seller to proceed according to the up-to-date, documented project requirements (as they exist at that phase in the project life cycle), and to change the baseline requirements only by a formal configuration control process. The buyer might be a NASA program office or an external funding agency. For example, the buyer for the GOES project is NOAA, and the seller is the NASA GOES project office. Configuration management must be enforced at all levels; at the next level for this same example, the NASA GOES project office is the buyer and the seller is the contractor, the Loral GOES project office. Configuration management is established through program/project requirements documentation and, where applicable, through the contract Statement of Work.

Configuration management is essential to conduct an orderly development process, to enable the modification of an existing design, and to provide for later replication of an existing design. Configuration management often provides the information needed to track the technical progress of the project, since it manages the project's configuration documentation. (See Section 4.9.2 on Technical Performance Measures.) The project's approach to configuration management and the methods to be used should be documented in the project's Configuration Management Plan. A sample outline for this plan is illustrated in Appendix B.6. The plan should be tailored to each project's specific needs and resources, and kept current for the entire project life cycle.

4.7.1 Baseline Evolution

The project-level system engineer is responsible for ensuring the completeness and technical integrity of the technical baseline. The technical baseline includes:

• Functional and performance requirements (or specifications) for hardware, software, information items, and processes
• Interface requirements
• Specialty engineering requirements
• Verification requirements
• Data packages, documentation, and drawing trees
• Applicable engineering standards.

The project baseline evolves in discrete steps through the project life cycle. An initial baseline may be established when the top-level user requirements expressed in the Mission Needs Statement are placed under configuration control. At each interphase control gate, increased technical detail is added to the maturing baseline. For a typical project, there are five sequential technical baselines:

• Functional baseline at System Requirements Review (SRR)
• "Design-to" baseline at Preliminary Design Review (PDR)
• "Build-to" (or "code-to") baseline at the Critical Design Review (CDR)
• "As-built" (or "as-coded") baseline at the System Acceptance Review (SAR)
• "As-deployed" baseline at Operational Readiness Review (ORR).

The evolution of the five baselines is illustrated in Figure 17. As discussed in Section 3.7.1, only decisions made along the core of the "vee" in Figure 7 are put under configuration control and included in the approved baseline. Systems analysis, risk management, and development test activities (off the core of the vee) must begin early and continue throughout the decomposition process of the project life cycle to prove that the core-level decisions are sound. These early detailed studies and tests must be documented and retained in the project archives, but they are not part of the technical baseline.

4.7.2 Techniques of Configuration Management

The techniques of configuration management include configuration (or baseline) identification, configuration control, configuration verification, and configuration accounting (see Figure 18).

Configuration Identification. Configuration identification of a baseline is accomplished by creating and formally releasing documentation that describes the baseline to be used, and how changes to that baseline will be accounted for, controlled, and released. Such documentation includes requirements (product, process, and material), specifications, drawings, and code listings. Configuration documentation is not formally considered part of the technical baseline until approved by control gate action of the buyer.

An important part of configuration identification is the physical identification of individual configuration items using part numbers, serial numbers, lot numbers, version numbers, document control numbers, etc.

Configuration Control. Configuration control is the process of controlling changes to any approved baseline by formal action of a configuration control board (CCB). This area of configuration management is usually the most visible to the system engineer. In large programs/projects, configuration control is accomplished by a hierarchy of configuration control boards, reflecting multiple levels of control. Each configuration control board has its own areas of control and responsibilities, which are specified in the Configuration Management Plan.

Typically, a configuration control board meets to consider change requests to the business or technical baseline of the program/project. The program/project manager is usually the board chair, who is the sole decision maker. The configuration manager acts as the board secretary, who skillfully guides the process and records the official events of the process. In a configuration control board forum, a number of issues should be addressed:

• What is the proposed change?
• What is the reason for the change?
• What is the design impact?
• What is the effectiveness or performance impact?
• What is the schedule impact?
• What is the program/project life-cycle cost impact?
• What is the impact of not making the change?
• What is the risk of making the change?
• What is the impact on operations?
• What is the impact to support equipment and services?
• What is the impact on spares requirements?
• What is the effectivity of the change?
• What documentation is affected by the change?
• Is the buyer supportive of the change?

Configuration Control Board Conduct

Objective: To review evaluations, and then approve or disapprove proposed changes to the project's technical or business baselines.

Participants: Project manager (chair), project-level system engineer, managers of each affected organization, configuration manager (secretary), presenters.

Format: Presenter covers the recommended change and discusses related system impact. The presentation is reviewed by the system engineer for completeness prior to presentation.

Decision: The CCB members discuss the Change Request (CR) and formulate a decision. The project manager agrees or overrides. The secretary prepares a CCB directive, which records and directs the CR's disposition.


A review of this information should lead to a well-informed decision. When this information is not available to the configuration control board, unfounded decisions are made, often with negative consequences to the program or project.

Once a baseline is placed under configuration control, any change requires the approval of the configuration control board. The project manager chairs the configuration control board, while the system engineer or configuration manager is responsible for reviewing all material for completeness before it is presented to the board, and for ensuring that all affected organizations are represented in the configuration control board forum.

The system engineer should also ensure that the active approved baseline is communicated in a timely manner to all those relying on it. This communication keeps project teams apprised as to the distinction between what is frozen under formal change control and what can still be decided without configuration control board approval.

Configuration control is essential at both the contractor and NASA field center levels. Changes determined to be Class 1 by the contractor must be referred to the NASA project manager for resolution. This process is described in Figure 19. The use of a preliminary Engineering Change Proposal (ECP) to forewarn of an impending change provides the project manager with sufficient preliminary information to determine whether the contractor should spend NASA contract funds on a formal ECP. This technique is designed to save significant contract dollars.

Class 1 changes affect the approved baseline and hence the product version identification. Class 2 changes are editorial changes or internal changes not "visible" to the external interfaces. Class 2 changes are dispositioned by the contractor's CCB and do not require the NASA project manager's approval.

Overly formalized systems can become so burdensome that members of the project team may try to circumvent the process. It is essential that the formality of the change process be appropriately tailored to the needs of each project. However, there must always be effective configuration control on every project.

For software projects, it is routine to use version control for both pre-release and post-release deliverable systems. It is equally important to maintain version control for hardware-only systems.

Approved changes on a development project that has only one deliverable obviously are only applicable to that one deliverable item. However, for projects that have multiple deliverables of "identical" design, changes may become effective on the second or subsequent production articles. In such a situation, the configuration control board must decide the effectivity of the change, and the configuration control system must maintain version control and identification of the "as-built" configuration for each article. Incremental implementation of changes is common in projects that have a deliberate policy of introducing product or process improvements. As an example, the original 1972 plan held that each of the Space Shuttle orbiters would be identical. In reality, each of the orbiters is different, driven primarily by the desire to achieve the original payload requirement of 65,000 pounds. Proper version control documentation has been essential to the sparing, fielding, and maintenance of the operational fleet.

Configuration Verification. Configuration verification is the process of verifying that resulting products (e.g., hardware and software items) conform to the intentions of the designers and to the standards established by preceding approved baselines, and that baseline documentation is current and accurate. Configuration verification is accomplished by two types of control gate activity: audits and technical reviews. (See Section 4.8.4 for additional information on two important examples: the Physical Configuration Audit and the Design Certification Review.) Each of these serves to review and challenge the data presented for conformance to the previously approved baseline.

Configuration Accounting. Configuration accounting (sometimes called configuration status accounting) is the task of maintaining, correlating, releasing, reporting, and storing configuration data. Essentially a data management function, configuration accounting ensures that official baseline data is retained, available, and distribution-controlled for project use. It also performs the important function of tracking the status of each change from inception through implementation. A project's change status system should be capable of identifying each change by its unique change identification number (e.g., ECRs, CRs, RIDs, waivers, deviations, modification kits) and report its current status.
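A minimal sketch of such a change status system (the identifiers and status values are illustrative, not a NASA standard):

# Sketch of change status tracking: each change carries a unique identifier
# and a status that moves from inception through implementation.
changes = {}

def record(change_id: str, description: str) -> None:
    changes[change_id] = {"description": description, "status": "submitted"}

def advance(change_id: str, status: str) -> None:
    # e.g., submitted -> approved -> implemented -> verified
    changes[change_id]["status"] = status

record("ECR-0042", "Increase battery capacity margin")
advance("ECR-0042", "approved")
for cid, rec in changes.items():
    print(f'{cid}: {rec["status"]} - {rec["description"]}')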

The Role of the Configuration Manager. The configuration manager is responsible for the application of these techniques. In doing so, the configuration manager performs the following functions:

• Conceives and manages the configuration management system, and documents it in the Configuration Management Plan
• Acts as secretary of the configuration control board (controls the change approval process)
• Controls changes to baseline documentation
• Controls release of baseline documentation

• Initiates configuration verification audits.

4.7.3 Data Management

For any project, proper data management is essential for successful configuration management. Before a project team can produce a tangible product, it must produce descriptions of the system using words, drawings, schematics, and numbers (i.e., symbolic information). There are several vital characteristics the symbolic information must have. First, the information must be shareable. Whether it is in electronic or paper form, the data must be readily available, in the most recently approved version, to all members of the project team.

Second, symbolic information must be durable. This means that it must be recalled accurately every time and represent the most current version of the baseline. The baseline information cannot change or degrade with repeated access of the database or paper files, and cannot degrade with time. This is a non-trivial statement, since poor data management practices (e.g., allowing someone to borrow the only copy of a document or drawing) can allow controlled information to become lost. Also, the material must be retained for the life of the program/project (and possibly beyond), and a complete set of documentation for each baseline change must be retained.

Third, the symbolic information must be traceable upward and downward. A database must be developed and maintained to show the parentage of any requirement. The database must also be able to display all children derived from a given requirement. Finally, traceability must be provided to reports that document trade study results and other decisions that played a key role in the flowdown of requirements. The data management function therefore encompasses managing and archiving supporting analyses and trade study data, and keeping them convenient for configuration management and general project use.
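A minimal sketch of upward and downward traceability using a single parent link per requirement (the requirement identifiers are invented for illustration):

# Sketch of two-way requirements traceability: each requirement records its
# parent, so parentage is stored once and children are derived from it.
parents = {
    "SYS-10": None,          # top-level requirement
    "SEG-11": "SYS-10",      # derived (child) requirements
    "SEG-12": "SYS-10",
    "SUB-21": "SEG-11",
}

def ancestry(req_id):
    """Trace upward to show the parentage of a requirement."""
    chain = []
    while req_id is not None:
        chain.append(req_id)
        req_id = parents[req_id]
    return chain

def children(req_id):
    """Trace downward to all requirements derived from req_id."""
    direct = [r for r, p in parents.items() if p == req_id]
    return direct + [d for r in direct for d in children(r)]

print(ancestry("SUB-21"))    # ['SUB-21', 'SEG-11', 'SYS-10']
print(children("SYS-10"))    # ['SEG-11', 'SEG-12', 'SUB-21']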

4.8 Reviews, Audits, and Control Gates

The intent and policy for reviews, audits, and control gates should be developed during Phase A and defined in the Program/Project Plan. The specific implementation of these activities should be consistent with the types of reviews and audits described in this section, and with the NASA Program/Project Life Cycle chart (see Figure 5) and the NASA Program/Project Life Cycle Process Flow chart (see Figure 8). However, the timing of reviews, audits, and control gates should be tailored to each specific project.

4.8.1 Purpose and Definitions

The purpose of a review is to furnish the forum and process to provide NASA management and their contractors assurance that the most satisfactory approach, plan, or design has been selected, that a configuration item has been produced to meet the specified requirements, or that a configuration item is ready. Reviews (technical or management) are scheduled to communicate an approach, demonstrate an ability to meet requirements, or establish status. Reviews help to develop a better understanding among task or project participants, open communication channels, alert participants and management to problems, and open avenues for solutions.

The purpose of an audit is to provide NASA management and its contractors a thorough examination of adherence to program/project policies, plans, requirements, and specifications. Audits are the systematic examination of tangible evidence to determine adequacy, validity, and effectiveness of the activity or documentation under review. An audit may examine documentation of policies and procedures, as well as verify adherence to them.

The purpose of a control gate is to provide a scheduled event (either a review or an audit) that NASA management will use to make program or project go/no-go decisions.

Project Termination

It should be noted that project termination, while usually disappointing to project personnel, may be a proper reaction to changes in external conditions or to an improved understanding of the system's projected cost-effectiveness.

A control gate is a management event in the project life cycle that is of sufficient importance to be identified, defined, and included in the project schedule. It requires formal examination to evaluate project status and to obtain approval to proceed to the next management event according to the Program/Project Plan.

4.8.2 General Principles for Reviews

Review Boards. The convening authority, which supervises the manager of the activity being reviewed, normally appoints the review board chair. Unless there are compelling technical reasons to the contrary, the chair should not be directly associated with the project or task under review. The convening authority also names the review board members. The majority of the members should not be directly associated with the program or project under review.

Internal Reviews. During the course of a project or task, it is necessary to conduct internal reviews that present technical approaches, trade studies, analyses, and problem areas to a peer group for evaluation and comment. The timing, participants, and content of these reviews are normally defined by the project manager or the manager of the performing organization. Internal reviews are also held prior to participation in a formal control gate review.

Internal reviews provide an excellent means for controlling the technical progress of the project. They also should be used to ensure that all interested parties are involved in the design and development early on and throughout the process. Thus, representatives from areas such as manufacturing and quality assurance should attend the internal reviews as active participants. They can then, for example, ensure that the design is producible and that quality is managed through the project life cycle.

In addition, some organizations utilize a Red Team. This is an internal, independent, peer-level review conducted to identify any deficiencies in requests for proposals, proposal responses, documentation, or presentation material prior to its release. The project or task manager is responsible for establishing the Red Team membership and for deciding which of their recommendations are to be implemented.

Review Presentation Material. Presentations using existing documentation such as specifications, drawings, analyses, and reports may be adequate. Copies of any prepared materials (such as viewgraphs) should be provided to the review board and meeting attendees. Background information and review presentation material of use to board members should be distributed to the members early enough to enable them to examine it prior to the review. For major reviews, this time may be as long as 30 calendar days.

Review Conduct. All reviews should consist of oral presentations of the applicable project requirements and the approaches, plans, or designs that satisfy those requirements. These presentations normally are given by the cognizant design engineer or his/her immediate supervisor.

It is highly recommended that in addition to the review board, the review audience include project personnel (NASA and contractor) not directly associated with the design being reviewed. This is required to utilize their cross-discipline expertise to identify any design shortfalls or recommend design improvements. The review audience should also include non-project specialists in the area under review, and specialists in production/fabrication, testing, quality assurance, reliability, and safety. Some reviews may also require the presence of both the contractor's and NASA's contracting officers.

Prior to and during the review, board members and review attendees may submit requests for action or engineering change requests (ECRs) that document a concern, deficiency, or recommended improvement in the presented approach, plan, or design. Following the review, these are screened by the review board to consolidate them, and to ensure that the chair and cognizant manager(s) understand the intent of the requests. It is the responsibility of the review board to ensure that adequate closure responses for each of the action requests are obtained.

Post Review Report. The review board chair has the responsibility to develop, where necessary, a consensus of the findings of the board, including an assessment of the risks associated with problem areas, and to develop recommendations for action. The chair submits, on a timely basis, a written report, including recommendations for action, to the convening authority with copies to the cognizant managers.

Standing Review Boards. Standing review boards are selected for projects or tasks that have a high level of activity, visibility, and/or resource requirements. Selection of board members by the convening authority is generally made from senior field center technical and management staff. Supporting members or advisors may be added to the board as required by circumstances. If the review board is to function over the life of a project, it is advisable to select extra board members and rotate active assignments to cover needs.

4.8.3 Major Control Gates

This section describes the purpose, timing, objectives, success criteria, and results of the major control gates in the NASA project life cycle. This information is intended to provide guidance to project managers and system engineers, and to illustrate the progressive maturation of review activities and systems engineering products. The checklists provided below aid in the preparation of specific review entry and exit criteria, but do not take their place. To minimize extra work, review material should be keyed to project documentation.

Mission Concept Review.

Purpose—The Mission Concept Review (MCR) affirms the mission need, and examines the proposed mission's objectives and the concept for meeting those objectives. It is an internal review that usually occurs at the cognizant NASA field center.

Timing—Near the completion of a mission feasibility study.

Objectives—The objectives of the review are to:

• Demonstrate that mission objectives are complete and understandable
• Confirm that the mission concepts demonstrate technical and programmatic feasibility of meeting the mission objectives
• Confirm that the customer's mission need is clear and achievable
• Ensure that prioritized evaluation criteria are provided for subsequent mission analysis.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of MCR product preparation:

• Are the mission objectives clearly defined and stated? Are they unambiguous and internally consistent?
• Will satisfaction of the preliminary set of requirements provide a system which will meet mission objectives?
• Is the mission feasible? Has there been a solution identified which is technically feasible? Is the rough cost estimate within an acceptable cost range?
• Have the concept evaluation criteria to be used in candidate system evaluation been identified and prioritized?
• Has the need for the mission been clearly identified?
• Are the cost and schedule estimates credible?
• Was a technology search done to identify existing assets or products that could satisfy the mission or parts of the mission?

Results of Review—A successful MCR supports the determination that the proposed mission meets the customer need, and has sufficient quality and merit to support a field center management decision to propose further study to the cognizant NASA Program Associate Administrator (PAA) as a candidate Phase A effort.

Mission Definition Review.

Purpose—The Mission Definition Review (MDR) examines the functional and performance requirements defined for the system and the preliminary program/project plan, and assures that the requirements and the selected architecture/design will satisfy the mission.

Timing—Near the completion of the mission definition stage.

Objectives—The objectives of the review are to:


• Establish that the allocation of the functional system requirements is optimal for mission satisfaction with respect to requirements trades and evaluation criteria that were internally established at MCR
• Validate that system requirements meet mission objectives
• Identify technology risks and the plans to mitigate those risks
• Present refined cost, schedule, and personnel resource estimates.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of MDR product preparation:

• Do the defined system requirements meet the mission objectives expressed at the start of the program/project?
• Are the system-level requirements complete, consistent, and verifiable? Have preliminary allocations been made to lower levels?
• Have the requirements trades converged on an optimal set of system requirements? Do the trades address program/project cost and schedule constraints as well as mission technical needs? Do the trades cover a broad spectrum of options? Have the trades identified for this set of activities been completed? Have the remaining trades been identified to select the final system design?
• Are the upper levels of the system PBS completely defined?
• Are the decisions made as a result of the trades consistent with the evaluation criteria established at the MCR?
• Has an optimal final design converged to a few alternatives?
• Have technology risks been identified and have mitigation plans been developed?

Results of Review—A successful MDR supports the decision to further develop the system architecture/design and any technology needed to accomplish the mission. The results reinforce the mission's merit and provide a basis for the system acquisition strategy.

System Definition Review.

Purpose—The System Definition Review (SDR) examines the proposed system architecture/design and the flowdown to all functional elements of the system.

Timing—Near the completion of the system definition stage. It represents the culmination of efforts in system requirements analysis and allocation.

Objectives—The objectives of the SDR are to:

• Demonstrate that the architecture/design is acceptable, that requirements allocation is complete, and that a system that fulfills the mission objectives can be built within the constraints posed
• Ensure that a verification concept and preliminary verification program are defined
• Establish end item acceptance criteria
• Ensure that adequate detailed information exists to support initiation of further development or acquisition efforts.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of SDR product preparation:

• Will the top-level system design selected meet the system requirements, satisfy the mission objectives, and address operational needs?
• Can the top-level system design selected be built within cost constraints and in a timely manner? Are the cost and schedule estimates valid in view of the system requirements and selected architecture?
• Have all the system-level requirements been allocated to one or more lower levels?
• Have the major design issues for the elements and subsystems been identified? Have major risk areas been identified with mitigation plans?
• Have plans to control the development and design process been completed?
• Is a development verification/test plan in place to provide data for making informed design decisions? Is the minimum end item product performance documented in the acceptance criteria?
• Is there sufficient information to support proposal efforts? Is there a complete validated set of requirements with sufficient system definition to support the cost and schedule estimates?

Results of Review—As a result of successful completion of the SDR, the system and its operation are well enough understood to warrant design and acquisition of the end items. Approved specifications for the system and its segments, and preliminary specifications for the design of appropriate functional elements, may be released. A configuration management plan is established to control design and requirement changes. Plans to control and integrate the expanded technical process are in place.

Preliminary Design Review. The Preliminary Design Review (PDR) is not a single review but a number of reviews that includes the system PDR and PDRs conducted on specific Configuration Items (CIs).

Purpose—The PDR demonstrates that the preliminary design meets all system requirements with acceptable risk. It shows that the correct design option has been selected, interfaces have been identified, and verification methods have been satisfactorily described. It also establishes the basis for proceeding with detailed design.

Timing—After completing a full functional implementation.

Objectives—The objectives of the PDR are to:

• Ensure that all system requirements have been allocated, the requirements are complete, and the flowdown is adequate to verify system performance

• Show that the proposed design is expected to meet the functional and performance requirements at the CI level

• Show sufficient maturity in the proposed design approach to proceed to final design

• Show that the design is verifiable and that the risks have been identified, characterized, and mitigated where appropriate.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of PDR product preparation:

• Can the proposed preliminary design be expected to meet all the requirements within the planned cost and schedule?

• Have all external interfaces been identified?

• Have all the system and segment requirements been allocated down to the CI level?

• Are all CI "design-to" specifications complete and ready for formal approval and release?

• Has an acceptable operations concept been developed?

• Does the proposed design satisfy requirements critical to human safety and mission success?

• Do the human factors considerations of the proposed design support the intended end users' ability to operate the system and perform the mission effectively?

• Have the production, verification, operations, and other specialty engineering organizations reviewed the design?

• Is the proposed design producible? Have long lead items been considered?

• Do the specialty engineering program plans and design specifications provide sufficient guidance, constraints, and system requirements for the design engineers to execute the design?

• Is the reliability analysis based on a sound methodology, and does it allow for realistic logistics planning and life-cycle cost analysis?

• Are sufficient project reserves and schedule slack available to proceed further?

Results of Review—As a result of successful completion of the PDR, the "design-to" baseline is approved. It also authorizes the project to proceed to final design.

Critical Design Review. The Critical Design Review (CDR) is not a single review but a number of reviews that start with specific CIs and end with the system CDR.

Purpose—The CDR discloses the complete system design in full detail, ascertains that technical problems and design anomalies have been resolved, and ensures that the design maturity justifies the decision to initiate fabrication/manufacturing, integration, and verification of mission hardware and software.

Timing—Near the completion of the final design stage.

Objectives—The objectives of the CDR are to:

• Ensure that the "build-to" baseline contains de-tailed hardware and software specifications thatcan meet functional and performancerequirements

• Ensure that the design has been satisfactorilyaudited by production, verification, operations,and other specialty engineering organizations

• Ensure that the production processes andcontrols are sufficient to proceed to thefabrication stage

• Establish that planned Quality Assurance (QA)activities will establish perceptive verificationand screening processes for producing a qualityproduct

• Verify that the final design fulfills thespecifications established at PDR.

Criteria for Successful Completion —The fol-lowing items compose a checklist to aid in determiningreadiness of CDR product preparation:

• Can the proposed final design be expected to meet all the requirements within the planned cost and schedule?

• Is the design complete? Are drawings ready to begin production? Is software product definition sufficiently mature to start coding?

• Is the "build-to" baseline sufficiently traceable to assure that no orphan requirements exist?

• Do the design qualification results from software prototyping and engineering item testing, simulation, and analysis support the conclusion that the system will meet requirements?

• Are all internal interfaces completely defined and compatible? Are external interfaces current?

• Are integrated safety analyses complete? Do they show that identified hazards have been controlled, or have those remaining risks which cannot be controlled been waived by the appropriate authority?

• Are production plans in place and reasonable?

• Are there adequate quality checks in the production process?

• Are the logistics support analyses adequate to identify integrated logistics support resource requirements?

• Are comprehensive system integration and verification plans complete?

Results of Review—As a result of successful completion of the CDR, the "build-to" baseline and the production and verification plans are approved. Approved drawings are released and authorized for fabrication. Successful completion also authorizes coding of deliverable software (according to the "build-to" baseline and coding standards presented in the review), and system qualification testing and integration. All open issues should be resolved with closure actions and schedules.

System Acceptance Review.

Purpose—The System Acceptance Review (SAR) examines the system, its end items and documentation, and the test data and analyses that support verification. It also ensures that the system has sufficient technical maturity to authorize its shipment to and installation at the launch site or the intended operational facility.

Timing—Near the completion of the system fabrication and integration stage.

Objectives—The objectives of the SAR are to:

• Establish that the system is ready to be delivered and accepted under DD-250

• Ensure that the system meets acceptance criteria that were established at SDR

• Establish that the system meets requirements and will function properly in the expected operational environments as reflected in the test data, demonstrations, and analyses

• Establish an understanding of the capabilities and operational constraints of the "as-built" system, and that the documentation delivered with the system is complete and current.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of SAR product preparation:

• Are tests and analyses complete? Do they indicate that the system will function properly in the expected operational environments?

• Does the system meet the criteria described in the acceptance plans?

• Is the system ready to be delivered (flight items to the launch site and non-flight items to the intended operational facility for installation)?

• Is the system documentation complete and accurate?

• Is it clear what is being bought?

Results of Review—As a result of successful completion of the SAR, the system is accepted by the buyer, and authorization is given to ship the hardware to the launch site or operational facility, and to install software and hardware for operational use.

Flight Readiness Review.

Purpose—The Flight Readiness Review (FRR) examines tests, demonstrations, analyses, and audits that determine the system's readiness for a safe and successful launch and for subsequent flight operations. It also ensures that all flight and ground hardware, software, personnel, and procedures are operationally ready.

Timing—After the system has been configured for launch.

Objectives—The objectives of the FRR are to:

• Receive certification that flight operations can safely proceed with acceptable risk

• Confirm that the system and support elements are properly configured and ready for launch

• Establish that all interfaces are compatible and function as expected

• Establish that the system state supports a launch "go" decision based on go/no-go criteria.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of FRR product preparation:

• Is the launch vehicle ready for launch?

• Is the space vehicle hardware ready for safe launch and subsequent flight with a high probability of achieving mission success?

• Are all flight and ground software elements ready to support launch and flight operations?

• Are all interfaces checked out and found to be functional?

• Have all open items and waivers been examined and found to be acceptable?

• Are the launch and recovery environmental factors within constraints?

Results of Review—As a result of successful FRR completion, technical and procedural maturity exists for system launch and flight authorization and, in some cases, initiation of system operations.

Operational Readiness Review.

Purpose—The Operational Readiness Review (ORR) examines the actual system characteristics and the procedures used in its operation, and ensures that all flight and ground hardware, software, personnel, procedures, and user documentation reflect the deployed state of the system accurately.

Timing—When the system and its operational and support equipment and personnel are ready to undertake the mission.

Objectives—The objectives of the ORR are to:

• Establish that the system is ready to transition into an operational mode through examination of available ground and flight test results, analyses, and operational demonstrations

• Confirm that the system is operationally and logistically supported in a satisfactory manner considering all modes of operation and support (normal, contingency, and unplanned)

• Establish that operational documentation is complete and represents the system configuration and its planned modes of operation

• Establish that the training function is in place and has demonstrated capability to support all aspects of system maintenance, preparation, operation, and recovery.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of ORR product preparation:

• Are the system hardware, software, personnel, and procedures in place to support operation?

• Have all anomalies detected during prelaunch, launch, and orbital flight been resolved, documented, and incorporated into existing operational support data?

• Are the changes necessary to transition the system from flight test to an operational configuration ready to be made?

• Are all waivers closed?

• Are the resources in place, or financially planned and approved, to support the system during its operational lifetime?

Results of Review—As a result of successful ORR completion, the system is ready to assume normal operations, and any potential hazards due to launch or flight operations have been resolved through use of redundant design or changes in operational procedures.

Decommissioning Review.

Purpose—The Decommissioning Review (DR) confirms that the reasons for decommissioning are valid and appropriate, and examines the current system status and plans for disposal.

Timing—When major items within the system are no longer needed to complete the mission.

Objectives—The objectives of the DR are to:

• Establish that the state of the mission and/or system requires decommissioning/disposal. Possibilities include no further mission need, broken or degraded system elements, or phase-out of existing system assets due to a pending upgrade

• Demonstrate that the plans for decommissioning, disposal, and any transition are correct, current, and appropriate for current environmental constraints and system configuration

• Establish that resources are in place to support disposal plans

• Ensure that archival plans have been completed for essential mission and project data.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of DR product preparation:

• Are reasons for decommissioning/disposal well documented?

• Is the disposal plan completed and compliant with local, state, and federal environmental regulations?

• Does the disposal plan address the disposition of existing hardware, software, facilities, and processes?

• Have disposal risks been addressed?

• Have data archival plans been defined?

• Are sufficient resources available to complete the disposal plan?

• Is a personnel transition plan in place?

Results of Review—A successful DR completion assures that the decommissioning and disposal of system items and processes are appropriate and effective.

4.8.4 Interim Reviews

Interim reviews are driven by programmatic and/or NASA Headquarters milestones that are not necessarily supported by the major reviews. They are often multiple review processes that provide important information for major NASA reviews, programmatic decisions, and commitments. Program/project tailoring dictates the need for and scheduling of these reviews.

Requirements Reviews. Prior to the PDR, the mission and system requirements must be thoroughly analyzed, allocated, and validated to assure that the project can effectively understand and satisfy the mission need. Specifically, these interim requirements reviews confirm whether:

• The proposed project supports a specific NASA program deficiency

• In-house or industry-initiated efforts should be employed in the program realization

• The proposed requirements meet objectives

• The requirements will lead to a reasonable solution

• The conceptual approach and architecture are credibly feasible and affordable.

These issues, as well as requirements ambiguities, are resolved or resolution actions are assigned. Interim requirements reviews alleviate the risk of carrying excess design and analysis burdens too far into the life cycle.

Safety Reviews. Safety reviews are conducted to ensure compliance with NHB 1700.1B, NASA Safety Policy and Requirements Document, and are approved by the program/project manager at the recommendation of the system safety manager. Their purpose, objectives, and general schedule are contained in appropriate safety management plans. Safety reviews address possible hazards associated with system assembly, test, operation, and support. Special consideration is given to possible operational and environmental hazards related to the use of nuclear and other toxic materials. (See Section 6.8.) Early reviews with field center safety personnel should be held to identify and understand any problem areas, and to specify the requirements to control them.

Software Reviews. Software reviews are scheduled by the program/project manager for the purpose of ensuring that software specifications and associated products are well understood by both program/project and user personnel. Throughout the development cycle, the pedigree, maturity, limitations, and schedules of delivered preproduction items, as well as the Computer Software Configuration Items (CSCIs), are of critical importance to the project's engineering, operations, and verification organizations.

Readiness Reviews. Readiness reviews are conducted prior to commencement of major events that commit and expose critical program/project resources to risk. These reviews define the risk environment and address the capability to satisfactorily operate in that environment.

Mission Requirements Review.

Purpose—The Mission Requirements Review (MRR) examines and substantiates top-level requirements analysis products and assesses their readiness for external review.

Timing—Occurs (as required) following the maturation of the mission requirements in the mission definition stage.

Objectives—The objectives of the review are to:

• Confirm that the mission concept satisfies the customer's needs

• Confirm that the mission requirements support identification of external and long-lead support requirements (e.g., DoD, international, facility resources)

• Determine the adequacy of the analysis products to support development of the preliminary Phase B approval package.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of MRR product preparation:

• Are the top-level mission requirements sufficiently defined to describe objectives in measurable parameters? Are assumptions and constraints defined and quantified?

• Is the mission and operations concept adequate to support preliminary program/project documentation development, including the Engineering Master Plan/Schedule, Phase B Project Definition Plan, technology assessment, initial Phase B/C/D resource requirements, and acquisition strategy development? Are evaluation criteria sufficiently defined?

• Are measures of effectiveness established?

• Are development and life-cycle cost estimates realistic?

• Have specific requirements been identified that are high risk/high cost drivers, and have options been described to relieve or mitigate them?

Results of Review—Successful completion of the MRR provides confidence to submit information for the Preliminary Non-Advocate Review and subsequent submission of the Mission Needs Statement for approval.

System Requirements Review.

Purpose—The System Requirements Review (SRR) demonstrates that the product development team understands the mission (i.e., project-level) and system-level requirements.

Timing—Occurs (as required) following the formation of the team.

Objectives—The objectives of the review are to:

• Confirm that the system-level requirements meet the mission objectives

• Confirm that the system-level specifications are sufficient to meet the project objectives.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of SRR product preparation:

• Are the allocations contained in the system specifications sufficient to meet mission objectives?

• Are the evaluation criteria established and realistic?

• Are measures of effectiveness established and realistic?

• Are cost estimates established and realistic?

• Has a system verification concept been identified?

• Are appropriate plans being initiated to support projected system development milestones?

• Have the technology development issues been identified along with approaches to their solution?

Results of Review—Successful completion of the SRR freezes program/project requirements and leads to a formal decision by the cognizant Program Associate Administrator (PAA) to proceed with proposal request preparations for project implementation.

System Safety Review.

Purpose—System Safety Review(s) (SSR) provides early identification of safety hazards, and ensures that measures to eliminate, reduce, or control the risk associated with the hazard are identified and executed in a timely, cost-effective manner.

Timing—Occurs (as needed) in multiple phases of the project cycle.

Objectives—The objectives of the reviews are to:

• Identify those items considered as critical from a safety viewpoint

• Assess alternatives and recommendations to mitigate or eliminate risks and hazards

• Ensure that mitigation/elimination methods can be verified.

Criteria for Successful Completion—The following items comprise a checklist to aid in determining readiness of SSR product preparation:

• Have the risks been identified, characterized, and quantified if needed?

• Have design/procedural options been analyzed, and quantified if needed, to mitigate significant risks?

• Have verification methods been identified for candidate options?

Results of Review—A successful SSR results in the identification of hazards and their causes in the proposed design and operational modes, and specific means of eliminating, reducing, or controlling the hazards. The methods of safety verification will also be identified prior to PDR. At CDR, a safety baseline is developed.

Software Specification Review.

Purpose—The Software Specification Review (SoSR) ensures that the software specification set is sufficiently mature to support preliminary design efforts.

Timing—Occurs shortly after the start of preliminary design.

Objectives—The review objectives are to:

• Verify that all software requirements from the system specification have been allocated to CSCIs and documented in the appropriate software specifications

• Verify that a complete set of functional, performance, interface, and verification requirements for each CSCI has been developed

• Ensure that the software requirement set is both complete and understandable.

Criteria for Successful Completion—The following items comprise a checklist to aid in determining the readiness of SoSR product preparation:

• Are functional CSCI descriptions complete and clear?

• Are the software requirements traceable to the system specification?

• Are CSCI performance requirements complete and unambiguous? Are execution time and storage requirements realistic?

• Is control and data flow between CSCIs defined?

• Are all software-to-software and software-to-hardware interfaces defined?

• Are the mission requirements of the system and associated operational and support environments defined? Are milestone schedules and special delivery requirements negotiated and complete?

• Are the CSCI specifications complete with respect to design constraints, standards, quality assurance, testability, and delivery preparation?

Results of Review—Successful completion of the SoSR results in release of the software specifications based upon their development requirements and guidelines, and the start of preliminary design activities.

Test Readiness Review.

Purpose—The Test Readiness Review (TRR) ensures that the test article hardware/software, test facility, ground support personnel, and test procedures are ready for testing, and for data acquisition, reduction, and control.

Timing—Held prior to the start of a formal test. The TRR establishes a decision point to proceed with planned verification (qualification and/or acceptance) testing of CIs, subsystems, and/or systems.

Objectives—The objectives of the review are to:

• Confirm that in-place test plans meet verification requirements and specifications

• Confirm that sufficient resources are allocated to the test effort

• Examine detailed test procedures for completeness and safety during test operations

• Determine that critical test personnel are test- and safety-certified

• Confirm that test support software is adequate, pertinent, and verified.

Criteria for Successful Completion—The following items comprise a checklist to aid in determining the readiness of TRR product preparation:

• Have the test cases been reviewed and analyzed for expected results? Are results consistent with test plans and objectives?

• Have the test procedures been "dry run"? Do they indicate satisfactory operation?

• Have test personnel received training in test operations and safety procedures? Are they certified?

• Are resources available to adequately support the planned tests as well as contingencies, including failed hardware replacement?

• Has the test support software been demonstrated to handle test configuration assignments, and data acquisition, reduction, control, and archiving?

Results of Review—A successful TRR signifies that test and safety engineers have certified that preparations are complete, and that the project manager has authorized formal test initiation.

Production Readiness Review.

Purpose—The Production Readiness Review (ProRR) ensures that production plans, facilities, and personnel are in place and ready to begin production.

Timing—After design certification and prior to the start of production.

Objectives—The objectives of the review are to:

• Ascertain that all significant production engineering problems encountered during development are resolved

• Ensure that the design documentation is adequate to support manufacturing/fabrication

• Ensure that production plans and preparations are adequate to begin manufacturing/fabrication

• Establish that adequate resources have been allocated to support end item production.

Criteria for Successful Completion—The following items comprise a checklist to aid in determining the readiness of ProRR product preparation:

• Is the design certified? Have incomplete design elements been identified?

• Have risks been identified and characterized, and mitigation efforts defined?

• Has the bill of materials been reviewed and critical parts been identified?

• Have delivery schedules been verified?

• Have alternative sources been identified?

• Have adequate spares been planned and budgeted?

• Are the facilities and tools sufficient for end item production? Are special tools and test equipment specified in proper quantities?

• Are personnel qualified?

• Are drawings certified?

• Is production engineering and planning mature for cost-effective production?

• Are production processes and methods consistent with quality requirements? Are they compliant with occupational safety, environmental, and energy conservation regulations?

Results of Review—A successful ProRR results in certification of production readiness by the project manager and involved specialty engineering organizations. All open issues should be resolved with closure actions and schedules.

Design Certification Review.

Purpose—The Design Certification Review (DCR) ensures that the qualification verifications demonstrated design compliance with functional and performance requirements.

Timing—Follows the system CDR, after qualification tests and all modifications needed to implement qualification-caused corrective actions have been completed.

Objectives—The objectives of the review are to:

• Confirm that the verification results met functional and performance requirements, and that test plans and procedures were executed correctly in the specified environments

• Certify that traceability between test article and production article is correct, including name, identification number, and current listing of all waivers

• Identify any incremental tests required or conducted due to design or requirements changes made since test initiation, and resolve issues regarding their results.

Criteria for Successful Completion—The following items comprise a checklist to aid in determining the readiness of DCR product preparation:

• Are the pedigrees of the test articles directly traceable to the production units?

• Is the verification plan used for this article current and approved?

• Do the test procedures and environments used comply with those specified in the plan?

• Are there any changes in the test article configuration or design resulting from the as-run tests? Do they require design or specification changes, and/or retests?

• Have design and specification documents been audited?

• Do the verification results satisfy functional and performance requirements?

• Does the verification, design, and specification documentation correlate?

Results of Review—As a result of a successful DCR, the end item design is approved for production. All open issues should be resolved with closure actions and schedules.

Functional and Physical Configuration Audits. The Physical Configuration Audit (also known as a configuration inspection) verifies that the physical configuration of the product corresponds to the "build-to" (or "code-to") documentation previously approved at the CDR. The Functional Configuration Audit verifies that the acceptance test results are consistent with the test requirements previously approved at the PDR and CDR. It ensures that the test results indicate that performance requirements were met, and that test plans and procedures were executed correctly. It should also document differences between the test unit and production unit, including any waivers.

4.9 Status Reporting and Assessment

An important part of systems engineering planning is determining what is needed in time, resources, and people to realize the system that meets the desired goals and objectives. Planning functions, such as WBS preparation, scheduling, and fiscal resource requirements planning, were discussed in Sections 4.3 through 4.5. Project management, however, does not end with planning; project managers need visibility into the progress of those plans in order to exercise proper management control. This is the purpose of the status reporting and assessing processes. Status reporting is the process of determining where the project stands in dimensions of interest such as cost, schedule, and technical performance. Assessing is the analytical process that converts the output of the reporting process into a more useful form for the project manager -- namely, what are the future implications of current trends? Lastly, the manager must decide whether that future is acceptable, and what changes, if any, in current plans are needed. Planning, status reporting, and assessing are systems engineering and/or program control functions; decision making is a management one.

These processes together form the feedback loop depicted in Figure 20. This loop takes place on a continual basis throughout the project life cycle.

This loop is applicable at each level of the project hierarchy. Planning data, status reporting data, and assessments flow up the hierarchy with appropriate aggregation at each level; decisions cause actions to be taken down the hierarchy. Managers at each level determine (consistent with policies established at the next higher level of the project hierarchy) how often, and in what form, reporting data and assessments should be made. In establishing these status reporting and assessment requirements, some principles of good practice are:

• Use an agreed-upon set of well-defined status reporting variables

• Report these core variables in a consistent format at all project levels

• Maintain historical data for both trend identification and cross-project analyses

• Encourage a logical process of rolling up status reporting variables (e.g., use the WBS for obligations/costs status reporting and the PBS for mass status reporting)

• Support assessments with quantitative risk measures

• Summarize the condition of the project by using color-coded (red, yellow, and green) alert zones for all core reporting variables.

Regular, periodic (e.g., monthly) tracking of the core status reporting variables is recommended, though some status reporting variables should be tracked more often when there is rapid change or cause for concern. Key reviews, such as PDRs and CDRs, are points at which status reporting measures and their trends should be carefully scrutinized for early warning signs of potential problems. Should there be indications that existing trends, if allowed to continue, will yield an unfavorable outcome, replanning should begin as soon as practical.

This section provides additional information on status reporting and assessment techniques for costs and schedules, technical performance, and systems engineering process metrics.

4.9.1 Cost and Schedule Control Measures

Status reporting and assessment on costs and schedules provides the project manager and system engineer visibility into how well the project is tracking against its planned cost and schedule targets. From a management point of view, achieving these targets is on a par with meeting the technical performance requirements of the system. It is useful to think of cost and schedule status reporting and assessment as measuring the performance of the "system that produces the system."

NHB 9501.2B, Procedures for Contractor Reporting of Correlated Cost and Performance Data, provides specific requirements for cost and schedule status reporting and assessment based on a project's dollar value and period of performance. Generally, the NASA Form 533 series of reports is applicable to NASA cost-type (i.e., cost reimbursement and fixed-price incentive) contracts. However, on larger contracts (>$25M), which require Form 533P, NHB 9501.2B allows contractors to use their own reporting systems in lieu of 533P reporting. The project manager/system engineer may choose to evaluate the completeness and quality of these reporting systems against criteria established by the project manager/system engineer's own field center, or against the DoD's Cost/Schedule Control Systems Criteria (C/SCSC). The latter are widely accepted by industry and government, and a variety of tools exist for their implementation.

Assessment Methods. The traditional method of cost and schedule control is to compare baselined cost and schedule plans against their actual values. In program control terminology, a difference between actual performance and planned costs or schedule status is called a variance.

Figure 21 illustrates two kinds of variances and some related concepts. A properly constructed Work Breakdown Structure (WBS) divides the project work into discrete tasks and products. Associated with each task and product (at any level in the WBS) is a schedule and a budgeted (i.e., planned) cost. The Budgeted Cost of Work Scheduled, BCWS(t), for any set of WBS elements is the budgeted cost of all work on tasks and products in those elements scheduled to be completed by time t. The Budgeted Cost of Work Performed, BCWP(t), is a statistic representing actual performance. BCWP(t), also called Earned Value, EV(t), is the budgeted cost for tasks and products that have actually been produced (completed or in progress) at time t in the schedule for those WBS elements. The difference, BCWP(t) - BCWS(t), is called the schedule variance at time t.

The Actual Cost of Work Performed, ACWP(t), is a third statistic representing the funds that have been expended up to time t on those WBS elements. The difference between the budgeted and actual costs, BCWP(t) - ACWP(t), is called the cost variance at time t. Such variances may indicate that the cost Estimate at Completion, EAC(t), of the project is different from the budgeted cost. These types of variances enable a program analyst to estimate the EAC at any point in the project life cycle. (See sidebar on computing EAC.)
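As a brief illustration of these definitions (not part of the handbook's formal guidance), the sketch below computes schedule and cost variances for a few WBS elements; the element names and dollar figures are hypothetical.

    # Illustrative Earned Value variance calculation (hypothetical numbers, $K).
    # For a set of WBS elements at reporting time t:
    #   schedule variance SV(t) = BCWP(t) - BCWS(t)
    #   cost variance     CV(t) = BCWP(t) - ACWP(t)

    wbs_elements = {
        # element: (BCWS, BCWP, ACWP) at the current reporting date
        "1.1 Structure": (450.0, 420.0, 480.0),
        "1.2 Power":     (300.0, 310.0, 305.0),
        "1.3 Avionics":  (600.0, 550.0, 640.0),
    }

    for name, (bcws, bcwp, acwp) in wbs_elements.items():
        sv = bcwp - bcws  # negative: behind schedule
        cv = bcwp - acwp  # negative: over cost
        print(f"{name}: SV = {sv:+.1f}, CV = {cv:+.1f}")

    # Roll up across the listed elements, as reporting data would flow up the hierarchy.
    total_sv = sum(bcwp - bcws for bcws, bcwp, _ in wbs_elements.values())
    total_cv = sum(bcwp - acwp for _, bcwp, acwp in wbs_elements.values())
    print(f"Rolled up: SV = {total_sv:+.1f}, CV = {total_cv:+.1f}")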

If the cost and schedule baselines and the technical scope of the work are not fully integrated, then cost and schedule variances can still be calculated, but the incomplete linkage between cost data and schedule data makes it very difficult (or impossible) to estimate the current cost EAC of the project.

Control of Variances and the Role of the System Engineer. When negative variances are large enough to represent a significant erosion of reserves, then management attention is needed either to correct the variance or to replan the project. It is important to establish levels of variance at which action is to be taken. These levels are generally lower when cost and schedule baselines do not support Earned Value calculations.

The first action taken to control an excessive negative variance is to have the cognizant manager or system engineer investigate the problem, determine its cause, and recommend a solution. There are a number of possible reasons why variance problems occur:

• A receivable was late or was unsatisfactory for some reason

• A task is technically very difficult and requires more resources than originally planned

• Unforeseeable (and unlikely to repeat) events occurred, such as illness, fire, or other calamity.

Computing the Estimate at Completion

EAC can be estimated at any point in the project. The appropriate formula depends upon the reasons associated with any variances that may exist. If a variance exists due to a one-time event, such as an accident, then

    EAC = BUDGET + ACWP - BCWP

where BUDGET is the original planned cost at completion. If a variance exists for systemic reasons, such as a general underestimate of schedule durations, or a steady redefinition of requirements, then the variance is assumed to continue to grow over time, and the equation is:

    EAC = BUDGET x (ACWP / BCWP).

If there is a growing number of liens, action items, or significant problems that will increase the difficulty of future work, the EAC might grow at a greater rate than estimated by the above equation. Such factors could be addressed using risk management methods described in Section 4.6.

In a large project, a good EAC is the result of a variance analysis that may use a combination of these estimation methods on different parts of the WBS. A rote formula should not be used as a substitute for understanding the underlying causes of variances.
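To make the sidebar's two formulas concrete, here is a small sketch; the dollar figures and the classification of the variance are hypothetical.

    # EAC under the sidebar's two assumptions (all values in $K, hypothetical).
    budget = 10_000.0   # BUDGET: original planned cost at completion
    acwp = 4_800.0      # actual cost of work performed to date
    bcwp = 4_000.0      # budgeted cost of work performed (earned value)

    # One-time variance (e.g., an accident): the overrun to date is carried
    # forward unchanged.
    eac_one_time = budget + acwp - bcwp

    # Systemic variance (e.g., chronic underestimates): the overrun is assumed
    # to scale with the remaining work.
    eac_systemic = budget * (acwp / bcwp)

    print(f"EAC (one-time event): {eac_one_time:,.0f}")   # 10,800
    print(f"EAC (systemic):       {eac_systemic:,.0f}")   # 12,000

As the sidebar cautions, the right choice between these (or a blend across WBS elements) depends on understanding why the variance exists, not on applying a formula mechanically.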

Although the identification of variances is largely a program control function, there is an important systems engineering role in their control. That role arises because the correct assessment of why a negative variance is occurring greatly increases the chances of successful control actions. This assessment often requires an understanding of the cost, schedule, and technical situation that can only be provided by the system engineer.

4.9.2 Technical Performance Measures

Status reporting and assessment of the system's technical performance measures (TPMs) complements cost and schedule control. By tracking the system's TPMs, the project manager gains visibility into whether the delivered system will actually meet its performance specifications (requirements). Beyond that, tracking TPMs ties together a number of basic systems engineering activities—that is, a TPM tracking program forges a relationship among systems analysis, functional and performance requirements definition, and verification and validation activities:

• Systems analysis activities identify the key performance or technical attributes that determine system effectiveness; trade studies performed in systems analysis help quantify the system's performance requirements.

• Functional and performance requirements definition activities help identify verification and validation requirements.

• Verification and validation activities result in quantitative evaluation of TPMs.

• "Out-of-bounds" TPMs are signals to replan fiscal, schedule, and people resources; sometimes new systems analysis activities need to be initiated.

Tracking TPMs can begin as soon as a baseline design has been established, which can occur early in Phase B. A TPM tracking program should begin not later than the start of Phase C. Data to support the full set of selected TPMs may, however, not be available until later in the project life cycle.

Selecting TPMs. In general, TPMs can be generic (attributes that are meaningful to each Product Breakdown Structure (PBS) element, like mass or reliability) or unique (attributes that are meaningful only to specific PBS elements). The system engineer needs to decide which generic and unique TPMs are worth tracking at each level of the PBS. The system engineer should track the measure of system effectiveness (when the project maintains such a measure) and the principal performance or technical attributes that determine it, as top-level TPMs. At lower levels of the PBS, TPMs worth tracking can be identified through the functional and performance requirements levied on each individual system, segment, etc. (See sidebar on high-level TPMs.)

In selecting TPMs, the system engineer should focus on those that can be objectively measured during the project life cycle. This measurement can be done directly by testing, or indirectly by a combination of testing and analysis. Analyses are often the only means available to determine some high-level TPMs such as system reliability, but the data used in such analyses should be based on demonstrated values to the maximum practical extent. These analyses can be performed using the same measurement methods or models used during trade studies. In TPM tracking, however, instead of using estimated (or desired) performance or technical attributes, the models are exercised using demonstrated values.

Examples of High-Level TPMs for Planetary Spacecraft and Launch Vehicles

High-level technical performance measures (TPMs) for planetary spacecraft include:

• End-of-mission (EOM) dry mass

• Injected mass (includes EOM dry mass, baseline mission plus reserve propellant, other consumables, and upper stage adaptor mass)

• Consumables at EOM

• Power demand (relative to supply)

• Onboard data processing memory demand

• Onboard data processing throughput time

• Onboard data bus capacity

• Total pointing error.

Mass and power demands by spacecraft subsystems and science instruments may be tracked separately as well. For launch vehicles, high-level TPMs include:

• Total vehicle mass at launch

• Payload mass (at nominal altitude or orbit)

• Payload volume

• Injection accuracy

• Launch reliability

• In-flight reliability

• For reusable vehicles, percent of value recovered

• For expendable vehicles, unit production cost at the nth unit. (See sidebar on Learning Curve Theory.)
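The Learning Curve Theory sidebar referenced above appears elsewhere in the handbook and is not reproduced here. For orientation only, a minimal sketch of the classical unit learning curve, under the assumption of a hypothetical 90 percent slope and first-unit cost:

    import math

    def unit_cost(first_unit_cost: float, n: int, slope: float = 0.90) -> float:
        # Classical unit theory: cost of the nth unit falls to `slope` of its
        # prior value each time cumulative quantity doubles.
        b = math.log(slope, 2)  # exponent is negative for slope < 1
        return first_unit_cost * n ** b

    # Hypothetical first-unit cost of $10M on a 90 percent curve.
    for n in (1, 2, 4, 8):
        print(f"unit {n}: ${unit_cost(10.0, n):.2f}M")  # 10.00, 9.00, 8.10, 7.29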

As the project life cycle proceeds through Phases C and D, the measurement of TPMs should become increasingly more accurate because of the availability of more "actual" data about the system.

Lastly, the system engineer should select those TPMs that must fall within well-defined (quantitative) limits for reasons of system effectiveness or mission feasibility. Usually these limits represent either a firm upper or lower bound constraint. A typical example of such a TPM for a spacecraft is its injected mass, which must not exceed the capability of the selected launch vehicle. Tracking injected mass as a high-level TPM is meant to ensure that this does not happen.

Assessment Methods. The traditional method of assessing a TPM is to establish a time-phased planned profile for it, and then to compare the demonstrated value against that profile. The planned profile represents a nominal "trajectory" for that TPM taking into account a number of factors. These factors include the technological maturity of the system, the planned schedule of tests and demonstrations, and any historical experience with similar or related systems. As an example, spacecraft dry mass tends to grow during Phases C and D by as much as 25 to 30 percent. A planned profile for spacecraft dry mass may try to compensate for this growth with a lower initial value. The final value in the planned profile usually either intersects or is asymptotic to an allocated requirement (or specification). The planned profile method is the technical performance measurement counterpart to the Earned Value method for cost and schedule control described earlier.

A closely related method of assessing a TPM relies on establishing a time-phased margin requirement for it, and comparing the actual margin against that requirement. The margin is generally defined as the difference between a TPM's demonstrated value and its allocated requirement. The margin requirement may be expressed as a percentage of the allocated requirement. The margin requirement generally declines through Phases C and D, reaching or approaching zero at their completion.
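A minimal sketch of this margin check, assuming a hypothetical dry-mass TPM with a not-to-exceed allocation and an illustrative declining margin-requirement profile (none of these numbers come from the handbook):

    # Margin management for a spacecraft dry-mass TPM (hypothetical values, kg).
    # For a not-to-exceed TPM: margin = allocation - demonstrated value.
    # The margin requirement is a fraction of the allocation that declines
    # as Phases C and D proceed.

    allocation = 1000.0  # allocated dry-mass requirement

    # (months into Phase C, demonstrated mass, required margin fraction)
    reports = [
        (3, 880.0, 0.10),
        (9, 930.0, 0.07),
        (15, 965.0, 0.04),
        (21, 985.0, 0.02),
    ]

    for t, demonstrated, req_frac in reports:
        margin = allocation - demonstrated
        required = req_frac * allocation
        status = "OK" if margin >= required else "ALERT: replanning may be needed"
        print(f"t={t:2d} mo: margin={margin:6.1f} kg, required={required:5.1f} kg -> {status}")

In this example the margin dips below the requirement at month 15, which is exactly the management-by-exception signal described in the next paragraph.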

Depending on which method is chosen, the system engineer's role is to propose reasonable planned profiles or margin requirements for approval by the cognizant manager. The value of either of these methods is that they allow management by exception -- that is, only deviations from planned profiles or margins below requirements signal potential future problems requiring replanning. If this occurs, then new cost, schedule, and/or technical changes should be proposed. Technical changes may imply some new planned profiles. This is illustrated for a hypothetical TPM in Figure 22(a). In this example, a significant demonstrated variance (i.e., unanticipated growth) in the TPM during design and development of the system resulted in replanning at time t. The replanning took the form of an increase in the allowed final value of the TPM (the "allocation").

An Example of the Risk Management Method for Tracking Spacecraft Mass

During Phases C and D, a spacecraft's injected mass can be considered an uncertain quantity. Estimates of each subsystem's and each instrument's mass are, however, made periodically by the design engineers. These estimates change and become more accurate as actual parts and components are built and integrated into subsystems and instruments. Injected mass can also change during Phases C and D as the quantity of propellant is fine-tuned to meet the mission design requirements. Thus at each point during development, the spacecraft's injected mass is better represented as a probability distribution rather than as a single point.

The mechanics of obtaining a probability distribution for injected mass typically involve making estimates of three points -- the lower and upper bounds and the most likely injected mass value. These three values can be combined into parameters that completely define a probability distribution like the one shown in the figure below.

The launch vehicle's "guaranteed" payload capability, designated the "LV Specification," is shown as a bold vertical line. The area under the probability curve to the left of the bold vertical line represents the probability that the spacecraft's injected mass will be less than or equal to the launch vehicle's payload capability. If injected mass is a TPM being tracked using the risk management method, this probability could be plotted in a display similar to Figure 22(c).

If this probability were nearly one, then the project manager might consider adding more objectives to the mission in order to take advantage of the "large margin" that appears to exist. In the above figure, however, the probability is significantly less than one. Here, the project manager might consider descoping the project, for example by removing an instrument or otherwise changing mission objectives. The project manager could also solve the problem by requesting a larger launch vehicle!
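The sidebar's three-point mechanics can be sketched as follows, assuming (as one common choice; the sidebar does not prescribe a particular distribution) that the three estimates define a triangular distribution. All numbers are hypothetical:

    # Probability that injected mass <= LV payload capability, assuming the
    # three-point estimate (lower bound, most likely, upper bound) defines a
    # triangular distribution. All values in kg, hypothetical.

    def triangular_cdf(x: float, low: float, mode: float, high: float) -> float:
        """P(X <= x) for a triangular distribution on [low, high] with given mode."""
        if x <= low:
            return 0.0
        if x >= high:
            return 1.0
        if x <= mode:
            return (x - low) ** 2 / ((high - low) * (mode - low))
        return 1.0 - (high - x) ** 2 / ((high - low) * (high - mode))

    low, mode, high = 2700.0, 2950.0, 3300.0  # engineers' three-point mass estimate
    lv_spec = 3100.0                          # launch vehicle "guaranteed" capability

    p = triangular_cdf(lv_spec, low, mode, high)
    print(f"P(injected mass <= LV spec) = {p:.2f}")  # about 0.81 for these numbers

Tracked over time, this probability is the quantity that would be plotted against the alert zones in a display like Figure 22(c).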

cation"). A new planned profile was then established totrack the TPM over the remaining time of the TPMtracking program.

The margin management method of assessment is illustrated for the same example in Figure 22(b). The replanning at time t occurred when the TPM fell significantly below the margin requirement. The new higher allocation for the TPM resulted in a higher margin requirement, but it also immediately placed the margin in excess of that requirement.

Both of these methods recognize that the final value of the TPM being tracked is uncertain throughout most of Phases C and D. The margin management method attempts to deal with this implicitly by establishing a margin requirement that reduces the chances of the final value exceeding its allocation to a low number, for example five percent or less. A third method of reporting and assessing deals with this risk explicitly. The risk management method is illustrated for the same example in Figure 22(c). The replanning at time t occurred when the probability of the final TPM value being less than the allocation fell precipitously into the red alert zone. The new higher allocation for the TPM resulted in a substantial improvement in that probability.

The risk management method requires an estimate of the probability distribution for the final TPM value. (See sidebar on tracking spacecraft mass.) Early in the TPM tracking program, when the demonstrated value is based on indirect means of estimation, this distribution typically has a larger statistical variance than later, when it is based on measured data, such as a test result. When a TPM stays along its planned profile (or equivalently, when its margin remains above the corresponding margin requirement), the narrowing of the statistical distribution should allow the TPM to remain in the green alert zone (in Figure 22(c)) despite its growth. The three methods represent different ways to assess TPMs and communicate that information to management, but whichever is chosen, the pattern of success or failure should be the same for all three.

Relationship of TPM Tracking Program to the SEMP. The SEMP is the usual document for describing the project's TPM tracking program. This description should include a master list of those TPMs to be tracked, and the measurement and assessment methods to be employed. If analytical methods and models are used to measure certain high-level TPMs, then these need to be identified. The reporting frequency and timing of assessments should be specified as well. In determining these, the system engineer must balance the project's needs for accurate, timely, and effective TPM tracking against the cost of the TPM tracking program. The TPM tracking program plan, which elaborates on the SEMP, should specify each TPM's allocation, time-phased planned profile or margin requirement, and alert zones, as appropriate to the selected assessment method.

4.9.3 Systems Engineering Process Metrics

Status reporting and assessment of systems engineering process metrics provides additional visibility into the performance of the "system that produces the system." As such, these metrics supplement the cost and schedule control measures discussed in Section 4.9.1.

Systems engineering process metrics try to quantify the effectiveness and productivity of the systems engineering process and organization. Within a single project, tracking these metrics allows the system engineer to better understand the health and progress of that project. Across projects (and over time), the tracking of systems engineering process metrics allows for better estimation of the cost and time of performing systems engineering functions. It also allows the systems engineering organization to demonstrate its commitment to the TQM principle of continuous improvement.

Selecting Systems Engineering Process Metrics. Generally, systems engineering process metrics fall into three categories -- those that measure the progress of the systems engineering effort, those that measure the quality of that process, and those that measure its productivity. Different levels of systems engineering management are generally interested in different metrics. For example, a project manager or lead system engineer may focus on metrics dealing with systems engineering staffing, project risk management progress, and major trade study progress. A subsystem system engineer may focus on subsystem requirements and interface definition progress and verification procedures progress. It is useful for each system engineer to focus on just a few process metrics. Which metrics should be tracked depends on the system engineer's role in the total systems engineering effort. The systems engineering process metrics worth tracking also change as the project moves through its life cycle.

Collecting and maintaining data on the systems engineering process is not without cost. Status reporting and assessment of systems engineering process metrics divert time and effort from the process itself. The system engineer must balance the value of each systems engineering process metric against its collection cost. The value of these metrics arises from the insights they provide into the process that cannot be obtained from cost and schedule control measures alone. Over time, these metrics can also be a source of hard productivity data, which are invaluable in demonstrating the potential returns from investment in systems engineering tools and training.

Examples and Assessment Methods. Table 2 lists some systems engineering process metrics to be considered. This list is not intended to be exhaustive. Because some of these metrics allow for different interpretations, each NASA field center needs to define them in a common-sense way that fits its own processes. For example, each field center needs to determine what is meant by a completed versus an approved requirement, or whether these terms are even relevant. As part of this definition, it is important to recognize that not all requirements, for example, need be lumped together. It may be more useful to track the same metric separately for each of several different types of requirements.

Quality-related metrics should serve to indicate when a part of the systems engineering process is overloaded and/or breaking down. These metrics can be defined and tracked in several different ways. For example, requirements volatility can be quantified as the number of newly identified requirements, or as the number of changes to already-approved requirements. As another example, Engineering Change Request (ECR) processing could be tracked by comparing cumulative ECRs opened versus cumulative ECRs closed, or by plotting the age profile of open ECRs, or by examining the number of ECRs opened last month versus the total number open. The system engineer should apply his/her own judgment in picking the status reporting and assessment method.
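As an informal illustration, two of the ECR views mentioned above (cumulative opened versus closed, and the age profile of open ECRs) might be computed as follows; the ECR records and dates are hypothetical:

    from datetime import date

    # Hypothetical ECR log: (ecr_id, opened, closed or None if still open).
    ecr_log = [
        ("ECR-001", date(1994, 11, 2), date(1995, 1, 15)),
        ("ECR-002", date(1995, 1, 20), None),
        ("ECR-003", date(1995, 2, 8), None),
        ("ECR-004", date(1995, 3, 1), date(1995, 3, 20)),
        ("ECR-005", date(1995, 4, 12), None),
    ]

    today = date(1995, 6, 1)

    opened = len(ecr_log)
    closed = sum(1 for _, _, c in ecr_log if c is not None)
    print(f"Cumulative opened: {opened}, closed: {closed}, open: {opened - closed}")

    # Age profile of open ECRs, binned by age in days.
    bins = {"0-30": 0, "31-90": 0, ">90": 0}
    for _, opened_on, closed_on in ecr_log:
        if closed_on is None:
            age = (today - opened_on).days
            key = "0-30" if age <= 30 else ("31-90" if age <= 90 else ">90")
            bins[key] += 1
    print("Open ECR age profile:", bins)

A growing ">90" bin, like a widening gap between cumulative opened and closed counts, would be the kind of overload signal this paragraph describes.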

Productivity-related metrics provide an indication of systems engineering output per unit of input. Although more sophisticated measures of input exist, the most common is the number of systems engineering hours dedicated to a particular function or activity. Because not all systems engineering hours cost the same, an appropriate weighting scheme should be developed to ensure comparability of hours across systems engineering personnel.

Displaying schedule-related metrics can be accomplished in a table or graph of planned quantities vs. actuals. With quality- and productivity-related metrics, trends are generally more important than isolated snapshots. The most useful kind of assessment method allows comparisons of the trend on a current project with that for a successfully completed project of the same type. The latter provides a benchmark against which the system engineer can judge his/her own efforts.

5 Systems Analysis and Modeling Issues

The role of systems analysis and modeling is to produce rigorous and consistent evaluations so as to foster better decisions in the systems engineering process. By helping to progress the system design toward an optimum, systems analysis and modeling contribute to the objective of systems engineering.

Systems Analysis

Gene Fisher defines systems analysis as "inquiry to assist decision makers in choosing preferred future courses of action by (1) systematically examining and reexamining the relevant objectives, and alternative policies and strategies for achieving them; and (2) comparing quantitatively where possible the economic costs, effectiveness, and risks of the alternatives."

This is accomplished primarily by performing trade studies of plausible alternatives. The purpose of this chapter is to describe the trade study process, the methods used in trade studies to quantify system effectiveness and cost, and the pitfalls to avoid.

5.1 The Trade Study Process

The trade study process is a critical part of the systems engineering spiral described in Chapter 2. This section discusses the steps of the process in greater detail. Trade studies help to define the emerging system at each level of resolution. One key message of this section is that to be effective, the process requires the participation of many skills and a unity of effort to move toward an optimum system design.

Figure 23 shows the trade study process in simplest terms, beginning with the step of defining the system's goals and objectives, and identifying the constraints it must meet. In the early phases of the project life cycle, the goals, objectives, and constraints are usually stated in general operational terms. In later phases of the project life cycle, when the architecture and, perhaps, some aspects of the design have already been decided, the goals and objectives may be stated as performance requirements that a segment or subsystem must meet.

At each level of system resolution, the system engineer needs to understand the full implications of the goals, objectives, and constraints in order to formulate an appropriate system solution. This step is accomplished by performing a functional analysis. Functional analysis is the systematic process of identifying, describing, and relating the functions a system must perform in order to fulfill its goals and objectives. In the early phases of the project life cycle, the functional analysis deals with the top-level functions that need to be performed by the system, where they need to be performed, how often, under what operational concept and environmental conditions, and so on. The functional analysis needs only to proceed to a level of decomposition that enables the trade study to define the system architecture. In later phases of the project life cycle, the functional analysis proceeds to whatever level of decomposition is needed to fully define the system design and interfaces. (See sidebar on functional analysis techniques.)

Closely related to defining the goals and objectives, and performing a functional analysis, is the step of defining the measures and measurement methods for system effectiveness (when this is practical), system performance or technical attributes, and system cost. (These variables are collectively called outcome variables, in keeping with the discussion in Section 2.3. Some systems engineering books refer to these variables as decision criteria, but this term should not be confused with the selection rule, described below. Sections 5.2 and 5.3 discuss the concepts of system cost and system effectiveness, respectively, in greater detail.) This step begins the analytical portion of the trade study process, since it suggests the involvement of those familiar with quantitative methods.

For each measure, it is important to address the question of how that quantitative measure will be computed—that is, which measurement method is to be used. One reason for doing this is that this step then explicitly identifies those variables that are important in meeting the system's goals and objectives.

Evaluating the likely outcomes of various alternatives in terms of system effectiveness, the underlying performance or technical attributes, and cost before actual fabrication and/or programming usually requires the use of a mathematical model or series of models of the system. So a second reason for specifying the measurement methods is that the necessary models can be identified.

Sometimes these models are already available from previous projects of a similar nature; other times, they need to be developed. In the latter case, defining the measurement methods should trigger the necessary system modeling activities. Since the development of new models can take a considerable amount of time and effort, early identification is needed to ensure they will be ready for formal use in trade studies.

Defining the selection rule is the step of explicitly determining how the outcome variables will be used to make a (tentative) selection of the preferred alternative. As an example, a selection rule may be to choose the alternative with the highest estimated system effectiveness that costs less than x dollars (with some given probability), meets safety requirements, and possibly meets other political or schedule constraints.

Functional Analysis Techniques

Functional analysis is the process of identifying, describing, and relating the functions a system must perform in order to fulfill its goals and objectives. Functional analysis is logically structured as a top-down hierarchical decomposition of those functions, and serves several important roles in the systems engineering process:

• To draw out all the requirements the system must meet

• To help identify measures for system effectiveness and its underlying performance or technical attributes at all levels

• To weed out from further consideration in trade studies those alternatives that cannot meet the system's goals and objectives

• To provide insights to the system-level (and below) model builders, whose mathematical models will be used in trade studies to evaluate the alternatives.

Several techniques are available to do functional analysis. The primary functional analysis technique is the Functional Flow Block Diagram (FFBD). These diagrams show the network of actions that lead to the fulfillment of a function. Although the FFBD network shows the logical sequence of "what" must happen, it does not ascribe a time duration to functions or between functions. To understand time-critical requirements, a Time Line Analysis (TLA) is used. A TLA can be applied to such diverse operational functions as spacecraft command sequencing and launch vehicle processing. A third technique is the N2 diagram, which is a matrix display of functional interactions, or data flows, at a particular hierarchical level. Appendix B.7 provides further discussion and examples of each of these techniques.


Defining the selection rule is essentially deciding how the selection is to be made. This step is independent from the actual measurement of system effectiveness, system performance or technical attributes, and system cost.

Many different selection rules are possible. The selection rule in a particular trade study may depend on the context in which the trade study is being conducted—in particular, what level of system design resolution is being addressed. At each level of the system design, the selection rule generally should be chosen only after some guidance from the next higher level. The selection rule for trade studies at lower levels of the system design should be in consonance with the higher level selection rule.

Defining plausible alternatives is the step of creating some alternatives that can potentially achieve the goals and objectives of the system. This step depends on understanding (to an appropriately detailed level) the system's functional requirements and operational concept. Running an alternative through an operational time line or reference mission is a useful way of determining whether it can plausibly fulfill these requirements. (Sometimes it is necessary to create separate behavioral models to determine how the system reacts when a certain stimulus or control is applied, or a certain environment is encountered. This provides insights into whether it can plausibly fulfill time-critical and safety requirements.) Defining plausible alternatives also requires an understanding of the technologies available, or potentially available, at the time the system is needed. Each plausible alternative should be documented qualitatively in a description sheet. The format of the description sheet should, at a minimum, clarify the allocation of required system functions to that alternative's lower-level architectural or design components (e.g., subsystems).

One way to represent the trade study alternatives under consideration is by a trade tree. During Phase A trade studies, the trade tree should contain a number of alternative high-level system architectures to avoid a premature focus on a single one. As the systems engineering process proceeds, branches of the trade tree containing unattractive alternatives will be "pruned," and greater detail in terms of system design will be added to those branches that merit further attention. The process of pruning unattractive early alternatives is sometimes known as doing "killer trades." (See sidebar on trade trees.)

Given a set of plausible alternatives, the next step is to collect data on each to support the evaluation of the measures by the selected measurement methods. If models are to be used to calculate some of these measures, then obtaining the model inputs provides some impetus and direction to the data collection activity. By providing data, engineers in such disciplines as reliability, maintainability, producibility, integrated logistics, software, testing, operations, and costing have an important supporting role in trade studies. The data collection activity, however, should be orchestrated by the system engineer. The results of this step should be a quantitative description of each alternative to accompany the qualitative one.

Test results on each alternative can be especially useful. Early in the systems engineering process, performance and technical attributes are generally uncertain and must be estimated. Data from breadboard and brassboard testbeds can provide additional confidence that the range of values used as model inputs is correct. Such confidence is also enhanced by drawing on data collected on related previously developed systems.

The next step in the trade study process is to quantify the outcome variables by computing estimates of system effectiveness, its underlying system performance or technical attributes, and system cost. If the needed data have been collected, and the measurement methods (for example, models) are in place, then this step is, in theory, mechanical. In practice, considerable skill is often needed to get meaningful results.

In an ideal world, all input values would be precisely known, and models would perfectly predict outcome variables. This not being the case, the system engineer should supplement point estimates of the outcome variables for each alternative with computed or estimated uncertainty ranges. For each uncertain key input, a range of values should be estimated. Using this range of input values, the sensitivity of the outcome variables can be gauged, and their uncertainty ranges calculated. The system engineer may be able to obtain meaningful probability distributions for the outcome variables using Monte Carlo simulation (see Section 5.4.2), but when this is not feasible, the system engineer must be content with only ranges and sensitivities.
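
A minimal sketch of this idea in Python (not from the handbook; the cost model and the input ranges are hypothetical) propagates the uncertainty in two key inputs through a simple outcome model by Monte Carlo sampling:

    import random

    def system_cost(mass_kg, cost_per_kg):
        # Hypothetical outcome model: launch cost plus a fixed integration cost
        return mass_kg * cost_per_kg + 25e6

    random.seed(1)
    samples = []
    for _ in range(10000):
        # Each uncertain key input is drawn from its estimated range
        mass = random.uniform(900.0, 1300.0)       # kg
        rate = random.uniform(18000.0, 30000.0)    # dollars per kg
        samples.append(system_cost(mass, rate))

    samples.sort()
    low, high = samples[500], samples[9499]        # approximate 90 percent range
    print(f"Median cost estimate: ${samples[5000]:,.0f}")
    print(f"90 percent range: ${low:,.0f} to ${high:,.0f}")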

This essentially completes the analytical portion of the trade study process. The next steps can be described as the judgmental portion. Combining the selection rule with the results of the analytical activity should enable the system engineer to array the alternatives from most preferred to least, in essence making a tentative selection.

This tentative selection should not be accepted blindly. In most trade studies, there is a need to subject the results to a "reality check" by considering a number of questions. Have the goals, objectives, and constraints truly been met? Is the tentative selection heavily dependent on a particular set of input values to the measurement methods, or does it hold up under a range of reasonable input values? (In the latter case, the tentative selection is said to be robust.) Are there sufficient data to back up the tentative selection? Are the measurement methods sufficiently discriminating to be sure that the tentative selection is really better than other alternatives? Have the subjective aspects of the problem been fully addressed?

If the answers support the tentative selection, then the system engineer can have greater confidence in a recommendation to proceed to a further resolution of the system design, or to the implementation of that design. The estimates of system effectiveness, its underlying performance or technical attributes, and system cost generated during the trade study process serve as inputs to that further resolution. The analytical portion of the trade study process often provides the means to quantify the performance or technical (and cost) attributes that the system's lower levels must meet. These can be formalized as performance requirements.

If the reality check is not met, the trade study process returns to one or more earlier steps. This iteration may result in a change in the goals, objectives, and constraints, a new alternative, or a change in the selection rule, based on the new information generated during the trade study. The reality check may, at times, lead instead to a decision to first improve the measures and measurement methods (e.g., models) used in evaluating the alternatives, and then to repeat the analytical portion of the trade study process.

5.1.1 Controlling the Trade Study Process

There are a number of mechanisms for controlling the trade study process. The most important one is the Systems Engineering Management Plan (SEMP). The SEMP specifies the major trade studies that are to be performed during each phase of the project life cycle. It should also spell out the general contents of trade study reports, which form part of the decision support packages (i.e., documentation submitted in conjunction with formal reviews and change requests).

An Example of a Trade Tree for a Mars Rover

The figure below shows part of a trade tree for a robotic Mars rover system, whose goal is to find a suitable manned landing site. Each layer represents some aspect of the system that needs to be treated in a trade study to determine the best alternative. Some alternatives have been eliminated a priori because of technical feasibility, launch vehicle constraints, etc. The total number of alternatives is given by the number of end points of the tree. Even with just a few layers, the number of alternatives can increase quickly. (This tree has already been pruned to eliminate low-autonomy, large rovers.) As the systems engineering process proceeds, branches of the tree with unfavorable trade study outcomes are discarded. The remaining branches are further developed by identifying more detailed trade studies that need to be made. A whole family of (implicit) alternatives can be represented in a trade tree by a continuous variable. In this example, rover speed or range might be so represented. By treating a variable this way, mathematical optimization techniques can be applied. Note that a trade tree is, in essence, a decision tree without chance nodes. (See the sidebar on decision trees.)
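
To make the growth in end points concrete, this small sketch (not part of the handbook; the layers and options are hypothetical stand-ins for a tree like the one in the figure) enumerates the alternatives implied by a few independent layers and prunes one family:

    from itertools import product

    # Hypothetical trade tree layers; each end point of the tree is one
    # combination of choices, so the count is the product of the layer sizes.
    layers = {
        "autonomy": ["low", "high"],
        "size":     ["small", "large"],
        "power":    ["solar", "RTG"],
        "mobility": ["wheels", "legs"],
    }

    alternatives = [dict(zip(layers, combo)) for combo in product(*layers.values())]

    # A "killer trade" prunes a whole family of alternatives at once
    pruned = [a for a in alternatives
              if not (a["autonomy"] == "low" and a["size"] == "large")]

    print(len(alternatives), "end points before pruning;", len(pruned), "after")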


Trade Study Reports

Trade study reports should be prepared for each trade study. At a minimum, each trade study report should identify:

• The system issue under analysis

• System goals and objectives (or requirements, as appropriate to the level of resolution), and constraints

• The measures and measurement methods (models) used

• All data sources used

• The alternatives chosen for analysis

• The computational results, including uncertainty ranges and sensitivity analyses performed

• The selection rule used

• The recommended alternative.

Trade study reports should be maintained as part of the system archives so as to ensure traceability of decisions made through the systems engineering process. Using a generally consistent format for these reports also makes it easier to review and assimilate them into the formal change control process.


A second mechanism for controlling the trade study process is the selection of the study team leaders and members. Because doing trade studies is part art and part science, the composition and experience of the teams is an important determinant of the study's ultimate usefulness. A useful technique to avoid premature focus on a specific technical design is to include in the study team individuals with differing technology backgrounds.

Another mechanism is limiting the number of alternatives that are to be carried through the study. This number is usually determined by the time and resources available to do the study, because the work required in defining additional alternatives and obtaining the necessary data on them can be considerable. However, focusing on too few or too similar alternatives defeats the purpose of the trade study process.

A fourth mechanism for controlling the trade study process can be exercised through the use (and misuse) of models. Lastly, the choice of the selection rule exerts a considerable influence on the results of the trade study process. These last two issues are discussed in Sections 5.1.2 and 5.1.3, respectively.

5.1.2 Using Models

Models play important and diverse roles in systems engineering. A model can be defined in several ways, including:

• An abstraction of reality designed to answer specific questions about the real world

• An imitation, analogue, or representation of a real world process or structure; or

• A conceptual, mathematical, or physical tool to assist a decision maker.

Together, these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram, and mathematical (i.e., quantitative) models used in the trade study process. This section focuses on the last.

The main reason for using mathematical models in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities. Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation.

Types of Models. There are a number of ways mathematical models can be usefully categorized. One way is according to the model's purpose in the trade study process—that is, what system issue and what level of detail the model addresses, and with which outcome variable or variables the model primarily deals. Other commonly used ways of categorizing mathematical models focus on specific model attributes, such as whether a model is:

• Static or dynamic

• Deterministic or probabilistic (also called stochastic)

• Descriptive or optimizing.

These terms allow model builders and model users to enter into a dialogue with each other about the type of model used in a particular analysis or trade study. No hierarchy is implied in the above list; none of the above dichotomous categorizations stands above the others.


Another taxonomy can be based on the degree of analytic tractability. At one extreme on this scale, an "analytic" model allows a closed-form solution for an outcome variable of interest as a function of the model inputs. At the other extreme, quantification of an outcome variable of interest is at best ordinal, while in the middle are many forms of mathematical simulation models.

Mathematical simulations are a particularly useful type of model in trade studies. These kinds of models have been successfully used in dealing quantitatively with large complex systems problems in manufacturing, transportation, and logistics. Simulation models are used for these problems because it is not possible to "solve" the system's equations analytically to obtain a closed-form solution, yet it is relatively easy to obtain the desired results (usually the system's behavior under different assumptions) using the sheer computational power of current computers.

Linear, nonlinear, integer, and dynamic programming models are another important class of models in trade studies because they can optimize an objective function representing an important outcome variable (for example, system effectiveness) for a whole class of implied alternatives. Their power is best applied in situations where the system's objective function and constraints are well understood, and these constraints can be written as a set of equalities and inequalities.
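
As a minimal sketch of this class of model (not from the handbook; the objective and the constraints are hypothetical), the following linear program maximizes a simple effectiveness function over a whole family of implied alternatives, subject to mass and power budgets:

    from scipy.optimize import linprog

    # Hypothetical trade: quantities x1, x2 of two instrument types maximize
    # effectiveness 3*x1 + 2*x2 (linprog minimizes, so the sign is flipped)
    c = [-3.0, -2.0]

    A_ub = [[4.0, 2.0],   # mass:  4*x1 + 2*x2 <= 20 kg
            [1.0, 3.0]]   # power: 1*x1 + 3*x2 <= 12 W
    b_ub = [20.0, 12.0]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("Optimal quantities:", result.x)
    print("Maximum effectiveness:", -result.fun)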

Pitfalls in Using Models. Models always embody assumptions about the real world they purport to represent, and they always leave something out. Moreover, they are usually capable of producing highly accurate results only when they are addressing rigorously quantifiable questions in which the "physics" is well understood, as in, for example, a load dynamics analysis or a circuit analysis.

In dealing with system issues at the top level, however, this is seldom the case. There is often a significant difference between the substantive system cost-effectiveness issues and questions, and the questions that are mathematically tractable from a modeling perspective. For example, the program/project manager may ask: "What's the best space station we can build in the current budgetary environment?" The system engineer may try to deal with that question by translating it into: "For a few plausible station designs, what does each provide its users, and how much does each cost?" When the system engineer then turns to a model (or models) for answers, the results may only be some approximate costs and some user resource measures based on a few engineering relationships. The model has failed to adequately address even the system engineer's more limited question, much less the program/project manager's. Compounding this sense of model incompleteness is the recognition that the model's relationships are often chosen for their mathematical convenience, rather than a demonstrated empirical validity. Under this situation, the model may produce insights, but it cannot provide definitive answers to the substantive questions on its own. Often, too, the system engineer must make an engineering interpretation of model results and convey them to the project manager or other decision maker in a way that captures the essence of the original question.

As mentioned earlier, large complex problems often require multiple models to deal with different aspects of evaluating alternative system architectures (and designs). It is not unusual to have separate models to deal with costs and effectiveness, or to have a hierarchy of models—i.e., models to deal with lower-level engineering issues that provide useful results to system-level mathematical models. This situation itself can have built-in pitfalls.

One such pitfall is that there is no guarantee that all of the models work together the way the system engineer intends or needs. One submodel's specialized assumptions may not be consistent with the larger model it feeds. Optimization at the subsystem level may not be consistent with system-level optimization. Another such pitfall occurs when a key effectiveness variable is not represented in the cost models. For example, if spacecraft reliability is a key variable in the system effectiveness equation, and if that reliability does not appear as a variable in the spacecraft cost model, then there is an important disconnect. This is because the models allow the spacecraft designer to believe it is possible to boost the effectiveness with increased reliability without paying any apparent cost penalty. When the models fail to treat such important interactions, the system engineer must ensure that others do not reach false conclusions regarding costs and effectiveness.

Characteristics of a Good Model. In choosing a model (or models) for a trade study, it is important to recognize those characteristics that a good model has. This list includes:

• Relevance to the trade study being performed

• Credibility in the eye of the decision maker

• Responsiveness

• Transparency

• User friendliness.

Both relevance and credibility are crucial to the acceptance of a model for use in trade studies. Relevance is determined by how well a model addresses the substantive cost-effectiveness issues in the trade study. A model's credibility results from the logical consistency of its mathematical relationships, and a history of successful (i.e., correct) predictions. A history of successful predictions lends credibility to a model, but full validation—proof that the model's prediction is in accord with reality—is very difficult to attain, since observational evidence on those predictions is generally very scarce. While it is certainly advantageous to use tried-and-true models that are often left as the legacy of previous projects, this is not always possible. Systems that address new problems often require that new models be developed for their trade studies. In that case, full validation is out of the question, and the system engineer must be content with models that have logical consistency and some limited form of outside, independent corroboration.

Responsiveness of a model is a measure of its power to distinguish among the different alternatives being considered in a trade study. A responsive lunar base cost model, for example, should be able to distinguish the costs associated with different system architectures or designs, operations concepts, or logistics strategies.

Another desirable model characteristic is transparency, which occurs when the model's mathematical relationships, algorithms, parameters, supporting data, and inner workings are open to the user. The benefit of this visibility is in the traceability of the model's results. Not everyone may agree with the results, but at least they know how they were derived. Transparency also aids in the acceptance process. It is easier for a model to be accepted when its documentation is complete and open for comment. Proprietary models often suffer from a lack of acceptance because of a lack of transparency.

Upfront user friendliness is related to the ease with which the system engineer can learn to use the model and prepare the inputs to it. Backend user friendliness is related to the effort needed to interpret the model's results and to prepare trade study reports for the tentative selection using the selection rule.

5.1.3 Selecting the Selection Rule

The analytical portion of the trade study process serves to produce specific information on system effectiveness, its underlying performance or technical attributes, and cost (along with uncertainty ranges) for a few alternative system architectures (and later, system designs). These data need to be brought together so that one alternative may be selected. This step is accomplished by applying the selection rule to the data so that the alternatives may be ranked in order of preference.

The structure and complexity of real world decisions in systems engineering often make this ranking a difficult task. For one, securing higher effectiveness almost always means incurring higher costs and/or facing greater uncertainties. In order to choose among alternatives with different levels of effectiveness and costs, the system engineer must understand how much of one is worth in terms of the other. An explicit cost-effectiveness objective function is seldom available to help guide the selection decision, as any system engineer who has had to make a budget-induced system descope decision will attest.

A second, and major, problem is that an expression or measurement method for system effectiveness may not be possible to construct, even though its underlying performance and technical attributes are easily quantified. These underlying attributes are often the same as the technical performance measures (TPMs) that are tracked during the product development process to gauge whether the system design will meet its performance requirements. In this case, system effectiveness may, at best, have several irreducible dimensions.

What selection rule should be used has been the subject of many books and articles in the decision sciences—management science, operations research, and economics. A number of selection rules are applicable to NASA trade studies. Which one should be used in a particular trade study depends on a number of factors:

• The level of resolution in the system design

• The phase of the project life cycle

• Whether the project maintains an overall system effectiveness model

• How much less-quantifiable, subjective factors contribute to the selection

• Whether uncertainty is paramount, or can effectively be treated as a subordinate issue

• Whether the alternatives consist of a few qualitatively different architectures or designs, or many similar ones that differ only in some quantitative dimensions.

This handbook can only suggest some selection rules for NASA trade studies, and some general conditions under which each is applicable; definitive guidance on which to use in each and every case has not been attempted.

Table 3 first divides selection rules according to the importance of uncertainty in the trade study. This division is reflective of two different classes of decision problems—decisions to be made under conditions of certainty, and decisions to be made under conditions of uncertainty. Uncertainty is an inherent part of systems engineering, but the distinction may be best explained by reference to Figure 2, which is repeated here as Figure 24. In the former class, the measures of system effectiveness, performance or technical attributes, and system cost for the alternatives in the trade study look like those for alternative B. In the latter class, they look like those for alternative C. When they look like those for alternative A, conditions of uncertainty should apply, but often are not treated that way.

The table further divides each of the above classes of decision problems into two further categories: those that apply when cost and effectiveness measures are scalar quantities, and thus suffice to guide the system engineer to the best alternative, and those that apply when cost and effectiveness cannot be represented as scalar quantities.

Selection Rules When Uncertainty Is Subordinate, or Not Considered. Selecting the alternative that maximizes net benefits (benefits minus costs) is the rule used in most cost-benefit analyses. Cost-benefit analysis applies, however, only when the return on a project can be measured in the same units as the costs, as, for example, in its classical application of evaluating water resource projects.

Another selection rule is to choose the alternative that maximizes effectiveness for a given level of cost. This rule is applicable when system effectiveness and system cost can be unambiguously measured, and the appropriate level of cost is known. Since the purpose of the selection rule is to compare and rank the alternatives, practical application requires that each of the alternatives be placed on an equal cost basis. For certain types of trade studies, this does not present a problem. For example, changing system size or output, or the number of platforms or instruments, may suffice. In other types of trade studies, this may not be possible.

A related selection rule is to choose the alternative that minimizes cost for a given level of effectiveness. This rule presupposes that system effectiveness and system cost can be unambiguously measured, and the appropriate level of effectiveness is known. Again, practical application requires that each of the alternatives be put on an equal effectiveness basis. This rule is dual to the one above in the following sense: for a given level of cost, the same alternative would be chosen by both rules; similarly, for a given level of effectiveness, the same alternative would be chosen by both rules.
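
A minimal sketch of these two rules (not from the handbook; the alternatives and their numbers are hypothetical) shows how each reduces to a filter followed by a ranking, and how the two rules agree:

    # Hypothetical alternatives: (name, effectiveness score, cost in $M)
    alts = [("A", 0.70, 450.0), ("B", 0.85, 520.0),
            ("C", 0.80, 520.0), ("D", 0.85, 600.0)]

    def max_effectiveness_at_cost(cost_level):
        # Among alternatives on an equal cost basis, maximize effectiveness
        return max((a for a in alts if a[2] == cost_level), key=lambda a: a[1])

    def min_cost_at_effectiveness(eff_level):
        # The dual rule: on an equal effectiveness basis, minimize cost
        return min((a for a in alts if a[1] == eff_level), key=lambda a: a[2])

    print(max_effectiveness_at_cost(520.0))  # ('B', 0.85, 520.0)
    print(min_cost_at_effectiveness(0.85))   # ('B', 0.85, 520.0)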

When it is not practical to equalize the cost or the effectiveness of competing alternatives, and cost caps or effectiveness floors do not rule out all alternatives save one, then it is necessary to form, either explicitly or implicitly, a cost-effectiveness objective function like the one shown in Figure 4 (Section 2.5). The cost-effectiveness objective function provides a single measure of worth for all combinations of cost and effectiveness. When this selection rule is applied, the alternative with the highest value of the cost-effectiveness objective function is chosen.

Another group of selection rules is needed when cost and/or effectiveness cannot be represented as scalar quantities. To choose the best alternative, a multi-objective selection rule is needed. A multi-objective rule seeks to select the alternative that, in some sense, represents the best balance among competing objectives. To accomplish this, each alternative is measured (by some quantitative method) in terms of how well it achieves each objective. For example, the objectives might be national prestige, upgrade or expansion potential, science data return, low cost, and potential for international partnerships. Each alternative's "scores" against the objectives are then combined in a value function to yield an overall figure of merit for the alternative. The way the scores are combined should reflect the decision maker's preference structure. The alternative that maximizes the value function (i.e., with the highest figure of merit) is then selected. In essence, this selection rule recasts a multi-objective decision problem into one involving a single, measurable objective.

One way, but not the only way, of forming the figure of merit for each alternative is to linearly combine its scores computed for each of the objectives—that is, compute a weighted sum of the scores. MSFC-HDBK-1912, Systems Engineering (Volume 2), recommends this selection rule. The weights used in computing the figure of merit can be assigned a priori or determined using Multi-Attribute Utility Theory (MAUT). Another technique of forming a figure of merit is the Analytic Hierarchy Process (AHP). Several microcomputer-based commercial software packages are available to automate either MAUT or AHP. If the wrong weights, objectives, or attributes are chosen in either technique, the entire process may obscure the best alternative. Also, with either technique, the individual evaluators may tend to reflect the institutional biases and preferences of their respective organizations. The results, therefore, may depend on the mix of evaluators. (See sidebars on AHP and MAUT.)
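
A minimal sketch of the weighted-sum rule (not from the handbook; the objectives, weights, and scores are hypothetical):

    # Hypothetical a priori weights over the objectives (summing to 1.0)
    weights = {"science return": 0.4, "low cost": 0.3,
               "growth potential": 0.2, "prestige": 0.1}

    # Hypothetical normalized scores (0 to 1) for each alternative
    scores = {
        "Architecture A": {"science return": 0.9, "low cost": 0.4,
                           "growth potential": 0.6, "prestige": 0.8},
        "Architecture B": {"science return": 0.6, "low cost": 0.9,
                           "growth potential": 0.5, "prestige": 0.5},
    }

    def figure_of_merit(alt):
        # Linear value function: weighted sum of the objective scores
        return sum(weights[obj] * scores[alt][obj] for obj in weights)

    for alt in scores:
        print(alt, round(figure_of_merit(alt), 3))
    print("Tentative selection:", max(scores, key=figure_of_merit))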

Another multi-objective selection rule is to choose the alternative with the highest figure of merit from among those that meet specified individual objectives. This selection rule is used extensively by Source Evaluation Boards (SEBs) in the NASA procurement process. Each proposal, from among those meeting specific technical objectives (requirements), is scored on such attributes as technical design, price, systems engineering process quality, etc. In applying this rule, the attributes being scored by the SEB are known to the bidders, but their weighting may not be. (See NHB 5103.6B.)

In trade studies where no measure of system effectiveness can be constructed, but performance or technical attributes can be quantified, a possible selection rule is to choose the alternative that minimizes cost for given levels of performance or technical attributes. This rule presupposes that system cost can be unambiguously measured, and is related to all of the quantified performance or technical attributes that are considered constraints. Practical application again requires that all of the alternatives be put on an equal basis with respect to the performance or technical attributes. This may not be practical for trade studies in which the alternatives cannot be described by a set of continuous mathematical relationships.

The Analytic Hierarchy Process

AHP is a decision technique in which a figure of merit is determined for each of several alternatives through a series of pair-wise comparisons. AHP is normally done in six steps:

(1) Describe in summary form the alternatives under consideration.

(2) Develop a set of high-level evaluation objectives; for example, science data return, national prestige, technology advancement, etc.

(3) Decompose each high-level evaluation objective into a hierarchy of evaluation attributes that clarify the meaning of the objective.

(4) Determine, generally by conducting structured interviews with selected individuals ("experts") or by having them fill out structured questionnaires, the relative importance of the evaluation objectives and attributes through pair-wise comparisons.

(5) Have each evaluator make separate pair-wise comparisons of the alternatives with respect to each evaluation attribute. These subjective evaluations are the raw data inputs to a separately developed AHP program, which produces a single figure of merit for each alternative. This figure of merit is based on relative weights determined by the evaluators themselves.

(6) Iterate the questionnaire and AHP evaluation process until a consensus ranking of the alternatives is achieved.

With AHP, sometimes consensus is achieved quickly; other times, several feedback rounds are required. The feedback consists of reporting the computed values (for each evaluator and for the group) for each option, reasons for differences in evaluation, and identified areas of contention and/or inconsistency. Individual evaluators may choose to change their subjective judgments on both attribute weights and preferences. At this point, inconsistent and divergent preferences can be targeted for more detailed study.

AHP assumes the existence of an underlying preference "vector" (with magnitudes and directions) that is revealed through the pair-wise comparisons. This is a powerful assumption, which may at best hold only for the participating evaluators. The figure of merit produced for each alternative is the result of the group's subjective judgments and is not necessarily a reproducible result. For information on AHP, see Thomas L. Saaty, The Analytic Hierarchy Process, 1980.
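
In the usual Saaty formulation, the weights implied by a pair-wise comparison matrix are recovered from its principal eigenvector. The sketch below uses hypothetical judgments; the computation is standard AHP practice rather than a procedure spelled out in this handbook, and it also reports Saaty's consistency index:

    import numpy as np

    # Hypothetical pair-wise comparisons of three attributes on the 1-9
    # scale: entry [i][j] is how much more important attribute i is than j.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                # normalize to sum to 1
    print("Attribute weights:", np.round(weights, 3))

    # Consistency index; values near zero indicate consistent judgments
    n = A.shape[0]
    ci = (eigvals.real.max() - n) / (n - 1)
    print("Consistency index:", round(float(ci), 4))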


Multi-Attribute Utility Theory

MAUT is a decision technique in which a figure of merit (or utility) is determined for each of several alternatives through a series of preference-revealing comparisons of simple lotteries. An abbreviated MAUT decision mechanism can be described in six steps:

(1) Choose a set of descriptive, but quantifiable, attributes designed to characterize each alternative.

(2) For each alternative under consideration, generate values for each attribute in the set; these may be point estimates, or probability distributions, if the uncertainty in attribute values warrants explicit treatment.

(3) Develop an attribute utility function for each attribute in the set. Attribute utility functions range from 0 to 1; the least desirable value, xi^0, of an attribute (over its range of plausible values) is assigned a utility value of 0, and the most desirable, xi*, is assigned a utility value of 1. That is, ui(xi^0) = 0 and ui(xi*) = 1. The utility of an attribute value xi intermediate between the least desirable and most desirable is assessed by finding the probability pi such that the decision maker is indifferent between receiving xi for sure, or a lottery that yields xi^0 with probability pi or xi* with probability 1 - pi. From the mathematics of MAUT, ui(xi) = pi ui(xi^0) + (1 - pi) ui(xi*) = 1 - pi.

(4) Repeat the process of indifference revealing until there are enough discrete points to approximate a continuous attribute utility function.

(5) Combine the individual attribute utility functions to form a multiattribute utility function. This is also done using simple lotteries to reveal indifference between receiving a particular set of attribute values with certainty, or a lottery of attribute values. In its simplest form, the resultant multiattribute utility function is a weighted sum of the individual attribute utility functions.

(6) Evaluate each alternative using the multiattribute utility function.

The most difficult problem with MAUT is getting the decision makers or evaluators to think in terms of lotteries. This can often be overcome by an experienced interviewer. MAUT is based on a set of mathematical axioms about the way individuals should behave when confronted by uncertainty. Logical consistency in ranking alternatives is assured so long as evaluators adhere to the axioms; no guarantee can be made that this will always be the case. An extended discussion of MAUT is given in Keeney and Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, 1976. A textbook application of MAUT to a NASA problem can be found in Jeffrey H. Smith, et al., An Application of Multiattribute Decision Analysis to the Space Station Freedom Program, Case Study: Automation and Robotics Technology Evaluation, 1990.
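
As a sketch of step (3) above (the elicitation data are hypothetical; this is an illustration, not a procedure given in the handbook), elicited indifference probabilities can be turned into an approximate attribute utility function by piecewise-linear interpolation:

    # Hypothetical elicitation for a "data return" attribute (Gbit/day).
    # Each pair is (attribute value xi, elicited indifference probability pi);
    # by the lottery relation in step (3), ui(xi) = 1 - pi.
    elicited = [(2.0, 1.0), (4.0, 0.6), (6.0, 0.3), (8.0, 0.0)]

    def attribute_utility(x):
        pts = [(v, 1.0 - p) for v, p in elicited]   # (value, utility) points
        if x <= pts[0][0]:
            return pts[0][1]
        for (x0, u0), (x1, u1) in zip(pts, pts[1:]):
            if x <= x1:
                # Linear interpolation between adjacent elicited points
                return u0 + (u1 - u0) * (x - x0) / (x1 - x0)
        return pts[-1][1]

    print(attribute_utility(5.0))   # utility of an intermediate value: 0.55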


Selection Rules When Uncertainty Predominates. When the measures of system effectiveness, performance or technical attributes, and system cost for the alternatives in the trade study look like those for alternative C in Figure 24, the selection of the best alternative may need to be handled differently. This is because of the general propensity of decision makers to show risk-averse behavior when dealing with large variations in cost and/or effectiveness outcomes. In such cases, the expected value (i.e., the mean) of some stochastic outcome variable is not a satisfactory point measure of that variable.

To handle this class of decision problem, the system engineer may wish to invoke a von Neumann-Morgenstern selection rule. In this case, alternatives are treated as "gambles" (or lotteries). The probability of each outcome is also known or can be subjectively estimated, usually by creating a decision tree. The von Neumann-Morgenstern selection rule applies a separately developed utility function to each outcome, and chooses the alternative that maximizes the expected utility. This selection rule is easy to apply when the lottery outcomes can be measured in dollars. Although multi-attribute cases are more complex, the principle remains the same.

The basis for the von Neumann-Morgenstern selection rule is a set of mathematical axioms about how individuals should behave when confronted by uncertainty. Practical application of this rule requires an ability to enumerate each "state of nature" (hereafter, simply called "state"), knowledge of the outcome associated with each enumerated state for each alternative, the probabilities for the various states, and a mathematical expression for the decision maker's utility function. This selection rule has also found use in the evaluation of system procurement alternatives. See Section 4.6.3 for a discussion of some related topics, including decision analysis, decision trees, and probabilistic risk assessment.
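
A minimal sketch of this rule (not from the handbook; the states, probabilities, outcomes, and the risk-averse utility function are all hypothetical):

    import math

    # Hypothetical states of nature with subjective probabilities
    states = {"nominal": 0.7, "launch slip": 0.2, "major anomaly": 0.1}

    # Dollar-equivalent outcome of each alternative in each state ($M)
    outcomes = {
        "Alternative A": {"nominal": 300.0, "launch slip": 200.0,
                          "major anomaly": 50.0},
        "Alternative B": {"nominal": 260.0, "launch slip": 240.0,
                          "major anomaly": 180.0},
    }

    def utility(dollars):
        # A concave utility function models risk-averse behavior
        return math.log(dollars)

    def expected_utility(alt):
        return sum(p * utility(outcomes[alt][s]) for s, p in states.items())

    for alt in outcomes:
        print(alt, round(expected_utility(alt), 3))
    print("Expected-utility choice:", max(outcomes, key=expected_utility))

In this hypothetical case, Alternative A has the higher expected dollar value, but the concave utility function makes the less variable Alternative B the choice, which is exactly the risk-averse behavior described above.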

Another selection rule for this class of decision problem is called the minimax rule. To apply it, the system engineer computes a loss function for each enumerated state for each alternative. This rule chooses the alternative that minimizes the maximum loss. Practical application requires an ability to enumerate each state and define the loss function. Because of its "worst case" feature, this rule has found some application in military systems.
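
A corresponding sketch of the minimax rule (the loss matrix is hypothetical, not from the handbook):

    # Hypothetical loss ($M) of each alternative in each enumerated state
    losses = {
        "Alternative A": {"nominal": 10.0, "launch slip": 60.0,
                          "major anomaly": 300.0},
        "Alternative B": {"nominal": 40.0, "launch slip": 70.0,
                          "major anomaly": 120.0},
    }

    # Choose the alternative whose worst-case loss is smallest
    best = min(losses, key=lambda alt: max(losses[alt].values()))
    for alt in losses:
        print(alt, "maximum loss:", max(losses[alt].values()))
    print("Minimax choice:", best)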

5.1.4 Trade Study Process: Summary

System architecture and design decisions will be made. The purpose of the trade study process is to ensure that they move the design toward an optimum. The basic steps in that process are:

• Understand what the system's goals, objectives, and constraints are, and what the system must do to meet them—that is, understand the functional requirements in the operating environment.

• Devise some alternative means to meet the functional requirements. In the early phases of the project life cycle, this means focusing on system architectures; in later phases, emphasis is given to system designs.

• Evaluate these alternatives in terms of the outcome variables (system effectiveness, its underlying performance or technical attributes, and system cost). Mathematical models are useful in this step not only for forcing recognition of the relationships among the outcome variables, but also for helping to determine what the performance requirements must be quantitatively.

• Rank the alternatives according to an appropriate selection rule.

• Drop less-promising alternatives and proceed to the next level of resolution, if needed.

This process cannot be done as an isolated activity. To make it work effectively, individuals with different skills—system engineers, design engineers, specialty engineers, program analysts, decision scientists, and project managers—must cooperate. The right quantitative methods and selection rule must be used. Trade study assumptions, models, and results must be documented as part of the project archives.

5.2 Cost Definition and Modeling

This section deals with the role of costs in the systems analysis and engineering process: how to measure them, how to control them, and how to obtain estimates of them. The reason costs and their estimates are of great importance in systems engineering goes back to the principal objective of systems engineering: fulfilling the system's goals in the most cost-effective manner. The cost of each alternative should be one of the most important outcome variables in trade studies performed during the systems engineering process.

One role, then, for cost estimates is in helping to choose rationally among alternatives. Another is as a control mechanism during the project life cycle. Cost measures produced for project life cycle reviews are important in determining whether the system goals and objectives are still deemed valid and achievable, and whether constraints and boundaries are worth maintaining. These measures are also useful in determining whether system goals and objectives have properly flowed down through to the various subsystems.

As system designs and operational concepts mature, cost estimates should mature as well. At each review, cost estimates need to be presented and compared to the funds likely to be available to complete the project. The cost estimates presented at early reviews must be given special attention, since they usually form the basis under which authority to proceed with the project is given. The system engineer must be able to provide realistic cost estimates to the project manager. In the absence of such estimates, overruns are likely to occur, and the credibility of the entire system development process, both internal and external, is threatened.

5.2.1 Life-Cycle Cost and Other Cost Measures

A number of questions need to be addressed so that costs are properly treated in systems analysis and engineering. These questions include:

• What costs should be counted?

• How should costs occurring at different times be treated?

• What about costs that cannot easily be measured in dollars?

What Costs Should be Counted. The most comprehensive measure of the cost of an alternative is its life-cycle cost. According to NHB 7120.5, a system's life-cycle cost is "the total of the direct, indirect, recurring, nonrecurring, and other related costs incurred, or estimated to be incurred, in the design, development, production, operation, maintenance, support, and retirement [of it] over its planned life span." A less formal definition of a system's life-cycle cost is the total cost of acquiring, owning, and disposing of it over its entire lifetime. System life-cycle cost should be estimated and used in the evaluation of alternatives during trade studies. The system engineer should include in the life-cycle cost those resources, like civil service work-years, that may not require explicit project expenditures. A system's life-cycle cost, when properly computed, is the best measure of its cost to NASA.

Life-cycle cost has several components, as shown in Figure 25. Applying the informal definition above, life-cycle cost consists of (a) the costs of acquiring a usable system, (b) the costs of operating and supporting it over its useful life, and (c) the cost of disposing of it at the end of its useful life. The system acquisition cost includes more than the DDT&E and procurement of the hardware and software; it also includes the other start-up costs resulting from the need for initial training of personnel, initial spares, the system's technical documentation, support equipment, facilities, and any launch services needed to place the system at its intended operational site.

The costs of operating and supporting the system include, but are not limited to, operations personnel and supporting activities, ongoing integrated logistics support, and pre-planned product improvement. For a major system, these costs are often substantial on an annual basis, and when accumulated over years of operations can constitute the majority of life-cycle cost.

At the start of the project life cycle, all of these costs lie in the future. At any point in the project life cycle, some costs will have been expended. These expended resources are known as sunk costs. For the purpose of doing trade studies, the sunk costs of any alternative under consideration are irrelevant, no matter how large. The only costs relevant to current design trades are those that lie in the future. The logic is straightforward: the way resources were spent in the past cannot be changed. Only decisions regarding the way future resources are spent can be made. Sunk costs may alter the cost of continuing with a particular alternative relative to others, but when choosing among alternatives, only those costs that remain should be counted.
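
A small numerical sketch of this logic (the figures are hypothetical, not from the handbook) compares continuing a partly developed design against switching to a cheaper new design, counting only the costs that remain:

    # Hypothetical costs in $M: sunk to date, and remaining to complete
    alternatives = {
        "Continue current design": {"sunk": 150.0, "remaining": 220.0},
        "Switch to new design":    {"sunk": 0.0,   "remaining": 260.0},
    }

    # Only future (remaining) costs are decision-relevant
    best = min(alternatives, key=lambda a: alternatives[a]["remaining"])
    for name, c in alternatives.items():
        print(name, "- lifetime total:", c["sunk"] + c["remaining"],
              "decision-relevant:", c["remaining"])
    print("Choose:", best)

Here the continuing design wins on remaining cost even though its lifetime total is higher; counting the sunk $150M would change nothing about the better use of future resources.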

At the end of its useful life, a system may have a positive residual or salvage value. This value exists if the system can be sold, bartered, or used by another system. This value needs to be counted in life-cycle cost, and is generally treated as a negative cost.

Costs Occurring Over Time. The life-cycle cost combines costs that typically occur over a period of several years. Costs incurred in different years cannot be treated the same because they, in fact, represent different resources to society. A dollar wisely invested today will return somewhat more than a dollar next year. Treating a dollar today the same as a dollar next year ignores this potential trade.


Calculating Present Discounted Value

Calculating the PDV is a way of reducing a stream of costs to a single number so that alternative streams can be compared unambiguously. Several formulas for PDV are used, depending on whether time is to be treated as a discrete or a continuous variable, and whether the project's time horizon is finite or not. The following equation is useful for evaluating system alternatives when costs have been estimated as yearly amounts, and the project's anticipated useful life is T years. For alternative i,

    PDVi = Σ [t=0 to T] Cit (1 + r)^-t

where r is the annual discount rate and Cit is the estimated cost of alternative i in year t.

Once the yearly costs have been estimated, the choice of the discount rate is crucial to the evaluation, since it ultimately affects how much or how little runout costs contribute to the PDV. While calculating the PDV is generally accepted as the way to deal with costs occurring over a period of years, there is much disagreement and confusion over the appropriate discount rate to apply in systems engineering trade studies. The Office of Management and Budget (OMB) has mandated the use of a rate of seven percent for NASA systems when constant dollars (dollars adjusted to the price level as of some fixed point in time) are used in the equation. When nominal dollars (sometimes called then-year, runout, or real-year dollars) are used, the OMB-mandated annual rate should be increased by the inflation rate assumed for that year. Either approach yields essentially the same PDV. For details, see OMB Circular A-94, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, October 1992.
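
A minimal sketch of the PDV equation above (the cost streams shown are hypothetical):

    def pdv(costs, r):
        # Present discounted value of a yearly cost stream:
        # costs[t] is the estimated cost in year t, r the annual discount rate
        return sum(c / (1.0 + r) ** t for t, c in enumerate(costs))

    # Hypothetical constant-dollar cost streams in $M over a 5-year life
    acquisition_heavy = [120.0, 80.0, 20.0, 20.0, 20.0]
    operations_heavy  = [60.0, 40.0, 55.0, 55.0, 55.0]

    print(f"PDV, acquisition-heavy: {pdv(acquisition_heavy, 0.07):.1f}")
    print(f"PDV, operations-heavy:  {pdv(operations_heavy, 0.07):.1f}")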

Discounting future costs is a way of making costs occurring in different years commensurable. When applied to a stream of future costs, the discounting procedure yields the present discounted value (PDV) of that stream. The effect of discounting is to reduce the contribution of costs incurred in the future relative to costs incurred in the near term. Discounting should be performed whether or not there is inflation, though care must be taken to ensure the right discount rate is used. (See sidebar on PDV.)

In trade studies, different alternatives often have cost streams that differ with respect to time. One alternative with higher acquisition costs than another may offer lower operations and support costs. Without discounting, it would be difficult to know which stream truly represents the lower life-cycle cost. Trade studies should report the PDV of life-cycle cost for each alternative as an outcome variable.

Difficult-To-Measure Costs. In practice, some costs pose special problems. These special problems, which are not unique to NASA systems, usually occur in two areas: (a) when alternatives have differences in their irreducible chances of loss of life, and (b) when externalities are present. Two examples of externalities that impose costs are pollution caused by some launch systems and the creation of orbital debris. Because it is difficult to place a dollar figure on these resource uses, they are generally called incommensurable costs. The general treatment of these types of costs in trade studies is not to ignore them, but instead to keep track of them along with dollar costs.

5.2.2 Controlling Life-Cycle Costs

The project manager/system engineer must ensure that the system life-cycle cost (established at the end of Phase A) is initially compatible with NASA's budget and strategic priorities, and that it demonstrably remains so over the project life cycle. According to NHB 7120.5, every NASA program/project must:

• Develop and maintain an effective capability to estimate, assess, monitor, and control its life-cycle cost throughout the project life cycle

• Relate life-cycle cost estimates to a well-defined technical baseline, detailed project schedule, and set of cost-estimating assumptions

• Identify the life-cycle costs of alternative levels of system requirements and capability

• Report time-phased acquisition cost and technical parameters to NASA Headquarters.

There are a number of actions the system engineer can take to effect these objectives. Early decisions in the systems engineering process tend to have the greatest effect on the resultant system life-cycle cost. Typically, by the time the preferred system architecture is selected, between 50 and 70 percent of the system's life-cycle cost has been "locked in." By the time a preliminary system design is selected, this figure may be as high as 90 percent. This presents a major dilemma to the system engineer, who must lead this selection process. Just at the time when decisions are most critical, the state of information about the alternatives is least certain. Uncertainty about costs is a fact of systems engineering.

This suggests that efforts to acquire better information about the life-cycle cost of each alternative early in the project life cycle (Phases A and B) potentially have very high payoffs. The system engineer needs to understand what the principal life-cycle cost drivers are. Some major questions to consider are: How much does each alternative rely on well-understood technology? Can the system be manufactured using routine processes, or are higher precision processes required? What tests are needed to verify and validate each alternative system design, and how costly are they? What reliability levels are needed by each alternative? What environmental and safety requirements must be satisfied?

For a system whose operational life is expected to be long and to involve complex activities, the life-cycle cost is likely to be far greater than the acquisition costs alone. Consequently, it is particularly important with such a system to bring in the specialty engineering disciplines, such as reliability, maintainability, supportability, and operations engineering, early in the systems engineering process, as they are essential to proper life-cycle cost estimation.

Another way of acquiring better information onthe cost of alternatives is for the project to haveindependent cost estimates prepared for comparisonpurposes.

Another mechanism for controlling life-cycle cost is to establish a life-cycle cost management program as part of the project's management approach. (Life-cycle cost management has traditionally been called design-to-life-cycle cost.) Such a program establishes life-cycle cost as a design goal, perhaps with sub-goals for acquisition costs or operations and support costs. More specifically, the objectives of a life-cycle cost management program are to:

• Identify a common set of ground rules and assumptions for life-cycle cost estimation

• Ensure that best-practice methods, tools, and models are used for life-cycle cost analysis

• Track the estimated life-cycle cost throughout the project life cycle, and, most important

• Integrate life-cycle cost considerations into the design and development process via trade studies and formal change request assessments.

Trade studies and formal change request assessments provide the means to balance the effectiveness and life-cycle cost of the system. The complexity of integrating life-cycle cost considerations into the design and development process should not be underestimated, but neither should the benefits, which can be measured in terms of greater cost-effectiveness. The existence of a rich set of potential life-cycle cost trades makes this complexity even greater.

The Space Station Alpha Program provides many examples of such potential trades. As one example, consider the life-cycle cost effect of increasing the mean time between failures (MTBF) of Alpha's Orbital Replacement Units (ORUs). This is likely to increase the acquisition cost, and may increase the mass of the station. However, annual maintenance hours and the weight of annual replacement spares will decline. The same station availability may be achieved with fewer on-orbit spares, thus saving precious internal volume used for spares storage. For ORUs external to the station, the amount of extravehicular activity, with its associated logistics support, will also decline. With such complex interactions, it is difficult to know what the optimum point is. At a minimum, the system engineer must have the capability to assess the life-cycle cost of each alternative. (See Appendix B.8 on the operations and operations cost effects of ORU MTBF and Section 6.5 on Integrated Logistics Support.)
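
To make the shape of such a trade concrete, the toy calculation below contrasts two hypothetical ORU MTBF levels. Every number in it (fleet size, operating hours, repair hours, spares mass, unit costs) is an invented assumption for illustration, not data from any station program.

    # Toy ORU MTBF trade; all values are hypothetical.
    def annual_failures(n_orus, op_hours_per_year, mtbf_hours):
        return n_orus * op_hours_per_year / mtbf_hours

    for mtbf, unit_cost in [(5000.0, 1.0), (10000.0, 1.4)]:   # $M per ORU
        failures = annual_failures(100, 8760.0, mtbf)
        print(f"MTBF {mtbf:.0f} h: acquisition ${100 * unit_cost:.0f}M, "
              f"{failures:.0f} failures/yr, {failures * 6.0:.0f} maint. h/yr, "
              f"{failures * 50.0:.0f} kg spares/yr")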

5.2.3 Cost Estimating

The techniques used to estimate each life-cycle cost component usually change as the project life cycle proceeds. Methods and tools used to support budget estimates and life-cycle cost trades in Phase A may not be sufficiently detailed to support those activities during Phase C/D. Further, as the project life cycle proceeds, the requirements and the system design mature as well, revealing greater detail in the Work Breakdown Structure (WBS). This should enable the application of cost estimating techniques at a greater resolution.

Three techniques are described below: parametric cost models, analogy, and grass-roots. Typically, the choice of technique depends on the state of information available to the cost analyst at each point in the project life cycle. Table 4 shows this dependence.

Parametric (or "top-down") cost models are most useful when only a few key variables are known or can be estimated. The most common example of a parametric model is the statistical Cost Estimating Relationship (CER). A single equation (or set of equations) is derived from a set of historical data relating one or more of a system's characteristics to its cost, using well-established statistical methods. A number of statistical CERs have been developed to estimate a spacecraft's hardware acquisition cost. These typically use an estimate of its weight and other characteristics, such as design complexity and inheritance, to obtain an estimate of cost.


Statistical Cost Estimating Relationships: Example and Pitfalls

One model familiar to most cost analysts is the historically based CER. In its usual form, this model is a linear expression with cost (the dependent variable) as a function of one or more descriptive characteristics. The coefficients of the linear expression are estimated by fitting historical data from previously completed projects of a similar nature using statistical regression techniques. This type of model is analytic and deterministic. An example of this type of model for estimating the first unit cost, C, of a space-qualified Earth-orbiting receiver/exciter is:

ln C = 3.822 + 1.110 ln W + 0.436 ln z

where W is the receiver/exciter's weight, and z is the number of receiver boxes; ln is the natural logarithm function. (Source: U.S. Air Force Systems Command-Space Division, Unmanned Space Vehicle Cost Model, Seventh Edition, August 1994.) CERs are used extensively in advanced technology systems, and have been challenged on both theoretical and practical grounds. One challenge can be mounted on the basis of the assumption of an unchanging relationship between cost and the independent variables. Others have questioned the validity of CERs based on weight, a common independent variable in many models, in light of advances in electronic packaging and composite materials. Objections to using statistical CERs also include problems of input accuracy, low statistical significance due to limited data points, ignoring the statistical confidence bands, and, lastly, biases in the underlying data.
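
As a minimal sketch, the fragment below simply evaluates the sidebar's CER at one hypothetical design point. The weight and box-count inputs are invented, and the excerpt does not state the model's dollar units, so the result is in whatever units the source model uses.

    # Evaluate the receiver/exciter CER: ln C = 3.822 + 1.110 ln W + 0.436 ln z.
    import math

    def first_unit_cost(weight, boxes):
        return math.exp(3.822 + 1.110 * math.log(weight) + 0.436 * math.log(boxes))

    print(first_unit_cost(weight=40.0, boxes=2))   # hypothetical inputs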

Similarly, software CERs have been developed as well, relying on judgments about source lines of code and other factors to obtain development costs. (See sidebar on statistical CERs.)

Another type of parametric model relies on accepted relationships. One common example can be found in the application of logistics relationships to the estimation of repair costs and initial and recurring spares costs. The validity of these cost estimates also depends on the quality of the input parameters.

The principal advantages of parametric cost models are that the results are reproducible, are more easily documented than other methods, and often can be produced with the least amount of time and effort. This makes a properly constructed performance-based parametric cost model useful in early trade studies.

Analogy is another way of estimating costs. When a new system or component has functional and performance characteristics similar to an existing one whose cost is known, the known cost can be adjusted to reflect engineering judgments about differences.

Grass-roots (or "bottom-up") estimates are the result of rolling up the costs estimated by each organization performing work described in the WBS. Properly done, grass-roots estimates can be quite accurate, but each time a "what if" question is raised, a new estimate needs to be made. Each change of assumptions voids at least part of the old estimate. Because the process of obtaining grass-roots estimates is typically time-consuming and labor-intensive, the number of such estimates that can be prepared during trade studies is in reality severely limited.

Whatever technique is used, the direct cost of each hardware and software element often needs to be "wrapped" (multiplied by a factor greater than one) to cover the costs of integration and test, program management, systems engineering, etc. These additional costs are called system-level costs, and are often calculated as percentages of the direct costs.
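
The sketch below rolls up a hypothetical set of WBS element estimates and applies system-level wrap factors. Both the WBS elements and the wrap percentages are invented for illustration.

    # Grass-roots rollup with system-level "wraps"; all figures hypothetical.
    wbs_estimates = {"structure": 12.0, "power": 8.5,        # direct cost, $M
                     "avionics": 15.0, "flight software": 6.0}
    wraps = {"integration and test": 0.15,                   # fractions of direct cost
             "program management": 0.08, "systems engineering": 0.10}

    direct = sum(wbs_estimates.values())
    total = direct * (1.0 + sum(wraps.values()))
    print(direct, total)   # 41.5 direct, ~55.2 with wraps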

Using Parametric Cost Models. A number of parametric cost models are available for costing NASA systems. Some of these are shown in Table 5. Unfortunately, none alone is sufficient to estimate life-cycle cost. Assembling an estimate of life-cycle cost often requires that several different models (along with the other two techniques) be used together. To integrate the costs being estimated by these different models, the system engineer should ensure that the inputs to and assumptions of the models are consistent, that all relevant life-cycle cost components are covered, and that the timing of costs is correct.

The system engineer may sometimes find it necessary to make some adjustments to model results to achieve a life-cycle cost estimate.

Learning Curve Theory

The learning curve (also known as the progress or experience curve) is the time-honored way of dealing with the empirical observation that the unit cost of fabricating multiple units of complex systems like aircraft and spacecraft tends to decline as the number produced increases. In its usual form, the theory states that as the total quantity produced doubles, the cost per unit decreases by a constant percentage. The cost per unit may be either the average cost over the number produced, or the cost of the last unit produced. In the first case, the curve is generally known as the cumulative average learning curve; in the second case, it is known as the unit learning curve. Both formulations have essentially the same rate of learning.

Let C(1) be the unit cost of the first production unit, and C(Q) be the unit cost of the Qth production unit; then learning curve theory states there is a number, b, such that

C(Q) = C(1) Q^b

The number b is specified by the rate of learning. A 90 percent learning rate means that the unit cost of the second production unit is 90 percent of the first production unit cost; the unit cost of the fourth is 90 percent of the unit cost of the second, and so on. In general, the ratio of C(2Q) to C(Q) is the learning rate, LR, expressed as a decimal; using the above equation, b = ln(LR)/ln 2, where ln is the natural logarithm.

Learning curve theory may not always be applicable because, for example, the time rate of production has no effect on the basic equation. For more detail on learning curves, including empirical studies and tables for various learning rates, see Harold Asher, Cost-Quantity Relationships in the Airframe Industry, R-291, The Rand Corporation, 1956.

One such situation occurs when the results of different models, whose estimates are expressed in constant dollars of different years, must be combined. In that case, an appropriate inflation factor must be applied. Another such situation arises when a model produces a cost estimate for the first unit of a hardware item, but the project requires multiple units. In that case, a learning curve can be applied to the first unit cost to obtain the required multiple-unit estimate. (See sidebar on learning curve theory.)
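
A minimal sketch of that adjustment, using the unit learning curve from the sidebar; the first-unit cost and the 90 percent learning rate are hypothetical.

    # Unit learning curve: C(Q) = C(1) * Q**b, with b = ln(LR)/ln 2.
    import math

    def unit_cost(c1, q, learning_rate):
        b = math.log(learning_rate) / math.log(2.0)
        return c1 * q ** b

    c1 = 10.0                                           # $M, hypothetical
    buy = [unit_cost(c1, q, 0.90) for q in range(1, 9)]
    print(sum(buy))                                     # eight-unit buy, ~65.7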

A third situation requiring additional calculation occurs when a model provides a cost estimate of the total acquisition effort, but doesn't take into account the multi-year nature of that effort.

An Example of a Cost Spreader Function: The Beta Curve

One technique for spreading estimated acquisition costs over time is to apply the beta curve. This fifth-degree polynomial, which was developed at JSC in the late 1960s, expresses the cumulative cost fraction as a function of the cumulative time fraction, T:

Cum Cost Fraction = 10 T^2 (1 - T)^2 (A + BT) + T^4 (5 - 4T), for 0 ≤ T ≤ 1.

A and B are parameters (with 0 ≤ A + B ≤ 1) that determine the shape of the beta curve. In particular, these parameters control what fraction of the cumulative cost has been expended when 50 percent of the cumulative time has been reached. For example: with A = 1 and B = 0, 81 percent of the costs have been expended at 50 percent of the cumulative time; with A = 0 and B = 1, 50 percent; and with A = B = 0, only 19 percent.

Typically, JSC uses a 50 percent profile with A = 0 and B = 1, or a 60 percent profile with A = 0.32 and B = 0.68, based on data from previous projects.
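
A minimal sketch of the beta curve as a cost spreader; the $100M total and the five-year spread are hypothetical, while the 60 percent profile parameters are those quoted above.

    # Beta curve cost spreader: cumulative cost fraction at time fraction t.
    def beta_cum(t, a, b):
        return 10.0 * t**2 * (1.0 - t)**2 * (a + b * t) + t**4 * (5.0 - 4.0 * t)

    total = 100.0                                             # $M, hypothetical
    cum = [beta_cum(i / 5.0, 0.32, 0.68) for i in range(6)]   # 60 percent profile
    annual = [total * (cum[i + 1] - cum[i]) for i in range(5)]
    print([round(x, 1) for x in annual])   # ramps up, peaks, then ramps down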

Page 97: Engineering Systems Handbook

NASA Systems Engineering HandbookSystems Analysis and Modeling

The system engineer can use a set of "annual cost spreaders" based on the typical ramping up and subsequent ramping down of acquisition costs for that type of project. (See sidebar on beta curves.)

Although some general parametric cost models for space systems are already available, their proper use usually requires a considerable investment in learning time. For projects outside of the domains of these existing cost models, new cost models may be needed to support trade studies. Efforts to develop these need to begin early in the project life cycle to ensure their timely application during the systems engineering process. Whether existing models or newly created ones are used, the SEMP and its associated life-cycle cost management plan should identify which (and how) models are to be used during each phase of the project life cycle.

5.3 Effectiveness Definition and Modeling

The concept of system effectiveness is more elusive than that of cost. Yet, it is also one of the most important factors to consider in trade studies. In selecting among alternatives, the system engineer must take into account system effectiveness, even when it is difficult to define and measure reliably.

A measure of system effectiveness describes the accomplishment of the system's goals and objectives quantitatively. Each system (or family of systems with identical goals and objectives) has its own measure of system effectiveness. There is no universal measure of effectiveness for NASA systems, and no natural units with which to express effectiveness. Further, effectiveness is dependent on the context (i.e., project or supersystem) in which the system is being operated, and any measure of it must take this into account. The system engineer can, however, exploit a few basic, common features of system effectiveness in developing strategies for measuring it.

5.3.1 Strategies for Measuring System Effectiveness

System effectiveness is almost always multifaceted, and is typically the result of the combined effects of:

• System output quality
• Size or quantity of system output
• System coverage or comprehensiveness
• System output timeliness
• System availability.

A measure of effectiveness and its measurement method (i.e., model) should focus on the critical facet (or facets) of effectiveness for the trade study issue under consideration. Which facets are critical can often be deduced from the accompanying functional analysis. The functional analysis is also very useful in helping to identify the underlying system performance or technical attributes that mathematically determine system effectiveness. (Note that each of the above facets may have several dimensions. If this is the case, then each dimension can be considered a function of the underlying system performance or technical attributes.) Ideally, there is a strong connection between the system functional analysis, system effectiveness measure, and the functional and performance requirements. The same functional analysis that results in the functional requirements flowdown also yields the system effectiveness and performance measures that are optimized (through trade studies) to produce the system performance requirements.

An effectiveness measurement method or model should provide trustworthy relationships between these underlying performance or technical attributes and the measure of system effectiveness. Early in the project life cycle, the effectiveness model may embody simple parametric relationships among the high-level performance and technical attributes and the measure of system effectiveness. In the later phases of the project life cycle, the effectiveness model may use more complex relationships requiring more detailed, specific data on operational scenarios and on each of the alternatives. In other words, early effectiveness modeling during architecture trade studies may take a functional view, while later modeling during design trade studies may shift to a product view. This is not unlike the progression of the cost modeling from simple parametrics to more detailed grass-roots estimates.

The system engineer must tailor the effectiveness measure and its measurement method to the resolution of the system design.

Practical Pitfalls in Using Effectiveness Measuresin Trade Studies

Obtaining trustworthy relationships among the system performance or technical attributes and system effectiveness is often difficult. Purported effectiveness models often treat only one or two of the facets described in the text. Supporting models may not have been properly integrated. Data are often incomplete or unreliable. Under these conditions, reported system effectiveness results for different alternatives in a trade study may show only the relative effectiveness of the alternatives within the context of the trade study. The system engineer must recognize the practical pitfalls of using such results.


As the system design and operational concept mature, effectiveness estimates should mature as well. The system engineer must be able to provide realistic estimates of system effectiveness and its underlying performance and technical attributes not only for trade studies, but for project management through the tracking of TPMs.

This discussion so far has been predicated on one accepted measure of system effectiveness. The job of computing system effectiveness is considerably easier when the system engineer has a single measure and measurement method (model). But, as with costs, a single measure may not be possible. When it does not exist, the system engineer must fall back to computing the critical high-level, but nevertheless still underlying, system performance or technical attributes. In effect, these high-level performance or technical attributes are elevated to the status of measures of (system) effectiveness (MoEs) for trade study purposes, even though they do not represent a truly comprehensive measure of system effectiveness.

These high-level performance or technical attributes might represent one of the facets described above, or they may be only components of one. They are likely to require knowledge or estimates of lower-order performance or technical attributes. Figure 26 shows how system effectiveness might look in a hierarchical tree structure. This figure corresponds, in some sense, to Figure 25 on life-cycle cost, though rolling up by simple addition obviously does not apply to system effectiveness.

Lastly, it must be recognized that system effectiveness, like system cost, is uncertain. This fact is given a fuller treatment in Section 5.4.

5.3.2 NASA System Effectiveness Measures

The facets of system effectiveness in Figure 26 are generic. Not all of them apply to every system. The system engineer must determine which performance or technical attributes make up system effectiveness, and how they should be combined, on a system-by-system basis. Table 6 provides examples of how each facet of system effectiveness could be interpreted for specific classes of NASA flight systems. No attempt has been made to enumerate all possible performance or technical attributes, or to fill in each possible entry in the table; its purpose is illustrative only.

For many of the systems shown in the table, system effectiveness is largely driven by continual (or continuous) operations at some level of output over a period of years. This is in contradistinction to an Apollo-type project, in which the effectiveness is largely determined by the successful completion of a single flight within a clearly specified time horizon. The measures of effectiveness in these two cases are correspondingly different. In the former case (with its lengthy operational phase and continual output), system effectiveness measures need to incorporate quantitative measures of availability. The system engineer accomplishes that through the involvement of the specialty engineers and the application of specialized models described in the next section.

5.3.3 Availability and Logistics Supportability Modeling

One reason for emphasizing availability and logistics supportability in this chapter is that future NASA systems are less likely to be of the "launch-and-logistically-forget" type. To the extent that logistics support considerations are major determinants of system effectiveness during operations, it is essential that logistics support be thoroughly analyzed in trade studies during the earlier phases of the project life cycle. A second reason is that availability and logistics supportability have been rich domains for methodology and model development. The increasing sophistication of the methods and models has allowed the system-wide effects of different support alternatives to be more easily predicted. In turn, this means more opportunities to improve system effectiveness (or to lower life-cycle cost) through the integration of logistics considerations in the system design.

Availability models relate system design and integrated logistics support technical attributes to the availability component of the system effectiveness measure. This type of model predicts the resulting system availability as a function of the system component failure and repair rates and the logistics support resources and policies. (See sidebar on measures of availability.)

Logistics supportability models relate system design and integrated logistics support technical attributes to one or more "resource requirements" needed to operate the system in the accomplishment of its goals and objectives. This type of model focuses, for example, on the system maintenance requirements, number and location of spares, processing facility requirements, and even optimal inspection policies. In the past, logistics supportability models have typically been based on measures pertaining to that particular resource or function alone. For example, a system's desired inventory of spares was determined on the basis of meeting measures of supply efficiency, such as percent of demands met. This tended to lead to suboptimal resource requirements from the system's point of view. More modern models of logistics supportability base resource requirements on the system availability effects. (See sidebar on logistics supportability models.)


Measures of Availability

Availability can be calculated as the ratio of operating time to total time, where the denominator, total time, can be divided into operating time ("uptime") and "downtime." System availability depends on any factor that contributes to downtime. Underpinning system availability, then, are the reliability and maintainability attributes of the system design, but other logistics support factors can also play significant roles. If these attributes and support factors, and the operating environment of the system, are unchanging, then several measures of steady-state availability can be readily calculated. (When steady-state conditions do not apply, availability can be calculated, but is made considerably more complex by the dynamic nature of the underlying conditions.) The system engineer should be familiar with the equations below describing three concepts of steady-state availability for systems that can be repaired.

• Inherent availability = MTTF / (MTTF + MTTR)

• Achieved availability = MTTMA / (MTTMA + MMT)

• Operational availability = MTTMA / (MTTMA + MMT + MLDT) = MTTMA / (MTTMA + MDT)

where:
MTTF = Mean time to failure
MTTR = Mean time to repair (corrective)
MTTMA = Mean time to a maintenance action (corrective and preventive)
MMT = Mean (active) maintenance time (corrective and preventive)
MLDT = Mean logistics delay time (includes downtime due to administrative delays, and waiting for spares, maintenance personnel, or supplies)
MDT = Mean downtime (includes downtime due to (active) maintenance and logistics delays)

Availability measures can also be calculated at a point in time, or as an average over a period of time. A further, but manageable, complication in calculating availability takes into account degraded modes of operation for redundant systems. For systems that cannot be repaired, availability and reliability are equal. (See sidebar on page 92.)
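
A minimal sketch of the three steady-state measures above; all parameter values (in hours) are hypothetical.

    # Steady-state availability measures; all values hypothetical (hours).
    mttf, mttr = 2000.0, 10.0    # time to failure; corrective repair time
    mttma, mmt = 1500.0, 8.0     # any maintenance action; active maintenance time
    mldt = 40.0                  # logistics delay time

    print("Inherent:   ", mttf / (mttf + mttr))           # ~0.9950
    print("Achieved:   ", mttma / (mttma + mmt))          # ~0.9947
    print("Operational:", mttma / (mttma + mmt + mldt))   # ~0.9690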


Some availability models can be used to determine a logistics resource requirement by computing the quantity of that resource needed to achieve a particular level of availability, holding other logistics resources fixed. The line between availability models and logistics supportability models can be inexact. Some logistics supportability models may deal with a single resource; others may deal with several resources simultaneously. They may take the form of a simple spreadsheet or a large computer simulation. Greater capability from these types of models is generally achieved only at greater expense in time and effort. The system engineer must determine what availability and logistics supportability models are needed for each new system, taking into account the unique operations and logistics concepts and environment of that system. Generally both types of models are needed in the trade study process to transform specialty engineering data into forms more useful to the system engineer. Which availability and logistics supportability models are used during each phase of the project life cycle should be identified in the SEMP.

Another role for these models is to provide quantitative requirements for incorporation into the system's formal Integrated Logistics Support (ILS) Plan.

Figure 27 shows the role of availability and logistics supportability models in the trade study process.

Essential to obtaining useful products from any availability and/or logistics supportability model is the collection of high quality specialty engineering data for each alternative system design. (Some of these data are also used in probabilistic risk assessments performed in risk management activities.) The system engineer must coordinate efforts to collect and maintain these data in a format suitable to the trade studies being performed. This task is made considerably easier by using digital databases in relational table formats such as the one currently under development for MIL-STD-1388-2B.

Continuing availability and logistics supportability modeling and data collection through the operations phase permits operations trend analysis and assessment on the system (e.g., is system availability declining or improving?). In general, this kind of analysis and assessment is extremely useful in identifying potential areas for product improvement, such as greater system reliability, lower cost logistics support, and better maintenance and spares policies. (See Section 6.5 for more on Integrated Logistics Support.)


Logistics Supportability Models: Two Examples

Logistics supportability models utilize the reliability and maintainability attributes of a particular system design, and other logistics system variables, to quantify the demands (i.e., requirements) for scarce logistics resources during operations. The models described here were both developed for Space Station Freedom. One is a stochastic simulation in which each run is a "trial" drawn from a population of outcomes. Multiple runs must be made to develop accurate estimates of means and variances for the variables of interest. The other is a deterministic analytic model. Logistics supportability models may be of either type. These two models deal with the unique logistics environment of Freedom.

SIMSYLS is a comprehensive stochastic simulation of on-orbit maintenance and logistics resupply of Freedom. It provides estimates of the demand (means and variances) for maintenance resources such as EVA and IVA, as well as for logistics upmass and downmass resources. In addition to the effects of actual and false ORU failures, the effects of various other stochastic events such as launch vehicle and ground repair delays can be quantified. SIMSYLS also produces several measures of operational availability. The model can be used in its availability mode or in its resource requirements mode.

M-SPARE is an availability-based optimal spares model. It determines the mix of ORU spares at any spares budget level that maximizes station availability, defined as the probability that no ORU had more demands during a resupply cycle than it had spares to satisfy those demands. Unlike SIMSYLS, M-SPARE's availability measure deals only with the effect of spares. M-SPARE starts with a target availability (or budget) and determines the optimal inventory, a capability not possessed by SIMSYLS.

For more detail, see DeJulio, E., SIMSYLS User's Guide, Boeing Aerospace Operations, February 1990, and Kline, Robert, et al., The M-SPARE Model, LMI, NS901R1, March 1990.


5.4 Probabilistic Treatment of Cost and Effectiveness

A probabilistic treatment of cost and effectiveness is needed when point estimates for these outcome variables do not "tell the whole story" -- that is, when information about the variability in a system's projected cost and effectiveness is relevant to making the right choices about that system. When these uncertainties have the potential to drive a decision, the systems or program analyst must do more than just acknowledge that they exist. Some useful techniques for modeling the effects of uncertainty are described below in Section 5.4.2. These techniques can be applied to both cost models and effectiveness models, though the majority of examples given are for cost models.


5.4.1 Sources of Uncertainty in Models

There are a number of sources of uncertainty in the kinds of models used in systems analysis. Briefly, these are:

• Uncertainty about the correctness of the model's structural equations, in particular whether the functional form chosen by the modeler is the best representation of the relationship between an equation's inputs and output

• Uncertainty in model parameters, which are, in a very real sense, also chosen by the modeler; this uncertainty is evident for model coefficients derived from statistical regression, but even known physical constants are subject to some uncertainty due to experimental or measurement error

• Uncertainty in the true value of model inputs (e.g., estimated weight or thermal properties) that describe a new system.

As an example, consider a cost model consisting of one or more statistical CERs. In the early phases of the project life cycle (Phases A and B), this kind of model is commonly used to provide a cost estimate for a new NASA system. The project manager needs to understand what confidence he/she can have in that estimate.

One set of uncertainties concerns whether the input variables (for example, weight) are the proper explanatory variables for cost, and whether a linear or log-linear form is more appropriate. Model misspecification is by no means rare, even for strictly engineering relationships.

Another set of model uncertainties that contribute to the uncertainty in the cost estimate concerns the model coefficients that have been estimated from historical data. Even in a well-behaved statistical regression equation, the estimated coefficients could have resulted from chance alone, and therefore cost predictions made with the model have to be stated in probabilistic terms. (Fortunately, the upper and lower bounds on cost for any desired level of confidence can be easily calculated. Presenting this information along with the cost estimate is strongly recommended.)

The above uncertainties are present even if the cost model inputs that describe a new system are precisely known in Phase A. This is rarely true; more often, model inputs are subject to considerable guesswork early in the project life cycle. The uncertainty in a model input can be expressed by attributing a probability distribution to it. This applies whether the input is a physical measure such as weight, or a subjective measure such as a "complexity factor." Model input uncertainty can extend even to a grass-roots cost model that might be used in Phases C and D. In that case, the source of uncertainty is the failure to identify and capture the "unknown-unknowns." The model inputs -- the costs estimated by each performing organization -- can then be thought of as variables having various probability distributions.

5.4.2 Modeling Techniques for Handling Uncertainty

The effect of model uncertainties is to induce uncertainty in the model's output. Quantifying these uncertainties involves producing an overall probability distribution for the output variable, either in terms of its probability density function (or mass function for discrete output variables) or its cumulative distribution function. (See sidebar on cost S-curves.) Some techniques for this are:

The Cost S-Curve

The cost S-curve gives the probability of a project's cost not exceeding a given cost estimate. This probability is sometimes called the budget confidence level. This curve aids in establishing the amount of contingency and Allowance for Program Adjustment (APA) funds to set aside as a reserve against risk.

In the example S-curve described here, a project's cost commitment provides only a 40 percent level of confidence; with reserves, the level is increased to 50 percent. The steepness of the S-curve tells the project manager how much the level of confidence improves when a small amount of reserves is added.

Note that an Estimate at Completion (EAC) S-curve could be used in conjunction with the risk management approach described for TPMs (see Section 4.9.2), as another method of cost status reporting and assessment.


• Analytic solution
• Decision analysis
• Monte Carlo simulation.

Analytic Solution. When the structure of a model and its uncertainties permit, a closed-form analytic solution for the required probability density (or cumulative distribution) function is sometimes feasible. Examples can be found in simple reliability models (see Figure 29).

Decision Analysis. This technique, which was discussed in Section 4.6, also can produce a cumulative distribution function, though it is necessary to discretize any continuous input probability distributions. The more probability intervals that are used, the greater the accuracy of the results, but the larger the decision tree. Furthermore, each uncertain model input adds more than linear computational complexity to that tree, making this technique less efficient in many situations than Monte Carlo simulation, described next.

Monte Carlo Simulation. This technique is often used to calculate an approximate solution to a stochastic model that is too complicated to be solved by analytic methods alone. A Monte Carlo simulation is a way of sampling input points from their respective domains in order to estimate the probability distribution of the output variable. In a simple Monte Carlo analysis, a value for each uncertain input is drawn at random from its probability distribution, which can be either discrete or continuous. This set of random values, one for each input, is used to compute the corresponding output value, as shown in Figure 28. The entire process is then repeated k times. These k output values constitute a random sample from the probability distribution over the output variable induced by the input probability distributions.
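
A minimal sketch of a simple Monte Carlo cost analysis follows; the cost model and both input distributions are hypothetical. Sorting the samples directly yields points on a cost S-curve like the one described in the earlier sidebar.

    # Simple Monte Carlo over a hypothetical weight-based cost model.
    import random

    def cost_model(weight, complexity):
        return 0.05 * weight ** 1.1 * complexity     # hypothetical CER, $M

    random.seed(1)
    samples = sorted(
        cost_model(random.gauss(300.0, 30.0),            # uncertain weight, kg
                   random.triangular(0.8, 1.6, 1.0))     # uncertain complexity
        for _ in range(10000))

    for p in (0.2, 0.5, 0.8):          # empirical points on the cost S-curve
        print(f"{p:.0%} confidence: {samples[int(p * len(samples))]:.1f}")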

For an example of the usefulness of this technique, recall Figures 2 (in Chapter 2) and 24 (this chapter), which show the projected cost and effectiveness of three alternative design concepts as probability "clouds." These clouds may be reasonably interpreted as the result of three system-level Monte Carlo simulations. The information displayed by the clouds is far greater than that embodied in point estimates for each of the alternatives.

An advantage of the Monte Carlo technique is that standard statistical tests can be applied to estimate the precision of the resulting probability distribution. This permits a calculation of the number of runs (samples) needed to obtain a given level of precision. If computing time or costs are a significant constraint, there are several ways of reducing them through more deliberate sampling strategies. See MSFC-HDBK-1912, Systems Engineering (Volume 2), for a discussion of these strategies.

Commercial software to perform Monte Carlo simulation is available. These include add-in packages for some of the popular spreadsheets, as well as packages that allow the systems or program analyst to build an entire Monte Carlo model from scratch on a personal computer. These packages generally perform the needed computations in an efficient manner and provide graphical displays of the results, which is very helpful in communicating probabilistic information. For large applications of Monte Carlo simulation, such as those used in addressing logistics supportability, custom software may be needed. (See the sidebar on logistics supportability models.)

Monte Carlo simulation is a fairly easy technique to apply. Also, what a particular combination of uncertainties means can often be communicated more clearly to managers. A powerful example of this technique applied to NASA flight readiness certification is found in Moore, Ebbeler, and Creager, who combine Monte Carlo simulation with traditional reliability and risk analysis techniques.


6 Integrating Engineering Specialties Into the Systems Engineering Process

This chapter discusses the basic concepts, techniques, and products of some of the specialty engineering disciplines, and how they fit into the systems engineering process.

6.1 Role of the Engineering Specialties

Specialty engineers support the systems engineering process by applying specific knowledge and analytic methods from a variety of engineering specialty disciplines to ensure that the resulting system is actually able to perform its mission in its operational environment. These specialty engineering disciplines typically include reliability, maintainability, integrated logistics, test, fabrication/production, human factors, quality assurance, and safety engineering. One view of the role of the engineering specialties, then, is mission assurance. Part of the system engineer's job is to see that these mission assurance functions are coherently integrated into the project at the right times and that they address the relevant issues.

Another idea used to explain the role of the engineering specialties is the "Design-for-X" concept. The X stands for any of the engineering "ilities" (e.g., reliability, testability, producibility, supportability) that the project-level system engineer needs to consider to meet the project's goals/objectives. While the relevant engineering specialties may vary on NASA projects by virtue of their diverse nature, some are always needed. It is the system engineer's job to identify the particular engineering specialties needed for his/her tailored Product Development Team (PDT). The selected organizational approach to integrating the engineering specialties into the systems engineering process and the technical effort to be made should be summarized in the SEMP (Part III). Depending on the nature and scope of the project, the technical effort may also need more detailed documentation in the form of individual specialty engineering program plans.

As part of the technical effort, specialty engineers often perform tasks that are common across disciplines. Foremost, they apply specialized analytical techniques to create information needed by the project manager and system engineer. They also help define and write system requirements in their areas of expertise, and they review data packages, engineering change requests (ECRs), test results, and documentation for major project reviews. The project manager and/or system engineer need to ensure that the information and products so generated add value to the project commensurate with their cost.

The specialty engineering technical effort should also be well integrated both in time and content, not separate organizations and disciplines operating in near isolation (i.e., more like a basketball team, rather than a golf foursome). This means, as an example, that the reliability engineer's FMECA (or equivalent analysis) results are passed at the right time to the maintainability engineer, whose maintenance analysis is subsequently incorporated into the logistics support analysis (LSA). LSA results, in turn, are passed to the project-level system engineer in time to be combined with other cost and effectiveness data for a major trade study. Concurrently, the reliability engineer's FMECA results are also passed to the risk manager to incorporate critical items into the Critical Items List (CIL) when deemed necessary, and to alert the PDT to develop appropriate design or operations mitigation strategies. The quality assurance engineer's effort should be integrated with the reliability engineer's so that, for example, component failure rate assumptions in the latter's reliability model are achieved or bettered by the actual (flight) hardware. This kind of process harmony and timeliness is not easily realized in a project; it nevertheless remains a goal of systems engineering.

6.2 Reliability

Reliability can be defined as the probability that a device, product, or system will not fail for a given period of time under specified operating conditions. Reliability is an inherent system design characteristic. As a principal contributing factor in operations and support costs and in system effectiveness (see Figure 26), reliability plays a key role in determining the system's cost-effectiveness.

6.2.1 Role of the Reliability Engineer

Reliability engineering is a major specialty discipline that contributes to the goal of a cost-effective system. This is primarily accomplished in the systems engineering process through an active role in implementing specific design features to ensure that the system can perform in the predicted physical environments throughout the mission, and by making independent predictions of system reliability for design trades and for (test program, operations, and integrated logistics support) planning.

The reliability engineer performs several tasks, which are explained in more detail in NHB 5300.4(1A-1), Reliability Program Requirements for Aeronautical and Space System Contractors.


Reliability Relationships

The system engineer should be familiar with the basic reliability parameters and mathematical relationships for continuously operated systems. If R(t) denotes the probability that the system operates without failure through time t, then the failure probability density is f(t) = -dR(t)/dt, the hazard (or failure) rate is λ(t) = f(t)/R(t), R(t) = exp(-∫ λ(u) du) with the integral taken from 0 to t, and the Mean Time To Failure is the integral of R(t) from 0 to infinity.

Many reliability analyses assume that failures are random, so that λ(t) = λ and the failure probability density follows an exponential distribution. In that case, R(t) = exp(-λt), and the Mean Time To Failure (MTTF) = 1/λ. Another popular assumption that has been shown to apply to many systems is a failure probability density that follows a Weibull distribution; in that case, the hazard rate λ(t) satisfies a simple power law as a function of t. With the proper choice of Weibull parameters, the constant hazard rate can be recovered as a special case. While these (or similar) assumptions may be analytically convenient, a system's actual hazard rate may be less predictable. (Also see the sidebar on the bathtub curve.)
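
A minimal sketch of the exponential and Weibull forms just described; the parameter values are hypothetical. With a Weibull shape parameter of 1, the exponential (constant hazard rate) case is recovered, as the sidebar notes.

    # Exponential and Weibull reliability functions; parameters hypothetical.
    import math

    def r_exponential(t, lam):
        return math.exp(-lam * t)                 # MTTF = 1/lam

    def r_weibull(t, scale, shape):
        return math.exp(-((t / scale) ** shape))

    lam = 1.0 / 2000.0                            # failures per hour
    print(r_exponential(1000.0, lam))             # ~0.607
    print(r_weibull(1000.0, 2000.0, 1.0))         # same as exponential, ~0.607
    print(r_weibull(1000.0, 2000.0, 2.0))         # increasing (wearout) hazard, ~0.779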

In brief, these tasks include:

• Developing and executing a reliability program plan

• Developing and refining reliability prediction models, including associated environmental (e.g., vibration, acoustic, thermal, and EMI/EMC) models, and predictions of system reliability. These models and predictions should reflect applicable experience from previous projects.

• Establishing and allocating reliability goals and environmental design requirements

• Supporting design trade studies covering such issues as the degree of redundancy and reliability vs. maintainability

• Supporting risk management by identifying design attributes likely to result in reliability problems and recommending appropriate risk mitigations

• Developing reliability data for timely use in the project's maintainability and ILS programs

• Developing environmental test requirements and specifications for hardware qualification. The reliability engineer may provide technical analysis and justification for eliminating or relaxing qualification test requirements. These activities are usually closely coordinated with the project's verification program.

• Performing analyses on qualification test data to verify reliability predictions and validate the system reliability prediction models, and to understand and resolve anomalies

• Collecting reliability data under actual operations conditions as a part of overall system validation.

The reliability engineer works with other specialty engineers (e.g., the quality assurance, maintainability, verification, and producibility engineers) on system reliability issues. On small projects, the reliability engineer may perform some or all of these other jobs as well.

6.2.2 Reliability Program Planning

The reliability program for a project describes what activities will be undertaken in support of reliability engineering. The reliability engineer develops a reliability program considering its cost, schedule, and risk implications. This planning should begin during Phase A. The project manager/system engineer must work with the reliability engineer to develop an appropriate reliability program.

Lunar Excursion Module (LEM) Reliability

Part of the reliability engineer's job is to develop an understanding of the underlying physical and human-induced causes of failures, rather than assuming that all failures are random. According to Joseph Gavin, Director of the LEM Program at Grumman, "after about 10 years of testing of individual [LEM] components and subsystems, [NASA] found something like 14,000 anomalies, only 22 of which escaped definite understanding."


The Bathtub Curve

For many systems, the hazard rate function looks like the classic "bathtub curve." Because of burn-in failures and/or inadequate quality assurance practices, λ(t) is initially high, but gradually decreases during the infant failure rate period. During the useful life period, λ(t) remains constant, reflecting randomly occurring failures. Later, λ(t) begins to increase because of wearout failures. The exponential reliability formula applies only during the useful life period.

Many factors need to be considered in developing this program. These factors include:

• NASA payload classification. The reliability program's analytic content and its documentation of problems and failures are generally more extensive for a Class A payload than for a Class D one. (See Appendix B.3 for classification guidelines.)

• Mission environmental risks. Several mission environmental models may need to be developed. For flight projects, these include ground (transportation and handling), launch, on-orbit (Earth or other), and planetary environments. In addition, the reliability engineer must address design and verification requirements for each such environment.

• Degree of design inheritance and hardware/software reuse.

The reliability engineer should document the reliability program in a reliability program plan, which should be summarized in the SEMP (Part III) and updated as needed through the project life cycle; the summary may be sufficient for small projects.


6.2.3 Designing Reliable Space-Based Systems

Designing reliable space-based systems has always been a goal for NASA, and many painful lessons have been learned along the way. The system engineer should be aware of some basic design approaches for achieving reliability. These basic approaches include fault avoidance, fault tolerance, and functional redundancy.

Fault Avoidance. Fault avoidance, a joint objective of the reliability engineer and quality assurance engineer (see Section 6.3), includes efforts to:

• Provide design margins, or use appropriate derating guidelines, if available

• Use high-quality parts where needed. (Failure rates for Class S parts are typically one-fourth of those procured to general military specifications.)

• Consider materials and electronics packaging carefully

• Conduct formal inspections of manufacturing facilities, processes, and documentation

• Perform acceptance testing or inspections on all parts when possible.

Fault Tolerance. Fault tolerance is a system design characteristic associated with the ability of a system to continue operating after a component failure has occurred. It is implemented by having design redundancy and a fault detection and response capability. Design redundancy can take several forms, some of which are represented in Figure 29 along with their reliability relationships.

Functional Redundancy. Functional redundancy is a system design and operations characteristic that allows the system to respond to component failures in a way sufficient to meet mission requirements. This usually involves operational work-arounds and the use of components in ways that were not originally intended. As an example, a repair of the damaged Galileo high-gain antenna was impossible, but a work-around was accomplished by software fixes that further compressed the science data and images; these were then returned through the low-gain antenna, although at a severely reduced data rate.

These three approaches have different costs associated with their implementation: Class S parts are typically more expensive, while redundancy adds mass, volume, costs, and complexity to the system. Different approaches to reliability may therefore be appropriate for different projects. In order to choose the best balance among approaches, the system engineer must understand the system-level effects and life-cycle cost of each approach. To achieve this, trade study methods of Section 5.1 should be used in combination with reliability analysis tools and techniques.

6.2.4 Reliability Analysis Tools and Techniques

Reliability Block Diagrams. Reliability block diagrams are used to portray the manner in which the components of a complex system function together. These diagrams compactly describe how components are connected. Basic reliability block diagrams are shown in Figure 29.

Fault Trees and Fault Tree Analysis. A fault tree is a graphical representation of the combination of faults that will result in the occurrence of some (undesired) top event. It is usually constructed during a fault tree analysis, which is a qualitative technique to uncover credible ways the top event can occur. In the construction of a fault tree, successive subordinate failure events are identified and logically linked to the top event. The linked events form a tree structure connected by symbols called gates, some basic examples of which appear in the fault tree shown in Figure 30. Fault trees and fault tree analysis are often precursors to a full probabilistic risk assessment (PRA). For more on this technique, see the U.S. Nuclear Regulatory Commission Fault Tree Handbook.


Reliability Models. Reliability models are used to predict the reliability of alternative architectures/designs from the estimated reliability of each component. For simple systems, reliability can often be calculated by applying the rules of probability to the various components and "strings" identified in the reliability block diagram. (See Figure 29.) For more complex systems, the method of minimal cut sets, which relies on the rules of Boolean algebra, is often used to evaluate a system's fault tree. When individual component reliability functions are themselves uncertain, Monte Carlo simulation methods may be appropriate. These methods are described in reliability engineering textbooks, and software for calculating reliability is widely available. For a compilation of models/software, see D. Kececioglu, Reliability, Availability, and Maintainability Software Handbook.
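
As a minimal sketch of the simple-system case, the fragment below evaluates series and parallel ("string") combinations of independent components; all component reliabilities are hypothetical.

    # Series/parallel reliability for independent components; values hypothetical.
    def series(rs):
        """All components in the string must work."""
        r = 1.0
        for x in rs:
            r *= x
        return r

    def parallel(rs):
        """The system works if at least one redundant string works."""
        q = 1.0
        for x in rs:
            q *= 1.0 - x
        return 1.0 - q

    string_r = series([0.99, 0.98, 0.97])    # one string of three, ~0.941
    print(parallel([string_r, string_r]))    # two redundant strings, ~0.9965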

FMECAs and FMEAs. Failure Modes, Effects, and Criticality Analysis (FMECA) and Failure Modes and Effects Analysis (FMEA) are specialized techniques for hardware failure and safety risk identification and characterization. (Also see Section 4.6.2.)

Problem/Failure Reports (P/FRs). The reliability engineer uses the Problem/Failure Reporting System (or an approved equivalent) to report reliability problems and nonconformances encountered during qualification and acceptance testing (Phase D) and operations (Phase E).

6.3 Quality Assurance

Even with the best of available designs, hardware fabrication (and software coding) and testing are subject to the vagaries of Nature and human beings. The system engineer needs to have some confidence that the system actually produced and delivered is in accordance with its functional, performance, and design requirements. Quality Assurance (QA) provides an independent assessment to the project manager/system engineer of the items produced and processes used during the project life cycle. The quality assurance engineer typically acts as the system engineer's eyes and ears in this context. The project manager/system engineer must work with the quality assurance engineer to develop a quality assurance program (the extent, responsibility, and timing of QA activities) tailored to the project it supports. As with the reliability program, this largely depends on the NASA payload classification (see Appendix B.3).

6.3.1 Role of the Quality Assurance Engineer

The quality assurance engineer performs several tasks, which are explained in more detail in NHB 5300.4(1B), Quality Program Provisions for Aeronautical and Space System Contractors. In brief, these tasks include:

• Developing and executing a quality assurance program plan

• Ensuring the completeness of configuration management procedures and documentation, and monitoring the fate of ECRs/ECPs (see Section 4.7)

• Participating in the evaluation and selection of procurement sources

• Inspecting items and facilities during manufacturing/fabrication, and items delivered to NASA field centers

• Ensuring the adequacy of personnel training and technical documentation to be used during manufacturing/fabrication

• Ensuring verification requirements are properly specified, especially with respect to test environments, test configurations, and pass/fail criteria

• Monitoring qualification and acceptance tests to ensure compliance with verification requirements and test procedures, and to ensure that test data are correct and complete



• Monitoring the resolution and close-out of nonconformances and Problem/Failure Reports (P/FRs)

• Verifying that the physical configuration of the system conforms to the "build-to" (or "code-to") documentation approved at CDR

• Collecting and maintaining QA data for subsequent failure analyses.

The quality assurance engineer also participates in major reviews (primarily SRR, PDR, CDR, and FRR) on issues of design, materials, workmanship, fabrication and verification processes, and other characteristics that could degrade product system quality.

6.3.2 Quality Assurance Tools and Techniques

PCA/FCA. The Physical Configuration Audit (PCA) verifies that the physical configuration of the system corresponds to the approved "build-to" (or "code-to") documentation. The Functional Configuration Audit (FCA) verifies that the acceptance verification (usually, test) results are consistent with the approved verification requirements. (See Section 4.8.4.)

In-Process Inspections. The extent, timing, and location of in-process inspections are documented in the quality assurance program plan. These should be conducted in consonance with the manufacturing/fabrication and verification program plans. (See Sections 6.6 and 6.7.)

QA Survey. A QA survey examines the operations, procedures, and documentation used in the project, and evaluates them against established standards and benchmarks. Recommendations for corrective actions are reported to the project manager.

Material Review Board. The Material Review Board (MRB), normally established by the project manager and chaired by the project-level quality assurance engineer, performs formal dispositions on nonconformances.

6.4 Maintainability

Maintainability is a system design characteristic associated with the ease and rapidity with which the system can be retained in operational status, or safely and economically restored to operational status following a failure. Often used (though imperfect) measures of maintainability include mean maintenance downtime, maintenance effort (work hours) per operating hour, and annual maintenance cost. However measured, maintainability arises from many factors: the system hardware and software design, the required skill levels of maintenance personnel, adequacy of diagnostic and maintenance procedures, test equipment effectiveness, and the physical environment under which maintenance is performed.
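To make these measures concrete, the following sketch (in Python, with hypothetical component values; nothing here is prescribed by this handbook) computes system MTBF, MTTR, maintenance effort per operating hour, and inherent availability from LRU-level failure and repair data:

    # Illustrative only: hypothetical LRU/ORU data.
    # Each entry: (name, failure rate per operating hour, mean repair hours)
    lrus = [
        ("power supply", 1.0e-4, 4.0),
        ("star tracker", 5.0e-5, 6.5),
        ("transponder",  2.0e-4, 3.0),
    ]

    total_rate = sum(rate for _, rate, _ in lrus)   # system failure rate
    mtbf = 1.0 / total_rate                         # mean time between failures

    # System MTTR: repair times weighted by how often each LRU fails
    mttr = sum(rate * repair for _, rate, repair in lrus) / total_rate

    effort = total_rate * mttr           # maintenance work hours per operating hour
    availability = mtbf / (mtbf + mttr)  # inherent (steady-state) availability

    print(f"MTBF = {mtbf:.0f} h, MTTR = {mttr:.2f} h")
    print(f"Effort = {effort:.5f} work hours per operating hour")
    print(f"Inherent availability = {availability:.6f}")

This simple series model assumes independent failures and restoration by LRU replacement; real estimates would come from the maintainability models and LSA data discussed later in this section.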

6.4.1 Role of the Maintainability Engineer

Maintainability engineering is another major specialty discipline that contributes to the goal of a cost-effective system. This is primarily accomplished in the systems engineering process through an active role in implementing specific design features to facilitate safe maintenance actions in the predicted physical environments, and through a central role in developing the integrated logistics support (ILS) system. (See Section 6.5 on ILS.)

The maintainability engineer performs several tasks, which are explained in more detail in NHB 5300.4(1E), Maintainability Program Requirements for Space Systems. In brief, these tasks include:

• Developing and executing a maintainability program plan. This is usually done in conjunction with the ILS program plan.

• Developing and refining the system maintenance concept as a part of the ILS concept

• Establishing and allocating maintainability requirements. These requirements should be consistent with the maintenance concept and traceable to system-level availability objectives.

• Performing an engineering design analysis to identify maintainability design deficiencies

• Performing analyses to quantify the system's maintenance resource requirements, and documenting them in the Maintenance Plan

• Verifying that the system's maintainability requirements and maintenance-related aspects of the ILS requirements are met

• Collecting maintenance data under actual operations conditions as part of ILS system validation.

Many of the analysis tasks above are accomplished as part of the Logistics Support Analysis (LSA), described in Section 6.5.3. The maintainability engineer also participates in and contributes to major project reviews on the above items as appropriate to the phase of the project.


6.4.2 The System Maintenance Concept and Maintenance Plan

As the system operations concept and user requirements evolve, so does the ILS concept. Central to the latter is the system maintenance concept. It serves as the basis for establishing the system's maintainability design requirements and its logistics support resource requirements (through the LSA process). In developing the system maintenance concept, it is useful to consider the mission profile, how the system will be used, its operational availability goals, anticipated useful life, and physical environments.

Traditionally, a description of the system maintenance concept is hardware-oriented, though this need not always be so. The system maintenance concept is typically described in terms of the anticipated levels of maintenance (see sidebar on maintenance levels), general repair policies regarding corrective and preventive maintenance, assumptions about supply system responsiveness, the availability of new or existing facilities, and the maintenance environment. Initially, the system maintenance concept may be based on experience with similar systems, but it should not be exempt from trade studies early in the project life cycle. These trade studies should focus on the cost-effectiveness of alternative maintenance concepts in the context of overall system optimization.

Maintenance Levels for Space Station Alpha

As with many complex systems, the maintenance concept for Alpha calls for three maintenance levels: organizational, intermediate, and depot (or vendor). The system engineer should be familiar with these terms and the basic characteristics associated with each level. As an example, consider Alpha:

Level            Work Performed                                          Spares
---------------  ------------------------------------------------------  ---------------------
Organizational   On-orbit crew performs ORU remove-and-replace,          Few.
                 visual inspections, minor servicing and calibration.
Intermediate     KSC maintenance facility repairs ORUs, performs         Extensive.
                 detailed inspections, servicing, calibrations, and
                 some modifications.
Depot/Vendor     Factory performs major overhauls, modifications,        More extensive, or
                 and complex calibrations.                               fabricated as needed.

The Maintenance Plan, which appears as a major technical section in the Integrated Logistics Support Plan (ILSP), documents the system maintenance concept, its maintenance resource requirements, and supporting maintainability analyses. The Maintenance Plan provides other inputs to the ILSP in the areas of spares, maintenance facilities, test and support equipment, and, for each level of maintenance, it provides maintenance training programs, facilities, technical data, and aids. The supporting analyses should establish the feasibility and credibility of the Maintenance Plan with aggregate estimates of corrective and preventive maintenance workloads, initial and recurring spares provisioning requirements, and system availability. Aggregate estimates should be the result of using best-practice maintainability analysis tools and detailed maintainability data suitable for the LSA. (See Section 6.5.3.)

6.4.3 Designing Maintainable Space-Based Systems

Designing NASA space-based systems for maintainability will be even more important in the future. For that reason, the system engineer should be aware of basic design features that facilitate IVA and EVA maintenance. Some examples of good practice include:

• Use coarse and fine installation alignment guides as necessary to assure ease of Orbital Replacement Unit (ORU) installation and removal

• Provide minimum sweep clearances between interface tools and hardware structures; include adequate clearance envelopes for those maintenance activities where access to an opening is required

• Define reach envelopes, crew loads/forces, and general work constraints for IVA and EVA maintenance tasks

• Consider corrective and preventive maintenance task frequencies in the location of ORUs

• Allow replacement of an ORU without removal of other ORUs

• Choose a system thermal design that precludes degradation or damage to any other ORU during ORU replacement or maintenance

• Simplify ORU handling to reduce the likelihood of mishandling equipment or parts

• Encourage commonality, standardization, and interchangeability of tooling and hardware items to ensure a minimum number of items

• Select ORU fasteners to minimize accessibility time consistent with good design practice


• Design the ORU surface structure so that no safety hazard is created during the removal, replacement, test, or checkout of any ORU during IVA or EVA maintenance; include cautions/warnings for mission- or safety-critical ORUs

• Design software to facilitate modifications, verifications, and expansions

• Allow replacement of software segments on-line without disrupting mission- or safety-critical functions

• Allow on- or off-line software modification, replacement, or verification without introducing hazardous conditions.

6.4.4 Maintainability Analysis Tools and Techniques

Maintenance Functional Flow Block Diagrams (FFBDs). Maintenance FFBDs are used in the same way as system FFBDs, described in Appendix B.7.1. At the top level, maintenance FFBDs supplement and clarify the system maintenance concept; at lower levels, they provide a basis for the LSA's maintenance task inventory.

Maintainability Lessons Learned from HST Repair (STS-61)

When asked (for this handbook!) what maintainability lessons were learned from their mission, the STS-61 crew responded with the following:

• The maintainability considerations designed into the Hubble Space Telescope (HST) worked.

• For spacecraft in LEO, don't preclude a servicing option; this means, for example, including a grapple fixture even though it has a cost and mass impact.

• When servicing is part of the maintenance concept, make sure that it's applied throughout the spacecraft. (The HST Solar Array Electronics Box, for example, was not designed to be replaced, but had to be nevertheless!)

• Pay attention to details like correctly sizing the hand holds, and using connectors and fasteners designed for easy removal and reattachment.

Other related advice:

• Make sure ground-based mock-ups and drawings exactly represent the "as-deployed" configuration.

• Verify tool-to-system interfaces, especially when new tools are involved.

• Make provision in the maintainability program for high-fidelity maintenance training.


Maintenance Time Lines. Maintenance time line analysis (see Appendix B.7.3) is performed when time-to-restore is considered a critical factor for mission effectiveness and/or safety. (Such cases might include EVA and emergency repair procedures.) A maintenance time line analysis may be a simple spreadsheet or, at the other extreme, involve extensive computer simulation and testing.

FMECAs and FMEAs. Failure Modes, Effects, and Criticality Analysis (FMECA) and Failure Modes and Effects Analysis (FMEA) are specialized techniques for hardware failure and safety risk identification and characterization. They are discussed in this handbook under risk management (see Section 4.6.2) and reliability engineering (see Section 6.2.4). For the maintainability engineer, the FMECA/FMEA needs to be augmented at the LRU/ORU level with failure prediction data (i.e., MTTF or MTBF), failure detection means, and identification of corrective maintenance actions (for the LSA task inventory).
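One minimal way to picture that augmentation (the field names below are hypothetical, not a prescribed FMECA or LSAR format) is a record joining each FMECA entry with its maintainability data:

    from dataclasses import dataclass

    # Illustrative sketch only; field names are hypothetical.
    @dataclass
    class AugmentedFmecaEntry:
        oru: str                 # LRU/ORU identifier
        failure_mode: str        # from the FMECA/FMEA
        effect: str              # effect on the system
        criticality: str         # criticality category
        mtbf_hours: float        # failure prediction data (MTBF or MTTF)
        detection_means: str     # how the failure is detected
        corrective_action: str   # maintenance task for the LSA task inventory

    entry = AugmentedFmecaEntry(
        oru="ORU-017 pump package",
        failure_mode="bearing seizure",
        effect="loss of coolant loop flow",
        criticality="2",
        mtbf_hours=45_000.0,
        detection_means="flow-rate telemetry below threshold",
        corrective_action="remove and replace pump package ORU (EVA)",
    )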

Maintainability Models. Maintainability models are used in assessing how well alternative designs meet maintainability requirements, and in quantifying the maintenance resource requirements. Modeling approaches may range from spreadsheets that aggregate component data, to complex Markov models and stochastic simulations. They often use reliability and time-to-restore data at the LRU/ORU level obtained from experience with similar components in existing systems. Some typical uses to which these models are put (see also the sketch following this list) include:

• Annual maintenance hours and/or maintenance downtime estimates

• System MTTR and availability estimates (see sidebar on availability measures on page 86)

• Trades between reliability and maintainability

• Optimum LRU/ORU repair level analysis (ORLA)

• Optimum (reliability-centered) preventive maintenance analysis

• Spares requirements analysis

• Mass/volume estimates for (space-based) spares

• Repair vs. discard analysis.
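As one example of the spares requirements analysis listed above, the sketch below sizes spares with a Poisson demand model, a common textbook approach; the numbers and the 0.95 confidence target are hypothetical, not handbook requirements:

    import math

    def spares_needed(mtbf_hours, units, op_hours, confidence=0.95):
        """Smallest spares count s with P(demand <= s) >= confidence,
        where demand over the resupply interval is Poisson-distributed."""
        lam = units * op_hours / mtbf_hours   # expected failures in the interval
        s, cumulative = 0, math.exp(-lam)     # P(demand = 0)
        while cumulative < confidence:
            s += 1
            cumulative += math.exp(-lam) * lam**s / math.factorial(s)
        return s

    # Example: 4 installed units, 8760 operating hours between resupply flights
    print(spares_needed(mtbf_hours=20_000, units=4, op_hours=8_760))  # -> 4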

LSA and LSAR. The Logistics Support Analysis (LSA) is the formal technical mechanism for integrating supportability considerations into the systems engineering process. Many of the above tools and techniques provide maintainability inputs to the LSA, or are used to develop LSA outputs. Results of the LSA are captured in Logistics Support Analysis Record (LSAR) data tables, which formally document the baselined ILS system. (See Section 6.5.3.)

Problem/Failure Reports (P/FRs). The maintainability engineer uses the Problem/Failure Reporting System (or an approved equivalent) to report maintainability problems and nonconformances encountered during qualification and acceptance testing (Phase D) and operations (Phase E).

6.5 Integrated Logistics Support

The objective of Integrated Logistics Support (ILS) activities within the systems engineering process is to ensure that the product system is supported during development (Phase D) and operations (Phase E) in a cost-effective manner. This is primarily accomplished by early, concurrent consideration of supportability characteristics, performing trade studies on alternative system and ILS concepts, quantifying resource requirements for each ILS element using best-practice techniques, and acquiring the support items associated with each ILS element. During operations, ILS activities support the system while seeking improvements in its cost-effectiveness by conducting analyses in response to actual operational conditions. These analyses continually reshape the ILS system and its resource requirements. Neglecting ILS, or making poor ILS decisions, invariably has adverse effects on the life-cycle cost of the resultant system.

6.5.1 ILS Elements

According to NHB 7120.5, the scope of ILS includes the following nine elements:

• Maintenance: the process of planning and executing life-cycle repair/services concepts and requirements necessary to ensure sustained operation of the system

• Design Interface: the interaction and relationship of logistics with the systems engineering process to ensure that supportability influences the definition and design of the system so as to reduce life-cycle cost

• Technical Data: the recorded scientific, engineering, technical, and cost information used to define, produce, test, evaluate, modify, deliver, support, and operate the system

• Training: the processes, procedures, devices, and equipment required to train personnel to operate and support the system

• Supply Support: actions required to provide all the necessary material to ensure the system's supportability and usability objectives are met

• Test and Support Equipment: the equipment required to facilitate development, production, and operation of the system

• Transportation and Handling: the actions, resources, and methods necessary to ensure the proper and safe movement, handling, packaging, and storage of system items and materials

• Human Resources and Personnel Planning: actions required to determine the best skills mix, considering current and future operator, maintenance, engineering, and administrative personnel costs

• System Facilities: real property assets required to develop and operate a system.

6.5.2 Planning for ILS

ILS planning should begin early in the project life cycle, and should be documented in an ILS program plan. This plan describes what ILS activities are planned, and how they will be conducted and integrated into the systems engineering process. For major projects, the ILS program plan may be a separate document because the ILS system (ILSS) may itself be a major system. For smaller projects, the SEMP (Part III) is the logical place to document such information. An important part of planning the ILS program concerns the strategy to be used in performing the Logistics Support Analysis (LSA), since it can involve a major commitment of logistics engineering specialists. (See Section 6.5.3.)

Documenting results of ILS activities through the project life cycle is generally done in the Integrated Logistics Support Plan (ILSP). The ILSP is the senior ILS document used by the project. A preliminary ILSP should be prepared by the completion of Phase B and subsequently maintained. This plan documents the project's logistics support concept, responsibility for each ILS element by project phase, and LSA results, especially trade study results. For major systems, the ILSP should be a distinct and separate part of the system documentation. For smaller systems, the ILSP may be integrated with other system documentation. The ILSP generally contains the following technical sections:

• Maintenance Plan—Developed from the system maintenance concept and refined during the system design and LSA processes. (NMI 5350.1A, Maintainability and Maintenance Planning Policy, and NHB 5300.4(1E), Maintainability Program Requirements for Space Systems, do not use the term ILS, but they nevertheless mandate almost all of the steps found in an LSA. See Section 6.4.2 for more details on the maintenance plan.)

• Personnel and Training Plan—Identifies both operator and maintenance training, including descriptions of training programs, facilities, equipment, technical data, and special training aids. According to NMI 5350.1A/NHB 5300.4(1E), the maintenance training element is part of the maintenance plan.

• Supply Support Plan—Covers required quantities of spares (reparable and expendable) and consumables (identified through the LSA), and procedures for their procurement, packaging, handling, storage, and transportation. This plan should also cover such issues as inventory management, breakout screening, and demand data collection and analysis. According to NMI 5350.1A/NHB 5300.4(1E), the spares provisioning element is part of the maintenance plan.

• Test and Support Equipment Plan—Covers required types, geographical location, and quantities of test and support equipment (identified through the LSA). According to NMI 5350.1A/NHB 5300.4(1E), it is part of the maintenance plan.

• Technical Data Plan—Identifies procedures to acquire and maintain all required technical data. According to NMI 5350.1A/NHB 5300.4(1E), technical data for training is part of the maintenance plan.

• Transportation and Handling Plan—Covers all equipment, containers, and supplies (identified through the LSA), and procedures to support packaging, handling, storage, and transportation of system components

• Facilities Plan—Identifies all real property assets required to develop, test, maintain, and operate the system, and identifies those requirements that can be met by modifying existing facilities. It should also provide cost and schedule projections for each new facility or modification.

• Disposal Plan—Covers equipment, supplies, and procedures for the safe and economic disposal of all items (e.g., condemned spares), including ultimately the system itself.

The cost of ILS (and hence the life-cycle cost of the system) is driven by the inherent reliability and maintainability characteristics of the system design. The project-level system engineer must ensure that these considerations influence the design process through a well-conceived ILS program. In brief, a good-practice approach to achieving cost-effective ILS includes efforts to:

• Develop an ILS program plan, and coordinate it with the SEMP (Part III)

• Perform the technical portion of the plan, i.e., the Logistics Support Analysis, to select the best combined system and logistics support alternative, and to quantify the resulting logistics resource requirements

• Document the selected ILS system and summarize the logistics resource requirements in the ILSP

• Provide supportability inputs to the system requirements and/or specifications

• Verify and validate the selected ILS system.

6.5.3 ILS Tools and Techniques: The Logistics Support Analysis

The Logistics Support Analysis (LSA) is the formal technical mechanism for integrating supportability considerations into the systems engineering process. The LSA is performed iteratively over the project life cycle so that successive refinements of the system design move toward the supportability objectives. To make this happen, the ILS engineer identifies supportability and supportability-related design factors that need to be considered in trade studies during the systems engineering process. The project-level system engineer imports these considerations largely through their impact on projected system effectiveness and life-cycle cost. The ILS engineer also acts as a system engineer (for the ILSS) by identifying ILSS functional requirements, performing trade studies on the ILSS, documenting the logistics support resources that will be required, and overseeing the verification and validation of the ILSS.

The LSA process found in MIL-STD-1388-1A can serve as a guideline, but its application in NASA should be tailored to the project. Figures 31a and 31b show the LSA process in more detail as it proceeds through the NASA project life cycle. Each iteration uses more detailed inputs and provides more refinement in the output so that by the time operations begin (Phase E), the full complement of logistics support resources has been identified and the ILSS verified. The first step at each iteration is to understand the mission, the system architecture/design, and the ILSS parameters. Specifically, the first step encompasses the following activities:

• Receiving (from the project-level system engineer) factors related to the intended use of the system, such as the operations concept, mission duration, number of units, orbit parameters, space transportation options, allocated supportability characteristics, etc.

• Documenting existing logistics resource capabilities and/or assets that may be cost-effective to apply to or combine with the ILSS for the system being developed

• Identifying technological opportunities that can be exploited. (This includes both new technologies in the system architecture/design that reduce logistics support resource requirements as well as new technologies within the ILSS that make it less expensive to meet any level of logistics support resource requirements.)

• Documenting the ILS concept and initial "strawman" ILSS, or updating (in later phases) the baseline ILSS.

The ILS engineer uses the results of these activities to establish supportability and supportability-related design factors, which are passed back to the project-level system engineer. This means:

• Identifying and estimating the magnitude of supportability factors associated with the various system and operations concepts being considered. Such factors might include operations team size, system RAM (reliability, availability, and maintainability) parameters, estimated annual IVA/EVA maintenance hours and upmass requirements, etc.

• Using the above to assist the project-level system engineer in projecting system effectiveness and life-cycle cost, and establishing system availability and/or system supportability goals. (See NHB 7120.5, and this handbook, Section 5.3.3.)

• Identifying and characterizing the system supportability risks. (See NHB 7120.5, and this handbook, Section 4.6.)

• Documenting supportability-related design constraints.

The heart of the LSA lies in the next group of activities, during which systems engineering and analysis are applied to the ILSS itself. The ILS engineer must first identify the functional requirements for the ILSS. The functional analysis process establishes the basis for a task inventory associated with the product system and, with the task inventory, aids in the identification of system design deficiencies requiring redesign. The task inventory generally includes corrective and preventive maintenance tasks, and other operations and support tasks arising from the ILSS functional requirements. A principal input to the inventory of corrective and preventive maintenance tasks, which is typically constructed by the maintainability engineer, is the FMECA/FMEA (or equivalent analysis). The FMECA/FMEA itself is typically performed by the reliability engineer. The entire task inventory is documented in Logistics Support Analysis Record (LSAR) data tables.

The ILS engineer then creates plausible ILSS alternatives, and conducts trade studies in the manner described earlier in Section 5.1. The trade studies focus on different issues depending on the phase of the project. In Phases A and B, trade studies focus on high-level issues such as whether a spacecraft in LEO should be serviceable or not, what mix of logistics modules seems best to support an inhabited space station, or what the optimum number of maintenance levels and locations is. In Phases C and D, the focus changes, for example to an individual end-item's optimum repair level. In Phase E, when the system design and its logistics support requirements are essentially understood, trade studies often revisit issues in the light of operational data. These trade studies almost always rely on techniques and models especially created for the purpose of doing an LSA. For a catalog of LSA techniques and models, the system engineer can consult the Logistics Support Analysis Techniques Guide (1985), Army Materiel Command Pamphlet No. 700-4.

By the end of Phase B, the results of the ILSS functional analyses and trade studies should be sufficiently refined and detailed to provide quantitative data on the logistics support resource requirements.

MIL-STD-1388-1A/2B

NHB 7120.5 suggests MIL-STD-1388 as a guideline for doing an LSA. MIL-STD-1388-1A is divided into five sections:

• LSA Planning and Control (not shown in Figures 31a and 31b)

• Mission and Support System Definition (shown as boxes A and B)

• Preparation and Evaluation of Alternatives (shown as boxes C, D, and E)

• Determination of Logistics Support Resource Requirements (shown as box F)

• Supportability Assessment (shown as boxes G and H).

MIL-STD-1388-1A also provides useful tips and encourages principles already established in this handbook: functional analysis, successive refinement of designs through trade studies, focus on system effectiveness and life-cycle cost, and appropriate models and selection rules.

MIL-STD-1388-2B contains the LSAR relational data table formats and data dictionaries for documenting ILS information and LSA results in machine-readable form.

This is accomplished by doing a task analysis for each task in the task inventory. These requirements are formally documented by amending the LSAR data tables. Together, ILSS trade studies, LSA models, and LSAR data tables provide the project-level system engineer with important life-cycle cost data and measures of (system) effectiveness (MoEs), which are successively refined through Phases C and D as the product system becomes better defined and better data become available. The relationship between inputs (from the specialty engineering disciplines) to the LSA process and its outputs can be seen in Figure 27 (see Section 5.3).

In performing the LSA, the ILS engineer also determines and documents (in the LSAR data tables) the logistics resource requirements for Phase D system integration and verification, and deployment (e.g., launch). For most spacecraft, this support includes pre-launch transportation and handling, storage, and testing. For new access-to-space systems, support may be needed during an extended period of developmental launches, and for inhabited space stations, during an extended period of on-orbit assembly operations. The ILS engineer also contributes to risk management activities by considering the adequacy of spares provisioning, and of logistics plans and processes. For example, spares provisioning must take into account the possibility that production lines will close during the anticipated useful life of the system.

As part of verification and validation activity, the ILS engineer performs supportability verification planning and gathers supportability verification/test data during Phase D. These data are used to identify and correct deficiencies in the system design and ILSS, and to update the LSAR data tables. During Phase E, supportability testing and analyses are conducted under actual operational conditions. These data provide a useful legacy to product improvement efforts and future projects.

6.5.4 Continuous Acquisition and Life-Cycle Support

LSA documentation and supporting LSAR data tables contain large quantities of data. Making use of these data in a timely manner is currently difficult because changes occur often and rapidly during definition, design, and development (Phases B through D). Continuous Acquisition and Life-Cycle Support (CALS)—changed in 1993 from Computer-Aided Acquisition and Logistics Support—technology can reduce this difficulty by improving the digital exchange of data across NASA field centers and between NASA and its contractors. Initial CALS efforts within the logistics engineering community focused on developing CALS digital data exchange standards; current emphasis has shifted to database integration and product definition standards, such as STEP (Standard for the Exchange of Product Model Data).

CALS represents a shift from a paper- (and labor-) intensive environment to a highly automated and integrated one. Concomitant with that are expected benefits in reduced design and development time and costs, and in the improved quality of ILS products and decisions.

Can NASA Benefit from CALS?

The DoD CALS program was initiated in 1985; since 1988, it has been required on new DoD systems. According to Clark, potential DoD-wide savings from CALS exceed $160M (FY92$). However, GAO studies have been critical of DoD's CALS implementation. These criticisms focused on CALS' limited ability to share information among users.

For NASA field centers to realize savings from CALS, new enabling investments in hardware, software, and training may be required. While many of NASA's larger contractors have already installed CALS technology, the system engineer wishing to employ CALS must recognize that both CALS and non-CALS approaches may be needed to interact with small business suppliers, and that proprietary contractor data, even when digitized, need to be protected.

CALS cost savings accrue primarily in three areas: concurrent engineering, configuration control, and ILS functions. In a concurrent engineering environment, NASA's multi-disciplinary PDTs (which may mirror and work with those of a system contractor) can use CALS technology to speed the exchange of and access to data among PDTs. Availability of data through CALS on parts and suppliers also permits improved parts selection and acquisition. (See Section 3.7.2 for more on concurrent engineering.)

Configuration control also benefits from CALS technology. Using CALS to submit, process, and track ECRs/ECPs can reduce delays in approving or rejecting them, along with the indirect costs that delays cause. Although concurrent engineering is expected to reduce the number of ECRs/ECPs during design and development (Phases C and D), their timely disposition can produce significant cost savings. (See Section 4.7.2 for more on configuration control.)

Lastly, CALS technology potentially enables ILS functions such as supply support to be performed simultaneously and with less manual effort than at present. For example, procurement of design-stable components and spares can begin earlier (to allow earlier testing); at the same time, provisioning for other components can be further deferred (until design stability is achieved), thus reducing the risk of costly mistakes. Faster vendor response time also means reduced spares inventories during operations.

6.6 Verification

Verification is the process of confirming that deliverable ground and flight hardware and software are in compliance with functional, performance, and design requirements. The verification process, which includes planning, requirements definition, and compliance activities, begins early and continues throughout the project life cycle. These activities are an integral part of the systems engineering process. At each stage of the process, the system engineer's job is to understand and assess verification results, and to lead in the resolution of any anomalies. This section describes a generic NASA verification process that begins with a verification program concept and continues through operational and disposal verification. Whatever process is chosen by the program/project should be documented in the SEMP.

The project manager/system engineer develops a verification program considering its cost, schedule, and risk implications. No one program can be applied to every project, and each verification activity and product must be assessed as to its applicability to a specific project. The verification program requires considerable coordination by the verification engineer, as both system design and test organizations are typically involved to some degree throughout.

6.6.1 Verification Process Overview

Verification activities begin in Phase A of a project. During this phase, inputs to the project's integrated master schedule and cost estimates are made as the verification program concept takes shape. These planning activities increase in Phase B with the refinement of requirements, costs, and schedules. In addition, the system's requirements are assessed to determine preliminary methods of verification and to ensure that the requirements can be verified. The outputs of Phase B are expanded in Phase C as more detailed plans and procedures are prepared. In Phase D, verification activities increase substantially; these activities normally include qualification and acceptance verification, followed by verification in preparation for deployment and operational verification. Figures 32a and 32b show this process through the NASA project life cycle. (Safety reviews as applied to verification activities are not shown as separate activities in the figures.)

The Verification Program Concept. A verification program should be tailored to the project it supports. The project manager/system engineer must work with the verification engineer to develop a verification program concept. Many factors need to be considered in developing this concept and the subsequent verification program. These factors include:

• Project type, especially for flight projects. Verification methods and timing depend on the type of flight article involved (e.g., an experiment, payload, or launch vehicle).

• NASA payload classification. The verification activities and documentation required for a specific flight article generally depend upon its NASA payload classification. As expected, the verification program for a Class A payload is considerably more comprehensive than that for a Class D payload. (See Appendix B.3 for classification guidelines.)

• Project cost and schedule implications. Verification activities can be significant drivers of a project's cost and schedule; these implications should be considered early in the development of the verification program. Trade studies should be performed to support decisions about verification methods and requirements, and the selection of facility types and locations. As an example, a trade study might be made to decide between performing a test at a centralized facility or at several decentralized locations.

• Risk implications. Risk management must be considered in the development of the verification program. Qualitative risk assessments and quantitative risk analyses (e.g., a FMECA) often identify new concerns that can be mitigated by additional testing, thus increasing the extent of verification activities. Other risk assessments contribute to trade studies that determine the preferred methods of verification to be used and when those methods should be performed. As an example, a trade might be made between performing a modal test versus determining modal characteristics by a less costly, but less revealing, analysis. The project manager/system engineer must determine what risks are acceptable in terms of the project's cost and schedule.

• Availability of verification facilities/sites and transportation assets to move an article from one location to another (when needed). This requires coordination with the ILS engineer.

• Acquisition strategy (i.e., in-house development or system contract). Often a NASA field center can shape a contractor's verification process through the project's Statement of Work (SoW).

• Degree of design inheritance and hardware/software reuse.

Verification Methods and Techniques. The system engineer needs to understand what methods and techniques the verification engineer uses to verify compliance with requirements. In brief, these methods and techniques are:

• Test
• Analysis
• Demonstration
• Similarity
• Inspection
• Simulation
• Validation of records.

Verification by test is the actual operation of equipment during ambient conditions or when subjected to specified environments to evaluate performance. Two subcategories can be defined: functional testing and environmental testing. Functional testing is an individual test or series of electrical or mechanical performance tests conducted on flight or flight-configured hardware and/or software at conditions equal to or less than design specifications. Its purpose is to establish that the system performs satisfactorily in accordance with design and performance specifications. Functional testing generally is performed at ambient conditions, and is performed before and after each environmental test or major move in order to verify system performance prior to the next test/operation. Environmental testing is an individual test or series of tests conducted on flight or flight-configured hardware and/or software to assure it will perform satisfactorily in its flight environment. Environmental tests include vibration, acoustic, and thermal vacuum tests. Environmental testing may be combined with functional testing if test objectives warrant.

Verification by analysis is a process used in lieu of (or in addition to) testing to verify compliance to specifications/requirements. The selected techniques may include systems engineering analysis, statistics and qualitative analysis, computer and hardware simulations, and computer modeling. Analysis may be used when it can be determined that: (1) rigorous and accurate analysis is possible; (2) testing is not feasible or cost-effective; (3) similarity is not applicable; and/or (4) verification by inspection is not adequate.

Verification by demonstration is the use of actual demonstration techniques in conjunction with requirements such as maintainability and human engineering features.

Verification by similarity is the process of assessing, by review of prior acceptance data or hardware configuration and applications, that the article is similar or identical in design and manufacturing process to another article that has previously been qualified to equivalent or more stringent specifications.

Verification by inspection is the physical evaluation of equipment and/or documentation to verify design features. Inspection is used to verify construction features, workmanship, and physical dimensions and condition (such as cleanliness, surface finish, and locking hardware).

Verification by simulation is the process of verifying design features and performance using hardware or software other than flight items.

Verification by validation of records is the process of using manufacturing records at end-item acceptance to verify construction features and processes for flight hardware.

Verification Stages. Verification stages are defined periods of verification activity when different verification goals are met. In this handbook, the following verification stages are used for flight systems:

• Development
• Qualification
• Acceptance
• Preparation for deployment (also known as pre-launch)
• Operational (also known as on-orbit or in-flight)
• Disposal (as needed).

Analyses and Models

Analyses based on models are used extensively throughout a program/project to verify and determine compliance to performance and design requirements. Most verification requirements that cannot be verified by a test activity are verified through analyses and modeling. The analysis and modeling process begins early in the project life cycle and continues through most of Phase D; these analyses and models are updated periodically as actual data that are used as inputs become available. Often, analyses and models are validated or corroborated by the results of a test activity. Any verification-related results should be documented as part of the project's archives.

The development stage is the period during which a new project or system is formulated and implemented up to the manufacturing of qualification or flight hardware. Verification activities during this stage (e.g., breadboard testing) provide confidence that the system can accomplish mission goals/objectives. When tests are conducted during this stage, they are usually performed by the design organization, or by the design and test organizations together. Also, some program/project requirements may be verified or partially verified through the activities of the PDR and CDR, both of which occur during this stage. Any development activity used to formally satisfy program/project requirements should have quality assurance oversight.

The qualification stage is the period during which the flight (protoflight approach) or flight-type hardware is verified to meet functional, performance, and design requirements. Verifications during this stage are conducted on flight-configured hardware at conditions more severe than acceptance conditions to establish that the hardware will perform satisfactorily in the flight environments with sufficient margin. The acceptance stage is the period during which the deliverable flight end-item is shown to meet functional, performance, and design requirements under conditions specified for the mission. The acceptance stage ends with shipment of the flight hardware to the launch site.

The preparation for deployment stage begins with the arrival of the flight hardware and/or software at the launch site and terminates at launch. Requirements verified during this stage are those that demand the integrated vehicle and/or launch site facilities. The operational verification stage begins at liftoff; during this stage, flight systems are verified to operate in space environment conditions, and requirements demanding space environments are verified. The disposal stage is the period during which disposal requirements are verified.

6.6.2 Verification Program Planning

Verification program planning is an interactive and lengthy process occurring during all phases of a project, but more heavily during Phase C. The verification engineer develops a preliminary definition of verification requirements and activities based on the program/project and mission requirements. An effort should be made throughout a project's mission and system definition to phrase requirements in absolute terms in order to simplify their verification. As the system and interface requirements are established and refined, the verification engineer assesses them to determine the appropriate method of verification or combination thereof. These requirements and the method(s) of verification are then documented in the appropriate requirements document.

Using the methods of verification to be performed for each verification stage, along with the levels (e.g., part, subsystem, system) at which the verifications are to be performed, and any environmental controls (e.g., contamination) that must be maintained, the verification engineer outlines a preliminary schedule of verification activities associated with development, qualification, and acceptance of the system. This preliminary schedule should be in accordance with project milestones, and should be updated as verification activities are refined.

During planning, the verification engineer also identifies the documentation necessary to support the verification program. This documentation normally includes: (1) a Verification Requirements Matrix (VRM), (2) a Master Verification Plan (MVP), (3) a Verification Requirements and Specifications Document (VRSD), and (4) a Verification Requirements Compliance Document (VRCD). Documentation for test procedures and reports may also be defined. Because the system engineer should be familiar with these basic elements of a verification process, each of these is covered below.

Verification Requirements Matrix. The Verification Requirements Matrix (VRM) is that portion of a requirements document (generally a System Requirements Document or CI specification) that defines how each functional, performance, and design requirement is to be verified, the stage in which verification is to occur, and (sometimes) the applicable verification levels. The verification engineer develops the VRM in coordination with the design, systems engineering, and test organizations. VRM contents are tailored to each project's requirements, and the level of detail in VRMs may vary. The VRM is baselined as a result of the PDR, and essentially establishes the basis for the verification program. A sample VRM for a CI specification is shown in Appendix B.9.
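A VRM can be pictured as a mapping from each requirement to its verification method, stage, and level. The toy fragment below is illustrative only (the requirement numbers and wording are hypothetical; Appendix B.9 shows an actual sample):

    # Toy VRM fragment; all entries are hypothetical.
    vrm = {
        "3.2.1.1": {"requirement": "Pointing accuracy within 0.1 deg",
                    "method": "Test", "stage": "Qualification", "level": "System"},
        "3.2.1.2": {"requirement": "Structure survives launch loads",
                    "method": "Analysis", "stage": "Qualification", "level": "Subsystem"},
        "3.2.4.7": {"requirement": "ORU replaceable by EVA crew",
                    "method": "Demonstration", "stage": "Acceptance", "level": "System"},
    }

    for req_id, row in vrm.items():
        print(f"{req_id}: {row['method']}, {row['stage']} stage, "
              f"{row['level']} level -- {row['requirement']}")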

Master Verification Plan. The Master Verification Plan (MVP) is the document that describes the overall verification program. The MVP provides the content and depth of detail necessary to provide full visibility of all verification activities. Each major activity is defined and described in detail. The plan encompasses qualification, acceptance, pre-launch, operational, and disposal verification activities for flight hardware and software. (Development stage verification activities are not normally documented in the plan, but may be documented elsewhere.)

Verification Reports

A verification report should be provided for each analysis and, at a minimum, for each major test activity, such as functional testing, environmental testing, and end-to-end compatibility testing. When a test activity occurs over long periods of time or is separated by other activities, verification reports may be needed for each individual test activity, such as functional testing, acoustic testing, vibration testing, and thermal vacuum/thermal balance testing. Verification reports should be completed within a few weeks following a test, and should provide evidence of compliance with the verification requirements for which the activity was conducted. The verification report should include, as appropriate:

• Verification objectives and the degree to which they were met

• Description of verification activity

• Test configuration and differences from flight configuration

• Specific result of each test and each procedure, including annotated tests

• Specific result of each analysis

• Test performance data tables, graphs, illustrations, and pictures

• Descriptions of deviations from nominal results, problems/failures, approved anomaly corrective actions, and re-test activity

• Summary of non-conformance/discrepancy reports, including dispositions

• Conclusions and recommendations relative to success of verification activity

• Status of support equipment as affected by test

• Copy of as-run procedure

• Authentication of test results and authorization of acceptability.

The plan provides a general schedule and sequence of events for major verification activities. It also describes test software, Ground Support Equipment (GSE), and facilities necessary to support the verification activities. The verification engineer develops the plan through a thorough understanding of the verification program concept, the requirements in the Program (i.e., Level I) Requirements Document (PRD), System/Segment (i.e., Level II) Requirements Document (SRD), and/or the CI specification, and the methods identified in the VRM of those documents. Again, the development of the plan requires that the verification engineer work closely with the design, systems engineering, and test organizations. A sample outline for this plan is illustrated in Appendix B.10.


Verification Requirements and Specifications Document. The Verification Requirements and Specifications Document (VRSD) defines the detailed requirements and specifications for the verification of a flight article, including the ground system/segment. The VRSD specifies requirements and specifications for activities covering qualification through operational verification. Requirements are also defined for flight software verification after the software has been installed in the flight article. The VRSD should cover verifications by all methods; some programs/projects, however, use a document that defines only requirements to be satisfied by test.

The VRSD should include all requirements defined in Level I, II, and III requirements documents plus derived requirements. The VRSD defines the acceptance criteria and any constraints for each requirement. The VRSD typically identifies the locations where requirements will be verified. On large programs/projects, a VRSD is normally developed for each verification activity/location (e.g., thermal-vacuum testing), and is tailored to include requirements for that verification activity only. The verification engineer develops the VRSD from an understanding of the requirements, the verification program concept, and the flight article. The VRSD is baselined prior to the start of the verification activity. The heart of the VRSD is a data table that includes the following fields (a minimal sketch follows the list):

• A numerical designator assigned to each requirement

• A statement of the specific requirement to be verified

• The "pass/fail" criteria and tolerances for each requirement

• Any constraints that must be observed

• Any remarks to aid in the understanding of the requirement

• Location where the requirement will be verified.
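As a minimal illustration of one such row (all field values below are hypothetical), the sketch checks a measured value against the recorded pass/fail criteria:

    # Hypothetical VRSD row and pass/fail check; illustrative only.
    vrsd_row = {
        "designator": "VR-0042",
        "requirement": "Bus voltage at payload interface",
        "nominal": 28.0,    # volts
        "tolerance": 0.5,   # +/- volts: the pass/fail criterion
        "constraints": "Measure with flight harness installed",
        "location": "System-level functional test",
    }

    def passes(row, measured):
        """Pass/fail per the recorded criteria and tolerances."""
        return abs(measured - row["nominal"]) <= row["tolerance"]

    print(passes(vrsd_row, 28.3))  # True: within +/- 0.5 V of 28.0 V
    print(passes(vrsd_row, 27.2))  # False: outside tolerance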

The VRSD, along with flight article drawings and schematics, is the basis for the development of verification procedures, and is also used as one of the bases for development of the Verification Requirements Compliance Document (VRCD).

Verification Requirements Compliance Document. The Verification Requirements Compliance Document (VRCD) provides the evidence of compliance to each Level I through Level III design, performance, safety, and interface requirement, and to each VRSD requirement. The flowdown to VRSD requirements completes the full requirements traceability. Compliance with all the requirements ensures that Level I requirements have been met.

The VRCD defines, for each requirement, the method(s) of verification and corresponding compliance information for each method employed. The compliance information provides either the actual data, or a reference to the location of the actual data, that show compliance with the requirement. (The document also shows any non-compliances by referencing the related Non-Compliance Report (NCR) or Problem/Failure Report (P/FR); following resolution of the anomaly, the document specifies appropriate re-verification information.) The compliance information may reference a verification report, an automated test program, a verification procedure, an analysis report, or a test. The entry of compliance information into the compliance document occurs over a lengthy period of time, and on large systems and payloads, the effort may be continuous. The information in the compliance document must be up-to-date for the System Acceptance Review(s) (SAR) and Flight Readiness Review (FRR). The compliance document is not baselined because compliance information is input to the document throughout the entire project life cycle. It is, however, an extremely important part of the project's archives.

The heart of the Verification Requirements Compliance Document is also a data table, with links to the corresponding requirements. The VRCD includes the following fields (a minimal status-check sketch follows the list):

• A numerical designator assigned to each requirement

• A numerical designator that defines the document where the requirement is defined

• A statement of the specific requirement for which compliance is to be defined

• Verification method used to verify the requirement

• Location of the data that show compliance with the requirement statement. This information could be a test, report, procedure, analysis report, or other information that fully defines where the compliance data can be found. Retest information is also shown.

• Any non-conformances that occurred during the verification activities

• Any statements of compliance information as to any non-compliance or acceptance by means other than the method identified, such as a waiver.
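Because compliance information accumulates over the entire project life cycle, a simple sweep of the compliance table shows which requirements still lack compliance data (or a waiver) before SAR or FRR. The rows below are hypothetical, not a prescribed VRCD format:

    # Hypothetical VRCD rows; a row is closed when compliance data
    # exist or a waiver is recorded. Illustrative only.
    vrcd = [
        {"designator": "VR-0042", "method": "Test",
         "compliance_data": "Report TR-117, run 3", "waiver": None},
        {"designator": "VR-0043", "method": "Analysis",
         "compliance_data": None, "waiver": None},
        {"designator": "VR-0044", "method": "Inspection",
         "compliance_data": None, "waiver": "Waiver W-009"},
    ]

    open_items = [row["designator"] for row in vrcd
                  if row["compliance_data"] is None and row["waiver"] is None]
    print("Open before SAR/FRR:", open_items)  # ['VR-0043']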

Verification Procedures. The verification procedures are documents that provide step-by-step instructions for performing a given verification activity. The procedure is tailored to the verification activity that is to be performed to satisfy a requirement, and could be a test, demonstration, or any other verification-related activity. The procedure is written to satisfy requirements defined by the VRSD, and is submitted prior to the Test Readiness Review (TRR) or the start of the verification activity in which the procedure is used. (See sidebar on TRRs.)

Procedures are also used to verify the acceptance of facilities, electrical and mechanical ground support equipment, and special test equipment. The information generally contained in a procedure is as follows, but it may vary according to the activity and test article:

• Nomenclature and identification of the test article or material

• Identification of test configuration and any differences from flight configuration

• Identification of objectives and criteria established for the test by the applicable verification specification

• Characteristics and design criteria to be inspected or tested, including values, with tolerances, for acceptance or rejection

• Description, in sequence, of steps and operations to be taken

• Identification of computer software required

• Identification of measuring, test, and recording equipment to be used, specifying range, accuracy, and type

• Certification that required computer test programs/support equipment and software have been verified prior to use with flight hardware

• Any special instructions for operating data recording equipment or other automated test equipment as applicable

• Layouts, schematics, or diagrams showing identification, location, and interconnection of test equipment, test articles, and measuring points

• Identification of hazardous situations or operations

• Precautions and safety instructions to ensure safety of personnel and prevent degradation of test articles and measuring equipment

• Environmental and/or other conditions to be maintained with tolerances

• Constraints on inspection or testing

• Special instructions for non-conformances and anomalous occurrences or results

• Specifications for facility, equipment maintenance, housekeeping, certification inspection, and safety and handling requirements before, during, and after the total verification activity.

The procedure may provide blank spaces for recording of results and narrative comments in order that the completed procedure can serve as part of the verification report.

Test Readiness Reviews

A Test Readiness Review (TRR) is held prior to each major test to ensure the readiness of all ground, flight, and operational systems to support the performance of the test. A review of the detailed status of the facilities, Ground Support Equipment (GSE), test design, software, procedures, and verification requirements is made. The test activities and schedule are outlined, and personnel responsibilities are identified. Verification emphasis is directed toward ensuring that all verification requirements that have been identified for the test have been included in the test design and procedures.

The as-run and certified copy of the procedure is maintained as part of the project's archives.

6.6.3 Qualification Verification

Qualification stage verification activities begin after completion of development of the flight hardware designs, and include analyses and testing to ensure that the flight or flight-type hardware (and software) will meet functional and performance requirements in anticipated environmental conditions. Qualification tests generally are designed to subject the hardware to worst-case loads and environmental stresses. Some of the verifications performed to ensure hardware compliance to worst-case loads and environments are vibration/acoustic, pressure limits, leak rates, thermal vacuum, thermal cycling, electromagnetic interference and electromagnetic compatibility (EMI/EMC), high and low voltage limits, and lifetime/cycling. During this stage, many performance requirements are verified, while analyses and models are updated as test data are acquired. Safety requirements, defined by hazard analysis reports, may also be satisfied by qualification testing.

Qualification usually occurs at the component or subsystem level, but could occur at the system level as well. When a project decides against building dedicated qualification hardware, and uses the flight hardware itself for qualification purposes, the process is termed protoflight. Additional information on protoflight testing is contained in MSFC-HDBK-670, General Environmental Test Guidelines (GETG) for Protoflight Instruments and Experiments.

6.6.4 Acceptance Verification

The acceptance stage verification activities provide the assurance that the flight hardware and software are in compliance with all functional, performance, and design requirements, and are ready for shipment to the launch site. The acceptance stage begins with the acceptance of each individual component or piece part for assembly into the flight article and continues through the SAR.

Some verifications cannot be performed after a flight article, especially a large one, has been assembled and integrated (e.g., due to inaccessibility). When this occurs, these verifications are performed during fabrication and integration, and are known as in-process tests. Acceptance testing, then, begins with in-process testing and continues through functional testing, environmental testing, and end-to-end compatibility testing. Functional testing normally begins at the component level and continues at the systems level, ending with all systems operating simultaneously. All tests are performed in accordance with requirements defined in the VRSD. When flight hardware is unavailable, or its use is inappropriate for a specific test, simulators may be used to verify interfaces. Anomalies occurring during a test are documented on the appropriate reporting system (NCR or P/FR), and a proposed resolution should be defined before testing continues. Major anomalies, or those that are not easily dispositioned, may require resolution by a collaborative effort of the system engineer and the design, test, and other organizations. Where appropriate, analyses and models are validated and updated as test data are acquired.

6.6.5 Preparation for Deployment Verification

The pre-launch verification stage begins with the arrival of the flight article at the launch site and concludes at liftoff. During this stage, the flight article is processed and integrated with the launch vehicle. The launch vehicle could be the Shuttle, some other launch vehicle, or the flight article could be part of the launch vehicle. Verification requirements for this stage are defined in the VRSD. When the launch site is the Kennedy Space Center, the Operations and Maintenance Requirements and Specifications Document (OMRSD) is used in lieu of the VRSD.

Verifications performed during this stage ensure that no visible damage to the system has occurred during shipment and that the system continues to function properly. If system elements are shipped separately and integrated at the launch site, testing of the system and system interfaces is generally required. If the system is integrated into a carrier, the interface to the carrier must also be verified. Other verifications include those that occur following integration into the launch vehicle and those that occur at the launch pad; these are intended to ensure that the system is functioning and in its proper launch configuration. Contingency verifications and procedures are developed for any contingencies that can be foreseen to occur during pre-launch and countdown. These contingency verifications and procedures are critical in that some contingencies may require a return of the launch vehicle or flight article from the launch pad to a processing facility.

Software IV&V

Some project managers/system engineers may wish to add IV&V (Independent Verification and Validation) to the software verification program. IV&V is a process whereby the products of the software development life cycle are independently reviewed, verified, and validated by an organization that is neither the developer nor the acquirer of the software. The IV&V agent should have no stake in the success or failure of the software; the agent's only interest should be to make sure that the software is thoroughly tested against its requirements.

IV&V activities duplicate the project's V&V activities step-by-step during the life cycle, with the exception that the IV&V agent does no informal testing. If IV&V is employed, formal acceptance testing may be done only once, by the IV&V agent. In this case, the developer formally demonstrates that the software is ready for acceptance testing.

6.6.6 Operational and Disposal Verification

Operational verification provides the assurance that the system functions properly in a (near-)zero gravity and vacuum environment. These verifications are performed through system activation and operation, rather than through a verification activity. Systems that are assembled on-orbit must have each interface verified, and must function properly during end-to-end testing. Mechanical interfaces that provide fluid and gas flow must be verified to ensure no leakage occurs, and that pressures and flow rates are within specification. Environmental systems must be verified. The requirements for all operational verification activities are defined in the VRSD.

Disposal verification provides the assurance that the safe deactivation and disposal of all system products and processes has occurred. The disposal stage begins in Phase E at the appropriate time (i.e., either as scheduled, or earlier in the event of premature failure or accident), and concludes when all mission data have been acquired and verifications necessary to establish compliance with disposal requirements are finished. Both operational and disposal verification activities may also include validation assessments—that is, assessments of the degree to which the system accomplished the desired mission goals/objectives.

6.7 Producibility

Producibility is a system characteristic associated with the ease and economy with which a completed design can be transformed (i.e., fabricated, manufactured, or coded) into a hardware and/or software realization. While major NASA systems tend to be produced in small quantities, a particular producibility feature can be critical to a system's cost-effectiveness, as experience with the Shuttle's thermal tiles has shown.

6.7.1 Role of the Production Engineer

The production engineer supports the systems engineering process (as a part of the multi-disciplinary PDT) through an active role in implementing specific design features to enhance producibility, and by performing the production engineering analyses needed by the project. These tasks and analyses include:

• Performing the manufacturing/fabrication portion of the system risk management program (see Section 4.6). This is accomplished by conducting a rigorous production risk assessment and by planning effective risk mitigation actions.

• Identifying system design features that enhance producibility. Efforts usually focus on design simplification, fabrication tolerances, and avoidance of hazardous materials.

• Conducting producibility trade studies to determine the most cost-effective fabrication/manufacturing process

• Assessing production feasibility within project constraints. This may include assessing contractor and principal subcontractor production experience and capability, new fabrication technology, special tooling, and production personnel training requirements.

• Identifying long-lead items and critical materials

• Estimating production costs as a part of life-cycle cost management

• Developing production schedules

• Developing approaches and plans to validate fabrication/manufacturing processes.

The results of these tasks and production engineering analyses are documented in the Manufacturing Plan with a level of detail appropriate to the phase of the project. The production engineer also participates in and contributes to major project reviews (primarily PDR and CDR) on the above items, and to special interim reviews such as the Production Readiness Review (ProRR).

6.7.2 Producibility Tools and Techniques

Manufacturing Functional Flow Block Diagrams (FFBDs). Manufacturing FFBDs are used in the same way system FFBDs, described in Appendix B.7.1, are used. At the top level, manufacturing FFBDs supplement and clarify the system's manufacturing sequence.

Risk Management Templates. The risk management templates of DoD 4245.7M, Transition from Development to Production ...Solving the Risk Equation, are a widely recognized series of risks, risk responses, and lessons learned from DoD experience. These templates, which were designed to reduce risks in production, can be tailored to individual NASA projects.

Producibility Assessment Worksheets. These worksheets, which were also developed for DoD, use a judgment-based scoring approach to help choose among alternative production methods. See Producibility Measurement for DoD Contracts.

Producibility Models. Producibility models are used in addressing a variety of issues such as assessing the feasibility of alternative manufacturing plans, and estimating production costs as a part of life-cycle cost management. Specific producibility models may include:

• Scheduling models for estimating production output, and for integrating system enhancements and/or spares production into the manufacturing sequence

• Manufacturing or assembly flow simulations, e.g., discrete event simulations of factory activities

• Production cost models that include learning and production rate sensitivities. (See sidebar page 82; a minimal sketch follows.)
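To illustrate the last item, the sketch below implements the standard unit learning-curve relation, in which unit n costs T1 x n^b with b = log(slope)/log(2); the first-unit cost and the 90 percent slope are assumptions chosen for illustration, not data from this handbook.

    # Minimal sketch of a production cost model with learning: each
    # doubling of cumulative quantity multiplies unit cost by the slope
    # (0.90 here, i.e., a 10 percent reduction). Values are assumptions.
    import math

    def unit_cost(n, t1=100_000.0, slope=0.90):
        b = math.log(slope) / math.log(2.0)
        return t1 * n ** b

    print(f"unit 2: {unit_cost(2):,.0f}")    # 90,000 (one doubling)
    print(f"unit 4: {unit_cost(4):,.0f}")    # 81,000 (two doublings)
    print(f"first 10 units: {sum(unit_cost(n) for n in range(1, 11)):,.0f}")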

Statistical Process Control/Design of Experiments. These techniques, long applied in manufacturing to identify the causes of unwanted variations in product quality and reduce their effects, have had a rebirth under TQM. A collection of currently popular techniques of this new quality engineering is known as Taguchi methods. For first-hand information on Taguchi methods, see Taguchi's book, Quality Engineering in Production Systems, 1989. A handbook approach to some of these techniques can be found in the Navy's Producibility Measurement Guidelines: Methodologies for Product Integrity.
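As a concrete, minimal example of the control-chart side of these techniques, the Python sketch below computes three-sigma limits for a chart of subgroup means; the measurement data are invented for illustration, and production SPC practice often uses range-based constants rather than a direct standard deviation estimate.

    # Minimal SPC sketch: 3-sigma control limits for an X-bar chart,
    # computed from assumed subgroup measurements.
    import statistics

    subgroups = [[10.1, 9.9, 10.0], [10.2, 10.1, 9.8],
                 [9.7, 10.0, 10.3], [10.0, 10.1, 9.9]]
    means = [statistics.mean(s) for s in subgroups]
    center = statistics.mean(means)
    sigma = statistics.stdev(means)        # spread of the subgroup means
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    print(f"LCL={lcl:.3f}  center={center:.3f}  UCL={ucl:.3f}")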

6.8 Social Acceptability

NASA systems must be acceptable to the society that funds them. The system engineer takes this into account by integrating mandated social concerns into the systems engineering process. For some systems, these concerns can result in significant design and cost penalties. Even when social concerns can be met, the planning and analysis associated with doing so can be time-consuming (even to the extent of affecting the project's critical path), and can use significant specialized engineering resources. The system engineer must include these costs in high-level trade studies of alternative architectures/designs.

6.8.1 Environmental Impact

NASA policy and federal law require that all NASA actions that may impact the quality of the environment be executed in accordance with the policies and procedures of the National Environmental Policy Act (NEPA). For any NASA project or other major NASA effort, this requires that studies and analyses be produced explaining how and why the project is planned, and the nature and scope of its potential environmental impact. These studies must be performed whether the project is conducted at NASA Headquarters, a field center, or a contractor facility, and must properly begin at the earliest period of project planning (i.e., not later than Phase A). Findings, in the form of an Environmental Assessment (EA) and, if warranted, through the more thorough analyses of an Environmental Impact Statement (EIS), must be presented to the public for review and comment. (See sidebar on NEPA.)

At the outset, some NASA projects will be of such a magnitude and nature that an EIS is clearly going to be required by NEPA, and some will clearly not need an EIS. Most major NASA projects, however, fall in between, where the need for an EIS is a priori unclear; in such cases an EA is prepared to determine whether an EIS is indeed required. NASA's experience since 1970 has been that projects in which there is the release, or potential release, of large or hazardous quantities of pollutants (rocket exhaust gases, exotic materials, or radioactive substances) require an EIS. For projects in this category, an EA is not performed, and the project's analyses should focus on and support the preparation of an EIS.

What is NEPA?

The National Environmental Policy Act (NEPA) of 1969 declares a national environmental policy and goals, and provides a method for accomplishing those goals. NEPA requires an Environmental Impact Statement (EIS) for "major federal actions significantly affecting the quality of the human environment."

Some environmental impact reference documents include:

• National Environmental Policy Act (NEPA) of 1969, as amended (40 CFR 1500-1508)

• Procedures for Implementing the National Environmental Policy Act (14 CFR 1216.3)

• Implementing the Provisions of the National Environmental Policy Act, NHB 8800.11

• Executive Order 11514, Protection and Enhancement of Environmental Quality, March 5, 1970, as amended by Executive Order 11991, May 24, 1977

• Executive Order 12114, Environmental Effects Abroad of Major Federal Actions, January 4, 1979.

The NEPA process is meant to ensure that the project is planned and executed in a way that meets the national environmental policy and goals. First, the process helps the system engineer shape the project by putting potential environmental concerns in the forefront during Phase A. Secondly, the process provides the means for reporting to the public the project's rationale and implementation method. Finally, it allows public review of and comment on the planned effort, and requires NASA to consider and respond to those comments. The system engineer should be aware of the following NEPA process elements.

Environmental Assessment (EA). An EA is a concise public document that serves to provide sufficient evidence and analyses for determining whether to prepare either an EIS or a Finding of No Significant Impact (FONSI). The analyses performed should identify the environmental effects of all reasonable alternative methods of achieving the project's goals/objectives so that they may be compared. The alternative of taking no action (i.e., not doing the project) should also be studied. Although there is no requirement that NASA select the alternative having the least environmental impact, there must be sufficient information available to make clear what those impacts would be, and to describe the reasoning behind NASA's preferred selection. The environmental analyses are an integral part of the project's systems engineering process.

The EA is the responsibility of the NASA Headquarters Program Associate Administrator (PAA) responsible for the proposed project or action. The EA can be carried out at Headquarters or at a NASA field center. Approval of the EA is made by the responsible PAA. Most often, approval of the EA takes the form of a memorandum to the Associate Administrator (AA) for Management Systems and Facilities (Code J) stating either that the project requires an EIS, or that it does not. If an EIS is found to be necessary, a Notice of Intent (NOI) to prepare an EIS is written; if an EIS is found to be unnecessary, a Finding of No Significant Impact (FONSI) is written instead.

Finding of No Significant Impact (FONSI). A FONSI should briefly present the reasons why the proposed project or action, as presented in the EA, has been judged to have no significant effect on the human environment, and does not therefore require the preparation of an EIS. The FONSI for projects and actions that are national in scope is published in the Federal Register, and is available for public review for a 30-day period. During that time, any supporting information is made readily available on request.

Notice of Intent (NOI). A Notice of Intent to file an EIS should include a brief description of the proposed project or action, possible alternatives, the primary environmental issues uncovered by the EA, and NASA's proposed scoping procedure, including the time and place of any scoping meetings. The NOI is prepared by the responsible Headquarters PAA and published in the Federal Register. It is also sent to interested parties.

Scoping. The responsible Headquarters PAA must conduct an early and open process for determining the scope of issues to be addressed in the EIS, and for identifying the significant environmental issues. Scoping is also the responsibility of the Headquarters PAA responsible for the proposed project or action; however, the responsible Headquarters PAA often works closely with the Code J AA. Initially, scoping must consider the full range of environmental parameters en route to identifying those that are significant enough to be addressed in the EIS. Examples of the environmental categories and questions that should be asked in the scoping process are contained in NHB 8800.11, Implementing the Provisions of the National Environmental Policy Act, Section 307.d.

Environmental Impact Statement (EIS). The EA and scoping elements of the NEPA process provide the responsible Headquarters PAA with an evaluation of significant environmental effects and issues that must be covered in the EIS. Preparation of the EIS itself may be carried out by NASA alone, or with the assistance or cooperation of other government agencies and/or a contractor. If a contractor is used, the contractor should execute a disclosure statement prepared by NASA Headquarters indicating that the contractor has no interest in the outcome of the project.

The section on environmental consequences is the analytic heart of the EIS, and provides the basis for the comparative evaluation of the alternatives. The analytic results for each alternative should be displayed in a way that highlights the choices offered the decision maker(s). An especially suitable form is a matrix showing the alternatives against the categories of environmental impact (e.g., air pollution, water pollution, endangered species). The matrix is filled in with (an estimate of) the magnitude of the environmental impact for each alternative and category. The subsequent discussion of alternatives is an extremely important part of the EIS, and should be given commensurate attention.
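A minimal sketch of such a matrix follows; the alternatives, impact categories, and magnitude entries are invented solely to show the layout.

    # Minimal sketch: the alternatives-vs-impact-category matrix suggested
    # above for an EIS. All rows, columns, and entries are illustrative.
    alternatives = ["Proposed action", "Alternate site", "No action"]
    categories = ["Air pollution", "Water pollution", "Endangered species"]
    impact = [["moderate", "low", "none"],
              ["low", "moderate", "low"],
              ["none", "none", "none"]]

    print(f"{'Alternative':18s}" + "".join(f"{c:>20s}" for c in categories))
    for alt, row in zip(alternatives, impact):
        print(f"{alt:18s}" + "".join(f"{v:>20s}" for v in row))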

NASA review of the draft EIS is managed by the Code J AA. When submitted for NASA review, the draft EIS should be accompanied by a proposed list of federal, state, and local officials, and other interested parties.

External review of the draft EIS is also managed by the Code J AA. A notice announcing the release and availability of the draft EIS is published in the Federal Register, and copies are distributed with a request for comments. Upon receipt of the draft, the Environmental Protection Agency (EPA) also places a notice in the Federal Register, and the date of that publication is the date that all time limits related to the draft's release begin. A minimum of 45 days must be allowed for comments. Comments from external reviewers received by the Code J AA will be sent to the office responsible for preparing the EIS. Each comment should be incorporated in the final EIS.

The draft form of the final EIS, modified as required by the review process just described, should be forwarded to the Code J AA for a final review before printing and distribution. The final version should include satisfactory responses to all responsible comments. While NASA need not yield to each and every opposing comment, NASA's position should be rational, logical, and based on data and arguments stronger than those cited by the commentors opposing the NASA views.

According to NHB 8800.11, Implementing the Provisions of the National Environmental Policy Act (Section 309.b), "an important element in the EIS process is involvement of the public. Early involvement can go a long way toward meeting complaints and objections regarding a proposed action, and experience has taught that a fully informed and involved public is considerably more supportive of a proposed action. When a proposed action is believed likely to generate significant public concern, the public should be brought in for consultation in the early planning stages. If an EIS is warranted, the public should be involved both in scoping and in the EIS review. Early involvement can help lead to selection of the best alternative and to the least public objection."

Record of Decision (ROD). When the EIS process has been completed and public review periods have elapsed, NASA is free to make and implement the decision(s) regarding the proposed project or action. At that time, a Record of Decision (ROD) is prepared by the Headquarters PAA responsible for the project or action. The ROD becomes the official public record of the consideration of environmental factors in reaching the decision. The ROD is not published in the Federal Register, but must be kept in the official files of the program/project in question and made available on request.

6.8.2 Nuclear Safety Launch Approval

Presidential Directive/National Security Council Memorandum-25 (PD/NSC-25) requires that flight projects calling for the use of radioactive sources follow a lengthy analysis and review process in order to seek approval for launch. The nuclear safety launch approval process is separate and distinct from the NEPA compliance process. While there may be overlaps in the data-gathering for both, the documentation required for NEPA and nuclear safety launch approval fulfills separate federal and NASA requirements. While NEPA is to be done at the earliest stages of the project, launch approval officially begins with Phase C/D.

Phase A/B activities are driven by the requirements of the EA/EIS. At the earliest possible time (not later than Phase A), the responsible Headquarters PAA must undertake to develop the project EA/EIS and a Safety Analysis/Launch Approval Plan in coordination with the nuclear power system integration engineer and/or the launch vehicle integration engineer. A primary purpose of the EA/EIS is to ensure a comprehensive assessment of the rationale for choosing a radioactive source. In addition, the EA/EIS illuminates the environmental effects of alternative mission designs, flight systems, and launch vehicles, as well as the relative nuclear safety concerns of each alternative.

The launch approval engineer ensures that the following specific requirements are met during Phase A:

• Conduct a radioactive source design trade study that includes the definition, spacecraft design impact evaluation, and cost trades of all reasonable alternatives

• Identify the flight system requirements that are specific to the radioactive source

• For nuclear power alternatives, identify flight system power requirements and alternatives, and define the operating and accident environments to allow DOE (U.S. Department of Energy) to assess the applicability of existing nuclear power system design(s).

During Phase B, activities depend on the specifics of the project's EA/EIS plan. The responsible Headquarters PAA determines whether the preparation and writing of the EA/EIS will be done at a NASA field center, at NASA Headquarters, or by a contractor, and what assistance will be required from other field centers, the launch facility, DOE, or other agencies and organizations. The launch approval engineer ensures that the following specific requirements are met during Phase B:

• Update and refine the project, flight system, launch vehicle, and radioactive source descriptions

• Update and refine the radioactive source design trade study developed during Phase A

• Assist DOE where appropriate in conducting a preliminary assessment of the mission's nuclear risk and environmental hazards.

The launch approval engineer is also responsible for coordinating the activities, interfaces, and record-keeping related to mission nuclear safety issues. The following tasks are managed by the launch approval engineer:

• Develop the project EA/EIS and Safety Analysis/Launch Approval Plan

• Maintain a database of documents related to EA/EIS and nuclear safety launch approval tasks. This database will help form and maintain the audit trail record of how and why technical decisions and choices are made in the mission development and planning process. Attention to this activity early on saves time and expense later in the launch approval process when the project may be called upon to explain why a particular method or alternative was given greater weight in the planning process.

• Provide documentation and review support as appropriate in the generation of mission data and trade studies required to support the EA/EIS and safety analyses

• Establish a project point-of-contact to the launch vehicle integration engineer, DOE, and NASA Headquarters regarding support to the EA/EIS and nuclear safety launch approval processes. This includes responding to public and Congressional queries regarding radioactive source safety issues, and supporting proceedings resulting from any litigation that may occur.

• Provide technical analysis support as required for the generation of accident and/or command destruct environments for the radioactive source safety analysis. The usual technique for the technical analysis is a probabilistic risk assessment (PRA). See Section 4.6.3.

6.8.3 Planetary Protection

The U.S. is a signatory to the United Nations Treaty of Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies. Known as the "Outer Space" treaty, it states in part (Article IX) that exploration of the Moon and other celestial bodies shall be conducted "so as to avoid their harmful contamination and also adverse changes in the environment of the Earth resulting from the introduction of extraterrestrial matter." NASA policy (NMI 8020.7D) specifies that the purpose of preserving solar system conditions is for future biological and organic constituent exploration. It also establishes the basic NASA policy for the protection of the Earth and its biosphere from planetary and other extraterrestrial sources of contamination.

The general regulations to which NASA flight projects must adhere are set forth in NHB 8020.12B, Planetary Protection Provisions for Robotic Extraterrestrial Missions. Different requirements apply to different missions, depending on which solar system object is targeted and the spacecraft or mission type (flyby, orbiter, lander, sample-return, etc.). For some bodies (such as the Sun, Moon, and Mercury), there are no outbound contamination requirements. Present requirements for the outbound phase of missions to Mars, however, are particularly rigorous. Planning for planetary protection begins in Phase A, during which feasibility of the mission is established. Prior to the end of Phase A, the project manager must send a letter to the Planetary Protection Officer (PPO) within the Office of the AA for Space Science stating the mission type and planetary targets, and requesting that the mission be assigned a planetary protection category. Table 7 shows the current planetary protection categories and a summary of their associated requirements.

Prior to the Preliminary Design Review (PDR) at the end of Phase B, the project manager must submit to the NASA PPO a Planetary Protection Plan detailing the actions that will be taken to meet the requirements. The project's progress and completion of the requirements are reported in a Planetary Protection Pre-Launch Report submitted to the NASA PPO for approval. The approval of this report at the Flight Readiness Review (FRR) constitutes the final approval for the project and must be obtained for permission to launch. An update to this report, the Planetary Protection Post-Launch Report, is prepared to report any deviations from the planned mission due to actual launch or early mission events. For sample return missions only, additional reports and reviews are required: prior to launch toward the Earth, prior to commitment to Earth reentry, and prior to the release of any extraterrestrial sample to the scientific community for investigation. Finally, at the formally declared end-of-mission, a Planetary Protection End-of-Mission Report is prepared. This document reviews the entire history of the mission in comparison to the original Planetary Protection Plan, and documents the degree of compliance with NASA's planetary protection requirements. This document is typically reported on by the NASA PPO at a meeting of the Committee on Space Research (COSPAR) to inform other spacefaring nations of NASA's degree of compliance with international planetary protection requirements.


Appendix A—Acronyms

Acronyms are useful because they provide a shorthand way to refer to an organization, a kind of document, an activity or idea, etc. within a generally understood context. Their overuse, however, can interfere with communications. The NASA Lexicon contains the results of an attempt to provide a comprehensive list of all acronyms used in NASA systems engineering. This appendix contains two lists: the acronyms used in this handbook and the acronyms for some of the major NASA organizations.

AA  Associate Administrator (NASA)
APA  Allowance for Program Adjustment
ACWP  Actual Cost of Work Performed
AGE  Aerospace Ground Equipment
AHP  Analytic Hierarchy Process
BCWP  Budgeted Cost of Work Performed
BCWS  Budgeted Cost of Work Scheduled
C/SCSC  Cost/Schedule Control System Criteria
CALS  Continuous Acquisition and Life-Cycle Support
CCB  Configuration (or Change) Control Board
CDR  Critical Design Review
CER  Cost Estimating Relationship
CI  Configuration Item
CIL  Critical Items List
CoF  Construction of Facilities
COSPAR  Committee on Space Research
COTR  Contracting Office Technical Representative
CPM  Critical Path Method
CR  Change Request
CSCI  Computer Software Configuration Item
CSM  Center for Systems Management
CWBS  Contract Work Breakdown Structure
DCR  Design Certification Review
DDT&E  Design, Development, Test and Evaluation
DoD  (U.S.) Department of Defense
DOE  (U.S.) Department of Energy
DR  Decommissioning Review
DSMC  Defense Systems Management College
EA  Environmental Assessment
EAC  Estimate at Completion
ECP  Engineering Change Proposal
ECR  Engineering Change Request
EIS  Environmental Impact Statement
EMC  Electromagnetic Compatibility
EMI  Electromagnetic Interference
EOM  End of Mission
EPA  (U.S.) Environmental Protection Agency
EVA  Extravehicular Activities
EVM  Earned Value Measurement
FCA  Functional Configuration Audit
FFBD  Functional Flow Block Diagram
FH  Flight Hardware
FMEA  Failure Modes and Effects Analysis
FMECA  Failure Modes, Effects, and Criticality Analysis
FONSI  Finding of No Significant Impact
FRR  Flight Readiness Review
GAO  General Accounting Office
GOES  Geosynchronous Orbiting Environmental Satellite
GSE  Ground Support Equipment
HQ  NASA Headquarters
HST  Hubble Space Telescope
I&V  Integration and Verification
ILS  Integrated Logistics Support
ILSP  Integrated Logistics Support Plan
ILSS  Integrated Logistics Support System
IOP  Institutional Operating Plan
IRAS  Infrared Astronomical Satellite
IV&V  Independent Verification and Validation
IVA  Intravehicular Activities
LEM  Lunar Excursion Module (Apollo)
LEO  Low Earth Orbit
LMEPO  Lunar/Mars Exploration Program Office
LMI  Logistics Management Institute
LOOS  Launch and Orbital Operations Support
LRU  Line Replaceable Unit
LSA  Logistics Support Analysis
LSAR  Logistics Support Analysis Record
MDT  Mean Downtime
MCR  Mission Concept Review
MDR  Mission Definition Review
MESSOC  Model for Estimating Space Station Operations
MICM  Multi-variable Instrument Cost Model
MLDT  Mean Logistics Delay Time
MMT  Mean Maintenance Time
MNS  Mission Needs Statement
MoE  Measure of (system) Effectiveness
MRB  Material Review Board
MRR  Mission Requirements Review
MTBF  Mean Time Between Failures
MTTF  Mean Time To Failure
MTTMA  Mean Time To a Maintenance Action
MTTR  Mean Time To Repair/Restore
NAR  Non-Advocate Review
NCR  Non-Compliance (or Non-Conformance) Report
NEPA  National Environmental Policy Act
NHB  NASA Handbook
NMI  NASA Management Instruction
NOAA  (U.S.) National Oceanic and Atmospheric Administration
NOI  Notice of Intent
OMB  Office of Management and Budget


OMRSD  Operations and Maintenance Requirements and Specifications Document (KSC)
ORLA  Optimum Repair Level Analysis
ORR  Operational Readiness Review
ORU  Orbital Replacement Unit
P/FR  Problem/Failure Report
PAA  Program Associate Administrator (NASA)
PAR  Program/Project Approval Review
PBS  Product Breakdown Structure
PCA  Physical Configuration Audit
PDR  Preliminary Design Review
PDT  Product Development Team
PDV  Present Discounted Value
PERT  Program Evaluation and Review Technique
POP  Program Operating Plan
PPAR  Preliminary Program/Project Approval Review
PPO  Planetary Protection Officer
PRA  Probabilistic Risk Assessment
PRD  Program Requirements Document
ProRR  Production Readiness Review
QA  Quality Assurance
QFD  Quality Function Deployment
RAM  Reliability, Availability, and Maintainability
RAS  Requirements Allocation Sheet
RID  Review Item Discrepancy
RMP  Risk Management Plan
ROD  Record of Decision
RTG  Radioisotope Thermoelectric Generator
SAR  System Acceptance Review
SDR  System Definition Review
SEB  Source Evaluation Board
SEMP  Systems Engineering Management Plan
SEPIT  Systems Engineering Process Improvement Task
SEWG  Systems Engineering Working Group (NASA)
SI  Le Systeme International d'Unites (the international [metric] system of units)
SIRTF  Space Infrared Telescope Facility
SOFIA  Stratospheric Observatory for Infrared Astronomy
SoSR  Software Specification Review
SoW  Statement of Work
SSR  System Safety Review
SRD  System/Segment Requirements Document
SRM&QA  Safety, Reliability, Maintainability, and Quality Assurance
SRR  System Requirements Review
STEP  Standard for the Exchange of Product (model data)
STS  Space Transportation System
SSA  Space Station Alpha
SSF  Space Station Freedom
TBD  To Be Determined; To Be Done
TDRS  Tracking and Data Relay Satellite
TLA  Time Line Analysis
TLS  Time Line Sheet
TPM  Technical Performance Measure(ment)
TQM  Total Quality Management
TRR  Test Readiness Review
V&V  Verification and Validation
VMP  Verification Master Plan
VRCD  Verification Requirements Compliance Document
VRM  Verification Requirements Matrix
VRSD  Verification Requirements and Specifications Document
WBS  Work Breakdown Structure
WFD  Work Flow Diagram

NASA Organizations

ARC  Ames Research Center, Moffett Field CA 94035

COSMIC  Computer Software Management & Information Center, University of Georgia, 382 E. Broad St., Athens GA 30602

GISS  Goddard Institute for Space Studies (GSFC), 2880 Broadway, New York NY 10025

GSFC  Goddard Space Flight Center, Greenbelt Rd., Greenbelt MD 20771

HQ  National Aeronautics and Space Administration Headquarters, Washington DC 20546

JPL  Jet Propulsion Laboratory, 4800 Oak Grove Dr., Pasadena CA 91109

JSC  Lyndon B. Johnson Space Center, Houston TX 77058

KSC  John F. Kennedy Space Center, Kennedy Space Center FL 32899

SCC  Slidell Computer Complex, 1010 Gauss Blvd., Slidell LA 70458

SSC  John C. Stennis Space Center, Stennis Space Center MS 39529

STIF  Scientific & Technical Information Facility, P.O. Box 8757, BWI Airport MD 21240

WFF  Wallops Flight Facility (GSFC), Wallops Island VA 23337

WSTF  White Sands Test Facility (JSC), P.O. Drawer MM, Las Cruces NM 88004


Appendix B—Systems Engineering Templates and Examples

Appendix B.1—A Sample SEMP Outline

An outline recommended by the Defense Systems Management College for the Systems Engineering Management Plan is shown below. This outline is a sample only, and should be tailored for the nature of the project and its inherent risks.

Systems Engineering Management Plan

Title Page
Introduction

Part 1 - Technical Program Planning and Control
  1.0 Responsibilities and Authority
  1.1 Standards, Procedures, and Training
  1.2 Program Risk Analysis
  1.3 Work Breakdown Structures
  1.4 Program Review
  1.5 Technical Reviews
  1.6 Technical Performance Measurements
  1.7 Change Control Procedures
  1.8 Engineering Program Integration
  1.9 Interface Control
  1.10 Milestones/Schedule
  1.11 Other Plans and Controls

Part 2 - Systems Engineering Process
  2.0 Mission and Requirements Analysis
  2.1 Functional Analysis
  2.2 Requirements Allocation
  2.3 Trade Studies
  2.4 Design Optimization/Effectiveness Compatibility
  2.5 Synthesis
  2.6 Technical Interface Compatibility
  2.7 Logistic Support Analysis
  2.8 Producibility Analysis
  2.9 Specification Tree/Specifications
  2.10 Documentation
  2.11 Systems Engineering Tools

Part 3 - Engineering Specialty/Integration Requirements
  3.1 Integration Design/Plans
    3.1.1 Reliability
    3.1.2 Maintainability
    3.1.3 Human Engineering
    3.1.4 Safety
    3.1.5 Standardization
    3.1.6 Survivability/Vulnerability
    3.1.7 Electromagnetic Compatibility/Interference
    3.1.8 Electromagnetic Pulse Hardening
    3.1.9 Integrated Logistics Support
    3.1.10 Computer Resources Lifecycle Management Plan
    3.1.11 Producibility
    3.1.12 Other Engineering Specialty Requirements/Plans
  3.2 Integration System Test Plans
  3.3 Compatibility with Supporting Activities
    3.3.1 System Cost-Effectiveness
    3.3.2 Value Engineering
    3.3.3 TQM/Quality Assurance
    3.3.4 Materials and Processes


Appendix B.2 -- A "Tailored" WBS for anAirborne Telescope

Figure B-1 shows a partial Product Breakdown Structure (PBS) for the proposed Stratospheric Observatory for Infrared Astronomy (SOFIA), a 747SP aircraft outfitted with a 2.5 to 3.0 m telescope. The PBS has been elaborated for the airborne facility's telescope element. The PBS level names have been made consistent with the sidebar on page 3 of this handbook.

Figures B-2 through B-5 show the corresponding Work Breakdown Structures (WBSs) based on the principles in Section 4.3 of this handbook. At each level, the prime product deliverables from the PBS are WBS elements. The WBS is completed at each level by adding needed service (i.e., functional) elements such as management, systems engineering, integration and test, etc. The integration and test WBS element at each level refers to the activities of unifying prime product deliverables at that level.

Although the SOFIA project is used as an illustration in this appendix, the SOFIA WBS should be tailored to fit actual conditions at the start of Phase C/D as determined by the project manager. One example of a condition that could substantially change the WBS is international participation in the project.


Appendix B.4—A Sample Risk Management Plan Outline

1.0 Introduction
  1.1 Purpose and Scope of the RMP
  1.2 Applicable Documents and Definitions
  1.3 Program/Project (or System) Description

2.0 Risk Management Approach
  2.1 Risk Management Philosophy/Overview
  2.2 Management Organization and Responsibilities
  2.3 Schedule, Milestones, and Reviews
  2.4 Related Program Plans
  2.5 Subcontractor Risk Management
  2.6 Program/Project Risk Metrics

3.0 Risk Management Methodologies, Processes, and Tools
  3.1 Risk Identification and Characterization
  3.2 Risk Analysis
  3.3 Risk Mitigation and Tracking

4.0 Significant Identified Risks*
  4.1 Technical Risks
  4.2 Programmatic Risks
  4.3 Supportability Risks
  4.4 Cost Risks
  4.5 Schedule Risks

* Each subsection contains risk descriptions, characterizations, analysis results, mitigation actions, and reporting metrics.


Appendix B.6—A Sample Configuration Management Plan Outline

1.0 Introduction
  1.1 Description of the CIs
  1.2 Program Phasing and Milestones
  1.3 Special Features

2.0 Organization
  2.1 Structure and Tools
  2.2 Authority and Responsibility
  2.3 Directives and Reference Documents

3.0 Configuration Identification
  3.1 Baselines
  3.2 Specifications

4.0 Configuration Control
  4.1 Baseline Release
  4.2 Procedures
  4.3 CI Audits

5.0 Interface Management
  5.1 Documentation
  5.2 Interface Control

6.0 Configuration Traceability
  6.1 Nomenclature and Numbering
  6.2 Hardware Identification
  6.3 Software and Firmware Identification

7.0 Configuration Status Accounting and Communications
  7.1 Data Bank Description
  7.2 Data Bank Content
  7.3 Reporting

8.0 Configuration Management Audits

9.0 Subcontractor/Vendor Control


Appendix B.7—Techniques of Functional Analysis

Appendix B.7 is reproduced from the Defense Systems Management Guide, published January 1990 by the Defense Systems Management College, Ft. Belvoir, VA.

• • •

System requirements are analyzed to identify those functions which must be performed to satisfy the objectives of each functional area. Each function is identified and described in terms of inputs, outputs, and interface requirements from top down so that subfunctions are recognized as part of larger functional areas. Functions are arranged in a logical sequence so that any specified operational usage of the system can be traced in an end-to-end path. Although there are many tools available, functional identification is accomplished primarily through the use of 1) functional flow block diagrams (FFBDs) to depict task sequences and relationships, 2) N2 diagrams to develop data interfaces, and 3) time line analyses to depict the time sequence of time-critical functions.

B.7.1 Functional Flow Block Diagrams

The purpose of the FFBD is to indicate the sequential relationship of all functions that must be accomplished by a system. FFBDs depict the time sequence of functional events. That is, each function (represented by a block) occurs following the preceding function. Some functions may be performed in parallel, or alternate paths may be taken. The duration of the function and the time between functions is not shown, and may vary from a fraction of a second to many weeks. The FFBDs are function oriented, not equipment oriented. In other words, they identify "what" must happen and do not assume a particular answer to "how" a function will be performed.

FFBDs are developed in a series of levels. FFBDs show the same tasks identified through functional decomposition and display them in their logical, sequential relationship. For example, the entire flight mission of a spacecraft can be defined in a top level FFBD, as shown in Figure B-6. Each block in the first level diagram can then be expanded to a series of functions, as shown in the second level diagram for "perform mission operations." Note that the diagram shows both input (transfer to operational orbit) and output (transfer to space transportation system orbit), thus initiating the interface identification and control process. Each block in the second level diagram can be progressively developed into a series of functions, as shown in the third level diagram on Figure B-6. These diagrams are used both to develop requirements and to identify profitable trade studies. For example, does the spacecraft antenna acquire the tracking and data relay satellite (TDRS) only when the payload data are to be transmitted, or does it track TDRS continually to allow for the reception of emergency commands or transmission of emergency data? The FFBD also incorporates alternate and contingency operations, which improve the probability of mission success. The flow diagram provides an understanding of total operation of the system, serves as a basis for development of operational and contingency procedures, and pinpoints areas where changes in operational procedures could simplify the overall system operation. In certain cases, alternate FFBDs may be used to represent various means of satisfying a particular function until data are acquired, which permits selection among the alternatives.
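Because an FFBD is function-oriented and sequential, it maps naturally onto a directed graph. The Python sketch below shows one way to record blocks and trace every end-to-end path; the function names and the alternate path are simplified from the spacecraft example above and are not an official decomposition.

    # Minimal sketch: an FFBD as a directed graph of functions, with
    # end-to-end path tracing. Function names are illustrative only.
    from collections import defaultdict

    successors = defaultdict(list)   # function -> successor functions

    def connect(src, dst):
        successors[src].append(dst)

    connect("1.0 Launch", "2.0 Transfer to operational orbit")
    connect("2.0 Transfer to operational orbit", "3.0 Perform mission operations")
    connect("3.0 Perform mission operations", "4.0 Transfer to STS orbit")
    connect("3.0 Perform mission operations", "3.5 Contingency operations")

    def trace(function, path=()):
        """Print every end-to-end path reachable from a function."""
        path = path + (function,)
        if not successors[function]:
            print(" -> ".join(path))
        for nxt in list(successors[function]):
            trace(nxt, path)

    trace("1.0 Launch")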

B.7.2 N2 Diagrams

The N2 diagram has been used extensively to develop data interfaces, primarily in the software areas. However, it can also be used to develop hardware interfaces. The basic N2 chart is shown in Figure B-7. The system functions are placed on the diagonal; the remainder of the squares in the N x N matrix represent the interface inputs and outputs. Where a blank appears, there is no interface between the respective functions. Data flows in a clockwise direction between functions (e.g., the symbol F1 F2 indicates data flowing from function F1 to function F2). The data being transmitted can be defined in the appropriate squares. Alternatively, the use of circles and numbers permits a separate listing of the data interfaces, as shown in Figure B-8. The clockwise flow of data between functions that have a feedback loop can be illustrated by a larger circle called a control loop. The identification of a critical function is also shown in Figure B-8, where function F4 has a number of inputs and outputs to all other functions in the upper module. A simple flow of interface data exists between the upper and lower modules at functions F7 and F8. The lower module has complex interaction among its functions. The N2 chart can be taken down into successively lower levels to the hardware and software component functional levels. In addition to defining the data that must be supplied across the interface, the N2 chart can pinpoint areas where conflicts could arise.
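A minimal sketch of the N2 representation in code follows; functions occupy the diagonal of an N x N array, and off-diagonal cell [i][j] holds the data flowing from function i to function j. The functions and interface labels are invented for illustration.

    # Minimal sketch: an N2 chart as an N x N matrix. Diagonal = functions;
    # cell [i][j] = data output from function i that is input to function j.
    functions = ["F1", "F2", "F3", "F4"]
    n = len(functions)
    n2 = [["" for _ in range(n)] for _ in range(n)]
    for i, name in enumerate(functions):
        n2[i][i] = name

    n2[0][1] = "attitude data"   # F1 -> F2
    n2[1][3] = "command list"    # F2 -> F4
    n2[3][0] = "telemetry"       # F4 -> F1 (feedback: part of a control loop)

    for row in n2:
        print(" | ".join(f"{cell:14s}" for cell in row))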


B.7.3 Time Line Analysis

Time line analysis adds consideration of functional durations and is used to support the development of design requirements for operation, test, and maintenance functions. The time line sheet (TLS) is used to perform and record the analysis of time critical functions and functional sequences. Additional tools such as mathematical models and computer simulations may be necessary. Time line analysis is performed on those areas where time is critical to mission success, safety, utilization of resources, minimization of down time, and/or increasing availability. Not all functional sequences require time line analysis, only those in which time is a critical factor. The following areas are often categorized as time critical: 1) functions affecting system reaction time, 2) mission turnaround time, 3) time countdown activities, and 4) functions requiring time line analysis to determine optimum equipment and/or personnel utilization. An example of a high level TLS for a space program is shown in Figure B-9.

For time critical function sequences, the time requirements are specified with associated tolerances. Time line analyses play an important role in the trade-off process between man and machine. The decisions between automatic and manual methods will be made and will determine what times are allocated to what subfunctions. In addition to defining subsystem/component time requirements, time line analysis can be used to develop trade studies in areas other than time considerations (e.g., should the spacecraft location be determined by the ground network or by onboard computation using navigation satellite inputs?). Figure B-10 is an example of a maintenance TLS which illustrates that availability of an item (a distiller) is dependent upon the completion of numerous maintenance tasks accomplished concurrently. Furthermore, it illustrates the traceability to higher level requirements by referencing the appropriate FFBD and requirement allocation sheet (RAS).
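A minimal computational sketch of the concurrent-maintenance idea in Figure B-10 follows; the tasks, start times, and durations are invented, and the point is simply that item availability is set by the latest-finishing concurrent task.

    # Minimal sketch: concurrent maintenance tasks on a time line; the item
    # is available when the last task finishes. Assumed task data.
    tasks = {                         # task -> (start hour, duration hours)
        "Drain and purge":  (0.0, 1.5),
        "Replace filter":   (0.5, 1.0),
        "Recalibrate":      (1.0, 2.0),
    }
    available_at = max(start + dur for start, dur in tasks.values())
    print(f"Item available at T+{available_at:.1f} h")   # T+3.0 h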


Appendix B.8 -- The Effect of Changes in ORU MTBF on Space Station Freedom Operations

The reliability of Space Station Freedom's (SSF) Orbital Replacement Units (ORUs) has a profound effect on its operations costs. This reliability is measured by the Mean Time Between Failures (MTBF). One study of the effects, by Dr. William F. Fisher and Charles Price, was SSF External Maintenance Task Team Final Report (JSC, July 1990). Another, by Anne Accola, et al., shows these effects parametrically. Appendix B.8 excerpts this paper, Sensitivity Study of SSF Operations Costs and Selected User Resources (presented at the International Academy of Astronautics Symposium on Space Systems Costs Methodologies and Applications, May 1990).

• • •

There are many potential tradeoffs that can be performed during the design stage of SSF. Many of them have major implications for crew safety, operations cost, and achievement of mission goals. Operations costs and important non-cost operations parameters are examined. One example of a specific area of concern in design is the reliability of the ORUs that comprise SSF. The implications of ORU reliability on logistics upmass and downmass to and from SSF are great, thus affecting the resources available for utilization and for other operations activities. In addition, the implications of reliability on crew time available for mission accomplishment (i.e., experiments) vs. station maintenance are important.

The MTBF effect on operations cost is shown in Figure B-11. Repair and spares costs are influenced greatly by varying MTBF. Repair costs are inversely proportional to MTBF, as are replacement spares. The initial spares costs are also influenced by variables other than MTBF. The combined spares cost, consisting of initial and replacement spares, is not as greatly affected as are repair costs. The five-year operations cost is increased by only ten percent if all ORU MTBF are halved. The total operations cost is reduced by three percent if all ORU MTBF are doubled. It would almost appear that MTBF is not as important as one would think. However, MTBF also affects available crew time and available upmass much more than operations cost, as shown in Figures B-12 and B-13.
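The inverse relationship the study describes can be sketched directly: expected ORU replacements over a period are roughly operating time divided by MTBF, so halving MTBF doubles replacements and the associated repair and spares demand. The MTBF values below are assumptions for illustration, not SSF data.

    # Minimal sketch: expected replacements scale as time / MTBF.
    HOURS_PER_YEAR = 8760.0

    def expected_replacements(mtbf_hours, years=5.0, units=1):
        return units * years * HOURS_PER_YEAR / mtbf_hours

    nominal = expected_replacements(mtbf_hours=20_000.0)   # assumed MTBF
    halved = expected_replacements(mtbf_hours=10_000.0)
    print(halved / nominal)   # -> 2.0: halving MTBF doubles replacements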

Available crew time is a valuable commodity because it is a limited resource. Doubling the number of ORU replacements (by decreasing the MTBF) increases the maintenance crew time by 50 percent, thus reducing the amount of time available to perform useful experiments or scientific work by 22 percent. By halving the ORU replacements, the maintenance crew time decreases by 20 percent and the available crew time increases by eight percent.

Available upmass is another valuable resource because a fixed number of Space Shuttle flights can transport only a fixed amount of payload to the SSF. Extra ORUs taken to orbit reduce the available upmass that could be used to take up experimental payloads. Essentially, by doubling the number of ORU replacements, the available upmass is driven to zero. Conversely, halving the number of ORU replacements increases the available upmass by 30 percent.

Although the effects of MTBF on resources are interesting, it is a good idea to quantify the effectiveness of the scenarios based on total cost to maintain the nominal resources. Figure B-14 shows the number of crew members needed each year to maintain the available crew time. The figure shows that to maintain the nominal available crew time after doubling the number of ORU replacements, the Station would need two extra crew members. It should be noted that no attempt was made to assess the design capability or design cost impacts to accommodate these extra crew members. The savings of crew due to halving the number of ORU replacements is small, effectively one less crew member for half the year.

Figure B-15 shows the number of Space Shuttle flights over five years needed to maintain the nominal available upmass. The Space Shuttle flights were rounded upward to obtain whole flights. Doubling the number of ORU replacements would mean eight extra Space Shuttle flights would be needed over five years. Halving the ORU replacements would require two fewer Space Shuttle flights over five years. No attempt was made to assess the Space Shuttle capability to provide the extra flights or the design cost impacts to create the ORUs with the different reliabilities.

Figure B-16 shows the effect of assessing the cost impact of the previous two figures and combining them with the five-year operations cost. The influence of MTBF is effectively doubled when the resources of available upmass and crew time are maintained at their nominal values.


Appendix B.9 -- An Example of a Verification Requirements Matrix

Appendix B.9 is a small portion of the Verification Requirements Matrix (VRM) from the National Launch System Level III System Requirements Document, originally published at the Marshall Space Flight Center. Its purpose here is to illustrate the content and one possible format for a VRM. The VRM coding key for this example is shown on the next page.
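Since the matrix itself is reproduced as a figure, the fragment below is only a hypothetical sketch of how VRM rows might be captured in code; the requirement numbers, titles, and method codes are invented and do not follow the NLS coding key referenced above.

    # Hypothetical sketch of VRM rows: requirement -> verification method
    # and stage. T = test, A = analysis, I = inspection (invented codes).
    vrm = [
        ("3.2.1.1", "Engine thrust",      "T", "Qualification"),
        ("3.2.1.2", "Structural margins", "A", "Qualification"),
        ("3.2.4.3", "Connector mating",   "I", "Acceptance"),
    ]
    for req_id, title, method, stage in vrm:
        print(f"{req_id:10s} {title:20s} {method}  {stage}")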


Appendix B.10—A Sample Master Verification Plan Outline

1.0 Introduction
  1.1 Scope
  1.2 Applicable Documents
  1.3 Document Maintenance and Control

2.0 Program/Project Description
  2.1 Program/Project Overview and Verification Master Schedule
  2.2 Systems Descriptions
  2.3 Subsystems Descriptions

3.0 Integration and Verification (I&V) Organization and Staffing
  3.1 Program/Project Management Offices
  3.2 NASA Field Center I&V Organizations
  3.3 International Partner I&V Organizations
  3.4 Prime Contractor I&V Organization
  3.5 Subcontractor I&V Organizations

4.0 Verification Team Operational Relationships
  4.1 Verification Team Scheduling and Review Meetings
  4.2 Verification and Design Reviews
  4.3 Data Discrepancy Reporting and Resolution Procedures

5.0 Systems Qualification Verification
  5.1 Tests*
  5.2 Analyses
  5.3 Inspections
  5.4 Demonstrations

6.0 Systems Acceptance Verification
  6.1 Tests*
  6.2 Analyses
  6.3 Inspections
  6.4 Demonstrations

7.0 Launch Site Verification

8.0 On-Orbit Verification

9.0 Post-Mission/Disposal Verification

10.0 Verification Documentation

11.0 Verification Methodology

12.0 Support Equipment
  12.1 Ground Support Equipment
  12.2 Flight Support Equipment
  12.3 Transportation, Handling, and Other Logistics Support
  12.4 TDRSS/NASCOM Support

13.0 Facilities

* This section contains subsections for each type of test, e.g., EMI/EMC, mechanisms, thermal/vacuum. This further division by type applies also to analyses, inspections, and demonstrations.


Appendix C—Use of the Metric System

C.1 NASA Policy

It is NASA policy (see NMI 8010.2A and NHB 7120.5) to:

• Adopt the International System of Units, known by the international abbreviation SI and defined by ANSI/IEEE Std 268-1992, as the preferred system of weights and measurements for all major system development programs.

• Use the metric system in procurements, grants, and business-related activities to the extent economically feasible.

• Permit continued use of the inch-pound system of measurement for existing systems.

• Permit hybrid metric and inch-pound systems when full use of metric units is impractical or will compromise safety or performance.

C.2 Definitions of Units

Parts of Appendix C are reprinted from IEEE Std 268-1992, American National Standard for Metric Practice, Copyright 1992 by the Institute of Electrical and Electronics Engineers, Inc. The IEEE disclaims any responsibility or liability resulting from the placement and use in this publication. Information is reprinted with the permission of the IEEE.

• • •

Outside the United States, the comma is widely used as a decimal marker. In some applications, therefore, the common practice in the United States of using the comma to separate digits into groups of three (as in 23,478) may cause ambiguity. To avoid this potential source of confusion, recommended international practice calls for separating the digits into groups of three, counting from the decimal point toward the left and the right, and using a thin space to separate the groups. In numbers of four digits on either side of the decimal point the space is usually not necessary, except for uniformity in tables.
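A minimal sketch of that grouping rule in Python follows (the thin space is U+2009); the usual exception for four-digit numbers is noted but omitted for brevity.

    # Minimal sketch: group digits in threes on both sides of the decimal
    # point using thin spaces, per the international practice above. The
    # four-digit exception is not implemented here.
    THIN_SPACE = "\u2009"

    def group_digits(number_text):
        whole, _, frac = number_text.partition(".")
        whole = format(int(whole), ",").replace(",", THIN_SPACE)
        frac = THIN_SPACE.join(frac[i:i + 3] for i in range(0, len(frac), 3))
        return whole + ("." + frac if frac else "")

    print(group_digits("299792458.123456"))   # 299 792 458.123 456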

C.2.1 SI Prefixes

The names of multiples and submultiples of SI units may be formed by application of the prefixes and symbols shown in the sidebar.

Prefixes for SI Units

Factor  Prefix  Sym.  Pronunciation
10²⁴    yotta   Y     YOTT-a (a as in about)
10²¹    zetta   Z     ZETT-a (a as in about)
10¹⁸    exa     E     EX-a (a as in about)
10¹⁵    peta    P     PET-a (as in petal)
10¹²    tera    T     TERR-a (as in terrace)
10⁹     giga    G     GIG-a (g as in giggle, a as in about)
10⁶     mega    M     MEG-a (as in megaphone)
10³     kilo    k     KILL-oh**
10²     hecto*  h     HECK-toe
10      deka*   da    DECK-a (as in decahedron)
10⁻¹    deci*   d     DESS-ih (as in decimal)
10⁻²    centi*  c     SENT-ih (as in centipede)
10⁻³    milli   m     MILL-ih (as in military)
10⁻⁶    micro   µ     MIKE-roe (as in microphone)
10⁻⁹    nano    n     NAN-oh (a as in ant)
10⁻¹²   pico    p     PEEK-oh
10⁻¹⁵   femto   f     FEM-toe
10⁻¹⁸   atto    a     AT-toe (a as in hat)
10⁻²¹   zepto   z     ZEP-toe (e as in step)
10⁻²⁴   yocto   y     YOCK-toe

* The prefixes that do not represent 1000 raised to a power (that is, hecto, deka, deci, and centi) should be avoided where practical.
** The first syllable of every prefix is accented to assure that the prefix will retain its identity. Kilometer is not an exception.

(The unit of mass, the kilogram, is the only exception; for historical reasons, the gram is used as the base for construction of names.)
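For computation, the sidebar reduces to a table of powers of ten. The sketch below is illustrative only (the names SI_PREFIX_FACTORS and rescale are invented here); it shows one way to re-express a quantity given with one SI prefix in terms of another.

    SI_PREFIX_FACTORS = {
        'Y': 1e24, 'Z': 1e21, 'E': 1e18, 'P': 1e15, 'T': 1e12,
        'G': 1e9,  'M': 1e6,  'k': 1e3,  'h': 1e2,  'da': 1e1,
        '': 1.0,
        'd': 1e-1, 'c': 1e-2, 'm': 1e-3, 'µ': 1e-6, 'n': 1e-9,
        'p': 1e-12, 'f': 1e-15, 'a': 1e-18, 'z': 1e-21, 'y': 1e-24,
    }

    def rescale(value: float, from_prefix: str, to_prefix: str) -> float:
        # Re-express a quantity given with one SI prefix in terms of another.
        return value * SI_PREFIX_FACTORS[from_prefix] / SI_PREFIX_FACTORS[to_prefix]

    print(rescale(1.0, 'k', 'm'))  # 1 km expressed in mm: 1000000.0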

C.2.2 Base SI Units

ampere (A) The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross section, and placed one meter apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per meter of length.

candela (cd) The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10¹² Hz and that has a radiant intensity in that direction of 1/683 watt per steradian.

kelvin (K) The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.

kilogram (kg) The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram. (The international prototype of the kilogram is a particular cylinder of platinum-iridium alloy which is preserved in a vault at Sèvres, France, by the International Bureau of Weights and Measures.)

meter (m) The meter is the length of the path traveled by light in a vacuum during a time interval of 1/299 792 458 of a second.

mole (mol) The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon-12. Note: When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.

second (s) The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom.

C.2.3 Supplementary SI Units

radian (rad) The radian is the plane angle between two radii of a circle that cut off on the circumference an arc equal in length to the radius.

steradian (sr) The steradian is the solid angle that, having its vertex in the center of a sphere, cuts off an area of the surface of the sphere equal to that of a square with sides of length equal to the radius of the sphere.

C.2.4 Derived SI Units with Special Names

In addition to the units defined in this subsection, many quantities are measured in terms of derived units which do not have special names, such as velocity in m/s, electric field strength in V/m, and entropy in J/K.

becquerel (Bq = 1/s) The becquerel is the activity of a radionuclide decaying at the rate of one spontaneous nuclear transition per second.

degree Celsius (°C = K) The degree Celsius is equal to the kelvin and is used in place of the kelvin for expressing Celsius temperature defined by the equation t = T - T0, where t is the Celsius temperature, T is the thermodynamic temperature, and T0 = 273.15 K (by definition). (A conversion sketch appears at the end of this subsection.)

coulomb (C = A·s) Electric charge is the time integral of electric current; its unit, the coulomb, is equal to one ampere second.

farad (F = C/V) The farad is the capacitance of a capacitor between the plates of which there appears a difference of potential of one volt when it is charged by a quantity of electricity equal to one coulomb.

gray (Gy = J/kg) The gray is the absorbed dose when the energy per unit mass imparted to matter by ionizing radiation is one joule per kilogram. (The gray is also used for the ionizing radiation quantities: specific energy imparted, kerma, and absorbed dose index.)

henry (H = Wb/A) The henry is the inductance of a closed circuit in which an electromotive force of one volt is produced when the electric current in the circuit varies uniformly at a rate of one ampere per second.

hertz (Hz = 1/s) The hertz is the frequency of a periodic phenomenon of which the period is one second.

joule (J = N·m) The joule is the work done when the point of application of a force of one newton is displaced a distance of one meter in the direction of the force.

lumen (lm = cd·sr) The lumen is the luminous flux emitted in a solid angle of one steradian by a point source having a uniform intensity of one candela.

lux (lx = lm/m²) The lux is the illuminance produced by a luminous flux of one lumen uniformly distributed over a surface of one square meter.

newton (N = kg·m/s²) The newton is that force which, when applied to a body having a mass of one kilogram, gives it an acceleration of one meter per second squared.

ohm (Ω = V/A) The ohm is the electric resistance between two points of a conductor when a constant difference of potential of one volt, applied between these two points, produces in this conductor a current of one ampere, this conductor not being the source of any electromotive force.

pascal (Pa = N/m²) The pascal [which, in the preferred pronunciation, rhymes with rascal] is the pressure or stress of one newton per square meter.

siemens (S = A/V) The siemens is the electric conductance of a conductor in which a current of one ampere is produced by an electric potential difference of one volt.

sievert (Sv = J/kg) The sievert is the dose equivalent when the absorbed dose of ionizing radiation multiplied by the dimensionless factors Q (quality factor) and N (product of any other multiplying factors) stipulated by the International Commission on Radiological Protection is one joule per kilogram.

tesla (T = Wb/m²) The tesla is the magnetic flux density of one weber per square meter. In an alternative approach to defining the magnetic field quantities, the tesla may also be defined as the magnetic flux density that produces on a one-meter length of wire carrying a current of one ampere, oriented normal to the flux density, a force of one newton, magnetic flux density being defined as an axial vector quantity such that the force exerted on an element of current is equal to the vector product of this element and the magnetic flux density.

volt (V = W/A) The volt (unit of electric potential difference and electromotive force) is the difference of electric potential between two points of a conductor carrying a constant current of one ampere, when the power dissipated between these points is equal to one watt.

watt (W = J/s) The watt is the power that represents a rate of energy transfer of one joule per second.

weber (Wb = V·s) The weber is the magnetic flux that, linking a circuit of one turn, produces in it an electromotive force of one volt as it is reduced to zero at a uniform rate in one second.
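As an illustration of the degree Celsius definition above (t = T - T0, with T0 = 273.15 K), the following minimal sketch converts between Celsius and thermodynamic temperature; the constant and function names are invented here.

    T0_KELVIN = 273.15  # by definition

    def kelvin_to_celsius(T: float) -> float:
        # Celsius temperature t corresponding to thermodynamic temperature T.
        return T - T0_KELVIN

    def celsius_to_kelvin(t: float) -> float:
        return t + T0_KELVIN

    print(kelvin_to_celsius(273.16))  # triple point of water: 0.01 degrees Celsius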

C.2.5 Units in Use with SI

Time The SI unit of time is the second. This unit is preferred and should be used if practical, particularly when technical calculations are involved. In cases where time relates to life customs or calendar cycles, the minute, hour, day, and other calendar units may be necessary. For example, vehicle speed will normally be expressed in kilometers per hour.

minute (min)   1 min = 60 s
hour (h)       1 h = 60 min = 3600 s
day (d)        1 d = 24 h = 86 400 s
week, month, etc.

Plane angle The SI unit for plane angle is the radian. Use of the degree and its decimal submultiples is permissible when the radian is not a convenient unit. Use of the minute and second is discouraged except for special fields such as astronomy and cartography.

degree (°)   1° = (π/180) rad
minute (')   1' = (1/60)° = (π/10 800) rad
second (")   1" = (1/60)' = (π/648 000) rad
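These exact relations translate directly into code. The sketch below is illustrative only (dms_to_radians and the constant names are invented here); it converts an angle given in degrees, minutes, and seconds to radians.

    import math

    RAD_PER_DEG = math.pi / 180        # exact
    RAD_PER_ARCMIN = math.pi / 10800   # exact
    RAD_PER_ARCSEC = math.pi / 648000  # exact

    def dms_to_radians(deg: float, minutes: float = 0.0, seconds: float = 0.0) -> float:
        # Sum the three contributions, each converted exactly.
        return deg * RAD_PER_DEG + minutes * RAD_PER_ARCMIN + seconds * RAD_PER_ARCSEC

    print(dms_to_radians(180))      # 3.141592653589793
    print(dms_to_radians(0, 0, 1))  # one arcsecond, about 4.848e-06 rad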

Area The SI unit of area is the square meter (m²). The hectare (ha) is a special name for the square hectometer (hm²). Large land or water areas are generally expressed in hectares or in square kilometers (km²).

Volume The SI unit of volume is the cubic meter. This unit, or one of the regularly formed multiples such as the cubic centimeter, is preferred. The special name liter has been approved for the cubic decimeter, but use of this unit is restricted to volumetric capacity, dry measure, and measure of fluids (both liquids and gases). No prefix other than milli- or micro- should be used with liter.

Mass The SI unit of mass is the kilogram. This unit, or one of the multiples formed by attaching an SI prefix to gram (g), is preferred for all applications. The megagram (Mg) is the appropriate unit for measuring large masses such as have been expressed in tons. However, the name ton has been given to several large mass units that are widely used in commerce and technology: the long ton of 2240 lb, the short ton of 2000 lb, and the metric ton of 1000 kg (also called tonne outside the USA), which is almost 2205 lb. None of these terms are SI. The term metric ton should be restricted to commercial usage, and no prefixes should be used with it. Use of the term tonne is deprecated.

Others The ANSI/IEEE standard lists the kilowatthour (1 kWh = 3.6 MJ) in the category of "Units in Use with SI Temporarily". The SI unit of energy, the joule, together with its multiples, is preferred for all applications. The kilowatthour is widely used, however, as a measure of electric energy. This unit should not be introduced into any new areas, and eventually it should be replaced by the megajoule. In that same "temporary" category, the standard also defines the barn (1 b = 10⁻²⁸ m²) for cross section, the bar (1 bar = 10⁵ Pa) for pressure, the curie (1 Ci = 3.7 × 10¹⁰ Bq) for radionuclide activity, the roentgen (1 R = 2.58 × 10⁻⁴ C/kg) for X- and gamma-ray exposure, the rem (1 rem = 0.01 Sv) for dose equivalent, and the rad (1 rd = 0.01 Gy) for absorbed dose.
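The quoted equivalences make conversion to the preferred SI units a matter of multiplication, as in this minimal sketch (the constant and function names are invented here):

    J_PER_KWH = 3.6e6      # 1 kWh = 3.6 MJ
    BQ_PER_CURIE = 3.7e10  # 1 Ci = 3.7e10 Bq
    SV_PER_REM = 0.01      # 1 rem = 0.01 Sv
    GY_PER_RAD = 0.01      # 1 rd = 0.01 Gy

    def kilowatthours_to_megajoules(kwh: float) -> float:
        # Express electric energy in the preferred SI multiple, the megajoule.
        return kwh * J_PER_KWH / 1.0e6

    print(kilowatthours_to_megajoules(250.0))  # 900.0 MJ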

Page 153: Engineering Systems Handbook

NASA Systems Engineering Handbook

C.3 Conversion Factors

One of the many places a complete set of conversion factors can be found is in ANSI/IEEE Std 268-1992. The abridged set given here is taken from that reference. Symbols of SI units are given in bold face type and in parentheses. Factors with an asterisk (*) between the number and its power of ten are exact by definition. To conform with the international practice, this section uses spaces, rather than commas, in number groups.
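As a simple example of applying such a factor: 1 ft = 0.3048 m exactly (since 1 in = 2.54 cm exactly). The sketch below (names invented here) applies it:

    M_PER_FT = 0.3048  # exact by definition

    def feet_to_meters(feet: float) -> float:
        return feet * M_PER_FT

    print(feet_to_meters(100.0))  # 30.48 m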

Bibliography

Air Force, Department of the, Producibility Measurement for DoD Contracts, Office of the Assistant for Reliability, Maintainability, Manufacturing, and Quality, SAF/AQXE, Washington, DC, 1992. Referred to on page(s) 111.

, Unmanned Space Vehicle Cost Model, Seventh Edition, SD TR-88-97, Air Force Systems Command/Space Division (P. Nguyen, et al.), August 1994. Referred to on page(s) 81.

Agrawal, Brij N., Design of Geosynchronous Spacecraft, Prentice-Hall, Inc., Englewood Cliffs, NJ 07632, 1986. Referred to on page(s) 1.

Armstrong, J.E., and Andrew P. Sage, An Introduction to Systems Engineering, John Wiley & Sons, New York, 1995. Referred to on page(s) 1.

Army, Department of the, Logistics Support Analysis Techniques Guide, Pamphlet No. 700-4, Army Materiel Command, 1985. Referred to on page(s) 102.

Asher, Harold, Cost-Quantity Relationships in the Airframe Industry, R-291, The Rand Corporation, 1956. Referred to on page(s) 82.

Barclay, Scott, et al., Handbook for Decision Analysis, Decisions and Designs, Inc., McLean, VA, September 1977. Referred to on page(s) 43.

Blanchard, B.S., and W.J. Fabrycky, Systems Engineering and Analysis (2nd Edition), Prentice-Hall, Inc., Englewood Cliffs, NJ, 1990. Referred to on page(s) 1.

, System Engineering Management, John Wiley & Sons, Inc., New York, 1991. Referred to on page(s) 1.

Boden, Daryl, and Wiley J. Larson (eds), Cost-Effective Space Mission Operations, publication scheduled for August 1995. Referred to on page(s) 1.

Boehm, Barry W., "A Spiral Model of Software Development and Enhancement", Computer, pp. 61-72, May 1988. Referred to on page(s) 13.

Carter, Donald E., and Barbara Stilwell Baker, Concurrent Engineering: The Product Development Environment for the 1990s, Addison-Wesley Publishing Co., Inc., Reading, MA, 1992. Referred to on page(s) 25.

Chamberlain, Robert G., George Fox and William H. Duguette, A Design Optimization Process for Space Station Freedom, JPL Publication 90-23, June 15, 1990. Referred to on page(s) 18.

Chestnut, Harold, Systems Engineering Tools, John Wiley & Sons, Inc., New York, 1965. Referred to on page(s) 1.

, Systems Engineering Methods, John Wiley & Sons, Inc., New York, 1965. Referred to on page(s) 1.

Churchman, C. West, Russell L. Ackoff and E. Leonard Arnoff, Introduction to Operations Research, John Wiley & Sons, Inc., New York, 1957.

Clark, Philip, et al., How CALS Can Improve the DoD Weapon System Acquisition Process, PL207RI, Logistics Management Institute, November 1992. Referred to on page(s) 103.

Committee on Space Research, COSPAR Information Bulletin No. 38 (pp. 3-10), June 1967. Referred to on page(s) 115.

Defense, Department of, Transition from Development to Production, DoD 4245.7-M, 1985. Referred to on page(s) 40, 111.

, Metric System, Application in New Design, DOD-STD-1476A, 19 November 1986.

, Logistics Support Analysis and Logistics Support Analysis Record, MIL-STD-1388-1A/2B, revised 21 January 1993, Department of Defense. Referred to on page(s) 86, 100, 102.

, Failure Modes, Effects, and Criticality Analysis, MIL-STD-1629A, Department of Defense. Referred to on page(s) 41.

Defense Systems Management College, Scheduling Guide for Program Managers, GPO #008-020-01196-1, January 1990.

, Test and Evaluation Management Guide, GPO #008-020-01303-0, 1993.

, Integrated Logistics Support (ILS) Guide, DTIC/NTIS #ADA171-087, 1993.

, Systems Engineering Management Guide, DTIC/NTIS #ADA223-168, 1994. Referred to on page(s) 1, 40.

, Risk Management: Concepts and Guidance, GPO #008-020-01164-9, 1994.

DeJulio, E., SIMSYLS User's Guide, Boeing Aerospace Operations, February 1990. Referred to on page(s) 87.

de Neufville, R., and J.H. Stafford, Systems Analysis for Engineers and Managers, McGraw-Hill, New York, 1971. Referred to on page(s) 1, 43.

Electronic Industries Association (EIA), Systems Engineering, EIA/IS-632, 2001 Pennsylvania Ave. NW, Washington, DC, 20006, December 1994. Referred to on page(s) 1, 4.

Executive Order 11514, Protection and Enhancement of Environmental Quality, March 5, 1970, as amended by Executive Order 11991, May 24, 1977. Referred to on page(s) 112.

Executive Order 12114, Environmental Effects Abroad of Major Federal Actions, January 4, 1979. Referred to on page(s) 112.

Fisher, Gene H., Cost Considerations in Systems Analysis, American Elsevier Publishing Co. Inc., New York, 1971. (Also published as R-490-ASD, The Rand Corporation, December 1970.) Referred to on page(s) 1, 67.

Forsberg, Kevin, and Harold Mooz, "The Relationship of System Engineering to the Project Cycle", Center for Systems Management, 5333 Betsy Ross Dr., Santa Clara, CA 95054; also available in A Commitment to Success, Proceedings of the first annual meeting of the National Council for Systems Engineering and the 12th annual meeting of the American Society for Engineering Management, Chattanooga, TN, 20-23 October 1991. Referred to on page(s) 20.

Fortescue, Peter W., and John P.W. Stark (eds), Spacecraft Systems Engineering, John Wiley and Sons Ltd., Chichester, England, 1991. Referred to on page(s) 1.

Gavin, Joseph G., Jr. (interview with), Technology Review, Vol. 97, No. 5, July 1994. Referred to on page(s) 92.

General Accounting Office, DOD's CALS Initiative, GAO/AIMD-94-197R, September 30, 1994. Referred to on page(s) 103.

Goddard Space Flight Center, Goddard Multi-variable Instrument Cost Model (MICM), Research Note #90-1, Resource Analysis Office (Bernard Dixon and Paul Villone, authors), NASA/GSFC, May 1990. Referred to on page(s) 81.

Green, A.E., and A.J. Bourne, Reliability Technology, Wiley Interscience, 1972.

Griffin, Michael D., and James R. French, Space Vehicle Design, AIAA 90-8, American Institute of Aeronautics and Astronautics, c/o TASCO, P.O. Box 753, Waldorf, MD 20604-9961, 1990. Referred to on page(s) 1.

Hickman, J.W., et al., PRA Procedures Guide, The American Nuclear Society and The Institute of Electrical and Electronics Engineers, NUREG/CR-2300, Washington, DC, January 1983. Referred to on page(s) 42.

Hillier, F.S., and G.J. Lieberman, Introduction to Operations Research, 2nd Edition, Holden-Day, Inc., 1978.

Hodge, John, "The Importance of Cost Considerations in the Systems Engineering Process", in the NAL Monograph Series: Systems Engineering Papers, NASA Alumni League, 922 Pennsylvania Ave. S.E., Washington, DC 20003, 1990. Referred to on page(s) 14.

Hood, Maj. William C., A Handbook of Supply Inventory Models, Air Force Institute of Technology, School of Systems and Logistics, September 1987.

Hughes, Wayne P., Jr. (ed.), Military Modeling, Military Operations Research Society, Inc., 1984.

IEEE, American National Standard for Metric Practice, ANSI/IEEE Std 268-1992 (supersedes IEEE Std 268-1979 and ANSI Z210.1-1976), American National Standards Institute (ANSI), 1430 Broadway, New York, NY 10018. Referred to on page(s) 139.

, IEEE Trial-Use Standard for Application and Management of the Systems Engineering Process, IEEE Std 1220-1994, 345 E 47th St., New York, NY 10017, February 28, 1995. Referred to on page(s) 1.

Jet Propulsion Laboratory, The JPL System Development Management Guide, Version 1, JPL D-5000, NASA/JPL, November 15, 1989.

Johnson Space Center, NASA Systems Engineering Process for Programs and Projects, Version 1.0, JSC-49040, NASA/JSC, October 1994. Referred to on page(s) xi, 22.

Kececioglu, Dimitri, and Pantelis Vassiliou, 1992-1994 Reliability, Availability, and Maintainability Software Handbook, Reliability Engineering Program, University of Arizona, 1992. Referred to on page(s) 95.

Keeney, R.L., and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, John Wiley and Sons, Inc., New York, 1976. Referred to on page(s) 76.

Kline, Robert, et al., The M-SPARE Model, Logistics Management Institute, NS901R1, March 1990. Referred to on page(s) 87.

Langley Research Center, Systems Engineering Handbook for In-House Space Flight Projects, LHB 7122.1, Systems Engineering Division, NASA/LaRC, August 1994.

Lano, R., The N2 Chart, TRW Inc., 1977. Referred to on page(s) 68, 127.

Larson, Wiley J., and James R. Wertz (eds), Space Mission Analysis and Design (2nd edition), published jointly by Microcosm, Inc., 2601 Airport Dr., Suite 230, Torrance, CA 90505 and Kluwer Academic Publishers, P.O. Box 17, 3300 AA Dordrecht, The Netherlands, 1992. Referred to on page(s) 1.

, Reducing Space Mission Cost, publication scheduled for November 1995. Referred to on page(s) 1.

Leising, Charles J., "System Architecture", in System Engineering at JPL, course notes (contact: Judy Cobb, Jet Propulsion Laboratory), 1991. Referred to on page(s) 11.

Lexicon—Glossary of Technical Terms and Abbreviations Including Acronyms and Symbols, Draft/Version 1.0, produced for the NASA Program/Project Management Initiative, Office of Human Development (Code ND), NASA Headquarters, by DEF Enterprises, P.O. Box 590481, Houston, TX 77259, March 1990. Referred to on page(s) 117.

Marshall Space Flight Center, Program Risk Analysis Handbook, NASA TM-100311, Program Planning Office (R.G. Batson, author), NASA/MSFC, August 1987.

, Systems Engineering Handbook, Volume 1—Overview and Processes; Volume 2—Tools, Techniques, and Lessons Learned, MSFC-HDBK-1912, Systems Analysis and Integration Laboratory, Systems Analysis Division, NASA/MSFC, February 1991. Referred to on page(s) 75, 89.

, Verification Handbook, Volume 1—Verification Process; Volume 2—Verification Documentation Examples, MSFC-HDBK-2221, Systems Analysis and Integration Laboratory, Systems Integration Division, Systems Verification Branch, February 1994.

, General Environmental Test Guidelines (GETG) for Protoflight Instruments and Experiments, MSFC-HDBK-670, Systems Analysis and Integration Laboratory, Systems Integration Division, Systems Verification Branch, February 1994. Referred to on page(s) 109.

McCormick, Norman, Reliability and Risk Analysis, Academic Press, Orlando, FL, 1981.

Miles, Ralph F., Jr. (ed.), Systems Concepts—Lectures on Contemporary Approaches to Systems, John Wiley & Sons, New York, 1973. Referred to on page(s) 1.

Moore, N., D. Ebbeler and M. Creager, "A Methodology for Probabilistic Prediction of Structural Failures of Launch Vehicle Propulsion Systems", American Institute of Aeronautics and Astronautics, 1990. Referred to on page(s) 89.

Morgan, M. Granger, and Max Henrion, Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press, Cambridge, England, 1990.

Morris, Owen, "Systems Engineering and Integration and Management for NASA Manned Space Flight Programs", in NAL Monograph Series: Systems Engineering Papers, NASA Alumni League, 922 Pennsylvania Ave. S.E., Washington, DC 20003, 1990. Referred to on page(s) 9, 11.

NASA Headquarters, Initial OSSA Metrication Transition Plan, Office of Space Science and Applications (Code S), NASA/HQ, September 1990.

, NHB 1700.1B, NASA Safety Policy and Requirements Document, Office of Safety and Mission Assurance (Code Q), NASA/HQ, June 1993. Referred to on page(s) 55.

, NHB 5103.6B, Source Evaluation Board Handbook, Office of Procurement (Code H), NASA/HQ, October 1988. Referred to on page(s) 75.

, NHB 5300.4 (1A-1), Reliability Program Requirements for Aeronautical and Space System Contractors, Office of Safety and Mission Assurance (Code Q), NASA/HQ, January 1987. Referred to on page(s) 91, 92.

, NHB 5300.4 (1B), Quality Program Provisions for Aeronautical and Space System Contractors, Office of Safety and Mission Assurance (Code Q), NASA/HQ, April 1969. Referred to on page(s) 95.

, NHB 5300.4 (1E), Maintainability Program Requirements for Space Systems, Office of Safety and Mission Assurance (Code Q), NASA/HQ, March 1987. Referred to on page(s) 96, 100.

, NHB 7120.5 (with NMI 7120.4), Management of Major System Programs and Projects, Office of the Administrator (Code A), NASA/HQ, November 1993 (revision forthcoming). Referred to on page(s) ix, xi, 13, 14, 17, 99, 102, 139.

, NHB 8800.11, Implementing the Requirements of the National Environmental Policy Act, Office of Management Systems and Facilities (Code J), NASA/HQ, June 1988. Referred to on page(s) 113.

, NHB 8020.12B, Planetary Protection Provisions for Robotic Extraterrestrial Missions, Office of Space Science (Code S), NASA/HQ, forthcoming. Referred to on page(s) 115.

, NHB 9501.2B, Procedures for Contractor Reporting of Correlated Cost and Performance Data, Financial Management Division (Code BF), NASA/HQ, February 1985. Referred to on page(s) 59.

, NMI 5350.1A, Maintainability and Maintenance Planning Policy, Office of Safety and Mission Assurance (Code Q), NASA/HQ, September 26, 1991. Referred to on page(s) 99, 100.

, NMI 7120.4 (with NHB 7120.5), Management of Major System Programs and Projects, Office of the Administrator (Code A), NASA/HQ, November 1993 (revision forthcoming). Referred to on page(s) ix, xi, 3, 13.

, NMI 8010.1A, Classification of NASA Payloads, Safety Division (Code QS), NASA/HQ, 1990. Referred to on page(s) 38, 123.

, NMI 8010.2A, Use of the Metric System of Measurement in NASA Programs, Office of Safety and Mission Quality (Code Q), NASA/HQ, June 11, 1991. Referred to on page(s) 139.

, NMI 8020.7D, Biological Contamination Control for Outbound and Inbound Planetary Spacecraft, Office of Space Science (Code S), NASA/HQ, December 3, 1993. Referred to on page(s) 115.

, NMI 8070.4A, Risk Management Policy, Safety Division (Code QS), NASA/HQ, undated. Referred to on page(s) 37, 44.

National Environmental Policy Act (NEPA) of 1969, as amended (40 CFR 1500-1508). Referred to on page(s) 112.

Navy, Department of the, Producibility Measurement Guidelines: Methodologies for Product Integrity, NAVSO P-3679, Office of the Assistant Secretary for Research, Development, and Acquisition, Washington, DC, August 1993. Referred to on page(s) 112.

Nuclear Regulatory Commission, U.S., (by N.H. Roberts, et al.), Fault Tree Handbook, NUREG-0492, Office of Nuclear Regulatory Research, January 1980. Referred to on page(s) 94.

Office of Management and Budget, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, Circular A-94, OMB, October 1992. Referred to on page(s) 79.

Pace, Scott, U.S. Access to Space: Launch Vehicle Choices for 1990-2010, The Rand Corporation, R-3820-AF, March 1990.

Presidential Directive/National Security Council Memorandum-25 (PD/NSC-25), Scientific or Technological Experiments with Possible Large-Scale Adverse Environmental Effects and Launch of Nuclear Systems into Space, December 14, 1977. Referred to on page(s) 114.

Procedures for Implementing the National Environmental Policy Act (14 CFR 1216.3). Referred to on page(s) 112.

Reilly, Norman B., Successful Systems Engineering for Engineers and Managers, Van Nostrand Reinhold, New York, 1993. Referred to on page(s) 1.

Ruskin, Arnold M., "Project Management and System Engineering: A Marriage of Convenience", Claremont Consulting Group, La Canada, California; presented at a meeting of the Southern California Chapter of the Project Management Institute, January 9, 1991. Referred to on page(s) 11.

Saaty, Thomas L., The Analytic Hierarchy Process, McGraw-Hill, New York, 1980. Referred to on page(s) 75.

Sage, Andrew P., Systems Engineering, John Wiley & Sons, New York, 1992. Referred to on page(s) 1.

Shishko, R., Catalog of JPL Systems Engineering Tools and Models (1990), JPL D-8060, Jet Propulsion Laboratory, 1990.

, MESSOC Version 2.2 User Manual, JPL D-5749/Rev. B, Jet Propulsion Laboratory, October 1990. Referred to on page(s) 81.

Sivarama Prasad, A.V., and N. Somasehara, "The AHP for Choice of Technologies: An Application", Technological Forecasting and Social Change, Vol. 38, pp. 151-158, September 1990.

Smith, Jeffrey H., Richard R. Levin and Elisabeth J. Carpenter, An Application of Multiattribute Decision Analysis to the Space Station Freedom Program—Case Study: Automation and Robotics Technology Evaluation, JPL Publication 90-12, Jet Propulsion Laboratory, May 1, 1990. Referred to on page(s) 76.

SP-7012 (NASA Special Publication), The International System of Units; Physical Constants and Conversion Factors, E.A. Mechtly, published by the NASA Scientific and Technical Information Office, 1973; U.S. Government Printing Office (stock number 3300-00482).

Steuer, R., Multiple Criteria Optimization, John Wiley and Sons, Inc., New York, 1986.

Stewart, Rodney D., Cost Estimating, 2nd ed., John Wiley and Sons, Inc., New York, 1991.

Taguchi, Genichi, Quality Engineering in Production Systems, New York: McGraw-Hill, 1989. Referred to on page(s) 111.

Wagner, G. M., Principles of Operations Research—With Applications to Managerial Decisions, Prentice Hall, 1969.

Index

Advanced projects 13
Alpha—see Space Station Alpha
Analytic Hierarchy Process (AHP) 75
Apollo 9, 10, 85
Architecture—see system architecture
Audits 30, 48, 49, 58, 96
Availability
    measures of 62
    as a facet of effectiveness 83-85, 96
    models of 85-87, 95, 98
    as part of the LSA 101

Baselines 4, 8, 14, 17, 18, 27, 36, 45
    control and management of 10, 21, 28-30, 44-48, 126
    in C/SCS 60
    in reviews 50-54
Bathtub curve 92, 93
Bayesian probability 40
Budget cycle, NASA 18, 25, 26
Budgeting, for projects 31, 35, 37, 59

Change Control Board (CCB)
    composition of 46
    conduct of 46, 47
Change Request (CR) 46-49, 64, 65, 71, 80, 91, 95, 103
Concurrent engineering 21, 22, 25, 28, 29, 39, 103
Configuration control 4, 8, 11, 20, 45-48, 126
Configuration management 10, 17, 29, 44-48, 126
Congress 3, 18, 25, 26, 114
Constraints 3, 5, 8-11, 18, 28, 35, 58, 67-70, 72, 77
Contingency planning 39, 44
Continuous Acquisition and Life-Cycle Support (CALS) 103
Control gates 13-22, 27, 45, 48, 49, 50-58, 79
Cost (see also life-cycle cost) 4, 5, 9, 10
    account structure 27, 30, 31, 33
    caps 35, 37
    estimating 80-83
    fixed vs variable 37
    in trade studies 67-69, 74, 77
    operations 81, 132-134
    overruns 17, 77
    spreaders 82
Cost-effectiveness 4, 5, 9, 10, 29, 91, 96, 99, 111
    in trade studies 67-69, 72, 74, 77
Cost Estimating Relationship (CER) 80, 81, 88
Cost/Schedule Control System (C/SCS) 59
Critical Design Review (CDR) 18, 19, 45, 52, 53, 56, 58, 59, 96, 106
Critical Item/Issue List (CIL) 39, 44, 91, 125
Critical path 13, 33, 34

Decision analysis 39, 41, 76
Decision sciences 7, 77
Decision support package 10, 71
Decision trees 41, 70
Design engineer(ing) 6, 8, 28, 49, 77
Design-for-X 91
Design reference mission—see reference mission
Design-to-cost—see design-to-life-cycle cost
Design-to-life-cycle cost 80
Design trades—see trade studies
Digraphs 39, 41
Discounting—see present discounted value

Earned value 7, 31, 60, 62
Effectiveness 4, 5, 9, 10
    facets of 83-85, 91
    in TPM 61
    in trade studies 67-70, 100, 102
Engineering Change Request (ECR)—see change request
Engineering specialty disciplines 6, 91-115
    in concurrent engineering 22-25
    in SEMP 28, 29, 119
    in trade studies 69, 77, 80, 85
Environmental Assessment (EA) 112-114
Environmental Impact Statement (EIS) 112-114
Estimate at Completion (EAC) 31, 60, 88
Event trees 42

Failure Modes and Effects Analysis (FMEA)—see Failure Modes, Effects, and Criticality Analysis
Failure Modes, Effects, and Criticality Analysis (FMECA) 39, 41, 91, 95, 98, 102, 105
Fault avoidance 94
Fault tolerance 95
Fault tree 39, 42, 94
Feasibility 14, 18, 21, 50, 62
Feasible design 5, 18
Figure of merit 74, 75
Flight Readiness Review (FRR) 19, 53, 54, 96, 108, 115
Freedom—see Space Station Freedom
Functional Configuration Audit (FCA) 58, 96
Functional Flow Block Diagram (FFBD) 68, 98, 111, 127-129
Functional redundancy 94

Galileo 94
Game theory 7
Gantt chart 35, 36
Goddard Space Flight Center (GSFC) 81

Hazard rate 92, 93
Heisenberg—see uncertainty principle
Hubble Space Telescope 98

IEEE 1, 42, 139, 141, 142
Improvement
    continuous 64
    product or process 11, 20, 25, 47, 78, 86
Independent Verification and Validation (IV&V) 110
Inflation, treatment of 78, 79, 82
Institutional Operating Plan (IOP) 25
Integrated Logistics Support (ILS) 17, 29, 30, 53, 78, 85-87, 92, 99-103, 105, 119
    concept 96, 97
    plan 19, 87, 97-99
Integration
    conceptual 11
    system 4, 11, 19, 22, 29, 30, 32, 33, 53
Interface
    requirements 9-11, 17, 19, 45, 52, 53, 68, 119, 127
    control of 28, 64, 119, 126

Jet Propulsion Laboratory (JPL) ix, x, 81
Johnson Space Center (JSC) x, 43, 82

Kennedy Space Center (KSC) 110

Learning curve 82
Lessons learned 11, 19, 30, 39, 41, 94, 98
Lexicon, NASA 117
Life-cycle cost (see also cost) 8, 10, 77-83
    components of 77, 78
    controlling 79, 80, 83, 111
Linear programming 7, 72
Logistics Support Analysis (LSA) 53, 91, 96-103, 119
Logistics Support Analysis Record (LSAR) 98-103
Lunar Excursion Module (LEM) 92

Maintainability 80, 92, 96-99
    concept 97
    definition of 96
    models 98
Maintenance
    plan 97
    time lines 98, 129, 131
Make-or-buy 29
Margin 43, 44, 62-64
Marshall Space Flight Center (MSFC) x
    handbooks 75, 89, 109
    historical cost models 81
Material Review Board (MRB) 96
Mean Time Between Failures (MTBF) 80, 98, 132-134
Mean Time To Failure (MTTF) 86, 92, 98
Mean Time to Repair (or Restore) (MTTR) 86, 98
Metric system
    conversion factors for 142-144
    definition of units in 139-141
Military standards 41, 86, 100, 102
Mission analysis 7
Mission assurance 91
Mission Needs Statement (MNS) 14, 17, 45, 56
Models, mathematical
    characteristics of good 72, 73
    of cost 42, 80-83
    of effectiveness 42, 83-85
    Markov 98
    pitfalls in using 72, 88
    programming 7, 72
    relationship to SEMP 29, 83
    types of 71, 72
    use in systems engineering 6, 7, 21, 67-71, 87-89, 106, 129
Monte Carlo simulation 39, 42, 69, 88, 89, 95
Multi-attribute utility theory 75, 76

Network schedules 33-35, 42
Non-Advocate Review (NAR) 14, 17, 18, 56
NASA Handbook (NHB) ix, xi, 13, 14, 17, 55, 59, 75, 77, 79, 91, 95, 96, 99, 100, 102, 112, 113, 115, 139
NASA Management Instruction (NMI) ix, xi, 1, 3, 13, 37, 38, 44, 99, 100, 115, 123, 139
Nuclear safety 114, 115

Objective function 4, 10, 74
Objectives, of a mission, project, or system 3, 4, 8, 11, 17, 28, 37, 38, 50, 51, 56, 67-71, 74-77, 83, 91
Office of Management and Budget (OMB) 25, 79
Operations concept 9, 14, 17, 22, 68-70, 73, 77, 86, 101
Operations research 7
Optimization 3, 6, 7, 9, 13, 67, 72, 80, 83, 119
Optimum repair level analysis (ORLA) 98
Orbital Replacement Unit (ORU) 80, 87, 97, 98, 132-134
Outer Space Treaty 115

Parametric cost estimation 80-83
Partitioning—see interfaces
Payload 17, 20, 132
    classification 38, 93, 95, 105, 123
    nuclear—see nuclear safety
PERT 33, 39, 42
Physical Configuration Audit (PCA) 48, 58, 96
Precedence diagram 34
Preliminary Design Review (PDR) 17, 18, 21, 22, 25, 45, 52, 56, 58, 59, 96, 106, 115

Present Discounted Value (PDV) 79, 80
Probabilistic Risk Assessment (PRA) 39, 42, 44, 76, 86, 94, 115
Probability distribution 5, 10, 38-43, 63, 88, 89, 92
Problem/Failure Reports (P/FR) 64, 95, 96, 108, 110
Producibility 111, 112
    models 111
Producing system (as distinct from the product system) 1, 27, 59, 64
Product Breakdown Structure (PBS) 27, 30-33, 59, 61, 120
Product development process 8, 13, 20-22
Product development teams (PDT) 22, 25, 27, 56, 91
Product improvement 11, 78, 103
Product system 1, 27, 99
Program, level of hierarchy 3
Program/project control xi, 44, 59-61
Program Operating Plan (POP) 25
Project
    level of hierarchy 3
    management (see also system management) xi, 27, 37, 46, 55, 59, 79
    plan 17-19, 28-30, 48, 49
    termination 49
Project life cycle
    NASA 13-20
    technical aspect of 20-26
Protoflight 106
Prototype 13

Quality
    of systems engineering process 64, 65, 75
    as a facet of effectiveness 84, 85
Quality Assurance (QA) 6, 29, 49, 52, 91, 94-96, 119
Quality Function Deployment (QFD) 7
Queueing theory 7

Red team 49
Reference mission 9, 69
Reliability 6, 22, 80, 91-95, 98, 100
    block diagrams 94
    definition of 91
    in effectiveness 72, 84-87, 132
    engineering 41, 91-93
    models 89, 94, 95
    in SEMP 29, 93, 119
    in TPM 61
Reporting—see status reporting and assessment
Requirements 3, 6, 11, 14, 17, 22, 26, 28-30, 37, 45, 46, 51-54, 56, 58, 59, 64-68, 80, 92, 103
    allocation of 11, 29, 129
    analysis 7, 9, 83, 127
    as part of the baseline 4, 8, 17, 18, 44, 45
    documentation 14, 18, 38
    functional 9, 18, 50, 58, 61, 69, 77, 83, 95, 102, 103
    interface 9, 17, 50, 57, 127
    margin 62-64
    performance 6, 9, 18, 28, 50, 58, 59, 61, 68, 70, 73, 74, 77, 83, 95, 103
    reviews 17, 45, 49, 50, 55-57
    role of 27
    software 55-57
    specialty engineering 45, 92
    traceability 17, 28, 30, 48, 49, 57
    verification 22, 45, 57, 58, 61, 93, 95, 96, 103-111
Reserves
    project 18, 37, 43, 44, 60, 88
    schedule 18, 35, 43
Resource leveling 35
Resource planning—see budgeting
Risk
    analysis 38, 39, 41, 42, 89
    aversion 41, 76
    identification and characterization 38, 39-41, 102
    management 29, 37-44, 91, 92, 105, 111
    mitigation 38, 39, 42-44, 92
    templates 40, 111
    types of 39, 40

Safety reviews 17, 19, 52-56
Scheduling 33-35, 59-61
S-curve, for costs 88
Selection rules, in trade studies 6, 10, 67-69, 73-77
Simulations 29, 72, 87, 89, 95
SOFIA 120-122
Software 3, 7, 13, 18-22, 45, 47, 48, 52-57, 69, 78, 93, 96, 98, 103, 105-111, 126, 127
    cost estimating 81
    in WBS 30, 32, 34
    off-the-shelf systems engineering 35, 41, 75, 89, 95
Source Evaluation Board (SEB) 75
Space Shuttle 9, 40, 44, 47, 93, 110, 111, 125, 132, 133
Space Station Alpha 8, 11, 40, 80, 97
Space Station Freedom 76, 87, 132-134
Specialty disciplines—see engineering specialty disciplines
Specifications 9, 17-19, 22, 25, 29-31, 45, 46, 49, 51, 52, 56-58, 61, 62, 64, 92, 100, 105, 107-110, 119
Status reporting and assessment 31, 58-65, 88
Successive refinement, doctrine of 7-11, 17-19, 27
Supportability (see also Integrated Logistics Support) 85-87, 91, 98-103
    risk 39, 43
Symbolic information
    desirable characteristics of 48
    in systems engineering 27
System Acceptance Review (SAR) 19, 45, 53, 108, 110
System architecture 6, 8, 11, 14, 17, 18, 27, 31, 68, 69, 72, 73, 77, 79, 83, 89, 100
System engineer
    role of 6, 22, 28, 30, 44, 45, 61, 91, 103
    dilemma of 6, 79, 83

System management (see also project management) 4, 6
Systems analysis, role of 6, 7, 61, 67
Systems approach 7
Systems engineering
    objective of 4-6
    metrics 64, 65
    process xi, 5, 8-11, 20-25, 28-30, 33, 67-70, 77, 79, 91, 96, 99, 103, 112
Systems Engineering Management Plan (SEMP) 17, 28-31, 38, 40, 63, 64, 70, 83, 86, 91, 93, 99, 103
Systems Engineering Process Improvement Task (SEPIT) team xi, 3, 20
Systems Engineering Working Group (SEWG) x, xi, 3

Taguchi methods 111
Tailoring
    of configuration management 45, 47
    by each field center 1
    of effectiveness measures 83
    of product development teams 22, 25, 91
    of project cycle 13, 28
    of project plans 45
    of project reviews 18
    of project hierarchy 3
    of risk management 38, 39
    of SEMP 29
    of systems engineering process metrics 64
    of verification 104, 105
Technical Performance Measure(ment) (TPM)
    assessment methods for 45, 60, 61, 88
    relationship to effectiveness measures 84
    relationship to SEMP 29, 63, 64, 119
    role and selection of 31, 39, 44, 61, 62
Test(ing) (see also verification) 3, 6, 11, 18, 22, 25, 33, 43, 45, 49, 51, 53, 55, 57, 58, 61-63, 69, 80, 81, 91, 92, 94-100, 102-111
Test Readiness Review (TRR) 19, 30, 57, 104, 109
Total Quality Management (TQM) 7, 64, 111, 119
Trade study
    in ILS 99-103
    process 9, 17, 18, 67-71, 77, 100
    in producibility 111
    progress as a metric 64, 65
    in reliability and maintainability 98
    reports 10, 18, 71
    in verification 105

Trade tree 69, 70

Uncertainty, in systems engineering 5, 6, 20, 37-44, 69, 79, 87-89

Uncertainty principle 39

Validation 11, 25, 28-30, 61, 96
Variances, cost and schedule 60, 61
Verification 4, 11, 17-19, 22, 29, 30, 45, 103-111
    concept 105
    methods 105, 106
    relationship to status reporting 61, 64, 65
    reports 107
    requirements matrix 17, 19, 107, 135, 136
    stages 106

Waivers 48, 53, 58, 108
Weibull distribution 92
Work Breakdown Structure (WBS) 4, 17, 19, 27, 59, 80, 81, 119
    development of 30-33
    errors to avoid 32, 33
    example of 120-122
    and network schedules 34-36

Work flow diagram 34