Enterprise Design: Extending Product Design to Include
Manufacturing Process Design and Organization Design
A Thesis
Presented to
the Academic Faculty
By
Jesse Peplinski
In Partial Fulfillment
of the Requirements for the Degree of
Doctor of Philosophy in Mechanical Engineering
Georgia Institute of Technology
September 1997
Copyright © 1997 by Jesse Peplinski
ENTERPRISE DESIGN: EXTENDING PRODUCT DESIGN TO INCLUDE MANUFACTURING PROCESS DESIGN AND
ORGANIZATION DESIGN
Approved:
Farrokh Mistree, Chairman, Professor, Mechanical Engineering
Janet K. Allen, Senior Research Scientist, Mechanical Engineering
James I. Craig, Professor, Aerospace Engineering
Steven Y. Liang, Associate Professor, Mechanical Engineering
William M. Riggs, Associate Professor, School of Management
David W. Rosen, Assistant Professor, Mechanical Engineering
Date Approved
DEDICATION
For my parents,
to whom I owe everything.
ACKNOWLEDGMENTS
As I am writing this page I am overwhelmed by images of my colleagues, relatives,
and friends who have been my life and my world these last few years. I owe a debt
beyond words to my parents and my grandmother, whose belief in me and encouragement
have opened up new worlds. This thesis stands as a testament to their success. I love you
all very much.
I owe the greatest thanks to my surrogate parents, Farrokh Mistree and Janet Allen,
who have drawn from me deeper insight than I thought possible. My years here in Atlanta
have been ones of great personal growth, and I am grateful for the tremendous roles they
have played in my life. I am also profoundly grateful to the other members of my
committee: to Dr. James Craig, for his consistently strenuous and insightful commentary;
to Dr. Steven Liang, for his rigorous and analytical perspective; to Dr. William Riggs, for
being my mentor and guide into the field of management; and to Dr. David Rosen, for
carrying the torch of scholarship.
I cherish the support and compassion given freely by my close colleagues in the
laboratory -- Kemper Lewis, Stewart Coulter, Tim Simpson, Matt Marston and especially
Patrick Koch. Much of what I am has grown from you. I am also deeply grateful to
George Chollar and Maurice Berryman, who have recognized the potential of my work and
are helping bring the dream of industrial implementation closer to reality. Through you I
hope to make a meaningful contribution to this world.
I consider myself blessed to have been surrounded by a wonderful community who
have ensured that my work did not become my life. I treasure the life I have found with
Mike, Peggy, and Ryan Elliott, who began as my relatives and who have become my
family and my home. I hold the greatest respect and affection for my roommates, Eric
Gatti and Rick Newcomb, who managed to endure my company for all these years. I am
grateful to David Paul Frame for his support and encouragement, and to Bert Bras for his
grounded perspective and wild parties. I will forever think fondly of Debbie Finney, part
mother and part saint, who has been the heart of our laboratory. I am also thankful to Matt
Bauer, Yarom Polsky, Allison Ashley-Koch, Reid Bailey, and Keith Hooks, who have
given me the great gift of friendship.
Finally, I extend the deepest thanks to the National Science Foundation, for
providing me with the funding to make this work possible.
TABLE OF CONTENTS
DEDICATION iii
ACKNOWLEDGMENTS iv
LIST OF TABLES xiii
LIST OF FIGURES xiv
SUMMARY xviii
CHAPTER 1 TOWARD ACHIEVING ENTERPRISE DESIGN 1
1.1 MOTIVATION 2
1.2 FUNDAMENTAL APPROACH 7
1.3 PARADIGM FOR DECISION SUPPORT 10
1.3.1 Foundations 12
1.3.2 A Human-Computer Partnership for Making Decisions 15
1.3.3 Description of the Hybrid Paradigm for Decision Support 17
1.3.4 Implications for Achieving Enterprise Design 22
1.4 DISSERTATION STRUCTURE: RESEARCH QUESTIONS,
HYPOTHESES AND CONTRIBUTIONS 24
1.5 ISSUES OF VERIFICATION AND VALIDATION 32
1.5.1 What are Verification and Validation? 32
1.5.2 Verification and Validation Strategy 35
1.5.3 Procedures for Testing Hypotheses 36
1.6 GUIDE TO DISSERTATION 41
CHAPTER 2 ENTERPRISE MODELING AND INTEGRATION:
A REVIEW OF CURRENT LITERATURE 45
2.1 WHAT IS PRESENTED IN THIS CHAPTER 46
2.2 ENTERPRISE MODELING AND INTEGRATION: ACCEPTED
DEFINITIONS AND RESEARCH THRUSTS 47
2.3 DECISIONS, BOUNDED RATIONALITY AND
EMPOWERMENT: IMPLICATIONS FOR INTEGRATION 53
2.3.1 Bounded Rationality 55
2.3.2 Empowerment 57
2.3.3 Integration Through Decisions 60
2.4 FROM ENTERPRISE MODELING TO SYSTEM MODELING 61
2.4.1 Categorization of System Modeling Schemes 63
2.4.2 System Modeling and Analysis in Product Design 68
2.4.3 System Modeling and Analysis in Manufacturing Process
Design 69
2.4.4 System Modeling and Analysis in Organization Design 71
2.5 PLACING THIS CHAPTER IN CONTEXT 73
CHAPTER 3 A DECISION-BASED APPROACH TO
ENTERPRISE DESIGN 75
3.1 WHAT IS PRESENTED IN THIS CHAPTER 76
3.2 INTERDEPENDENCE, DECISIONS, AND EMPOWERMENT 77
3.3 FRAME OF REFERENCE: DECISION SUPPORT AND
SUPPORT PROBLEMS 79
3.3.1 The Decision Support Problem Technique 80
3.3.2 DSPT Palette and Support Problems 81
3.3.3 The Compromise Decision Support Problem 88
3.3.4 DSIDES: Software for Decision Support 91
3.3.5 The Robust Concept Exploration Method 94
3.4 ENTERPRISE DESIGN AS A NETWORK OF SUPPORT
PROBLEMS 97
3.4.1 Refining the Notion of a Task Support Problem 97
3.4.2 Method for Enterprise Design 100
3.4.3 Expand Scope of Decision 104
3.4.4 Identify Modeling Techniques and Determine Input Needed
for Analysis 107
3.4.5 Define Boundaries of Decision 110
3.4.6 Transform and Integrate Models 117
3.4.7 Generate Potential Solutions 120
3.5 COMPUTER INFRASTRUCTURE FOR ENTERPRISE DESIGN 123
3.6 THIS CHAPTER IN CONTEXT 128
CHAPTER 4 METAMODELING TECHNIQUES FOR MODEL
APPROXIMATION AND INTEGRATION 129
4.1 WHAT IS PRESENTED IN THIS CHAPTER 130
4.2 SCENARIOS FOR MODEL INTEGRATION 131
4.3 THE CONCEPT OF METAMODELING 132
4.4 SURVEY OF METAMODELING TECHNIQUES 135
4.4.1 Experimental Design 136
4.4.1.1 A Survey of Experimental Designs 137
4.4.1.2 Measures of Merit for Evaluating Experimental
Designs 141
4.4.2 Model Choice and Model Fitting 143
4.4.2.1 Response Surfaces 144
4.4.2.2 Neural Networks 145
4.4.2.3 Inductive Learning 146
4.4.2.4 Kriging 148
4.4.3 Strategies for Experimentation and Metamodeling 149
4.4.3.1 Response Surface Methodology 150
4.4.3.2 Taguchi's Robust Design 151
4.4.4 A Closer Look at Metamodeling for Deterministic
Applications 153
4.5 APPLYING METAMODELING TO ENTERPRISE DESIGN 157
4.5.1 Screening for Significant Factors 158
4.5.2 Selecting a Metamodeling Technique 158
4.5.2.1 Evaluation of Model Choice and Model Fitting
Alternatives 159
4.5.2.2 Evaluation of Experimental Design Alternatives 161
4.5.2.3 Initial Recommendations for Metamodeling Uses 163
4.5.3 Building Mean Response Metamodels 164
4.5.4 Building Robustness Metamodels 166
4.5.4.1 “Modeling the Noise” and Taylor’s Expansion 166
4.5.4.2 Product Array Approach 167
4.5.5 Metamodel Verification and Validation 168
4.6 PLACING THIS CHAPTER IN CONTEXT 169
CHAPTER 5 IMPLEMENTING ENTERPRISE DESIGN ALONG
DESIGN TIMELINES 171
5.1 WHAT IS PRESENTED IN THIS CHAPTER 172
5.2 DESIGN TIMELINES AND INTEGRATION 173
5.2.1 The Concept of Design Timelines 173
5.2.2 Viewing an Enterprise from a Design Timeline Perspective 176
5.2.3 Implications of Design Timelines for Enterprise Integration 178
5.3 APPLICATIONS OF ENTERPRISE DESIGN ACROSS
ENGINEERING AND MANAGEMENT 181
5.3.1 Designing Organization Strategy 182
5.3.2 Designing Organization Structure 183
5.3.3 Outsourcing and Core Competencies 185
5.3.4 Business Process Reengineering 186
5.3.5 Manufacturing Facility Location 187
5.3.6 Manufacturing Process Redesign 188
5.3.7 Integrating Cost Estimates into Product Design 189
5.4 PROCEDURE FOR HANDLING DECISION
INTERDEPENDENCIES ALONG A DESIGN TIMELINE 191
5.5 PLACING THIS CHAPTER IN CONTEXT 195
CHAPTER 6 CASE STUDY: DESIGN OF A FORWARD-
LOOKING INFRARED RADAR SYSTEM 197
6.1 STRATEGY AND OVERVIEW OF CASE STUDY SUITE 198
6.2 INTRODUCTION TO FLIR SYSTEM DESIGN 202
6.3 CASE STUDY PHASE A: ISSUES OF PRODUCT
PERFORMANCE 210
6.3.1 Expand Scope of Decision 211
6.3.2 Identify Modeling Techniques and Determine Input Needed
for Analysis 213
6.3.3 Define Boundaries of Decision 214
6.3.4 Transform and Integrate Models 219
6.3.5 Generate Potential Solutions 228
6.3.6 Phase A Results and Recommendations: Designing a High-
Performance System 236
6.4 CASE STUDY B: INTEGRATING MANUFACTURING
PROCESS DESIGN ISSUES 237
6.4.1 Expand Scope of Decision 237
6.4.2 Identify Modeling Techniques and Determine Input Needed
for Analysis 240
6.4.2.1 Focal Plane Array Production Cost 240
6.4.2.2 Afocal Optics Production Cost 242
6.4.3 Define Boundaries of Decision 243
6.4.4 Transform and Integrate Models 248
6.4.5 Generate Potential Solutions 254
6.4.6 Phase B Results and Recommendations: Trading Off
Product Performance with Production Cost 263
6.5 CASE STUDY C: INTEGRATING ORGANIZATION DESIGN
ISSUES 265
6.5.1 Expand Scope of Decision 265
6.5.2 Identify Modeling Techniques and Determine Input Needed
for Analysis 268
6.5.3 Define Boundaries of Decision 271
6.5.4 Transform and Integrate Models 274
6.5.5 Generate Potential Solutions 279
6.5.6 Results and Recommendations 288
6.6 CRITICAL EVALUATION OF CASE STUDY SUITE 290
6.6.1 Limitations and Generalization 290
6.6.2 Testing of Hypotheses 293
CHAPTER 7 CLOSURE, ACHIEVEMENTS, AND
RECOMMENDATIONS 299
7.1 CLOSURE: ANSWERING THE RESEARCH QUESTIONS 300
7.2 ACHIEVEMENTS: REVIEW OF CONTRIBUTIONS 305
7.3 RECOMMENDATIONS: AVENUES FOR FURTHER
RESEARCH 309
7.4 CONCLUDING REMARKS 311
APPENDIX A DATA AND RESULTS FROM CASE STUDY
PHASE A 312
A.1 SAMPLE FLIR92 INPUT FILE 313
A.2 DESIGN MATRIX AND RESULTS FOR NETD SCREENING 315
A.3 DESIGN MATRIX AND RESULTS FOR NETD METAMODEL 316
A.4 NETD FACTOR TRANSFORMATIONS 317
A.5 NETD VERIFICATION RUNS 318
A.6 REPRESENTATIVE PHASE A DSIDES DATA FILE
(FLIRA11H.DAT) 319
A.7 REPRESENTATIVE PHASE A DSIDES FORTRAN FILE
(FLIRA11H.F) 320
A.8 DSIDES OUTPUT FROM PHASE A 323
APPENDIX B DATA AND RESULTS FROM CASE STUDY
PHASE B 324
B.1 REPRESENTATIVE PHASE B DSIDES DATA FILE 325
B.2 REPRESENTATIVE PHASE B DSIDES FORTRAN FILE 326
B.3 DSIDES OUTPUT FROM PHASE B 330
APPENDIX C DATA AND RESULTS FROM CASE STUDY
PHASE C 331
C.1 REPRESENTATIVE SIMAN MODEL FILE 332
C.2 REPRESENTATIVE SIMAN EXPERIMENT FILE 334
C.3 DESIGN VARIABLES AND TRANSFORMATIONS FOR
DEVELOPMENT PROCESS SIMULATION 335
C.4 DESIGN MATRIX AND RESULTS FOR DEVELOPMENT
PROCESS SIMULATION MODEL 336
C.5 VERIFICATION RUNS FOR DEVELOPMENT PROCESS
SIMULATION MODEL 337
C.6 REPRESENTATIVE PHASE C DSIDES DATA FILE 338
C.7 REPRESENTATIVE PHASE C DSIDES FORTRAN FILE 339
C.8 DSIDES OUTPUT FROM PHASE C 343
REFERENCES 344
VITA 355
LIST OF TABLES
Table 6.1 Sequential Augmentation of C-DSP Formulations 209
Table 6.2 Goals and Constraints for Phase A Decision 212
Table 6.3 NETD Design Variables, Noise Factors and Feasible Regions 216
Table 6.4 NETD Design Variables and Noise Factors after Screening 223
Table 6.5 Deviation Function Scenarios for Phase A 231
Table 6.6 Best Solutions for Each Scenario of Phase A 234
Table 6.7 Verification of Best Solutions for Phase A Scenarios 235
Table 6.8 Goals and Constraints for Phase B Decision 239
Table 6.9 Design Variables for Afocal Optics Production Cost Metamodel 249
Table 6.10 Central Composite Design for Afocal Optics Production Cost
Metamodel 250
Table 6.11 Confirmation Runs for Afocal Optics Production Cost Metamodel 254
Table 6.12 Deviation Function Scenarios for Phase B 256
Table 6.13 Best Solutions for Phase B 262
Table 6.14 Goals and Constraints for Phase C Decision 267
Table 6.15 Design Variables for FPA Design Process Duration Metamodel 274
Table 6.16 Central Composite Design for Focal Plane Array Design Process
Duration Metamodel 275
Table 6.17 Confirmation Runs for FPA Design Process Duration Metamodel 278
Table 6.18 Deviation Function Scenarios for Phase C 282
Table 6.19 Results from Phase C, Scenario 1 through Scenario 4 287
Table 6.20 Results from Phase C, Scenario 5 through Scenario 8 288
LIST OF FIGURES
Figure 1.1 Depiction of an Enterprise 3
Figure 1.2 Product Design and Organization Design (Dixon, et al., 1988,
Hanna, 1988) 5
Figure 1.3 Primary Activities in Making Decisions 14
Figure 1.4 The Relationship Between Designer and Computer (Mistree, et al.,
1990b) 16
Figure 1.5 Hybrid Paradigm for Decision Support 18
Figure 1.6 Decision-Based View of an Enterprise 23
Figure 1.7 Pictorial Representation of Dissertation Argument 26
Figure 1.8 Overall Structure of Contributions 31
Figure 1.9 Road Map 1: Where Each Element of the Dissertation Argument
Can Be Found 43
Figure 1.10 Road Map 2: Where Each Contribution Can Be Found 44
Figure 2.1 General Classification of System Modeling 66
Figure 2.2 Categorization of System Modeling from an Analysis Perspective 66
Figure 2.3 Pictorial Representation of Chapter 2 Context 74
Figure 3.1 The DSPT Palette (Bras and Mistree, 1991) 83
Figure 3.2 A Model of a Conceptual Design Event (Bras and Mistree, 1991) 85
Figure 3.3 Compromise DSP Word Formulation (Bras and Mistree, 1991) 86
Figure 3.4 Selection DSP Word Formulation (Bras and Mistree, 1991) 87
Figure 3.5 Task SP Word Formulation (Bras and Mistree, 1991) 87
Figure 3.6 Mathematical Form of a Compromise DSP 89
Figure 3.7 Implementation of the ALP Algorithm for Solving Compromise
DSPs (Mistree, et al., 1989a) 93
Figure 3.8 The Robust Concept Exploration Method (Chen, 1995) 95
Figure 3.9 Revised Task SP Word Formulation 99
Figure 3.10 Decision-Based Method for Enterprise Design 101
Figure 3.11 Task 1: Expanding the Scope of the Decision 104
Figure 3.12 Tasks 2a and 2b: Identifying Modeling Techniques and Determining
Input Needed for Analysis 108
Figure 3.13 Tasks 1, 2a and 2b in System Modeling Context 109
Figure 3.14 Task 3: Defining the Boundaries of the Decision 111
Figure 3.15 Classification Scheme for Input Information 112
Figure 3.16 Two Interdependent Decisions 114
Figure 3.17 Task 4: Transforming and Integrating Models 118
Figure 3.18 Task 5: Generating Potential Solutions 121
Figure 3.19 The RCEM as a Particularization of Enterprise Design 124
Figure 3.20 C-DSP Formulations for Enterprise Design and RCEM 127
Figure 3.21 Pictorial Representation of Chapter 3 Context 128
Figure 4.1 Techniques for Metamodeling 136
Figure 4.2 Basic Three-Factor Designs 137
Figure 4.3 Typical Neuron and Architecture 145
Figure 4.4 Deterministic and Non-Deterministic Curve Fitting 154
Figure 4.5 Task 4: Transforming and Integrating Models 157
Figure 4.6 Product Array Design for Creating Mean Response and Robustness
Metamodels 165
Figure 4.7 Pictorial Representation of Chapter 4 Context 170
Figure 5.1 Manufacturing Process Design (Gaither, 1994) 174
Figure 5.2 System Development Process in the Defense Industry 175
Figure 5.3 Design Timelines in an Enterprise 176
Figure 5.4 Interdependencies Between Design Timelines 177
Figure 5.5 Integration Through Enforcing Coordination 179
Figure 5.6 Integration Through Promoting Empowerment 180
Figure 5.7 Integrating Cost Estimates Into Product Design 191
Figure 5.8 Procedure for Resolving Timeline Interdependencies 193
Figure 5.9 Timeline Interdependencies in a Decision Formulation Context 194
Figure 5.10 Pictorial Representation of Chapter 5 Context 196
Figure 6.1 Strategy for the Three Phases of the Case Study Suite 199
Figure 6.2 Overview of FLIR System 203
Figure 6.3 Schematic of CVTTS FLIR (Anderson, et al., 1996) 204
Figure 6.4 System Design from a Timeline Perspective 206
Figure 6.5 Decision Interdependencies for FLIR System Design 207
Figure 6.6 Pareto Plot of NETD Factor Significance 220
Figure 6.7 Plots of Predicted vs. Actual Values for NETD Mean and Standard
Deviation 227
Figure 6.8 C-DSP Math Formulation for Phase A 229
Figure 6.9 Design Variable Convergence from High and Low Starting Points,
Phase A, Scenario 1, TDI = 1 232
Figure 6.10 Design Variable Convergence from High and Low Starting Points,
Phase A, Scenario 1, TDI = 2 233
Figure 6.11 Design Variable Convergence from High and Low Starting Points,
Phase A, Scenario 1, TDI = 4 234
Figure 6.12 Decision Interdependence for Afocal Optics Production Cost 244
Figure 6.13 Afocal Lens Assembly Cost Model Timeline Interdependency 246
Figure 6.14 Transformation from Optics Transmission (T0) to System
Complexity (CMPLX) 247
Figure 6.15 Plot of Predicted vs. Actual Values for Afocal Optics Production
Cost 252
Figure 6.16 C-DSP Math Formulation for Phase B 255
Figure 6.17 Design Variable Convergence from High and Low Starting Points,
Phase B, Scenarios 3 and 4, TDI = 1 260
Figure 6.18 Design Variable Convergence from High and Low Starting Points,
Phase B, Scenarios 3 and 4, TDI = 2 261
Figure 6.19 Task Structure for the FPA Development Process 269
Figure 6.20 Decision Interdependence for FPA Design Process Duration 271
Figure 6.21 Focal Plane Array Design Process Duration Model Timeline
Interdependency 272
Figure 6.22 Plots of Predicted vs. Actual Values for FPA Development Process
Duration Mean and Standard Deviation 277
Figure 6.23 C-DSP Math Formulation for Phase C 280
Figure 6.24 Design Variable Convergence from High and Low Starting Points,
Phase C, Scenarios 3 and 4, TDI = 1 285
Figure 6.25 Design Variable Convergence from High and Low Starting Points,
Phase C, Scenarios 3 and 4, TDI = 2 286
Figure 6.26 Context and Generalization of Case Study Suite 291
Figure 7.1 Overall Structure of Contributions, Revisited 306
SUMMARY
In manufacturing organizations there are at least three distinct design areas: product
design, manufacturing process design, and design of the organization itself. Historically
these activities have been carried out in relative isolation, by separate people on different
timelines. What if they could all be brought together into a domain-independent design
process? The heady nature of this prospect is captured by using the term enterprise design.
In this dissertation a decision-based approach is developed for enterprise design,
founded in the notion of formulating and solving Decision Support Problems (DSPs). This
approach to enterprise design is embodied by a method for creating mathematical
formulations of decisions as compromise DSPs, a philosophy for implementing this
method based on the notions of bounded rationality and empowerment, and a foundation in
system modeling and statistical metamodeling techniques. Enterprise design therefore
becomes:
• Identifying and assessing the enterprise-wide effects of each local decision,
• Identifying and assessing the effects of other external decisions on the local
decision at hand, and
• Making each decision to satisfy as many of the enterprise-wide goals as
possible, while being as robust as possible to external decisions that are beyond
the decision-maker’s control.
In effect, formulating these decisions fosters the integration of enterprise design activities.
This approach to enterprise design, once developed, is illustrated through its application to
the design of a Forward-Looking Infrared Radar system.
CHAPTER 1
TOWARD ACHIEVING ENTERPRISE DESIGN
The heart of this chapter is Section 1.4, in which the structure of research
questions, hypotheses, and contributions for this research is set forth; this structure serves
as the framework and context for the work presented in all of the chapters to follow.
Sections 1.1 through 1.3 provide the foundation for such a structure, essentially in terms of
research motivation, context, and fundamental approach, and also serve to establish a
common ground between author and reader.
In Section 1.1 the concepts of an “enterprise” and “enterprise design” are defined in
terms of integrating product design, manufacturing process design, and organization
design. In Section 1.2 the pathway to integration is established through focusing on the
common thread running through all design domains: the activity of making decisions.
Finally in Section 1.3 a hybrid paradigm for decision support is presented that forms the
core of the approach to enterprise design established in this dissertation. In Section 1.5
issues of verification and validation are discussed, and Section 1.6 serves as a guide to the
contents of each chapter.
1.1 MOTIVATION
The term "enterprise" has truly grand and majestic connotations. While it
commonly is used to describe a business organization, it also evokes elements of both
excitement and of risk, and implies an elevated perspective of both encompassing power
and long-term vision. We need only to look to the American Heritage dictionary (1993) for
confirmation, as all four definitions interweave to support this theme. An enterprise is:
1 An undertaking, especially one of some scope, complication, and risk.
2 A business organization.
3 Industrious systematic activity, especially when directed toward profit.
4 Willingness to undertake new ventures; initiative.
In this dissertation, then, what does the term "enterprise" mean? Because this work is
based in engineering design, we begin with the context of an organization that designs and
manufactures an arbitrary set of products. A fitting description is offered by Gale and
Eldred (1996, p. 8), who state that:
An enterprise is a system of business objects that are in functional symbiosis. Symbiosis means that business objects work together to accomplish mutually beneficial goals. The business objects of the enterprise include people, machines, buildings, processes, events, and information that combine to produce the products or output of the enterprise.
In other words, an enterprise is a complex system that can be viewed from many
perspectives. Such a description is illustrated in Figure 1.1; an enterprise is the sum total of its
people and the jobs and tasks they perform, its products, its processes and facilities, its
information systems and flows, and so on. Additional aspects certainly exist, and their
absence from Figure 1.1 is not intended to imply exclusion; in this research the term
“enterprise” is used for its encompassing nature and its connotations of inclusion.
What is enterprise design? Clearly all of the aspects of an enterprise shown in
Figure 1.1 come into being at some time, and then remain dynamic, changing over time.
One can therefore say that the enterprise as a whole is designed, either implicitly or
explicitly. As voiced by Hanna (1988), “all organizations are perfectly designed to get the
results they get.” Because of the potentially overwhelming complexity of dealing with
enterprises in their entirety, the design of each aspect is often treated separately both in
research and in practice. In practice, functional departments such as product design,
research and development, production and distribution, human resources and strategic
planning are often created to design separate aspects of an enterprise somewhat
independently. This trend of partitioning to deal with complexity is also evident in
academia, where at least three well-established research communities address different
aspects of enterprise design: product design, manufacturing process design, and
organization design.
[Figure 1.1 Depiction of an Enterprise: a schematic of an enterprise's people, jobs, tasks, products, facilities, and information, with material flowing from vendors through raw material inventory, stamping, forging, three machining and assembly stages, in-process and finished goods inventories, to the customer.]
• Product design is the process by which customer requirements are
transformed into a physical artifact that fulfills the customers' stated needs.
• Manufacturing process design includes facility location and layout, the
purchasing of new equipment and technology, and the planning, scheduling and
operation of manufacturing facilities. This idea is captured well by the field of
operations management, which may be defined as “the design, operation, and
improvement of the production systems that create the firm’s primary products
or services” (Chase and Aquilano, 1995).
• Organization design refers to the design of the organization itself, through
the setting of variables under management control. These variables include the
organization structure (boundaries, levels), technologies (both product and
process), people (hiring, training), tasks (work design), reward systems,
information systems, and decision-making processes (Hanna, 1988).
These design processes are most often described prescriptively as proceeding in a top-
down fashion from general to specific; in Figure 1.2 representative examples for both
organization design and product design are shown (Dixon, et al., 1988, Hanna, 1988).
An argument can be made that the field of organization design is nearly as
encompassing as the notion of enterprise design; there are no intrinsic limits set on the scope of
variables that may be under management control. This argument is not disputed in this
research and is in fact welcomed. It is undeniable that product design and operations
management offer valuable philosophies, methods, and tools that are distinct from the realm of
organization design. Coupling these contributions with the encompassing nature of
organization design results in a concise definition and vision statement for enterprise design:
Enterprise design is embodied by the notion of integrating the processes of product
design, manufacturing process design, and organization design.
For this dissertation, then, the idea of "enterprise design" is an appropriate and
evocative vision statement that captures the ultimate motivation and context for this work.
This research is driven by the heady goal of fostering integration between product design,
manufacturing process design, and design of the organization itself. This motivation has
grown from a foundation in product design and engineering design theory and the continual
observation and awareness that product design issues are strongly intertwined with issues
of manufacturing process design and organization design.
[Figure 1.2 Product Design and Organization Design (Dixon, et al., 1988, Hanna, 1988): organization design proceeds from the reason for being (purpose, vision) through fundamental goals, strategies (what, when, why, how), organization form, and organization culture (how work actually gets done: technology, structure, tasks, rewards, people, etc.) to results; product design proceeds from need through function, behavior, embodiment, and configuration to instance.]
The motives for integration are clear; the interdependence between the design of
products and processes of an organization with the design of the organization itself is
difficult to argue. In the words of Peter Senge (1990, p. 23),
The most critical decisions made in organizations have systemwide consequences that stretch over years or decades. Decisions in R&D have first-order consequences in marketing and manufacturing. Investing in new manufacturing facilities and processes influences quality and delivery reliability for a decade or more. Promoting the right people into leadership positions shapes strategy and organizational climate for years.
And what’s more, the common practices for dealing with organizational complexity often
directly undermine the possibilities for integration.
Traditionally, organizations attempt to surmount the difficulty of coping with the breadth of impact from decisions by breaking themselves up into components... The result: analysis of the most important problems in a company, the complex issues that cross functional lines, becomes a perilous or nonexistent exercise. (Senge, 1990, p. 24)
The motivation for this research is therefore captured well by two fundamental research
questions:
Question 1 How can a domain-independent and mathematically supported approach be
implemented for designing products, manufacturing processes, and the
organization itself?
Question 2 How can the design of these interdependent entities be integrated at any point
along a common design timeline?
In digesting these questions, the power and evocative nature of the term
"enterprise" is a double-edged sword, easily leading to misinterpretation and
misunderstanding. This research is not about performing the impossible. The answer is
not to attempt to design entire products, entire manufacturing processes, or entire
organizations all at once. And it is certainly not the idea of integrating the design of all
three at once into a huge design effort. Instead, the tack taken in this research is the
development of an approach to design that:
• is generally applicable at any point and in any domain across the entire
enterprise,
• enables the enterprise-wide effects of any local design issue to be modeled and
assessed, and
• provides the capacity to model and assess the impact of external design issues
on the local issue at hand.
Designing an entire enterprise can become such a huge task that it is well beyond the scope
of any one person or small team of people. Therefore, instead of a command-and-control
paradigm (with its image of a lone leader issuing commandments from on high), this
research is based on a philosophy of empowerment and collaboration. In other words,
It’s just not possible any longer to “figure it out” from the top, and have everyone else following the orders of the “grand strategist”. The organizations that will truly excel in the future will be the organizations that discover how to tap people’s commitment and capacity to learn at all levels in an organization. (Senge, 1990, p. 4)
This approach to enterprise design is intended to be applicable across the enterprise as a
whole, so that through the sum total of actions of all of the local design actors, from
product designers and process planners to middle management to strategic planners, the
enterprise as a whole is designed in a unified, integrated, and internally consistent manner. This
vision is voiced well by Ray Stata (Senge, 1990, p. 350), who states:
Our fundamental challenge is tapping the intellectual capacity of people at all levels, both as individuals and as groups. To truly engage everyone -- that’s the untapped potential in modern corporations.
A key shift in thinking is from visualizing a design process as initiated by a single designer
to recognizing the actual enterprise design process as the combined actions of the
empowered group of decision makers that comprise the organization. The importance of decision-
making in this vision is established in the next section.
1.2 FUNDAMENTAL APPROACH
In this dissertation, the road to achieving enterprise design is through the creation of
a domain-independent approach to design that is generally applicable at any point and in
any domain across the entire enterprise. This approach must be applicable to the design of
products, the planning, scheduling, and operation of manufacturing processes and facilities,
and the setting of organizational policies, strategies, and structure. Clearly, for such an
approach to be possible a common thread or unifying theme must be identified that
permeates all of these varied design domains.
Such a common thread is offered in this dissertation by adopting a decision-based
perspective. A core assertion is that the making of decisions is a central and key activity to
the design of products, to the design and operation of manufacturing processes, and to a
significant fraction of all managerial and executive activity that encompasses organization
design. This assertion makes intuitive sense; in any design activity there occur moments
of choice where competing alternatives are generated and selected, and these moments of
choice are an integral element of the process of making decisions. For completeness,
however, selected quotes are presented below from engineering and management literature that
illustrate how, once designing is framed in terms of making decisions, the lines become
blurred between designing, making decisions, engineering, and managing.
Design as making decisions:
Decision-Based Design is a term coined to emphasize a different perspective from which to develop methods for design (Mistree, et al., 1990b). The principal role of a designer, in Decision-Based Design, is to make decisions.
[A] great deal of real-world decision making -- perhaps most of it -- is concerned with creating alternatives among which choices can be made. The activities that create new alternatives are usually called design activities, and although we most often apply the term design to the work of engineers and architects, it is equally central to the work of managers. Companies and the organizational structures within them must be designed. Investment alternatives must be discovered, that is, designed. Products and product lines must be designed. (Simon, 1987, p. 14)
Managing as making decisions:
[Our aim] is to examine the manager as decision maker. But to understand what is involved in decision making that term has to be interpreted broadly -- so broadly as to become almost synonymous with managing. (Simon, 1977, p. 39)
Executing policy, then, is indistinguishable from making more detailed policy. For this reason, I shall feel satisfied in taking my pattern for decision making as a paradigm for most executive activity. (Simon, 1977, p. 44)
The jobs of all managers are to plan, organize, staff, direct, control, and make decisions. (Gaither, 1994)
The examples provided [here] show how attention to the decision-making and communication processes provides a viable alternative approach to organization design, which dispenses with the classical 'principles'. (Simon, 1976, p. xxii)
Engineering as designing:
As soon as we introduce 'synthesis' as well as 'artifice', we enter the realm of engineering. For 'synthetic' is often used in the broader sense of 'designed' or 'composed'. We speak of engineering as concerned with 'synthesis', while science is concerned with 'analysis'... The engineer, and more generally the designer, is concerned with how things ought to be -- how they ought to be in order to attain goals, and to function. (Simon, 1981, p. 7)
Managing as designing:
The essence of the new role, I believe, will be what we might call manager as researcher and designer. What does she or he research? Understanding the organization as a system and understanding the internal and external forces driving change. What does she or he design? The learning processes whereby managers throughout the organization come to understand these trends and forces. (Senge, 1990, p. 299)
From a decision-based perspective, an approach to enterprise design becomes framed in
terms of decision support. A paradigm for decision support is explored in the next section,
but first we must develop a working definition of what decision making is. Herbert Simon
in Administrative Behavior (Simon, 1976) states that:
At any moment there are a multitude of alternative (physically) possible actions, any one of which a given individual may undertake; by some process these numerous alternatives are narrowed down to that one which is in fact acted out. The words ‘choice’ and ‘decision’ will be used interchangeably in this study to refer to this process. (p. 4)
Similarly, in the same volume Simon states that:
All behavior involves conscious or unconscious selection of particular actions out of all those which are physically possible to the actor and to those persons over whom he exercises influence and authority. (p. 3)
By approaching design from the perspective of making decisions, it becomes possible to
envision a domain-independent approach to design that spans across any enterprise
domain. This approach could easily also extend to most other areas of management and
engineering, so for this research some focus is required. This focus is developed in the
next section by exploring the logical extension of a decision-based approach to design --
framing design methods and tools in terms of decision support.
1.3 PARADIGM FOR DECISION SUPPORT
The power and nearly universal applicability of a decision-based perspective to
designing is offered in the previous section. Certainly it is clear that decision making
activity is a common thread that runs throughout the design of products, of manufacturing
processes, and of organizations themselves. In this section this decision-based perspective
is applied to the research questions of Section 1.1, resulting in the creation of an approach
to enterprise design founded in terms of decision support. A hybrid paradigm for decision
support is developed based on the melding of engineering design, operations research, and
systems theory, and resulting implications are drawn for integrating product design,
manufacturing process design, and organization design.
The accepted definition of decision making offered in the previous section, the idea
of choosing from alternative courses of action, is powerful in its simplicity and in its
universal applicability to nearly all human endeavor. This applicability is evident in the
general perspective of human activity as a tandem of "decision" and "action", and there is
also a strong parallel to the notion of "thinking" and "doing".
Clearly, though, there are different types of decisions. There is the artist who
places the brush stroke just so; part of this activity could fairly be called a decision. There
is the commuter who selects a specific route home from work, and there is a person intent
on solving a crossword puzzle. There is also the CEO who decides to establish operations
in a foreign country, and there is the engineer who designs an electric motor to meet
specific speed and power requirements. Decisions are embedded in all of these activities.
Although all of these examples contain at some point a moment of choice, there are
differing levels of thought and planning that precede each moment. Commensurately, there
are also differing levels of impact for each decision -- the magnitude of benefit in the
outcome, and the extent to which the outcome is reversible or modifiable. All decisions
inherent in product design, manufacturing process design, and organization design fit
somewhere along this spectrum, each having a greater or lesser impact and each therefore
deserving a greater or lesser amount of thought and planning.
In this research the primary focus is on decisions that have consequences that
extend across the enterprise and through time, whose outcomes will play a significant role
in the enterprise's success. In these decisions the potential benefit of finding an improved
solution would outweigh any added care and preparation required in the decision making
process. Through the notion of decision support, a focus in this research is on developing
methods and tools to help decisions be made more efficiently and effectively. Applying
these tools and methods requires a given amount of thought and effort, so there will
certainly be decisions of low impact to which these tools and methods will not warrant
application. Therefore, in this research the notion of a "decision" is defined in a keyword
sense; this keyword definition is embodied by the combined statements of Sections 1.3.2
and 1.3.3. As a keyword it embodies many of the fundamental assertions and imperatives
of this research effort. In effect this definition serves well to define much of the context
and perspective of this work. However, by adding these details the definition becomes
more refined and specific and therefore becomes primarily intended for a subset of all
decisions. Before developing such a definition a solid foundation must first be established;
this is done by drawing together the fields of operations research, engineering design, and
systems theory in the next section.
1.3.1 Foundations
Adopting a decision-based perspective initiates the convergence of operations
research, engineering design, and systems theory into three interrelated bodies of thought,
and this convergence holds truly exciting potential for the expanding realm of design
decisions that can be supported mathematically. The interrelated nature of the three can be
observed by reviewing definitions of each:
Operations Research:
The terms ‘operations research’ and ‘management science’ are nowadays used almost interchangeably to refer to the application of orderly analytic methods, often involving sophisticated mathematical tools, to management decision making ... (Simon, 1977, p. 55)
Further,
At a more philosophic level, operations research may be viewed as the application of the scientific method to management problems... (Simon, 1977, p. 55)
Finally,
Along with some mathematical tools ... operations research brought into management decision making a point of view called the systems approach. Somewhat more concretely, it means designing the components of a system and making individual decisions within it in the light of the implications of these decisions for the system as a whole. (Simon, 1977, p. 56)
Engineering Design:
Engineering design is a purposeful activity directed towards the goal of fulfilling human needs, particularly those which can be met by the technology factors of our culture. (Asimow, 1962)
Engineering design is the process of applying various techniques and scientific principles for the purpose of defining a device, a process, or a system in sufficient detail to permit its physical realization. (Taylor, 1959)
Engineering design is a process performed by humans aided by technical means through which information in the form of requirements is converted into information in the form of descriptions of technical systems, such that this technical system meets the requirements of mankind. (Hubka and Eder, 1987)
Systems Theory:
The essence of the discipline of systems thinking lies in a shift of mind:
• seeing interrelationships rather than linear cause-effect chains, and
• seeing processes of change rather than snapshots. (Senge, 1990, p. 73)
Systems thinking is the antidote to this sense of helplessness that many feel as we enter the ‘age of interdependence’. Systems thinking is a discipline for seeing the ‘structures’ that underlie complex situations, and for discerning high from low leverage change. (Senge, 1990, p. 69)
We can see how these definitions build upon each other. Systems theory and engineering
design result in a view of design that is domain independent; there are no fundamental
differences between designing products, manufacturing processes, or organizations. All
become system design. Engineering design, by the process of creating technical systems to
satisfy multiple requirements, implies the quantitative and analytical approach voiced by
operations research. Finally, operations research implies both systems thinking and the
scientific method voiced by engineering design.
From this convergence the concept of decision support begins to take shape, both in
terms of the notion of optimization, and also in terms of the activities of modeling,
analysis, and synthesis. (The hybrid paradigm in Figure 1.5 builds on all of these
concepts.) Optimization¹ embodies both a mathematical approach to decision making and
also a process of system design in terms of adapting an ‘inner environment’ to an ‘outer
environment’.
The logic of optimization methods can be sketched as follows: The 'inner environment' of the design problem is represented by a set of given alternatives of action. The alternatives may be given in extenso: more commonly they are specified in terms of command variables that have defined domains. The 'outer environment' is represented by a set of parameters, which may be known with certainty or only in terms of a probability distribution. The goals for adaptation of inner to outer environment are defined by a utility function -- a function, usually scalar, of the command variables and environmental parameters -- perhaps supplemented by a number of constraints (inequalities, say, between functions of the command variables and environmental parameters). The optimization problem is to find an admissible set of values of the command variables, compatible with the constraints, that maximize the utility function for the given values of the environmental parameters. (Simon, 1981, p. 134)

¹ The term “optimization” is used here because of its ubiquitous use in describing the application of mathematical techniques to solving design problems. This is a delicate issue, however, because optimization carries with it certain mathematical and philosophical connotations that are at odds with our actual use of techniques for solving inherently multi-objective problems. Most often an “optimal” solution has no meaning, and instead we seek “satisficing” (Simon, 1981), or good enough, solutions. This topic is returned to in Section 3.4.7.
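To ground Simon's sketch, consider a minimal numerical illustration added here (it does not appear in the dissertation): a single command variable, price, is adapted to a fixed outer environment, a linear demand curve with known parameters, by maximizing a scalar utility over the variable's defined domain. The demand model, its numbers, and all names are illustrative assumptions.

# A minimal sketch of Simon's optimization logic; the demand model and
# all numbers below are illustrative assumptions, not from the dissertation.
from scipy.optimize import minimize

# Outer environment: parameters assumed known with certainty.
a, b, c = 100.0, 4.0, 3.0      # demand intercept, price sensitivity, unit cost

def utility(x):
    """Scalar utility (profit) for the command variable x = [price]."""
    price = x[0]
    quantity = a - b * price   # demand induced by the chosen price
    return (price - c) * quantity

# Inner environment: the command variable has the defined domain [c, a/b].
# scipy minimizes, so the utility is negated to maximize it.
result = minimize(lambda x: -utility(x), x0=[5.0], bounds=[(c, a / b)])
print(result.x[0], -result.fun)   # best price (14.0) and its utility (484.0)

The same pattern extends directly to several command variables and explicit constraints, which is where the multi-goal formulations developed in Chapter 3 pick up.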
The activities of modeling, analysis, and synthesis form the core of engineering design,
whether described in the context of “general problem solving” (Pahl and Beitz, 1988) or in
terms of making decisions. The application of modeling, analysis and synthesis to making
engineering design decisions is illustrated in Figure 1.3.
[Figure 1.3 Primary Activities in Making Decisions: a problem statement is modeled, ideation generates many alternatives, analysis and refinement evaluate them, and synthesis produces a recommendation.]
How do modeling, analysis, and synthesis apply to decision support? Given a
problem, some representation of it must be formed, ideally one that captures all the relevant
information and highlights the critical issues. Creating this representation is called
modeling. Once a model is developed it can be exercised and explored in the search for
potential solutions; ideally a large number of competing alternatives are generated in a
process of ideation. Each of these alternatives is then fleshed out and evaluated in a
process of analysis. Once enough information has been generated to compare the
competing alternatives, the best one or few are selected in the process of synthesis. This
recommendation can then be accepted or rejected by the designer, and actions proceed
accordingly. This pattern has been adapted from a basic three-step pattern of diverging,
systemizing and converging proposed by De Boer (1989) for use in general problem
solving and design.
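As a rough sketch of this pattern, the commuter example mentioned earlier in this section can be walked through the activities of Figure 1.3 in a few lines of Python. The route model, its time estimates, and the function names are illustrative assumptions, not part of the dissertation.

# A sketch of the Figure 1.3 activities: modeling, ideation, analysis,
# synthesis. The route-choice problem and its numbers are assumptions.
import random

def model(route):
    """Modeling: represent a candidate route by an estimated commute time."""
    base = {"highway": 25.0, "surface": 35.0, "mixed": 30.0}[route["kind"]]
    return base + 2.0 * route["stops"]            # each stop adds two minutes

def ideation(n=20):
    """Ideation: generate many competing alternatives."""
    kinds = ["highway", "surface", "mixed"]
    return [{"kind": random.choice(kinds), "stops": random.randint(0, 8)}
            for _ in range(n)]

alternatives = ideation()
scored = [(model(r), r) for r in alternatives]       # analysis of each one
minutes, best = min(scored, key=lambda pair: pair[0])  # synthesis: pick best
print(minutes, best)                  # recommendation offered to the designer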
This context of operations research, engineering design and systems theory sets the
tone for the next section in which the concept of a human/computer partnership for making
decisions is developed.
1.3.2 A Human-Computer Partnership for Making Decisions
This work is founded in the philosophy of Decision-Based Design, which is based
on the notion that the principal role of a designer, in the design of a system, is to make
decisions (Mistree and Muster, 1990, Mistree, et al., 1989b, Mistree, et al., 1990b,
Mistree, et al., 1993b, Shupe, 1988). The characteristics of decisions are governed by the
characteristics of the design of real-life engineering systems. In part, these characteristics
are summarized by the following descriptive sentences:
• Decisions involve information that comes from different sources and disciplines.
• Decisions are governed by multiple measures of merit and performance.
• All the information required to make the best possible decision may not be
available.
• Some of the information used in making a decision may be hard (science-based)
and some information may be soft (insight-based).
• The problem for which a decision is being made is invariably loosely defined and
open. Virtually none of the decisions are characterized by a singular, unique
(optimal) solution. The decision solutions are less than optimal and are called
satisficing solutions.
Such decisions often involve the evaluation of large numbers of alternatives, and each
evaluation may require a significant amount of computation. These computations also
bring substantial amounts of information into the process. Therefore, at the heart of the
decision-making process, we see the need for a human-computer partnership as illustrated
in Figure 1.4. This union has been symbolized as an adjustable wrench, in which the
computer represents information that is processed and the designer processes the
information by making decisions (Mistree, et al., 1990b). In a design environment a
similar satisfactory contribution by each part, both human and computer, must be achieved.
Figure 1.4 The Relationship Between Designer and Computer (Mistree, et al., 1990b)
There are several assertions upon which this human/computer partnership is founded:
• Decision-making is inherently an individual human activity. Although group
decision-making is widespread, at its core remains the moment of choice for
each actor -- how to cast his or her vote in the process of consensus.
• A mathematical structure, or formulation, of a decision is valuable because it
fosters the use of computers for executing elements of the decision-making
process.
• Therefore, decisions (of sufficient importance) need to be formulated before
they can be solved.
• A human designer is required to guide the decision-making process; human
creativity, values and judgment are indispensable. Although computers alone
have been used to make certain types of routine decisions (process control, for
instance) and to solve problems (such as playing chess), these are accomplished
through the application of given human-generated heuristics. Computers don’t
create new heuristics for making decisions, at least not yet.
• Requiring a mathematical formulation for decisions is a limitation, because not
all decisions are quantifiable. (This issue is returned to in Chapter 7.) But as
with all research there must be a specific starting point. The vision driving this
research is to establish enterprise design in terms of quantifiable decisions and
then to continue to push the boundaries of what is quantifiable.
These implications form the perspective that flows through the development of the hybrid
paradigm for decision support described in the next section.
1.3.3 Description of the Hybrid Paradigm for Decision Support
The “keyword” definition of decisions, or hybrid paradigm for decision support, is
illustrated in Figure 1.5. It is a hybrid of decision support activities spanning operations
research, engineering design, and systems theory. The word “paradigm” is used following
the work of Thomas Kuhn, in the sense that a paradigm embodies the underlying beliefs of
a research community, helps to frame the current research issues of importance, and is
subjected to continuing testing and refinement. In the words of Kuhn himself (1960):
In its established usage, a paradigm is an accepted model or pattern, and that aspect of its meaning has enabled me, lacking a better word, to appropriate 'paradigm' here... In a science, on the other hand, a paradigm is rarely an object for replication. Instead, like an accepted judicial decision in the common law, it is an object for further articulation and specification under new or more stringent conditions. (p. 23)
The new paradigm implies a new and more rigid definition of the field. (p. 19)
Paradigms gain their status because they are more successful than their competitors in solving a few problems that the group of practitioners has come to recognize as acute. To be more successful is not, however, to be either completely successful with a single problem or notably successful with any large number. The success of a paradigm -- ... -- is at the start largely a promise of success discoverable in selected and still incomplete examples. (p. 23)
This hybrid paradigm for decision support shown in Figure 1.5 is labeled a “paradigm”
because it 1) captures the mathematical approach to decision making expressed by
operations research and engineering design, 2) helps define and communicate the ideas of
interdependence between design decisions, and 3) is certainly not a panacea for all decision
making -- its usefulness is aimed more at decisions that are inherently quantifiable. But
there is legitimate promise for success in its potential to apply to a wide range, perhaps a
majority, of design decisions.
[Figure 1.5 Hybrid Paradigm for Decision Support: design variables {x} feed models f1(x), f2(x), ... fn(x) through analysis; synthesis compares the model responses against the goal and constraint targets T1, T2, ... Tn (the requirements).]
In this hybrid paradigm a decision is represented in terms of a set of design
variables {x} and a set of n goals and constraints captured by target values {T1, T2, ... Tn}
and models {f1(x), f2(x), ... fn(x)}. Regions of interest are specified for each design
variable a priori, and these regions define the realm of potential solutions for the decision.
In product design the variables may represent system parameters or characteristics, while in
organization design the variables may represent different policies or courses of action.
Goals and constraints represent the motives for making the decision, and may encompass
measures of system cost, performance, quality, and so on. The target values Ti represent
the “aspiration space” or what ideally will be achieved with the decision, and the models
fi(x) quantify to what extent these aspirations are achieved by the actual values of the
design variables. Analysis is then the process of computing the values of fi(x) for given
values of {x}, and synthesis is the process of using these computed values to find the best
settings for the design values. A specific value is identified from the region of interest for
each design variable, and this set of values represents the solution that comes closest to
achieving all of the goal targets while not violating any constraints.
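A minimal numerical sketch of this formulation is given below. It is an illustration under assumed models rather than the DSIDES implementation discussed in Chapter 3: the two goal models, the targets, the weights, and the variable bounds are all invented for demonstration.

# A sketch of the hybrid paradigm of Figure 1.5: analysis evaluates the goal
# models f_i(x); synthesis searches the regions of interest for the design
# variables whose responses come closest to the targets T_i. The goal models
# and every number here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

targets = np.array([50.0, 10.0])      # aspiration space {T1, T2}
weights = np.array([0.6, 0.4])        # relative priority of the two goals

def analysis(x):
    """Analysis: compute the goal responses f_i(x) for a candidate design."""
    f1 = 20.0 * x[0] + 5.0 * x[1]     # e.g., a performance model
    f2 = 2.0 * x[0] ** 2 + x[1]       # e.g., a cost model
    return np.array([f1, f2])

def deviation(x):
    """Weighted distance of the goal responses from their targets."""
    return float(weights @ np.abs(analysis(x) - targets))

# Synthesis: search the regions of interest for the best compromise.
result = minimize(deviation, x0=[1.0, 1.0], method="Powell",
                  bounds=[(0.0, 5.0), (0.0, 5.0)])
print(result.x, analysis(result.x))   # x near [2, 2] meets both targets here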
This representation builds upon the strengths of operations research, engineering
design and systems theory. It follows the operations research structure of decision
variables, constraints, and an objective function without many of the accompanying
limitations: the decision variables (design variables) can be either continuous or discrete,
the constraints can be either linear or nonlinear equalities or inequalities, and instead of a
single “objective function” to be maximized there are multiple goals to be achieved, each
framed in terms of meeting specified targets. (Both constraints and goals are represented
identically in Figure 1.5 because their mathematical structure is similar, but in actuality
there are subtle and significant nuances that separate their formulation. This is discussed in
Section 3.3.3.)
The hybrid paradigm also builds upon the strengths of engineering design in the
sense that it capitalizes on and integrates many of the existing activities in engineering
design practice, namely those of modeling, analysis and synthesis, where modeling is
defined (Gale and Eldred, 1996) as “the construction of a small, inexpensive, and incomplete
artifact that simulates some real-world domain in which we are interested” and analysis and
synthesis are the processes of utilizing the models to make decisions. Analysis is the
process of computing values for fi(x) given values for {x}, and synthesis is the process
of using these computed values to find the best settings for the design variables.
The importance of modeling is voiced by Simon (1990) who states that, "modeling
is the principal -- perhaps the primary -- tool for studying the behavior of large complex
systems." The purposes for modeling are varied, but the primary uses (Askin and
Standridge, 1993) include the following:
1. Optimization -- finding the best values for decision variables.
2. Performance prediction -- checking potential plans and sensitivity.
3. Control -- aiding the selection of desired control rules.
4. Insight -- providing better understanding of systems.
5. Justification -- aiding in selling decisions and supporting viewpoints.
This hybrid paradigm primarily captures two of these modeling uses -- those of
optimization and of performance prediction. The paradigm is itself a model that embodies a
process of “optimization”, and contained within it are models for predicting the
performance of given combinations of the design variables. This distinction is also
described in terms of prescription versus prediction (Simon, 1990) or prescriptive versus
descriptive models (Askin and Standridge, 1993, p. 19):
Mathematical models tend to be descriptive or prescriptive in nature. Simulation models tend to be descriptive. Given a set of values for the decision variables, we turn the model on and out comes an estimate of system performance. Mathematical programming models such as linear programming are prescriptive. Turn the model on and out comes the answer of how we should set the decision variables.
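This distinction can be made concrete with a short sketch, again under assumed models and numbers: the descriptive model below answers "how does the system perform for a given decision?", and wrapping an optimizer around it answers the prescriptive question "how should the decision be set?"

# A sketch contrasting descriptive and prescriptive use of the same model.
# The throughput and cost expressions are illustrative assumptions.
from scipy.optimize import minimize_scalar

def descriptive_model(machines):
    """Descriptive: given a decision, estimate net system performance."""
    throughput = 40.0 * machines / (machines + 2.0)   # diminishing returns
    cost = 6.0 * machines
    return throughput - cost

# Descriptive use: turn the model on for one setting of the decision variable.
print(descriptive_model(3.0))

# Prescriptive use: ask how the decision variable should be set.
best = minimize_scalar(lambda m: -descriptive_model(m),
                       bounds=(0.0, 10.0), method="bounded")
print(best.x, descriptive_model(best.x))  # about 1.65 machines is best here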
In addition to modeling, the hybrid paradigm of Figure 1.5 also capitalizes on the
engineering activities of analysis and synthesis. Although existing definitions of these
activities in the context of systems engineering describe them in terms of large, overarching
design processes that encompass many decisions, parallel definitions are evolving that
describe these activities within the process of making each individual decision. In a
systems engineering context, analysis and synthesis (Blanchard and Fabrycky, 1990) are
defined as:
analysis: breaking down a problem into a set of as simple problems as possible, solving each, and assembling their solutions into a solution of the whole
synthesis: combining and structuring of parts and elements in such a manner so as to form a functional entity
Similarly, from a computer science context (Sippl and Sippl, 1980) analysis is defined as:
analysis: methodological investigation of a problem by a consistent procedure, and its separation into related units for further detailed study
However, the concept of “engineering analysis” has come to mean intensive modeling and
calculation of system behavior, and phrases like “analyze the performance of the system” are
ubiquitous in both industry and academia. In summary, the “models” in Figure 1.5
encompass the universe of system modeling techniques that exist to predict system
performance or behavior, “analysis” represents the process of performance prediction, and
“synthesis” represents the prescriptive process of finding the best values of the design
variables. The paradigm itself is a prescriptive model which contains a suite of applicable
descriptive models.
This paradigm for decision support, a hybrid of concepts from engineering design,
systems theory and operations research, is offered as the core of an approach to enterprise
design because of its following characteristics:
• Its domain-independent nature makes it applicable across all enterprise design
domains (explored in Section 2.4).
• The partitioning of activities into modeling, analysis and synthesis fosters the
creation of methods and tools for decision support (developed in Chapter 3).
• It sets the stage for concrete and specific definitions of interdependence and
integration in terms of decisions (described in Section 2.3 and Section 3.2).
The paradigm also sets the stage for developing the framework of research questions and
hypotheses set forth in Section 1.4; this perspective is established in the next section.
1.3.4 Implications for Achieving Enterprise Design
As illustrated in Figure 1.1, an enterprise can be an overwhelmingly complex system
of products, processes, people, facilities, information, and so on. Since the employees of
an enterprise work to achieve common goals, the elements of this complex system are
surely interconnected to some degree, thus potentially resulting in an incomprehensibly
interwoven tangle of interdependence. However, applying a decision-based perspective to
enterprise design filters out the redundancy in domain-dependent complexities to identify an
abstracted and general view of designing and interdependence within an enterprise. This
idea is illustrated in Figure 1.6.
In Figure 1.6 all of the varied forms of interdependence are reduced to two axes.
Decisions are made in an enterprise by different people across enterprise domains (product
design, operations, marketing, et cetera) and occur at different points in the steady
progression of time. Each decision formulation is equivalent in that courses of action are
selected from competing alternatives (represented as design variables) to satisfy as much as
possible a set of prioritized goals. These goals may be shared across decisions; for
example a low-cost product may be a common enterprise goal that is affected by decisions
across product design, production, and distribution. It is very likely that these decisions
will be made both by different people and at different points in time. The concept of
enterprise design therefore becomes:
• Identifying and assessing the enterprise-wide effects of each local decision,
• Identifying and assessing the effects of other external decisions on the local
decision at hand, and
• Making each decision to satisfy as many of the enterprise-wide goals as
possible, while being as robust as possible to external decisions that are beyond
the decision-maker’s control.
[Figure 1.6 Decision-Based View of an Enterprise. The figure abstracts an enterprise, via the decision-based perspective, into a grid of decisions, each with design variables and goals, arrayed along two axes: decisions across enterprise domains and decisions through time, linked by interdependencies.]
As given in Section 1.1, the fundamental research questions explored in this
dissertation are (1) how a domain-independent and mathematically supported approach can
be implemented for designing products, manufacturing processes, and the organization
itself, and (2) how the design of these interdependent entities can be integrated at any point
along a common design timeline. Applying the perspective of Figure 1.6, the issues
inherent in these questions begin to take shape. Domain-independence is achieved through
a focus on decisions as discussed in Section 1.2. Mathematics is inherent in the hybrid
paradigm for decision support of Section 1.3.3, but the actual process of identifying,
creating and integrating the models to quantify system behavior may be difficult. Further,
the concept of integrating disparate design processes can now be framed in terms of
integration along the two axes of enterprise domains and time. These issues evolve into
research questions of their own as the structure of the dissertation argument is fleshed out
in the next section.
1.4 DISSERTATION STRUCTURE: RESEARCH QUESTIONS, HYPOTHESES AND CONTRIBUTIONS
In this section the overall argument of this dissertation is laid out, embodied in a
structure of research questions, hypotheses, and contributions. As alluded to in Section
1.3.1, this work is founded in the discipline of engineering design and also shares strong
ties with operations research and systems thinking. This work thus proceeds with a
scientific bent; the most general intent is to contribute to a science of design,
and such contributions are developed by application of the scientific method. Although the
scientific method is explored in some depth in Section 1.5, it is mentioned here because its
structure of observation, hypothesis, test and conclusion is the glue that binds together the
dissertation argument presented herein.
Building upon the notion of enterprise design defined in the context of decision
support in the previous sections, it is now possible to erect the framework of research
questions, hypotheses, and contributions that together define the scope, the specific
context, and the intended value of this research. This basic structure is as follows:
• Research questions capture the motivation for performing this research.
• Hypotheses capture the proposed context and the expertise and individual
thought contained in the actual work performed, and they set the structure for
the specific tasks performed in carrying out the research effort.
• Contributions grow from the testing of hypotheses, embody the intellectual
value of the research for continued efforts in the field, and should support in
some sense the “answering” of the research questions.
Taken together, these elements form the overall argument for this dissertation; this
argument is presented pictorially in Figure 1.7. The argument begins with reconciling the
general definition of an enterprise and enterprise design as discussed in Section 1.1 with a
philosophy rooted in engineering design, operations research, and systems theory (all from
Section 1.3.1), and the notions of bounded rationality and empowerment (introduced in
Section 2.3). The motivation for this research grows from attempting this reconciliation
and is embodied by two fundamental research questions:
Q 1 How can a domain-independent and mathematically supported approach be im-
plemented for designing products, manufacturing processes, and the
organization itself?
Q 2 How can the design of these interdependent entities be integrated at any point
along a common design timeline?
In this work, “mathematically supported” means quantitative and repeatable.
Repeatability is necessary in order to test, or verify, whatever quantitative solutions
result. Quantification is mandatory because of the technical nature of products and manu-
facturing processes; designing such systems requires dealing with the laws of natural
science, reconciling the constraints of scarce resources, and calculating performance and
cost measures, which by and large require mathematical treatment. Quantification is also
desired for the traditionally more qualitative realm of organization design; this perspective
grows from the scientific bent of this work and is well voiced by Lord Kelvin (1824-1907):
When you can measure what you are speaking about, and express it in numbers, you know something about it... (otherwise) your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in thought advanced to the stage of science.
[Figure 1.7 Pictorial Representation of Dissertation Argument. The figure traces the argument from a philosophy based on engineering design, operations research, systems theory, bounded rationality and empowerment, through the motivation (Research Questions 1 and 2), the hybrid paradigm for decision support (Hypothesis 1), and the decision-based approach to enterprise design (Hypothesis 2), to the sub-questions Q 1.1 (unified), Q 1.2 (quantifiable), Q 2.1 (integration across domains) and Q 2.2 (integration through time) with Hypotheses 1.1, 1.2, 2.1 and 2.2. Existing methods, tools and techniques (system modeling; statistical metamodeling; the DSP Technique with the DSPT Palette, compromise DSP, DSIDES and RCEM) feed the contributions, including the method for enterprise design, the timeline procedure, the revised Task SP, guidelines for metamodeling, and a categorization of system modeling.]
Similarly, the criterion of domain-independence is met if the approach applies
uniformly across product design, manufacturing process design, and organization design.
Further elucidation of these two research questions is accomplished by applying the
perspective of designing as making decisions (Section 1.2), as well as the fundamental
paradigm of decision support (Section 1.3). These applications are captured formally by
Hypothesis 1:
H 1 A decision-based perspective is a key to achieving a domain-independent and
mathematically rigorous approach to enterprise design.
The application of a decision-based perspective to enterprise design results in a view of the
enterprise as a two-dimensional grid of decisions -- decisions made by different people
across enterprise domains, and decisions made in the same domain through time. (This
idea is shown in more detail in Figure 1.6.) Because these decisions are made by people
with shared goals (such as customer satisfaction and the health of the enterprise), the
decisions will inevitably be knit together into an interdependent web. Such grows the
imperative for the integration expressed in Question 2; the approach to achieving
integration is embodied by Hypothesis 2:
H 2 Integration can be achieved through 1) a unified method for decision support, 2)
mathematical tools for assessing the impact of individual decisions on the enter-
prise, and 3) methods for identifying, modeling and resolving interdependen-
cies between the decision at hand and other decisions across enterprise domains
and through time.
This sequential application of Hypothesis 1 and Hypothesis 2 to the two fundamental
research questions results in the identification of sub-issues of significant importance; two
of these issues pertain to the goals of domain independence and mathematical support
expressed in Question 1 and are thereby denoted Question 1.1 and Question 1.2. The
remaining two issues address the two dimensions of decision interdependence and
integration and are thus denoted Question 2.1 and Question 2.2. These questions are
illustrated in Figure 1.7 as branching out from the decision-based approach to enterprise
design.
Q 1.1 How can decisions be supported in any enterprise domain in a mathematical
and quantitative manner?
Q 1.2 How can mathematical models be created to quantify any and all aspects of
an enterprise?
Q 2.1 How can interdependence be handled between decisions across enterprise
domains?
Q 2.2 How can interdependence be handled between decisions through time?
These four research questions are answered through the proposing and testing of
hypotheses, and in this instance these hypotheses grow from a knowledge of the wealth of
existing methods, tools, and techniques available across a range of scientific research
communities. (These existing methods, tools and techniques are shown in the lower right
corner of Figure 1.7.) System modeling techniques, reviewed in Section 2.4, span many
engineering science applications as well as much of operations management and a sample
of organization design applications. Statistical metamodeling techniques, reviewed in
Section 4.4, span such approaches as the design of experiments and regression, neural
networks, inductive learning and kriging. The Decision Support Problem (DSP)
Technique, reviewed in Section 3.3, is an open system of methods and tools for decision
support in engineering design; relevant aspects are methods for modeling design processes
(the DSPT Palette), a mathematical construct for formulating decisions (the compromise
DSP), software for solving DSPs (DSIDES), and a method for incorporating robust design
issues into the early stages of design (the RCEM). The hypotheses are, respectively:
H 1.1 A method for implementing mathematically rigorous decision support in any
enterprise domain can be created using the hybrid paradigm for decision
support and the compromise DSP.
H 1.2 Existing system modeling techniques can be used to quantify the behavior
of several enterprise domains, and statistical metamodeling techniques meet
the necessary conditions for transforming this enterprise behavior into a
format amenable to decision support and design.
H 2.1 The combination of the compromise DSP and statistical metamodeling
techniques meets the necessary conditions for handling decision
interdependence across enterprise domains.
H 2.2 The combination of the compromise DSP and statistical metamodeling
techniques meets the necessary conditions for handling decision
interdependence along a design timeline.
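Because several of these hypotheses rest on statistical metamodeling, a brief illustration may help fix the idea (a minimal sketch under assumed conditions; the one-variable "expensive" model here is a hypothetical stand-in, not a model from this work). A handful of runs of the expensive model is used to fit a cheap second-order regression metamodel, which can then take the expensive model's place inside a decision formulation:

import numpy as np

def fit_quadratic_metamodel(X, y):
    # Fit the response surface y ~ b0 + b1*x + b2*x^2 by ordinary least
    # squares -- the classical design-of-experiments/regression metamodel.
    A = np.column_stack([np.ones_like(X), X, X ** 2])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: b[0] + b[1] * x + b[2] * x ** 2

expensive_model = lambda x: np.sin(x) + 0.1 * x ** 2   # hypothetical stand-in
X = np.linspace(0.0, 3.0, 7)                           # seven sampled design points
surrogate = fit_quadratic_metamodel(X, expensive_model(X))
print(surrogate(1.5), expensive_model(1.5))            # cheap vs. expensive prediction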
Testing these hypotheses accounts for a large fraction of the work contained in this
dissertation; the specific testing procedures are discussed in depth in Section 1.5.3, and the
location of each element is given in Section 1.6, the Guide to the Dissertation. This testing
results in a distinct set of contributions:
• a hybrid paradigm for decision support in enterprise design,
• a rigorous and industry-tested method for implementing this decision support (a
method for formulating and solving enterprise design decisions as compromise
DSPs),
• a categorization of system modeling techniques in a format amenable to decision
support,
• a revised formulation of the Task Support Problem, to aid in the rigorous
representation and support of the enterprise design method itself,
• guidelines and recommendations for selecting and applying statistical
metamodeling techniques for model approximation and integration,
• a procedure, based on the compromise DSP and statistical metamodeling tech-
niques, for handling interdependent decisions along a design timeline,
• definitions for integrating models into decision formulations, interdependence
between decisions, and the integration of decisions and design processes, and
• an overall philosophy for implementing the method of enterprise design.
These contributions combine to form a decision-based approach to enterprise design that is
offered as the fundamental contribution of this research. This idea is shown in Figure 1.8.
At the bottom of Figure 1.8 is a foundation both of philosophies (empowerment,
bounded rationality, engineering design, et cetera) and of existing methods and tools
(metamodeling, the Decision Support Problem Technique, and so on). From this
foundation a body of contributions is grown, both from the perspective of a method for
enterprise design and an overall philosophy for implementing the method. The cornerstone
of the philosophy is the hybrid paradigm for decision support of Section 1.3.3. This
paradigm fosters the formation of definitions for integrating models into decisions, for
interdependence between decisions, and for the integration of decisions and design
processes. These definitions all feed into an overall philosophy for implementing the
method for enterprise design.
In parallel a detailed step-by-step method is offered for formulating and solving
enterprise design decisions as compromise DSPs. Enablers to this method are a
categorization of system modeling schemes, a revised Task Support Problem, guidelines
for metamodeling, and a procedure for resolving interdependencies between decisions
through time. Finally, as shown in Figure 1.8, both philosophy and method combine to
form the fundamental contribution of this research, a decision-based approach to enterprise
design.
[Figure 1.8 Overall Structure of Contributions. From a foundation of empowerment and bounded rationality; engineering design, DBD, OR and systems theory; and metamodeling, system modeling and the Decision Support Problem Technique, the hybrid paradigm for decision support gives rise to definitions for the integration of models into decisions, interdependence between decisions, and the integration of decisions and design processes. These feed an overall philosophy for implementing the method, which joins the method for enterprise design (with its timeline procedure, revised Task SP, guidelines for metamodeling, and categorization of system modeling) to form the decision-based approach to enterprise design.]
The two figures shown in this section give the highlights of the dissertation,
skimming over the bulk of more methodical tasks churning in the background that give
each chapter its girth. These tasks are elements of the process of hypothesis testing which
is addressed in more detail in the next section.
1.5 ISSUES OF VERIFICATION AND VALIDATION
The question of whether testing the proposed hypotheses really “answers” the
research questions is a thorny one, and it falls squarely into the domain of verification and
validation. In this section the meanings of both “verification” and “validation” are
explored, and the resulting understanding is then used to examine the actual testing processes for
each hypothesis in this dissertation.
1.5.1 What are Verification and Validation?
According to the Concise Oxford English Dictionary (1982), to validate is to
make valid, to ratify or confirm. The root, valid, is then defined as
• (of reason, objection, argument, etc.) sound, defensible, well-grounded;
• (law) sound and sufficient, executed with proper formalities (valid contract);
• legally acceptable (valid passport).
With respect to engineering design research, the intent of the validation process is to show
the research and its products to be sound, well grounded on principles of evidence, able to
withstand criticism or objection, powerful, convincing and conclusive, and provable. This
process is an integral part of the scientific method, which comprises:
• the observation of phenomena,
• the formulation of a hypothesis intended to explain the phenomena,
• experimentation to test the hypothesis, and
• reaching a conclusion that validates or modifies the hypothesis.
Thus validation is as much a part of the scientific method as oxygen is a constituent of
water. The scientific method hinges on the concept of validation, that is, sound, convincing
argument. Without an almost “science of validation”, the wealth of knowledge we have
today would be relatively minuscule. In a fundamental sense the concept of validation is
tied to the seeking of “truth”, the establishment of an objective reality, and the art of
persuasion. It is the bridge through which an individual’s knowledge can be
communicated, evaluated and perhaps accepted by a larger community.
In common usage there is a tightly interconnected web of definitions that describe
the process of hypothesis testing; in the most general sense it describes the seeking of
“truth” and encompasses such concepts as verification, accuracy, evidence, proof,
confirmation, substantiation, and authenticity. In the Merriam-Webster hypertext
dictionary under a discussion of the synonyms for “confirm”, “verify” is used in the
context of establishing the correspondence of actual facts or details with those proposed or
guessed at, while “validate” is used in the context of establishing validity by authoritative
affirmation or by factual proof. The boundary between verification and validation is thus
shifting and often open to interpretation; in many cases the two words are used
interchangeably.
In this research definitions for “verification” and “validation” are applied that, while
not inconsistent with the general usages above, are more specific and tailored for efforts in
engineering design research. In practice, the verification and validation of design methods
is much more than a debugging process. Three primary phases can be identified: firstly,
problem justification; secondly, completeness and consistency checks of the methodology;
and thirdly, validation of performance. (This classification is based on a discussion of the
validation of expert systems by Ignizio, 1990). Verification then refers to the second
phase of the process and is focused primarily on internal consistency and completeness,
while validation as the third phase of the process is focused on consistency with external
evidence, ideally through testing the design method on actual case studies. This validation
of performance is perhaps the area most open to interpretation by peers and experts in the
field alike.
If what is to be validated is a closed form mathematical expression or algorithm, it
can be proven, or validated, in a traditional and formal mathematical sense. For example,
the case of showing that a solution vector, x, belongs to the set of feasible solutions for a given
mathematical model is a closed problem. Alternatively, if the problem is open, that is, if
one is dealing with some “heuristic”, nonprecise scheme, the issue of validation
becomes one of “correctness beyond reasonable doubt.” The validation of design methods
falls into this category. In this case it is achieved ultimately by results and usefulness and
through a convincing demonstration to (and an acceptance and ratification by) one’s peers
in the field. An analogy with mathematics and the concept of “necessary” and “sufficient”
conditions can be drawn here with respect to the validation of heuristics. Heuristics are
aimed toward satisfying the necessary conditions only. There is not a requirement to “dot
all the i’s and cross all the t’s.” Indeed, by definition, it is not possible to develop an
absolute proof for an open problem.
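The feasibility example above is easily made concrete (a minimal sketch; the constraint values are arbitrary). Whether a vector x satisfies a set of linear constraints is a closed question with a definitive yes/no answer, in contrast to the open question of whether a design method is valid:

import numpy as np

def is_feasible(A, b, x, tol=1e-9):
    # Closed problem: does x satisfy every linear constraint A @ x <= b?
    # This admits a formal answer, unlike the validation of a heuristic.
    return bool(np.all(A @ x <= b + tol))

A = np.array([[1.0, 1.0], [-1.0, 0.0]])          # x1 + x2 <= 4 and x1 >= 0
b = np.array([4.0, 0.0])
print(is_feasible(A, b, np.array([1.0, 2.0])))   # True
print(is_feasible(A, b, np.array([5.0, 0.0])))   # False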
As anticipated, the operations research literature provides some useful insight into
the validation of heuristics, in the context of heuristic programming. In discussing the
nature of problem solving by heuristic programming Lin (1975) makes the following
remarks:
We therefore define a valid heuristic algorithm (to solve a given problem) as any procedure which will produce a feasible solution acceptable to the design engineer, within limits of computing time, and consider the problem solved if we can construct a valid heuristic procedure to solve it. We see that in the domain where a heuristic algorithm operates, there are elements of technique, experimentation, judgment and persuasion, as well as compromise.
The issue of justification is addressed by Ignizio et al. (1972):
Specific heuristic programs are justified, not because they attain an analytically verifiable optimum solution, but rather because experimentation has proven that they are useful in practice.
In summary, while noting that judgment is subjective and based on faith, the validation of a
heuristic, and therefore the validation of design methods, can be established if:
• the solutions are feasible and acceptable to the design engineer,
• the time and consumed resources are within reasonable limits, and
• the solutions are, above all, useful.
It is against these three issues that the verification and validation strategy is developed in the
next section.
1.5.2 Verification and Validation Strategy
As stated in Section 1.1, the power and encompassing nature of enterprise design is
a double-edged sword. Enterprises can often become huge. Although the dissertation
structure of Section 1.4 serves to focus this research somewhat, testing these hypotheses
could still potentially cover a lot of ground. In this section we grapple directly with this
issue of scope in order to set realistic bounds on the work contained in this dissertation.
This is accomplished by laying out a verification and validation strategy for the testing of
the hypotheses.
In the simplest of terms, this dissertation will stand as a complete and self-contained
entity if answers can be found for both of the two fundamental research questions, and if
these answers can be verified and validated. These “answers” combine to form an
approach to enterprise design as illustrated in Figure 1.8; this approach must be
mathematically rigorous, it must be applicable to the design of products, manufacturing
processes, and organizations, and it must foster the integration of these three entities along
a common design timeline. Hard proof would require illustrating the approach across all
aspects of enterprise design, from product design and manufacturing process design to
facility location, marketing, sales, distribution, and even strategic management.
To keep the scope of this work reasonable, the primary concession made is in the
area of establishing the integration of the three design processes along a common design
timeline. There are a countless number of different design processes that could be used as
a starting point, and within any of these processes there are a nearly limitless number of
decisions that could be used to branch out and illustrate integration. Because of the
author’s background and experience, the perspective of product design is taken, with a
special emphasis on the decisions made in the early stages of a design project (early in the
design timeline). Therefore, the focus in this research is on how manufactur-
ing process design and organization design issues can be integrated into the
early stages of a product design timeline. There are many additional applications
further down a design timeline, and more applications exist in the remaining enterprise
domains. These areas will be surveyed and marked off, but their development will be left
for further research. The strategy for verification and validation is as follows:
• Illustrate the domain-independence of a decision-based perspective of design,
• Build a mathematically rigorous and domain-independent method for decision
support in enterprise design through the testing of hypotheses,
• Apply the enterprise design approach to integrating product design,
manufacturing process design, and organization design in the early stages of a
product design timeline,
• Verify and validate the approach in this domain through a case study suite, and
• Discuss how the approach can be extended to additional applications (decisions)
throughout any and all enterprise domains.
With this strategy fresh in mind the specific procedures for testing each hypothesis can be
introduced; this is done in the next section.
1.5.3 Procedures for Testing Hypotheses
As discussed in Section 1.5.1, for research to be considered valid it must be sound,
defensible, and well-grounded. These adjectives apply directly to the testing of hypotheses
and the arguments used for their ultimate acceptance or rejection. Therefore, in this section
the specific procedures used for testing each hypothesis in this dissertation are presented so
that the validity of the procedures themselves can be evaluated.
Testing Hypothesis 1:
(A decision-based perspective is a key to achieving a domain-independent
and mathematically rigorous approach to enterprise design.)
This testing procedure starts with establishing the universality of decisions across
engineering, managing, and designing; this is addressed in Section 1.2. A keyword
definition for decisions, framed in terms of a hybrid paradigm for decision support, is then
established in Section 1.3 through Section 1.3.3. (A hint of the mathematical rigor of this
paradigm is also given in Section 1.3.3.) The domain-independent nature of a decision-
based perspective in an enterprise design context is illustrated in Section 1.3.4, and for
completeness other approaches to enterprise integration are reviewed in Section 2.2.
Finally, the domain-independence and mathematical rigor of a decision-based approach to
enterprise design is illustrated by example throughout the case study suite in Chapter 6.
Testing Hypothesis 1.1:
(A method for implementing mathematically rigorous decision support in
any enterprise domain can be created using the hybrid paradigm for decision
support and the compromise DSP.)
This procedure begins with the development of the hybrid paradigm for decision support in
Section 1.3 through Section 1.3.3. Issues for realizing such a method are then given in
Section 3.3; these issues include how to formulate decisions mathematically and multi-
objectively, and also how to represent the method itself rigorously. The compromise DSP
is introduced in Section 3.3.3 as a mathematical, multi-objective decision model, and the
software package DSIDES is introduced in Section 3.3.4 as a means for achieving the
synthesis of compromise DSPs. Similarly, the foundation of the DSPT Palette is given in
Section 3.3.2 as a vehicle for developing a formal and rigorous representation of the
method. The method itself is developed in detail throughout Section 3.4, building directly
on the formalism of using the decision support paradigm to build compromise DSPs. The
mathematical rigor of the method is then demonstrated across multiple enterprise domains
throughout the case study suite in Chapter 6.
Testing Hypothesis 1.2:
(Existing system modeling techniques can be used to quantify the behavior
of several enterprise domains, and statistical metamodeling techniques meet
the necessary conditions for transforming this enterprise behavior into a
format amenable to decision support and design.)
This hypothesis is addressed squarely in Section 2.4. Reviews of system modeling
techniques across product design (Section 2.4.2), manufacturing process design (Section
2.4.3), and organization design (Section 2.4.4) yield a significant number of examples in
each of these domains. An overall categorization of these modeling techniques in Section
2.4.1 establishes how these models can be integrated into decision formulations. In the
context of statistical metamodeling, five scenarios are introduced in Section 4.2 as a
spanning set of model characteristics that occur in industry applications, and in Section 4.3
the concept of metamodeling is defined and then applied to each of these scenarios.
Reviews of a range of metamodeling options are given in Section 4.4, and they are
integrated into the method for enterprise design in Section 4.5. Detailed examples for
quantifying enterprise behavior in a format amenable to decision support are given in the
case study suite in Chapter 6, and additional examples of the range of system modeling
applications are offered throughout Section 5.3.
Testing Hypothesis 2:
(Integration can be achieved through 1) a unified method for decision support, 2)
mathematical tools for assessing the impact of individual decisions on the
enterprise, and 3) methods for identifying, modeling and resolving interde-
pendencies between the decision at hand and other decisions across
enterprise domains and through time.)
The concept of achieving integration through decisions is introduced in Section 1.3.4, and
the idea is solidified in Section 2.3 by illustrating the notion of integrating models into
decision formulations. This approach to integration is built upon the concepts of bounded
rationality (Section 2.3.1) and empowerment (Section 2.3.2) and is tied back in a concrete
sense to the hybrid paradigm for decision support in Section 2.3.3. This overall
philosophy for integration is described in Section 3.2, and the meaning of integrating
decisions and design processes is explored in Section 5.2. The options of enforcing
coordination or promoting empowerment for enterprise integration are developed formally
in Section 5.2.3, and examples of how to achieve this integration across varied enterprise
domains are given throughout Section 5.3. Detailed examples of implementing this
approach to integration are also given throughout the case study suite in Chapter 6.
Testing Hypothesis 2.1:
(The combination of the compromise DSP and statistical metamodeling
techniques meets the necessary conditions for handling decision
interdependence across enterprise domains.)
Decision interdependence across enterprise domains is illustrated in broad strokes in
Section 1.3.4, and it is defined in more detail in Section 3.4.5. Options that grow out of
this definition are enforcing coordination or promoting empowerment as discussed in
Section 3.2. Coordination is achieved by integrating models from different enterprise
domains into decision formulations as described in Section 2.3.3, and if models are not
compatible then metamodeling techniques are applied as discussed in Section 4.3.
Coordination is not always possible but promoting empowerment is; decisions are made to
be robust to the outcomes of other decisions as illustrated in Section 4.5.4. Conceptual
examples of these are given throughout Section 5.3, and detailed examples are given
throughout the case study suite in Chapter 6.
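The idea of making a decision robust to external decisions can be sketched simply (an illustrative fragment with hypothetical numbers, not the procedure of Section 4.5.4): each candidate setting is evaluated against every possible outcome of an external decision, and the setting with the smallest worst-case goal deviation is preferred:

def robust_choice(candidates, deviation, external_outcomes):
    # Prefer the candidate whose worst-case deviation over all possible
    # outcomes of the external decision is smallest (a minimax criterion).
    def worst_case(x):
        return max(deviation(x, e) for e in external_outcomes)
    return min(candidates, key=worst_case)

# Hypothetical example: the deviation from a target of 10 depends on our
# setting x and on an external decision e beyond our control.
deviation = lambda x, e: abs(x * e - 10.0)
print(robust_choice([1.0, 2.0, 5.0], deviation, [1.5, 2.0, 4.0]))   # 2.0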
Testing Hypothesis 2.2:
(The combination of the compromise DSP and statistical metamodeling
techniques meets the necessary conditions for handling decision
interdependence along a design timeline.)
Decision interdependence along a design timeline is illustrated in broad strokes in Section
1.3.4, and it is defined in more detail in Section 3.4.5. An even more in-depth method is
developed in Section 5.4. Integration is ultimately achieved either through enforcing
coordination or promoting empowerment as defined in Section 5.2.3, and examples of each
are given throughout Section 5.3. Finally, detailed examples are developed throughout the
case study suite in Chapter 6.
In the previous sections the structure of this dissertation is laid out in terms of
research questions, hypotheses, and contributions, and we have addressed the issues of
verification and validation with the ideal of establishing the potential soundness and
defensibility of the work to come. The stage is now set for launching into the body of this
work, but before doing so a final element of structure and guidance is offered in this
chapter -- that of a detailed guide to finding the “what” and the “where” in this dissertation.
Such a guide is given in the next section.
1.6 GUIDE TO DISSERTATION
This section is intended for those who do not wish to read this dissertation in order.
Brief summaries of each chapter are given, and then the two figures of Section 1.4 are
brought back to illustrate where each element of the dissertation structure can be found and
where each contribution resides. These “road map” figures are fairly complex, so the
chapter summaries are given first in the hopes of a more gentle transition.
Chapter 2 - Enterprise Modeling and Integration: A Review of Current Literature
In this chapter a review of ongoing work in the fields of enterprise modeling and enterprise
integration is given to establish the context of the work in this dissertation. This reviewed
work is then contrasted to pursuing a decision-based approach to integration, which results
in a shift from enterprise modeling to system modeling. A review of system modeling
techniques across enterprise domains brings the chapter to a close.
Chapter 3 - A Decision-Based Approach to Enterprise Design
As the title of this chapter implies, the approach to enterprise design offered in this dissertation is developed
here. The overall philosophy of such an approach is given first, and then the foundation of
methods and tools within the DSP Technique is given. Finally the method for enterprise
design is developed in detail, step-by-step.
Chapter 4 - Metamodeling Techniques for Model Approximation and Integration
Although the system modeling techniques reviewed in Chapter 2 may span a significant
number of enterprise domains, they are not always easy to integrate into a decision
formulation. In this chapter a range of statistical metamodeling techniques is reviewed,
evaluated, and ultimately incorporated into the enterprise design method to foster the
integration of any type of system model.
Chapter 5 - Implementing Enterprise Design Along Design Timelines
In this
chapter the notions of integrating design processes and integrating decisions through time
are explored in detail using the concept of design timelines. A wide range of potential
applications of integrating decisions across design timelines is explored for a substantial
span of enterprise domains, and a detailed procedure is presented for achieving such
integration through time.
Chapter 6 - Case Study: Design of a Forward-Looking Infrared Radar System
In
this chapter the enterprise design approach developed in this thesis is verified and validated.
The context of Forward-Looking Infrared Radar system design is first established, and an
early-stage system design decision is selected for study. The enterprise design approach is
then applied to the decision using the same goals that exist in current industry approaches,
thus giving a measure of comparison with which to verify the solutions that result. The
decision formulation is then expanded to encompass manufacturing process design and
organization design issues, thus illustrating the benefits of applying this enterprise design
approach and serving as its validation.
Chapter 7 - Achievements, Recommendations and Summary
In this chapter closure
is reached for the dissertation. We return to the research questions and examine the
contributions in this dissertation to evaluate whether the questions have indeed been
answered in the affirmative. Potential avenues for further research are then discussed, with
the hope that the torch will be passed and carried on. Concluding remarks bring the
dissertation to a close.
With these chapter summaries fresh in mind we can now observe where each
element of the dissertation argument is located. This first “road map” is shown in Figure
1.9. Each element is labeled with a shadowed balloon which indicates the section in which
it can be found. (In each balloon the symbol “§” is used as shorthand for “Section.”) The
second “road map” offered here is in terms of the location of each contribution in this
dissertation. These locations are shown in Figure 1.10, again with a shadowed balloon
illustrating where the element can be found. These figures bring Chapter 1 to a close, and
ideally will serve as jumping-off points for the rest of this dissertation.
[Figure 1.9 Road Map 1: Where Each Element of the Dissertation Argument Can Be Found. The pictorial argument of Figure 1.7 is repeated with shadowed balloons giving the location of each element; the sections called out are § 1.3.1, § 1.3.2, § 1.3.3, § 1.3.4, § 2.3.1, § 2.3.2, § 2.4, § 2.4.1, § 3.2, § 3.3.2, § 3.3.3, § 3.3.4, § 3.3.5, § 3.4, § 3.4.1, § 4.4, § 4.5 and § 5.4.]
[Figure 1.10 Road Map 2: Where Each Contribution Can Be Found. The contribution structure of Figure 1.8 is repeated with shadowed balloons giving the location of each contribution; the sections called out are § 1.2, § 1.3.1, § 1.3.2, § 1.3.3, § 2.3.1, § 2.3.2, § 2.3.3, § 2.4.1, § 2.4.2, § 2.4.3, § 2.4.4, § 3.1, § 3.3, § 3.4, § 3.4.1, § 3.4.5, § 4.4, § 4.5, § 4.5.2, § 5.2.3 and § 5.4.]
CHAPTER 2
ENTERPRISE MODELING AND INTEGRATION: A REVIEW OF CURRENT LITERATURE
The words “enterprise”, “modeling”, and “integration” are fundamental to this
research as is established in Chapter 1. Together these words also represent the substantial
research community of “Enterprise Modeling and Integration”, and the work generated in
this community is similar in intent to the work offered in this dissertation. Therefore in this
chapter the ongoing work in enterprise modeling and integration is surveyed, thus
establishing a reference point for the alternate approach to enterprise integration offered in
this dissertation. The review of work in enterprise modeling and integration is presented in
Section 2.2, and the alternate path to integration based on empowerment, bounded
rationality and decision modeling is presented in Section 2.3. In this alternate approach the
fundamental enterprise model is a decision model, and within this decision model all other
modeling schemes may play a role in formalizing and quantifying various aspects of a
decision. Finally in Section 2.4 a review of system modeling techniques is presented; this
review spans applications across product design, manufacturing process design, and
organization design. The commonalities of these modeling techniques are highlighted as a
preface to how these techniques can be integrated into a decision-based method for
enterprise design; this method is the focus of Chapter 3.
2.1 WHAT IS PRESENTED IN THIS CHAPTER
[Roadmap figure: the pictorial dissertation argument of Figure 1.7 is repeated with shadowed balloons highlighting the elements addressed in this chapter: § 2.2, § 2.3, § 2.3.1, § 2.3.2, § 2.4 and § 2.4.1.]
The context of enterprise modeling and integration is established in Section 2.2, and
an alternate path to integration, based on bounded rationality and empowerment, is given in
Section 2.3. Hypothesis 1.2 is addressed squarely in Section 2.4, resulting in a
categorization of system modeling techniques in Section 2.4.1 and a review of these
techniques across product design, manufacturing process design, and organization design.
2.2 ENTERPRISE MODELING AND INTEGRATION: ACCEPTED DEFINITIONS AND RESEARCH THRUSTS
The central theme that evolves in this section is that although this research is
focused on integrating product design, manufacturing process design and organization
design into a unified approach to enterprise design, and although modeling is a central
activity in achieving such integration, this work has few parallels in existing work on
enterprise modeling and integration. How is this so? Bernus and Nemes (1996) state that
enterprise models are built for:
• communication between various enterprise engineering activities (and people
involved in these),
• analysis of the aspects of the enterprise (e.g., to evaluate design alternatives), and
• coordination or direct control of business processes.
The majority of research in enterprise modeling and integration appears to be focused
primarily on the first and third of these modeling purposes. Although analysis is
supported, it appears to be treated as an isolated activity rather than as a pathway
to integration. Consequently, the research reviewed in this section is primarily directed
towards constructing general modeling schemes and architectures that encompass any and
all aspects of an enterprise. The alternative approach offered in this research, one in which
all varied analysis models are integrated into a decision-based framework, is introduced in
Section 2.3.
The roots of Enterprise Modeling (EM) and Enterprise Integration (EI) can largely
be traced back to Computer Integrated Manufacturing (CIM), and this context helps define
the implicit goals of EI and EM as well as the standard approaches employed to bring these
goals to life. It is fair to say that CIM, and therefore EM and EI, have grown into fields at
the intersection of several disciplines, including management science, information
technology, industrial engineering and control, and the management of technology. It is
therefore not surprising that many different definitions of CIM are offered in the literature,
each embodying individual assumptions and perspectives. To establish a holistic
perspective to CIM a set of representative definitions is offered below with the intent that
the summation of all the voices will capture common trends and highlight shared
assumptions.
From an industrial engineering perspective (Nagata, et al., 1993), CIM is, “an
integrated system which combines the areas of production, marketing, and R&D, to
manage and operate them under a single management strategy with the support of
computers so that the production operation can be efficient and flexible.” Alternatively,
from a management of technology perspective (Noori, 1990), “Standalone manufacturing
systems are often referred to as ‘islands of automation’. Ultimately, these cells will be
linked with each other, with other manufacturing activities, and finally with other
departments. This approach is known as CIM.” Similarly, from an information
technology perspective (Joryz and Vernadat, 1990), the objective of CIM is the appropriate
integration of enterprise operations by means of efficient information exchanges within the
enterprise with the help of information technology. Similarly (Ngwenyama and Grant,
1994), “CIM incorporates a wide range of information technologies, such as electronic data
processing, management information systems, decision support systems, expert systems,
computer-aided design, computer-aided manufacturing, computer-aided process planning,
and flexible manufacturing systems.” Finally (Bessant, 1991), “CIM is the integration of
computer based manufacturing processes, drawing on a common database and
communicating via some form of computer networks.”
What is the essence of these definitions of CIM? The trend of continuing
automation of manufacturing, or computer-controlled manufacturing, is taken as given. In
addition a systems perspective is clearly evident, in the sense that “islands of automation”
are akin to sub-optimization, and that communication and coordination are a necessity for
achieving efficiency, flexibility, and other system-wide goals. Integration is offered as the
key to avoiding inefficiencies; by integrating islands of automation into a unified
manufacturing system, the overall system performance can be quantified and high-leverage
points of change identified. Finally, this integration is couched in terms of electronic
information transfer -- computers talking to computers.
As the goals of CIM have extended to include business functions outside of the
manufacturing realm, the need for a more encompassing approach to integration,
integrating the entire enterprise, has become apparent. Yet still the foundation in CIM is
evident; enterprise modeling can be used to provide a top-down view of the enterprise’s
information requirements and a ‘road map’ for guiding the development of integrated
enterprise-wide information systems (Martin, 1983). Similarly Ngwenyama and Grant
(1994) state that, “enterprise modeling provides methods, tools, techniques, and a
philosophy for describing and analyzing relevant aspects of the business enterprise, and
deriving a conceptual architecture upon which the development and implementation of
[CIM] could be based.”
What then is the goal of enterprise integration? As stated by Vernadat (1996), “EI
is concerned with facilitating information, control, and material flows across organizational
entities by connecting all the necessary functions and heterogeneous functional entities
(information systems, devices, applications, and people) in order to improve
communication, cooperation, and coordination within this enterprise so that the enterprise
behaves as an integrated whole, therefore enhancing its overall productivity, flexibility, and
reactivity or capacity for management of change.” Recent work of the Joint International
Task Force on Architectures for Enterprise Integration has resulted in a taxonomy of
ingredients for enterprise integration (Bernus and Nemes, 1996), which include the
following elements:
a) Generic enterprise reference architecture for describing elements of enterprise-
related life-cycles,
b) Enterprise engineering methodologies, or guidelines for implementing an
enterprise integration program,
c) Enterprise modeling languages,
d) Enterprise modeling tools,
e) Generic theories for enterprise modeling languages which describe their
meaning and semantics,
f) Generic enterprise models, which are reusable, generic or prototypical models
of functional ingredients of enterprises, and
g) Generic enterprise modules, which are products that implement one or more
generic models.
The imperative relationship between enterprise integration and enterprise modeling is clear
from the above list -- modeling is a central aspect of integration. Again turning to Vernadat
(1996), “The prime aim of enterprise models is to provide a ‘map’ or common vision of
what happens in the enterprise.” Enterprise modeling is primarily focused on creating
architectures, modeling tools, and modeling methodologies; models are created by using
the modeling tools in accordance with a prescribed methodology. An architecture is
defined (O'Sullivan, 1994) as, “a body of rules that define those system features which
directly affect the manufacturing environment into which the system is placed. These
features include system configuration, component locations, interfaces between the system
and its environment, and modes of operation.”
Several enterprise modeling architectures have been proposed in the literature, and
surveys can be found in (Kateel, et al., 1996) and (O'Sullivan, 1994). A brief summary of
three selected architectures follows:
CIM-OSA: This Computer Integrated Manufacturing - Open Systems
Architecture has three levels of model derivation: requirements definition, design
specification and implementation description (Joryz and Vernadat, 1990). It also
has four views (functional, informational, resource, and organizational) and three
levels of model instantiation (generic, partial, and particular).
NBS: Developed by the National Bureau of Standards, this architecture is based
on the concept of hierarchical control. The NBS architecture was developed to
consist of five levels of hierarchy: facility, shop, cell, work station, and machine.
The decomposition is based on procedures, functions, and rules (O'Sullivan,
1994).
ICAM: The Air Force’s Integrated Computer Aided Manufacturing project offers a
suite of IDEF (ICAM DEFinition) modeling methods, including functional modeling
(IDEF0), information modeling (IDEF1), data modeling (IDEF1x), systems
dynamics modeling (IDEF2), process description capture (IDEF3), object-oriented
design (IDEF4), and ontology capture (IDEF5), among others (Liles and Presley,
1996).
In sum, enterprise architectures are integrated modeling schemes -- they can be applied to
any and all aspects of an enterprise at nearly any level of abstraction. For each application,
all that is required is that the appropriate model view (in CIM-OSA) or modeling method (in
IDEF) is selected. Basically, then, enterprise integration is achieved by constructing and
applying integrated modeling schemes (architectures). In other words, huge and complex
solutions are offered for huge, complex problems.
As an option to imposing a modeling architecture, enterprise integration can also be
achieved by integrating existing models. Arguments exist against building many small
models of (perhaps the same) system. The first argument is that of efficiency. In the
context of CIM, the "traditional" approach to modeling is to first identify the purpose for
the model (Kateel, et al., 1996). The modeler then constructs an abstract structure based
upon all system features germane to the purpose, and uses this model to aid in the
integration of system components. However, "[this] purpose driven, tool dependent
modeling has the inherent disadvantage that one has to construct different models for
different purposes even though the system being represented is the same." Similarly,
Delen et al. (1996) note that, "this single-use, throw-away mentality of modeling is very
expensive, time consuming and wasteful."
A proposed alternative to the inefficiencies of many models is a return to the single
integrated model, in this context called a "base model" (Duse, et al., 1993). Such a base
model is suggested so that a model of the enterprise can be constructed that is problem-
independent and analysis tool-independent. The base model is not static (Delen, et al.,
1996) and instead "evolves with the organization and is persistent in time" and therefore,
"maintaining the base model is thus a modeling activity for the sake of modeling and is not
pursued with an immediate specific purpose in mind." However, the additional effort
required to maintain such an encompassing model would surely not be negligible, and so it
is by no means clear that this approach is indeed more efficient. The authors do not
address this point.
Other arguments against building many small system models, also from a CIM
perspective, are that local models may inherently lead to suboptimization, and that getting
the models to communicate with each other may be difficult. Both of these concerns are
legitimate. In terms of suboptimization Kateel et al. (1996) discuss development of an
integrated model versus model integration. Their argument for "one big-picture (global
view)" model is that building many local models separately will result in models that are
"myopic" and "naturally strive towards local optimization." However, they go on to
recognize the sheer effort required to build such a big-picture model for existing
organizations and recommend its consideration only when establishing new systems.
On a broader note, while it is true that a big-picture model would be internally
consistent, it is not clear that this consistency would be sufficient to avoid sub-
optimization. The model must also be easy to understand and implement at each local
decision point so that it can be used effectively, and it is not obvious that such models
would display these characteristics. So although sub-optimization is a concern, perhaps it
is better addressed by establishing a comprehensive and effective communications network
for the democratization of information dissemination, as discussed in terms of the
principles for enterprise integration in Section 2.3.2.
In terms of the difficulties of integrating existing models, a common problem is that
the models have often been developed on different computer
platforms and may operate in different languages. The first International Conference on
Enterprise Modeling Technology (ICEMT) identified three types of approaches to the
problem of syntactic and semantic model integration (Petrie, 1992): master models are a
single reference model from which all other models are derived, unified models are
metamodels which translate between models, and federated models are loosely coupled
models. In the sections to follow, model integration is pursued along the lines of the
“unified model” approach. Metamodeling is the focus of Chapter 4.
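As an illustration of the "unified model" idea (a minimal sketch with hypothetical names and units, not one of the architectures cited above), a translating wrapper can let two models with incompatible conventions exchange the few quantities they share:

class UnifiedModelAdapter:
    # Translate one native model's inputs and outputs into a shared vocabulary
    # so that loosely coupled models can exchange values.
    def __init__(self, native_model, to_native, from_native):
        self.native_model = native_model
        self.to_native = to_native       # shared terms -> native inputs
        self.from_native = from_native   # native outputs -> shared terms

    def evaluate(self, shared_inputs):
        return self.from_native(self.native_model(self.to_native(shared_inputs)))

# Hypothetical example: a cost model reports dollars; a budget model expects
# thousands of dollars. The adapter reconciles the two conventions.
cost_model = lambda units: 1200.0 * units                # dollars
budget_ok = lambda k_dollars: k_dollars <= 50.0          # thousands of dollars

cost = UnifiedModelAdapter(cost_model,
                           lambda s: s["units"],
                           lambda dollars: {"cost_k": dollars / 1000.0})
print(budget_ok(cost.evaluate({"units": 30})["cost_k"]))  # True: 36 <= 50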
2.3 DECISIONS, BOUNDED RATIONALITY AND EMPOWERMENT:IMPLICATIONS FOR INTEGRATION
The creation of the enterprise modeling schemes and architectures in Section 2.2
can be traced to a central assertion. Vernadat (1996) states that, "in order to integrate
enterprise operations, models of relevant parts of the enterprise must be developed. These
models must cover the what, when, how, and by whom aspects of what has to be done in
the enterprise." The fundamental goals for such integration are to improve communication,
cooperation, and coordination within the enterprise. Recognizing that many aspects of
enterprise operations are automated, it makes sense to develop the encompassing
formalisms of enterprise modeling schemes and architectures to facilitate their computer
implementation.
However, from an enterprise design perspective, instead of modeling to integrate
we are more interested in modeling to improve (or design). This context does exist in the
enterprise modeling community; Andrew Blyth (1996), in the ACM SIGOIS Bulletin
Special Issue on Enterprise Modeling, states:
Enterprise modeling is widely used as a catch all title to describe the activity of modeling any pertinent aspect of an organisation's structure and operation in order to improve, and/or reposition, selected parts of the organisation. Typically enterprise modeling has been applied to the gathering and reasoning about various aspects of organisations, such as: Processes, Information flows, Organisational boundaries, Organisational policies, Strategy and Corporate Vision, Job design, Security and finally Ontologies.
This view is also supported by Gale and Eldred (1996):
“Enterprise modeling is the tool of business reengineering. During an enterprise-reengineering project, a model of the enterprise is constructed, usually in two views: present methods of operations and future methods of operations.” This model is then used for diagnosis and to guide the reengineering process.
These motives are also evident from the perspective of enterprise engineering:
Enterprise engineering deals with the analysis, design, implementation and operation of an enterprise. The enterprise engineer addresses a fundamental question: how to design and improve all elements associated with the total enterprise through the use of engineering and analysis methods and tools to more effectively achieve its goals and objectives. (Liles and Presley, 1996)
From this enterprise design context, the focus shifts from the integration of models to the
integration of design efforts. As established in Sections 1.2 and 1.3, the fundamental unit
for achieving enterprise design is the decision, performed by employees at all levels and
across all aspects of the enterprise. At the heart of this process is the concept of a
human/computer partnership tackling each decision. So although communication,
cooperation and coordination remain important, the human presence makes the formalism
of an enterprise modeling language not strictly necessary. In this section we discuss the
nearly inarguable concept of "bounded rationality", the notion that there are limits to the
powers of human cognition. Applying bounded rationality to enterprise design, it becomes clear that design decisions cannot encompass all enterprise variables and instead should focus on the local issues at hand. What might be perceived as a limitation is instead turned into a strength by the complementary concept of "empowerment", which
expresses the critical importance of involving all employees across the enterprise in the
decision-making process. Through the sum total of their combined efforts, the huge and
complex task of enterprise design suddenly becomes tractable.
Through the discussion of bounded rationality and empowerment in this section,
two central elements of the approach to enterprise design are introduced:
• the language of decisions is the pathway to integration, and
• models are built as required, subordinate to the decision-making process.
The first element is developed in more detail in Section 3.1, and the second element is the
focus of Section 2.4.
2.3.1 Bounded Rationality
What is bounded rationality? The foundations of bounded rationality are
established in Herbert Simon's text Administrative Behavior (Simon, 1976). Simon draws
a comparison between objective rationality, the ideal state commonly employed in economic
analyses that assumes perfect and complete information, and bounded rationality, the actual
state of the world in which information is uncertain and incomplete, situations are
overwhelmingly complex, and the number of potential courses of action is nearly infinite.
Objective rationality would imply that the behaving subject molds all his behavior into an integrated pattern by a) viewing the behavior alternatives prior to his decision in panoramic fashion, b) considering the whole complex of consequences that would follow on each choice, and c) with the system of values as criterion singling out one from the whole set of alternatives. (p. 80)
Actual behavior falls short, in at least three ways, of [this definition of] objective rationality:
1) Rationality requires a complete knowledge and anticipation of the consequences that will follow on each choice. In fact, knowledge of consequences is always fragmentary.
2) Since these consequences lie in the future, imagination must supply the lack of experienced feeling in attaching value to them. But values can be only imperfectly anticipated.
3) Rationality requires a choice among all possible alternative behaviors. In actual behavior, only a very few of all these possible alternatives ever come to mind. (p. 81)
When the limits to rationality are viewed from the individual’s standpoint, they fall into three categories: he is limited by his unconscious skills, habits, and reflexes; he is limited by his values and conceptions of purpose, which may diverge from the organization goals; he is limited by the extent of his knowledge and information. (p. 241)
In human terms, Simon (1976, pp. xxix-xxx) speaks of the differences between
"administrative man" and "economic man":
Economic man deals with the “real world” in all its complexity. Administrative man recognizes that the world he perceives is a drastically simplified model of the buzzing, blooming confusion that constitutes the real world.
Whereas economic man maximizes -- selects the best alternative from among all those available to him, his cousin, administrative man, satisfices -- looks for a course of action that is satisfactory or “good enough”.
Because he satisfices rather than maximizes, administrative man can make his choices without first examining all possible behavior alternatives and without ascertaining that these are in fact all the alternatives.
This concept of bounded rationality also appears in the cognitive science and enterprise
modeling literature; in both it can be observed as a natural consequence of adopting a systems perspective on problem solving:
Evidence is overwhelming that human beings have “cognitive limitations”. Cognitive scientists have shown that we can deal only with a very small number of separate variables simultaneously. Our conscious information processing circuits get easily overloaded by detail complexity, forcing us to invoke simplifying heuristics to figure things out. (Senge, 1990, p. 365)
This lack of emphasis on system issues [in manufacturing enterprises] is not the result of a lack of appreciation for the importance of the problem. Rather, anyone attempting to address these system issues is immediately confronted by the overwhelming complexity of the problem. (Heim and Compton, 1992)
What does bounded rationality imply for enterprise modeling and enterprise design? The
attempt to build unified and encompassing models that capture all aspects and variables of
an enterprise is certainly a Herculean task, and perhaps can be only imperfectly achieved.
One need only look at the information-processing tenets of Galbraith's approach to
organization design (Galbraith, 1977) to observe that the complexity of all of the
information flows in an enterprise would certainly overwhelm any one actor in the
organization. So perhaps, instead of one huge complex model, many smaller and tractable
models are more appropriate. Little (1992) says it best:
We indulge in modeling myopia if we believe as system analysts that we can (or should) be building complete models of our system and setting all the control variables. Doing so misses the major opportunities for system improvement that are possible by finding new ways to empower the people on the front lines of the organization by giving them information, training and tools with which to improve their own performance... Thus, a hundred different models are needed, not one big model.
Designing an enterprise is a huge task and, by bounded rationality, clearly beyond
the scope of any one designer. Similarly, building an encompassing model of an enterprise
would be just as daunting a task. Yet every existing enterprise has come into being at some
time and all have evolved over time, so all have in fact been designed. The key shift in
thinking is from visualizing a design process as initiated by a single designer to recognizing
the actual enterprise design process as the combined actions of the empowered group of
decision makers that comprise the organization. In other words, through employee
empowerment enterprise design becomes tractable.
2.3.2 Empowerment
What is empowerment? The concept of empowerment stems from the fundamental belief that the employees of an organization are its most important asset; this belief is prompted
by the recognition that when properly challenged, informed, integrated, and empowered,
the employees are a powerful force in achieving the goals and objectives of the
organization. Empowerment is framed by Badore (1992) in terms of the two
complementary elements of employee involvement and employee empowerment:
The term employee involvement means inclusion of the employee in the operation of the system. The two principal objectives of employee involvement are as follows:
1. To create, share, and make ‘real’ for all employees a vision of the goals for the overall enterprise as well as for each organization unit.
2. To seek and share the knowledge possessed by individual employees in achieving that vision. (to identify and solve problems)
On empowerment: “If proper advantage is to be taken of the knowledge that the employee possesses, it is necessary for an organization to empower the employee to implement the improvements that they know to be necessary. By so doing, the enterprise is making the employee an integral part of the process of staying competitive.” (Badore, 1992)
Empowerment holds great potential for achieving enterprise integration; Hanson (1992)
states that a successful implementation of an Integrated Enterprise exhibits five principles of
leadership in the management of people and technology.
• “The first principle asserts that when people understand the vision, or larger task, of an enterprise and are given the right information, the resources, and the responsibility, they will ‘do the right thing’.”
• The second principle addresses empowerment of the individual. “Empowered people... will have not only the ability but also the desire to participate in the decision process.”
• The third principle is the “existence of a comprehensive and effective communications network”, and the fourth principle is the democratization and dissemination of information throughout the network.
• “The results of the first four principles imply the fifth -- distributed decision making. Information freely shared with empowered people who are motivated to make decisions will naturally distribute the decision-making process throughout the entire organization.”
These principles serve well to capture the philosophy embedded in the decision-based
approach to enterprise design offered in this dissertation.
Empowerment can also be a powerful force for both job satisfaction and continuous improvement. Senge (1990) states: “[it is a] mistaken belief that fundamental change requires a threat to survival. This crisis theory of change is remarkably widespread... [but] then I ask, 'What is the first thing you would seek if you had a life of absolutely no problems?' The answer, overwhelmingly, is 'change -- to create something new.' Or, as one seasoned organization change consultant once put it, ‘People don’t resist change. They resist being changed.’” Similarly Heim and Compton (1992) recognize that, “[the] placing of responsibility and authority with the individual -- the empowerment of the individual employee -- is critical to accomplishing the objective of continuous improvement.”
Finally, empowerment is often framed in terms of decision-making and authority.
“Authority is exercised whenever a person allows his decisions to be guided by decision
premises provided to him by some other person.” (Simon, 1977) “Acceptance of authority
denies participation in the decision making process. Participation in decision, however, is
essential to achieve understanding and enthusiasm in carrying out decisions.” (Dressler,
1976) Similarly (Senge, 1990), “People learn most rapidly when they have a genuine
sense of responsibility for their actions. This is why learning organizations will,
increasingly, be “localized” organizations, extending the maximum degree of authority and
power as far from the “top” or corporate center as possible. Localness means moving
decisions down the organizational hierarchy; designing business units where, to the
greatest degree possible, local decision makers confront the full range of issues and
dilemmas intrinsic in growing and sustaining any business enterprise.”
This perspective on authority highlights a strong link between empowerment and
making decisions: a decision-maker is by default empowered, and empowerment is framed
in terms of making decisions. This link provides a strong transition from the concepts of
bounded rationality and empowerment to the notion of achieving integration through
decisions. Such a concept is presented in the next section.
2.3.3 Integration Through Decisions
Through the previous discussions it becomes clear that both bounded rationality and
empowerment are phenomena that are difficult to deny in today's organizations. Bounded
rationality is an inevitable consequence of our cognitive abilities, and empowerment is
nearly universally recognized as critical for the competitive success of the enterprise. What
does this imply for enterprise modeling and integration? Both imply that many small
models are more appropriate for enterprise modeling than a single encompassing model.
There must be a common thread with which to integrate these disparate models; in
this section such a thread is again offered in terms of the activity of making decisions.
Instead of enterprise modeling being the pathway to enterprise integration, we explore two central elements of the approach to enterprise design:
• the language of decisions as the pathway to integration, and
• models are built as required, subordinate to the decision-making process.
These statements embody a philosophy of viewing modeling as a critical support activity to
the process of making decisions. It is important to recognize that this philosophy has a
foundation in the existing literature in enterprise design and management science; the tenor
of such a perspective is illustrated nicely by the representative quotes that follow:
Models provide a rational basis for predicting the impact of decisions before their implementation by (quantitatively) describing the important elements, interactions, and dependencies. (Heim and Compton, 1992)
Generally, modeling serves policy. We construct and run models because we want to understand the consequences of taking one decision or another. (Simon, 1990)
Models are important in capturing the critical variables and the relationships that have been discovered by members of the organization. Models, therefore, offer a broad basis for conveying shared experience and knowledge in manufacturing enterprises. (Heim and Compton, 1992)
World-class manufacturers seek to describe and understand the interdependency of the many elements of the manufacturing system, to discover new relationships, [and] to explore the consequences of alternative decisions... Models are an important tool to accomplish this goal. (Heim and Compton, 1992)
In concrete terms, this perspective of modeling as subordinate to the decision-making
process is illustrated well by the hybrid paradigm for decision support defined in Section
1.3.3 and shown in Figure 1.5. In fact, one of the strengths of this paradigm becomes
clear in this discussion of model integration. Because of the paradigm's inherently multi-objective nature, models are created separately for each goal and constraint of a decision. This is clear in
Figure 1.5. The ties that bind the models together are the common design variables that are
shared as input to all models, so in other words integration is achieved by grouping these
models into a single decision formulation.
Looking to Figure 1.5, the requirements for integrating a model into a decision formulation become clear. First, it must be a function, at least in part, of a subset of the
design variables identified for a decision. Second, the output of the model must contribute
to evaluating a goal or constraint deemed important in the decision formulation. Finally,
the model must be represented in a format that is compatible with the process of
formulating and solving the decision.
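To make these three requirements concrete, a minimal sketch is offered below in Python; the models, variable names, and numbers are hypothetical assumptions for illustration only, not drawn from any case in this dissertation. Two independent analysis models share a common set of design variables and are grouped into a single decision formulation, one evaluating a goal and the other a constraint.

# Two hypothetical analysis models tied together by shared design variables.

def cost_model(wall_thickness, material_grade):
    # Illustrative goal model: thicker walls and higher grades cost more.
    return 120.0 * wall_thickness + 45.0 * material_grade

def stress_model(wall_thickness, applied_load=5000.0):
    # Illustrative constraint model: stress falls as thickness grows.
    return applied_load / (1000.0 * wall_thickness)

# The shared design variables are the ties that bind the models together.
design = {"wall_thickness": 0.8, "material_grade": 2}

cost = cost_model(design["wall_thickness"], design["material_grade"])
stress = stress_model(design["wall_thickness"])
feasible = stress <= 4.0  # a constraint deemed important in the formulation
print("cost = %.1f, stress = %.2f, feasible = %s" % (cost, stress, feasible))

Because both models are numeric functions of the shared variables, the third requirement -- a format compatible with formulating and solving the decision -- is met as well.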
It is clear from these requirements that all-encompassing or complex modeling
schemes are not required; instead the emphasis is on utilizing any and all existing modeling
schemes used to evaluate design alternatives. These existing modeling schemes are best
described under the title of “system” modeling, so through the application of a decision-
based perspective to enterprise design a shift occurs from enterprise modeling to system
modeling. A review of existing system modeling techniques is given in the next section.
2.4 FROM ENTERPRISE MODELING TO SYSTEM MODELING
The message intended for this section is very simple and comes in two parts. The
first point is that the realm of existing system modeling techniques is broad enough to
encompass a substantial number of issues related to product design, manufacturing process
design, and organization design. The second point is that it is feasible to integrate a
significant portion of these models into a decision formulation, thus establishing a decision-
based approach to enterprise design.
However, the simplicity of this message creates a curious tension. On the one
hand, the validity of both points of this message may be brazenly obvious, especially given
an operations research or management science perspective. (These two points perhaps
define these fields.) On the other hand, any attempt to prove the first point by example
quickly balloons to encompass most fields of engineering as well as physics, management,
and perhaps computer science.
Approaching from another angle, these two points embody the issue of the extent to
which decisions are quantifiable. In other words, they address the extent to which
(mathematical) modeling can be applied in decision making. This line of thought is
introduced in Section 1.3.2 and is revisited in Section 6.6, but the short answer is that there
is no answer as yet; there are only opinions. The issue is in flux. Therefore, the approach
taken in this section is not to attempt to formally prove these points but instead to establish
their reasonableness and legitimacy. The intent is to establish the soundness and promise
of this research, and as with the development of any paradigm, its absolute validity will
only be established over time.
The reasonableness of the first point is established by highlighting some examples
and applications of system modeling techniques across product design, manufacturing
process design, and organization design in Sections 2.4.2, 2.4.3, and 2.4.4, respectively.
The reasonableness of the second point is established by creating a categorization of system
modeling schemes that illustrates their commonalities and how they meet the requirements
for model integration as described in Section 2.3.3. This categorization is developed in the
next section.
2.4.1 Categorization of System Modeling Schemes
Looking to the American Heritage Dictionary (1993), a model is defined as:
• A small object, usually built to scale, that represents in detail another, often
larger object.
• A schematic description of a system, theory, or phenomenon that accounts for
its known or inferred properties and may be used for further study of its
characteristics.
Another important issue is the purpose to which modeling is employed; Simon (1990)
states that, “modeling is the principal -- perhaps the primary -- tool for studying the behavior of large complex systems." Finally the engineering context of modeling is important;
models are intended to have real-world impact but accuracy must be balanced with
efficiency. This balance is struck by applying the concept of abstraction (Dictionary of
Computing, 1986), “the principle of ignoring those aspects of a subject that are not relevant
to the current purpose in order to concentrate more fully on those that are.” We can begin
now to appreciate the simplicity and elegance of the definition of modeling offered in
Section 1.3.3; it captures all of these issues (as long as the definition of “artifact” is
understood to include schematic descriptions). This definition is repeated below:
Modeling is “the construction of a small, inexpensive, and incomplete artifact that simulates some real-world domain in which we are interested.” (Gale and Eldred, 1996)
From these statements and definitions above it is easy to grant the nearly universal
applicability of modeling to all engineering and design activities. Does this mean that all
these activities can be integrated into a decision-based framework for enterprise design?
The answer is almost certainly not, at least not at this time. The reason can be found by
examining the requirements for model integration into a decision formulation as given in
Section 2.3.3:
• The model must be a function, at least in part, of a subset of the design
variables identified for a decision.
• The output of the model must contribute to evaluating a goal or constraint
deemed important in the decision formulation.
• The model must be represented in a format that is compatible with the process
of formulating and solving the decision.
The implicit assumption of the first of these requirements, emphasized explicitly in Section
1.3.2, is that these models must be represented mathematically. In other words, model
input and output must be represented numerically, and the model itself must be amenable to
computer representation.
With these requirements fresh in mind, the realm of system modeling can be
explored in order to determine the subset that is amenable to integration into decision
formulations. This exploration is accomplished by reviewing the categorizations of system
modeling schemes offered in the literature and in the end by offering an overarching
categorization that captures all of these issues.
How are system models categorized? Models can be categorized by their intent or
purpose. In this vein Simon (1990) labels models as either predictive or prescriptive, while
Bernus and Nemes (1996) state that models are built for communication, analysis,
coordination or control (of business processes). Similarly recall from Section 1.3.3 that
Askin and Standridge (1993) offer a list of model uses that include optimization,
performance prediction, control, insight, and justification, and they also offer a
categorization of the modeling process in terms of efficiency and effectiveness: “The
efficient modeler builds a mathematical description of the system and finds the optimal
solution to that model. The effective modeler builds a mathematical model of the system,
uses it to find a good solution to the model, and then modifies it with knowledge of
relevant externalities that are not included in the model to find a very good solution to the
real system!”
Models are also categorized by the type of system or behavior they are intended to
represent. A categorization of function, behavior and structure for models in design is
offered both by Gero (1995) and by Welch and Dixon (1992). Models are categorized by
Vernadat (1996) in terms of their representation of processes, activities, or functional
entities, and a categorization of modeling “organizations, processes, products, and
objectives” is offered by Christensen and coworkers (1996).
Finally, models are also categorized in terms of the characteristics of the modeling
technique itself. A representative categorization is offered by Gordon (1978) in the context
of system simulation, where models are categorized as being physical versus mathematical,
deterministic versus stochastic, and warranting continuous simulation versus discrete event
simulation.
When combined, these categorizations form a more general, and perhaps even simpler and more intuitive, categorization of system modeling schemes. The three issues of
importance are WHAT is being modeled, WHY it is being modeled, and HOW it is
modeled. The ‘what' is represented by the input to the model, and the ‘why’ is represented
by the output from the model, while the ‘how’ is represented by the actual model structure
or technique employed. A summation of the ‘what’ and ‘why’ categorizations is offered in
Figure 2.1. (The ‘how’ classification is omitted for clarity). Although the scope of this
research is intended to cover the entire range of the ‘what’s in Figure 2.1, system modeling
is employed to fill only the analysis role of the ‘why’s. Recall that synthesis is achieved
through the decision support paradigm shown in Figure 1.5, and that communication,
control and insight, while important, are not the central aspects of enterprise design under
consideration here. Therefore the categorization of Figure 2.1 can be expanded into more
detail focusing on analysis; this is illustrated in Figure 2.2.
[Figure 2.1 General Classification of System Modeling: instances of system models of products, processes, and organizations are classified by WHAT is modeled (function, behavior, system, process, structure) and WHY it is modeled (communication/justification, analysis, control, insight, synthesis/optimization, spanning efficiency and effectiveness).]
[Figure 2.2 Categorization of System Modeling from an Analysis Perspective: WHAT (model input: product architecture, geometric features, physical layout, jobs, tasks, facilities, people, functions, information flows, policies, strategies); HOW (model: system dynamics, discrete event simulation, queueing theory, living systems theory, regression, control theory, Monte Carlo simulation, probability theory, finite element analysis, expert opinion, physical prototypes); WHY (model output: performance, safety, cycle time, time to market, cost, environmental impact, customer satisfaction, quality, job satisfaction, market share, return on investment).]
Perhaps the primary component of the analysis perspective in Figure 2.2 is the idea
of using models to produce specific output measures; for other system modeling uses such
as communication or insight a model’s “output” may be more ephemeral (such as a
viewer’s “increase in understanding”). This perspective of analysis is inherent in common
engineering phrases such as “analyze the performance of the system”.
The categorization in Figure 2.2 also illustrates in a concrete sense how system
models can be integrated into a decision formulation such as shown in Figure 1.5. The
input to a model is framed in terms of the decision’s design variables, and the model’s
output is used to evaluate the extent to which a given goal or constraint of the decision is
met. The format or “how” of the model determines in large part the ease with which the model can be implemented on a computer.
A list of examples is given in Figure 2.2 for the “what”, “how” and “why” of
system modeling from an analysis perspective. These lists are independent, and therefore
any combination of items from each list may represent a particular instance of system
modeling. In product design applications, the design variables may be the product’s
architecture or its geometric features, which for example could be used as input for finite
element analysis to determine measures of the product’s performance and safety. Similarly
in organization design the design variables may be the specific jobs, tasks and people for a
given business process which are used as input to a discrete event simulation or queuing
theory model to compute measures of process time, cost and quality. In manufacturing
process design the facilities and their physical layout may be used as input to a
manufacturing simulation or used with probability theory to estimate the return on
investment for new equipment under consideration. These examples are by no means
comprehensive and are intended to give a flavor of the wide range of system modeling
applications.
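The organization design example above can be made tangible with a deliberately small sketch: a hand-rolled discrete event simulation of a single-server business process that computes the average cycle time of jobs. The arrival and service parameters are assumed purely for illustration.

import random

random.seed(1)

# Jobs arrive at a single server, wait if it is busy, then are processed.
# Cycle time = completion time - arrival time.
mean_interarrival, mean_service, n_jobs = 10.0, 8.0, 10000

arrival, last_finish, total_cycle = 0.0, 0.0, 0.0
for _ in range(n_jobs):
    arrival += random.expovariate(1.0 / mean_interarrival)  # next arrival
    start = max(arrival, last_finish)                       # wait if busy
    last_finish = start + random.expovariate(1.0 / mean_service)
    total_cycle += last_finish - arrival

print("average cycle time = %.1f" % (total_cycle / n_jobs))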
Even though the examples in Figure 2.2 are confined to an analysis perspective, it
would be incorrect to assume that they all could easily be integrated into a decision-based
approach to enterprise design. The reason, as discussed in the beginning of this section, is
again that not all modeling schemes can (or perhaps should) be represented mathematically.
For example, models constructed using Living Systems Theory (Miller, 1978) are used to
represent the relationship between functional components of a system, connected by flows
of matter/energy and information. These models are then used to diagnose the system’s
functionality and to engage in a process of designing at the function level of abstraction
(Peplinski, 1994). Models can be created to represent systems across product design,
manufacturing process design and organization design (Peplinski, et al., 1996b), and these
models may be extremely useful in a given design process, but representing them
mathematically has proven to be a substantially difficult research issue.
It may be true that all instances and categories of system modeling techniques can
be represented by, or transformed into, a mathematical representation. As an example, the
behavior of physical prototypes can be represented mathematically through a process of
statistical experimentation and regression analysis. Pushing the boundaries of what is
quantifiable is a continuing quest that exists on a larger scale than any one research effort,
and so its ultimate conclusion is beyond the scope of this work.
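The statistical route mentioned above can be sketched briefly: synthetic "prototype measurements" (assumed numbers, for illustration) are fit with a quadratic model by least squares, yielding a mathematical surrogate that could then enter a decision formulation.

import numpy as np

# Assumed prototype measurements: deflection observed at several loads.
load = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
deflection = np.array([0.9, 2.1, 3.8, 6.2, 9.1])

# Fit deflection = a*load**2 + b*load + c by least-squares regression.
a, b, c = np.polyfit(load, deflection, deg=2)

def deflection_model(x):
    # Mathematical surrogate for the physical prototype's behavior.
    return a * x**2 + b * x + c

print("predicted deflection at a load of 350: %.2f" % deflection_model(350.0))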
With this categorization and context of system modeling firmly in mind it is now
meaningful to survey system modeling applications across product design, manufacturing
process design, and organization design.
2.4.2 System Modeling and Analysis in Product Design
The sheer number of examples and applications of system modeling in product
design is overwhelming. Examples can be found in nearly every textbook in engineering,
whether it be in dynamics, controls, heat transfer, fluid or structural mechanics, materials
science, thermodynamics, or digital or analog circuit design. Examples also abound in
engineering journals and conference proceedings; it is a fair assumption to expect to find at
least one example in every issue.
There is a wealth of examples of applying system modeling to decision
formulations in product design; these examples span applications in the design of ships
(Mistree, et al., 1990b, Smith, 1988), automobiles, composite materials (Karandikar and
Mistree, 1992), aircraft (Chen, et al., 1996, Lewis and Mistree, 1996, Simpson, et al.,
1996), and gas turbine engines (Koch, et al., 1996), to name a few.
2.4.3 System Modeling and Analysis in Manufacturing Process Design
System models exist to support all levels of manufacturing process design, from
day-to-day scheduling and material releases, to medium-term planning and scheduling, to
long-term investment decisions in manufacturing capacity, facilities and equipment. These
system models fall into the categories of simulation (discrete event or system dynamic),
analytical queuing theory, and decision support systems, and are generally aimed at evaluating the efficiency of process alternatives in terms of process throughput, cost, and cycle times.
(Models also exist to quantify the effectiveness or technical feasibility of manufacturing
processes, but these models seem to fall into the areas of process planning or materials
science, which are outside the scope of this research.)
Application of discrete-event simulation to manufacturing processes has grown into
a research area in its own right; excellent textbooks on the subject are by Gordon (1978),
Law and Kelton (1991), Tumay and Harrell (1995), and Banks, et al. (1996). However,
perhaps the most comprehensive and up-to-date source for simulation modeling applications in manufacturing is the Proceedings of the Winter Simulation Conference, published annually by IEEE. As points of interest, descriptions of simulation and scheduling systems such as MRP and MRP II can be found in the management of technology literature (Noori, 1990), and a historical review of simulation languages can be
found in (Nance, 1995). An example of integrating manufacturing simulation models into
the type of decision formulations advocated in this thesis can be found in (Peplinski, et al.,
1996a). Examples of applying system dynamic simulation to manufacturing problems can
also be found in (Forrester, 1962) and in (Richmond, 1994).
An excellent discussion of queuing theory models and other analytical (closed-form) models of manufacturing is given by Askin and Standridge (1993); these models are intended for applications where the intensive effort and detail required to build simulation models are not warranted. They are usually back-of-the-envelope calculations that may preface a more detailed study, but interestingly, if the manufacturing system under study does meet the simplifying assumptions required for these techniques, then queuing theory models actually predict system behavior nearly as well as discrete-event simulation models².
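As a sketch of such a back-of-the-envelope calculation, the standard M/M/1 queueing formulas (one server, exponential interarrival and service times) give the utilization, mean number of jobs in the system, and mean cycle time in closed form. With an arrival rate of 0.1 and a service rate of 0.125 -- the same means assumed in the simulation sketch of Section 2.4.1 -- the closed form predicts a cycle time of 40, which the simulation approaches as the number of jobs grows.

def mm1_measures(arrival_rate, service_rate):
    # Standard M/M/1 results; Little's law gives L = arrival_rate * W.
    rho = arrival_rate / service_rate        # utilization, must be < 1
    assert rho < 1.0, "the system must be stable"
    W = 1.0 / (service_rate - arrival_rate)  # mean time in system
    L = arrival_rate * W                     # mean number in system
    return rho, L, W

rho, L, W = mm1_measures(arrival_rate=0.1, service_rate=0.125)
print("utilization = %.2f, jobs in system = %.1f, cycle time = %.1f" % (rho, L, W))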
Finally, system models also exist to support the long-range planning and strategic
investment processes in manufacturing. Cost/benefit analyses are performed to facilitate
the design of the decision process for strategic investment in advanced manufacturing
systems in (O'Brien and Smith, 1993), and similarly analytic methods are employed for
manufacturing investment decisions in (Swann and O'Keefe, 1990). Manufacturing
simulation and analysis is also employed in an enterprise-wide context in (Mujtaba, 1994).
In sum there are clearly applications of system modeling to all levels of
manufacturing process design, and these models, whether they be in the form of discrete-
event simulations, queuing theory models, or cost/benefit analyses, are amenable to
integration into a decision-based approach for enterprise design as is developed in this
dissertation.
² Personal communication with M. Berryman, 1996.
2.4.4 System Modeling and Analysis in Organization Design
The span of issues encompassed by organization design is daunting. It can be
focused on the redesign of a specific business process or the updating of a local
information technology system, but it also encompasses the realm of sweeping and long-
term issues such as facility location, new product introductions, and strategic planning. It
is not the intent in this section to argue that all of these aspects of organization design are
amenable to system modeling and analysis; certainly the more human, interpersonal and
ethical issues do not lend themselves to mathematical treatment and should remain in the
more traditional management domains. But just as certainly there are organization design issues that do lend themselves to system modeling and analysis; in this section two areas of application are introduced -- those of strategic management and of the design and management of product development processes.
In strategic management the trend of using system modeling and analysis may be
only beginning, but there are a handful of solid instances that serve to establish its
potential. Waddock and Isabella (1989) describe a computer banking simulation which has
been used to test assumptions about 1) the importance of the environment on the strategies
selected by strategic management, and 2) the impact of these managers’ beliefs about the
environment on the bank’s performance. The simulations are performed in a competitive
gaming environment in which the performance of multiple competing banks is recorded in
response to decisions made by strategic management over time. (Preliminary results have
indicated that both the environment and beliefs about the environment are important.)
Further, in strategic logistics planning, a computer simulation tool has been
developed (Hameri and Paatela, 1995) to provide fast and reliable appraisal of various
potential logistics scenarios considering customer requirements, supply constraints,
physical material flows, and measures of economic and ecological performance. Similarly,
strategic planners at Canon implemented a managerial simulation model for long-range
planning as early as 1970 (Nakahara and Isonon, 1992); this model has been used to
compute and revise figures such as sales volume, number of employees, production
capacity, and funds in correspondence with changes in the environment.
Finally, a literature analysis by Clark (1992) has indicated that in a survey of nearly
800 published reports in the areas of strategic planning and strategic management from the
years of 1980 through 1990, there were 90 instances of simulation being employed, 48
applications in which mathematical programming was cited, and 119 instances of the use of
decision support systems.
In the literature concerning the design and management of product development
processes, however, there has been a considerable amount of activity and the references
cited here are only a representative sample. Eppinger (Eppinger, 1991, Eppinger, et al.,
1994) has used the concept of the Design Structure Matrix (Steward, 1981) to represent the
tasks of a design process as well as the coupling between tasks; matrix operations can then
be performed in order to restructure the design process, ideally reducing the impact of
potential iterations between tasks. In related work (Nukala, et al., 1995), design process
models are represented using signal flow graphs, and these models are analyzed to yield
measures of process duration or lead time.
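The Design Structure Matrix idea can be conveyed with a minimal sketch (task names and couplings are hypothetical): tasks index the rows and columns of a binary matrix, a mark in row i, column j means task i needs output from task j, and marks above the diagonal represent dependencies on not-yet-completed tasks -- the potential iterations that restructuring tries to reduce.

import numpy as np

# Hypothetical process in its current order; dsm[i][j] = 1 means task i
# needs output from task j.
tasks = ["concept", "detail", "layout", "analysis"]
dsm = np.array([[0, 0, 0, 0],   # concept: no inputs
                [0, 0, 0, 1],   # detail needs analysis
                [1, 0, 0, 0],   # layout needs concept
                [0, 0, 1, 0]])  # analysis needs layout

def feedback_marks(matrix):
    # Marks above the diagonal are potential iteration loops.
    return int(np.triu(matrix, k=1).sum())

print("feedback marks before restructuring:", feedback_marks(dsm))  # 1

order = [0, 2, 3, 1]  # concept, layout, analysis, detail
reordered = dsm[np.ix_(order, order)]
print("feedback marks after restructuring: ", feedback_marks(reordered))  # 0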
Duffey and Dixon (1993) have created a model to help evaluate development
process cost and schedule for new product designs based on relational matrices that link
product attributes with associated realization activities and resources.
An interesting tack is taken by Christian, et al. (1996); they have developed an
event-based simulation that “uses agents to represent engineers working on a design project
within a virtual office environment, exchanging information and making decisions.”
Project durations are thus computed, as well as their sensitivities to factors such as task
size, agent efficiency, and communication efficiency. Simulations have been run and have
compared well against actual data of project durations. This simulation system is also
being employed in the offshore oil industry (Christensen, et al., 1996) to simulate project
team performance in terms of project duration and cost and process quality.
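The flavor of such agent-oriented project simulation can be conveyed with a deliberately crude sketch; every modeling choice below (greedy task assignment, the efficiency divisor, the communication penalty that grows with team size) is an assumption for illustration, not a description of the cited systems.

import random

random.seed(7)

def project_duration(task_sizes, n_agents, efficiency, comm_overhead):
    # Each task goes to the earliest-free agent; its duration is stretched
    # by the agent's efficiency and by a team-size communication penalty.
    finish = [0.0] * n_agents
    for size in task_sizes:
        agent = finish.index(min(finish))
        duration = size / efficiency + comm_overhead * (n_agents - 1)
        finish[agent] += duration * random.uniform(0.8, 1.2)  # noise
    return max(finish)

task_sizes = [random.uniform(2.0, 10.0) for _ in range(40)]
for team in (2, 4, 8):
    print(team, "agents ->", round(project_duration(task_sizes, team, 1.0, 0.5), 1))

Even such a toy exhibits the kind of sensitivity to team size and communication efficiency that the cited studies compute against real project data.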
In these last few sections a substantial number of system modeling techniques have
been reviewed, spanning applications in product design, manufacturing process design,
and organization design. Because the sheer number of actual applications is immense, this review has only skimmed the surface of this body of work. However, the majority of the
applications reviewed here share a common trait: they are all amenable to integration into a
decision formulation, as set forth in Section 2.4.1. Therefore these reviews serve as
support for the claims of domain-independence and mathematical rigor embodied by
Hypotheses 1.1 and 1.2; it should now appear reasonable and legitimate to establish a
decision-based approach to enterprise design, as argued for in Section 2.4. This approach
to enterprise design is developed in the next chapter.
2.5 PLACING THIS CHAPTER IN CONTEXT
The context of this chapter is illustrated in Figure 2.3. In this chapter the philosophical foundations of bounded rationality (Section 2.3.1) and empowerment (Section 2.3.2) have been established and then used to create a formal description of integrating models into decisions in Section 2.3.3. This approach fosters a shift away from enterprise modeling to
system modeling, and so reviews of system modeling techniques are given in Sections
2.4.2, 2.4.3, and 2.4.4. These modeling techniques feed into a categorization scheme,
developed in Section 2.4.1, that details how they can be integrated into decision
formulations and thus used in a method for enterprise design.
[Figure 2.3 Pictorial Representation of Chapter 2 Context: empowerment and bounded rationality, grounded in engineering design, DBD, OR, and systems theory, underpin the integration of models into decisions, the categorization of system modeling, and the overall philosophy for implementing the method (developed in this chapter); these build on the hybrid paradigm for decision support and the decision-based approach to enterprise design (developed in previous chapters) and feed the method for enterprise design, with its timeline procedure, revised task SP, guidelines for metamodeling, definition of interdependence between decisions, and integration of decisions and design processes (developed in chapters to come).]
CHAPTER 3
A DECISION-BASED APPROACH TO ENTERPRISE DESIGN
In this chapter an approach to enterprise design is developed, framed in terms of
formulating and solving multi-objective decisions. This approach to enterprise design is
embodied by a philosophy rooted in empowerment and bounded rationality (set forth in
Section 3.2) and methods and tools to:
• model and quantify the enterprise-wide effects of each local decision,
• identify and resolve the natural interdependencies that exist between decisions
across enterprise domains and through time, and
• rigorously formulate and solve each decision to identify the best solution
considering its multiple enterprise-wide effects.
If this approach is implemented at all levels and across all domains of an enterprise, it is
envisioned that the enterprise as a whole will be designed in a holistic and internally
consistent manner.
Tools and methods from the rich technology base of the Decision Support Problem
Technique are presented in Section 3.3, and the heart of the enterprise design approach, a
method for formulating and solving enterprise design decisions, is developed throughout
Section 3.4. Finally, a discussion of the computer infrastructure implicit in the approach to
enterprise design is given in Section 3.5.
3.1 WHAT IS PRESENTED IN THIS CHAPTER
[Chapter roadmap figure: Research Questions 1 and 2 motivate the work; Q1.1 (unified) and Q1.2 (quantifiable) pair with Hypotheses 1.1 and 1.2 under the hybrid paradigm for decision support (Hypothesis 1), while Q2.1 (integration across domains) and Q2.2 (integration through time) pair with Hypotheses 2.1 and 2.2 under the decision-based approach to enterprise design (Hypothesis 2). Existing methods, tools and techniques (system modeling; statistical metamodeling; the DSP Technique: DSPT Palette, compromise DSP, DSIDES, RCEM) and a philosophy based on engineering design, operations research, systems theory, bounded rationality, and empowerment support the contributions -- the revised task SP, guidelines for metamodeling, categorization of system modeling, and timeline procedure -- keyed to Sections 3.2 through 3.4.]
The philosophy for implementing enterprise design is developed in Section 3.2, the
Decision Support Problem Technique is presented in Section 3.3, and a unified and
rigorous method for formulating and solving enterprise design decisions is described
throughout Section 3.4. Hypothesis 1.1 is addressed continually throughout Section 3.3
and 3.4, and both Hypotheses 2.1 and 2.2 are addressed specifically in Section 3.4.5.
3.2 INTERDEPENDENCE, DECISIONS, AND EMPOWERMENT
Enterprise design is an activity that encompasses nearly all aspects of a
manufacturing organization. Whether or not this design activity is explicitly recognized,
decisions continue to be made, products are conceived and manufactured, management
policies are set, and strategies are formulated and pursued. Through the sum total of these
decisions the enterprise thus changes and evolves over time and experiences a greater or
lesser degree of success.
The employees of an enterprise are all working for common goals, so their activities
are naturally interdependent. However, because of the complexity of real-world enterprises
these interdependencies can often go unaddressed, thus leading to decisions that are locally
beneficial but may not support the more critical or long-term interests of the enterprise as a
whole. Thus grows the motivation for the integration of design efforts. If all design
activities could be coordinated and integrated, then all the interdependencies could be
explicitly addressed and resolved, thereby ensuring decisions that are good for the enterprise
as a whole. However, such coordination would require centralized planning and authority,
and such a command-and-control solution cannot be the answer. At the very least it runs
counter to the current management wisdom of employee empowerment (as discussed in
Section 2.3.2), and it actually may not be possible in terms of bounded rationality and the
inherent limitations of individual human cognition (as discussed in Section 2.3.1).
Instead, for enterprise design to succeed the concept of empowerment must be
turned from a limitation into a strength. Regardless of whether employees are aware of the
design implications of their actions, they continue to make decisions and this powerful
engine of enterprise design rumbles along. However, if each local decision maker were
equipped with methods and tools to identify the interdependencies among decisions and to
quantify the enterprise-wide effects of each decision, then perhaps all of the benefits of
7878
integration could be achieved while still fostering an open environment and maintaining an
engaging and fulfilling workplace. This is the vision that drives this research.
In this section we return to the theme established by the paradigm of decision
support of Section 1.3 and elaborated by the discussions of bounded rationality and
empowerment in Sections 2.3.1 and 2.3.2. This theme is centered on the intimate
relationship between decision-making and most creative human activity, and its
implications for self-expression, authority, motivation, and collaboration in an
organizational context. Specifically in enterprise design there arises a direct paradox:
• Many design activities (decisions) in an enterprise are interdependent; that is,
the ultimate value of the outcome of a decision is dependent on the outcome of
other decisions.
• Interdependence can be handled through integration -- the best outcome can be
obtained by explicitly coupling the decisions.
• However, because of the complexity of real-world enterprises it is not possible
to integrate all interdependent design activities into one all-encompassing
decision.
There are really two options, then, for developing an approach to enterprise design. Either
a prescriptive process can be specified that puts each design process in lock-step with the
others, thus enforcing coordination, or an empowered approach can be implemented in
which support is provided for identifying interdependencies, tools are provided for
modeling across boundaries, and a grass-roots approach is fostered for enterprise
improvement. This research follows the latter option. Interdependence is then addressed
in a “loose” manner -- decisions are coupled where possible, but some decisions must be
made separately; they can’t be made concurrently. (This is especially true for
interdependencies along a timeline as discussed in Section 5.2.) So the approach
7979
developed here is to make each decision taking into account the effects of the others and to
try to be robust to what can’t be controlled.
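One simple way to operationalize being "robust to what can't be controlled" is sketched below under assumed models: a candidate value of a local design variable is scored over sampled values of a coupled variable set by another decision maker, and the alternative with a good mean outcome and a small spread is preferred, in the spirit of robust design.

import random
import statistics

random.seed(3)

def outcome(x, other):
    # Hypothetical local payoff, degraded by mismatch with the other decision.
    return -(x - 5.0) ** 2 - 0.8 * abs(x - other)

def robust_score(x, n_samples=500):
    # Mean minus one standard deviation over the uncontrolled variable,
    # assumed here to be uniform on [2, 8].
    samples = [outcome(x, random.uniform(2.0, 8.0)) for _ in range(n_samples)]
    return statistics.mean(samples) - statistics.stdev(samples)

best = max([3.0, 4.0, 5.0, 6.0, 7.0], key=robust_score)
print("most robust alternative:", best)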
Through this discussion the five principles of leadership for an Integrated
Enterprise (given in Section 2.3.2) take on an even greater significance. The method for
implementing enterprise design is presented in Section 3.4; clearly, essential ingredients
for the success of such a method are building a shared vision among employees,
empowering the individual, establishing a comprehensive communications network,
fostering the democratization of information, and promoting distributed decision-making.
Their importance within a philosophy for enterprise design cannot be overemphasized.
Connecting the high-level philosophy of this section with the detailed method of
Section 3.4 is the body of research which is the Decision Support Problem Technique,
some relevant aspects of which are discussed in the next section.
3.3 FRAME OF REFERENCE: DECISION SUPPORT AND SUPPORT PROBLEMS
In this section key aspects of the Decision Support Problem (DSP) Technique are
presented to establish the foundation of the approach to enterprise design. These aspects
feed directly into the development of a task-based method for enterprise design; several
issues arise that must be addressed for such a method to be realized:
• The first issue is how the enterprise-wide effects of local decisions can be
modeled and quantified.
• The second issue is how such local decisions can be formulated so that they
handle multiple goals and allow models that may range from rules of thumb to
complex mathematical relationships.
• The third issue is how interdependencies among decisions can be resolved.
• The fourth issue is how a method that addresses the above issues can itself be
represented in a rigorous and consistent format.
The issue of quantification has already been introduced; the system modeling literature
reviews in Section 2.4 illustrate how far the boundaries of quantification have been
expanded across product, process, and organization design. However, these models must
still be integrated into a common decision formulation; this issue is introduced in Sections
3.4.4 and 3.4.6 and is the primary concern of Chapter 4.
The issue of formulating and solving multi-objective, mathematically rigorous
decisions has been a primary focus of previous work in the DSP Technique, and the
solution that is offered is the compromise DSP formulation in Section 3.3.3 and the
DSIDES solution software in Section 3.3.4.
The issue of handling interdependence is discussed in Section 3.4.5, and much of
the work has grown from the area of robust design. The foundation for this work lies in
the Robust Concept Exploration Method (RCEM), presented in Section 3.3.5. Additional
extensions are addressed throughout the remaining chapters, such as recommendations for
building robustness metamodels in Section 4.5.4 and a procedure for handling inter-
dependencies through time in Section 5.4. The issue of representing the method itself is
introduced in Section 3.3.2 through the notion of the DSPT Palette. The Palette is used as
a modeling scheme for representing the method of Section 3.4.
3.3.1 The Decision Support Problem Technique
The DSP Technique is rooted in the philosophy of Decision-Based Design and is
built around the concept of a human-computer partnership, both as discussed in Section
1.3.2. Therefore the DSP Technique consists of three principal components: a design
philosophy expressed in terms of paradigms, an approach for identifying and formulating
decision support problems, and the software necessary for their solution. These
components are embodied in part by the following:
• Methods for modeling, evaluating and improving design processes (Bras,
1992, Bras and Mistree, 1991)
• A formal structure for representing and formulating decisions as Decision
Support Problems (Mistree, et al., 1990b; Mistree, et al., 1991)
• Computer software for solving Decision Support Problems (Mistree, et al.,
1993a, Mistree, et al., 1989a)
• A holistic computer environment that fosters concurrent engineering called the
DSPT Workbook (Allen, et al., 1989).
The methods and tools of the DSP Technique are constructed to be domain-independent so
that they can apply uniformly to the design of products, processes, and organizations, all
from the perspective of "system" design. By implication, then, a design process itself is a
system that can be designed. This activity of designing the design process is an integral
part of the DSP Technique and is called metadesign. In order to facilitate metadesign,
techniques must be established for modeling design processes so that they can be analyzed,
manipulated and implemented. This modeling of design processes is accomplished with
the DSPT Palette, described in the next section.
3.3.2 DSPT Palette and Support Problems
It is a central goal in the DSP Technique to support designers across all design
activities. Therefore, a modeling scheme must be developed to represent the activities that
comprise a design process, including entities such as phases, events, decisions, tasks, and
the like. The set of basic entities used to model a design process is called the DSPT Palette
(Bras, 1992, Bras and Mistree, 1991), analogous to the palette of a painter. A designer
working within the DSP Technique has the freedom to use submodels of a design process
(prescriptive models) created and stored by others and to create models (descriptive
models) of the intended plan of action using the aforementioned entities. These
descriptions along a time-line are used as prescriptions in implementing the design process.
The fundamental motivation for the DSPT Palette is the belief in the importance of
representing design processes on a computer in a form that can be manipulated. This
representation then facilitates analysis and debugging of the processes before
implementation. More importantly, it facilitates improvement by finding and eliminating
redundancies, inconsistencies, and detecting (sub)processes that are independent of each
other and can therefore be implemented concurrently. The DSPT Palette contains three
different classes of entities, namely, Potential Support Problem entities, Base entities and
Transmission entities (see Figure 3.1). The most complex entities in the DSPT Palette are
the Potential Support Problem entities, being phases, events, tasks, decisions and systems.
Base entities are the most elementary entities for modeling design processes. Base entities
are implementable on a computer and/or are easy to understand by designers.
Transmission entities are used to achieve the connections between the aforementioned
entities, usually embedding a list of other DSPT Palette entities that are transmitted from
one entity to another.
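To illustrate the palette's premise that design processes should be representable and manipulable on a computer, a minimal, hypothetical rendering of the three entity classes as data types follows; the actual palette is far richer than this sketch.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    name: str

@dataclass
class Task(Entity):           # a Potential Support Problem entity ("T")
    pass

@dataclass
class Decision(Entity):       # a Potential Support Problem entity ("?")
    kind: str = "compromise"  # or "selection", "preliminary selection"

@dataclass
class Transmission:           # connects entities, carrying a list of contents
    source: Entity
    target: Entity
    contents: List[str] = field(default_factory=list)

# A two-entity fragment of a process network:
generate = Task("generate concepts")
screen = Decision("identify suitable concepts", kind="preliminary selection")
flow = Transmission(generate, screen, contents=["information"])
print(flow.source.name, "->", flow.target.name, flow.contents)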
Because of their importance, the potential support problem entities warrant further discussion. The phase icon is identified by a “P” and is used to represent pieces
of a partitioned process. Events occur within a phase and the event icon is identified by an
“E”. Phases and events are accomplished by performing tasks and making decisions.
Tasks and decisions require direct involvement of human designers and/or systems, in
contrast to phases and events on which human designers do not have direct influence. A
task is an activity to be accomplished. The design process itself is a task for the design
team, namely, “design a suitable product”. A task itself may contain other tasks and
decisions, even phases and events, as in the design task. However, simple tasks like “run
computer program A” do not involve decisions. In the palette a task is identified by a “T”.
[Figure 3.1 The DSPT Palette (Bras and Mistree, 1991): Potential Support Problem entities (phase P, event E, task T, system, and decision ?, including compromise, selection, and preliminary selection decisions); Base entities (goal, constraint, bound, rule, loop, equality, assignment, function, inequality relationships, system variable, analytical, limiting, and conditional relationships, auxiliary parameter); and Transmission entities (information, energy, matter, and their combinations).]
A decision icon is defined by a rectangle with a question mark within it. This
choice is natural, as a question mark often connotes a call for a decision. Currently we
have included the compromise, selection, and preliminary selection decisions in the palette.
The corresponding icons are a combination of the basic decision icon with some further
symbolism (Mistree, et al., 1990b). In the DSP Technique,
• selection is the process of making a choice between a number of possibilities
taking into account a number of measures of merit or attributes.
• compromise is the process of determining the “right” values (or combination) of
design variables such that the system being designed is feasible with respect to
constraints and system performance is maximized.
Selection is a converging activity since the number of alternatives is reduced. The icons for
both selection and preliminary selection characterize this in the palette. The selection
decision icon has a single point, indicating that on solving a selection decision a single
alternative is identified for further development. The preliminary selection icon is similar
but it does not end with a point, indicating that on making a preliminary selection decision a
number of most-likely-to-succeed concepts are identified for further development. The
icon for a compromise decision ends in a “C”. A compromise represents a trade-off
between conflicting goals. When there is no conflict between the goals the solution could
be represented by the upper or lower extremes of the C in the rectangle. However, when
there is a conflict, which invariably is the case, the solution emerges from the middle; it
represents a compromise between two extremes.
Using the DSPT Palette, design processes themselves can be designed in a process
called metadesign. A process network is created by assembling the potential support
problem entities through flows of the transmission entities. For example, a model of a
conceptual design event in ship design is illustrated in Figure 3.2.
In Figure 3.2 the primary goal of the conceptual design event is to establish the
basic concept. Therefore, concepts have to be generated from general design knowledge.
A preliminary selection decision is used to identify the most suitable concepts for further
development. This is to be followed by making a compromise decision to improve these
concepts through modification. Finally, a selection decision is to be made to identify the
basic concept. Attributes are needed for both the preliminary selection and the selection
decisions. Therefore, a task is introduced in the model for determining the most influential
attributes from the Naval Staff Requirements. For the compromise decision we need to
know the areas to be improved and what the goals and constraints are. The goals and
constraints are extracted from the Naval Staff Requirements. The task of finding the areas
of possible improvement depends on the concept to be improved and general design
knowledge. After having improved the concepts, the basic concept is selected.
[Figure 3.2 appears here. It models a conceptual design event (E) for ship design: from the Naval Staff Requirements (NSR), one task determines the influencing attributes and another extracts the goals and constraints; a task generates concepts from general design knowledge; a preliminary selection decision identifies suitable concepts; a task determines areas for potential improvement; a compromise decision produces improved concepts; and a selection decision identifies the basic concept. Information transmissions (i) connect the entities.]

Figure 3.2 A Model of a Conceptual Design Event (Bras and Mistree, 1991)
Given this capability to model and manipulate a design process, the next step is to
support designers as they perform each phase, event, task, and decision in the process.
Such support is offered through the notion of support problems. The completion of a
support problem for a process entity signifies the representation of the entity in terms of
base entities, which can be represented on a computer. Formulation of a support problem
begins with a word formulation and then proceeds to a mathematical formulation (in terms of
base entities).
COMPROMISE DSP
Given     Symbolic and mathematical Base entities and Support Problems necessary for evaluating the goals, constraints and bounds and the deviation function.
Find      System variables (symbolic and mathematical).
Satisfy   Goals, constraints and bounds, i.e., symbolic and mathematical limiting relationships.
Minimize  A deviation function.

Figure 3.3 Compromise DSP Word Formulation (Bras and Mistree, 1991)
The word formulation of an SP consists of keywords and descriptors. The
keywords and descriptors for the compromise DSP, the selection DSP, and the task SP are
given in Figure 3.3, Figure 3.4, and Figure 3.5, respectively. Note that we express the descriptors in terms of DSPT Palette entities. Phase and event support problem keywords and
descriptors reduce to identifying the tasks and decisions contained within each phase and
event and are thus not repeated here; SP descriptions for phases, events and systems are
given in (Bras and Mistree, 1991). Similarly the preliminary selection DSP is a special
type of selection DSP and is discussed in (Mistree, et al., 1990a). The heuristic DSP and
its keywords and descriptors are presented in (Kamal, 1990). Dealing with issues of
information uncertainty creates modified word formulations; fuzzy compromise is tackled
in (Allen, et al., 1990), Bayesian compromise is addressed in (Vadde, et al., 1991), and
selection using interval arithmetic is explored in (Reddy and Mistree, 1992). These word
formulations illustrate well the concept of task and decision support; they embody
prescriptive procedures aimed at supporting a designer through each activity.
SELECTION DSP
Given     Alternatives, i.e., DSPT Palette entities.
Identify  • The principal attributes influencing selection.
          • The relative importance of attributes.
Rate      Alternatives with respect to each attribute.
Rank      The feasible alternatives in order of preference based on the computed merit function values.

Figure 3.4 Selection DSP Word Formulation (Bras and Mistree, 1991)
TASK SP
Given     • The task.
          • The object of the task (the transmission from the task).
          • Palette entities necessary for describing and performing the task.

Figure 3.5 Task SP Word Formulation (Bras and Mistree, 1991)
For both the compromise DSP and the selection DSP the word formulations above
provide the foundations for rigorous mathematical formulations that can be implemented
and solved on computer. These rigorous mathematical formulations are presented in Section 3.3.3, and the solution software employed is discussed in Section 3.3.4. The
word formulation for the Task SP provides the basis for representing the method for
enterprise design in a rigorous and consistent format as called for at the outset of Section
3.3; the Task SP is revisited in Section 3.4.1.
Of central importance in this method for enterprise design is the compromise DSP.
A continuing theme throughout this research has been the importance and worth of
formulating and solving a decision mathematically, handling multiple goals and allowing
models that may range from rules of thumb to complex mathematical relationships. This
formulation is embodied by the compromise DSP and is described in the next section.
3.3.3 The Compromise Decision Support Problem
In this section some of the issues that were raised in Sections 1.3.1, 1.3.2, and
1.3.3 are brought to their conclusion, and this conclusion is the compromise DSP. There
is a strong correlation between the paradigm for decision support illustrated in Figure 1.6
and the compromise DSP; in fact, the paradigm itself facilitates the creation of compromise
DSPs. This relationship is made clear in Figure 3.10.
Much of the mathematical foundation of the compromise DSP is drawn from
operations research as discussed in Section 1.3.1; the compromise DSP is a multiobjective
decision model which is a hybrid formulation based on Mathematical Programming and
Goal Programming (Mistree, et al., 1993a). The compromise DSP is a proven vessel for
such human/computer partnerships in design as described in Section 1.3.2; it has been
successfully used in designing aircraft, thermal energy systems, mechanisms, damage-tolerant structural systems, ships, and composite materials.
The compromise DSP may be used as an “optimization” model as discussed in
Section 1.3.3, although because of its inherently multiobjective nature it also serves well as
a satisficing model. Within the DSP Technique it is the primary model for achieving
synthesis as described in Figure 1.6; it is used to determine the values of design variables
to satisfy a set of constraints and to achieve as closely as possible a set of conflicting goals.
COMPROMISE DSP

Given
   An alternative that is to be improved through modification.
   Assumptions used to model the domain of interest.
   The system parameters:
      n       number of system variables
      l       number of discrete/integer system variables
      p+q     number of system constraints (p equality constraints, q inequality constraints)
      m       number of system goals
      gi(X)   system constraint functions, gi(X) = Ci(X) - Di(X)
      fk(di)  function of deviation variables to be minimized at priority level k for the preemptive case
      Wi      weight for the Archimedean case
Find
   The values of the independent system variables (they describe the physical attributes of an artifact):
      Xi,  i = 1, ..., n
   The values of the deviation variables (they indicate the extent to which the goals are achieved):
      di-, di+,  i = 1, ..., m
Satisfy
   The system constraints (linear, nonlinear) that must be satisfied for the solution to be feasible; no restriction is placed on linearity or convexity:
      gi(X) = 0,   i = 1, ..., p
      gi(X) ≥ 0,   i = p+1, ..., p+q
   The system goals that must achieve a specified target value as far as possible; no restriction is placed on linearity or convexity:
      Ai(X) + di- - di+ = Gi,   i = 1, ..., m
   The lower and upper bounds on the system:
      Ximin ≤ Xi ≤ Ximax,   i = 1, ..., n
      di-, di+ ≥ 0   and   di- · di+ = 0
Minimize
   The deviation function, a measure of the deviation of the system performance from that implied by the set of goals and their associated priority levels or relative weights:
      Case a: Preemptive (lexicographic minimum)
         Z = [ f1(di-, di+), ..., fk(di-, di+) ]
      Case b: Archimedean
         Z = Σ Wi (di- + di+),   with Σ Wi = 1, Wi ≥ 0, i = 1, ..., m

Figure 3.6 Mathematical Form of a Compromise DSP
A mathematical formulation of the compromise DSP is given in Figure 3.6. It is
considered a hybrid formulation in that it incorporates concepts from both mathematical
programming and goal programming, and it also makes use of some new ones. What
distinguishes the compromise DSP formulation from goal programming is the fact that it is
tailored to handle common engineering design situations in which physical limitations
manifest themselves as system constraints (mostly inequalities) and bounds. It is similar to
goal programming in that the multiple objectives are formulated as system goals (involving
both system and deviation variables) and the deviation function is solely a function of the
goal deviation variables. Constraints and bounds are handled separately from the system
goals, contrary to the goal programming formulation in which only goals are used.
The terms compromise DSP and Mathematical Programming (see, for example, Vanderplaats, 1984 and Arora, 1989) are synonymous to the extent that both refer to system constraints that must be satisfied for feasibility. They differ in the way the goodness
of the solution is modeled and evaluated. In the compromise DSP the goodness is modeled
by the system goals (which are a function of both the system and the deviation variables)
and a measure of the goodness is provided by the deviation function. The deviation
function is modeled using deviation variables only. This is in contrast to traditional
mathematical programming where multiple objectives are modeled as a weighted function
of the system variables only.
As shown in Figure 3.6, in the compromise DSP special emphasis is placed on the
system variable bounds, unlike in traditional mathematical programming and goal
programming. In effect, the compromise DSP then is a hybrid formulation. The traditional
mathematical programming formulation is a subset of the compromise DSP (an indication
of the generality of the compromise formulation) and the compromise DSP is a subset of
goal programming. In the compromise DSP goals may be rank-ordered into priority levels
to effect a solution on the basis of preference. The lexicographic minimum concept
(Ignizio, 1985) is used to evaluate different design scenarios quickly by changing the
priority levels of the goals.
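To make the Archimedean case of Figure 3.6 concrete, the following sketch poses a small linear compromise DSP and solves it with a general-purpose linear programming routine. It is an illustration only, not a description of DSIDES, and the two goals, their targets, the single constraint, and the weights are invented for the example.

    # A minimal Archimedean compromise DSP, assuming linear goals, one linear
    # constraint, and invented targets and weights.  Decision vector:
    # [x1, x2, d1m, d1p, d2m, d2p], where dim/dip are the underachievement
    # and overachievement deviation variables for goal i.
    import numpy as np
    from scipy.optimize import linprog

    W = [0.6, 0.4]                                 # Archimedean weights, sum to 1
    c = np.array([0, 0, W[0], W[0], W[1], W[1]])   # Z = sum Wi (di- + di+)

    # Goals Ai(X) + di- - di+ = Gi:
    #    x1 + x2 + d1- - d1+ = 10
    #   2x1 + x2 + d2- - d2+ = 12
    A_eq = np.array([[1, 1, 1, -1, 0,  0],
                     [2, 1, 0,  0, 1, -1]])
    b_eq = np.array([10, 12])

    # System constraint g(X) >= 0:  x1 - x2 + 4 >= 0, written as x2 - x1 <= 4.
    A_ub = np.array([[-1, 1, 0, 0, 0, 0]])
    b_ub = np.array([4])

    # Bounds Ximin <= Xi <= Ximax; deviation variables are nonnegative, and
    # di- * di+ = 0 holds automatically at an LP optimum with positive weights.
    bounds = [(0, 8), (0, 8)] + [(0, None)] * 4

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    x1, x2, d1m, d1p, d2m, d2p = res.x
    print(f"X = ({x1:.2f}, {x2:.2f}), deviation function Z = {res.fun:.3f}")

Because the two goals conflict with the constraint, the solution settles on nonzero deviations; changing the weights shifts which goal absorbs the compromise.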
The preceding discussion serves to give a basic understanding of the formulation of
Figure 3.6; further details can be found in the comprehensive discussion in (Mistree, et al.,
1993a). Hand in hand with the compromise DSP, software has been created to facilitate
the solution of DSPs on computer. The software package is entitled Decision Support in
the Design of Engineering Systems or DSIDES and is described in the next section.
3.3.4 DSIDES: Software for Decision Support
Solutions to compromise DSP templates can be found using different optimization
methods. The choice of optimization method depends, to a certain extent, on the problem.
Solution algorithms fall into two categories, namely,
• those that solve the exact problem approximately, and
• those that solve an approximation of the problem exactly.
Gradient-based methods, pattern search methods, and penalty function methods fall into the
first category whereas methods involving sequential linearization fall into the second
category. Within the DSP Technique the DSIDES (Decision Support In the Design of
Engineering Systems) system has been created to facilitate the solutions of compromise
DSPs (Mistree, et al., 1989a). The primary solution algorithm embodied in DSIDES is the
ALP (Adaptive Linear Programming) algorithm which is an extension of sequential linear
programming (Mistree, et al., 1981) to address multilevel, hierarchical problems. The
sequential linear programming approach was chosen in 1981 because it had arguably the
highest potential for being used to develop a single algorithm for solving a range of DSPs
in engineering design. More recently, Azarm et al. (1988) report that this is one of the
most widely used approaches. Three important features contribute to the success of the
ALP algorithm, namely,
• the use of second-order terms in linearization,
• the normalization of the constraints and goals and their transformation into generally well-behaved convex functions in the region of interest, and
• an “intelligent” constraint suppression and accumulation scheme.
These features are described in detail in (Mistree, et al., 1981). A block diagram of the
implementation of the ALP algorithm is shown in Figure 3.7. A user specifies the input to
the software implementation of the algorithm in the form of a DSP template. This template
consists of data and user-provided Fortran routines. The data is used to define the problem
size, the names of the variables and constraints, the bounds on the variables, the linear
constraints, and the convergence criteria. The Fortran routines are used to evaluate the
nonlinear constraints and goals, to input data required for the constraint evaluation routines
and the design-analysis routines, and to output results in a format desired by the user.
Access is provided to a design-analysis program library from the analysis/synthesis cycle
and also within the synthesis cycle. In the design of major systems it is desirable to use the
design-analysis interface associated with the analysis/synthesis cycles (e.g., for using finite
element programs). It has been found necessary to use both interfaces for solving
large, analysis-intensive problems (Karandikar and Mistree, 1991).
Once the nonlinear compromise DSP is formulated, it is approximated by
linearization. At each stage the solution of the linear programming problem is obtained by a
Revised Dual Simplex or a Multiplex algorithm (Ignizio, 1985). The choice among these
algorithms depends on the form of the deviation function. The deviation function that is
given in the mathematical form of the template can be implemented in two ways:
1. In the Preemptive form as a lexicographic minimum of the goal deviation variables.
In this case the Multiplex algorithm is used.
2. In an Archimedean form as a weighted function of the goal deviation variables. This reduces the formulation of the template to a traditional single-objective optimization, and the Revised Dual Simplex or the Multiplex algorithm is used; a sequential emulation of the preemptive case is sketched below.
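The preemptive case can be emulated without a dedicated solver by solving one linear program per priority level and freezing each level's achievement before descending to the next. The following is a minimal stand-in for the lexicographic minimum, assuming the same illustrative problem data as in the Archimedean sketch; it does not reproduce the internals of the Multiplex algorithm.

    # Lexicographic minimum by sequential LPs: minimize the level-1
    # deviations, then minimize level 2 with level 1 frozen at its optimum.
    import numpy as np
    from scipy.optimize import linprog

    A_eq = np.array([[1, 1, 1, -1, 0,  0],
                     [2, 1, 0,  0, 1, -1]], float)
    b_eq = np.array([10, 12], float)
    A_ub = [np.array([-1.0, 1, 0, 0, 0, 0])]     # x2 - x1 <= 4
    b_ub = [4.0]
    bounds = [(0, 8), (0, 8)] + [(0, None)] * 4

    # fk picks out the deviation variables minimized at priority level k.
    levels = [np.array([0, 0, 1, 1, 0, 0], float),   # level 1: goal 1
              np.array([0, 0, 0, 0, 1, 1], float)]   # level 2: goal 2

    for k, c in enumerate(levels, start=1):
        res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.array(b_ub),
                      A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        print(f"priority level {k}: f{k}* = {res.fun:.3f}")
        A_ub.append(c)                 # freeze this level's achievement:
        b_ub.append(res.fun + 1e-9)    # c . v <= fk* for all lower levels
    print("X =", np.round(res.x[:2], 3))

Reordering the entries of levels is all it takes to explore a different design scenario, which is precisely the appeal of the lexicographic scheme noted above.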
[Figure 3.7 appears here. It is a block diagram of the ALP implementation: a nonlinear compromise DSP template (a data file plus evaluation routines) yields an initial design; within the synthesis cycle the linearized DSP is formulated, solved by the Revised Dual Simplex or Multiplex algorithm, reformulated, and adapted until converged; an outer analysis cycle evaluates the design constraints and goals through external analysis programs and modifies the design until overall convergence, at which point the process stops.]

Figure 3.7 Implementation of the ALP Algorithm for Solving Compromise DSPs (Mistree, et al., 1989a)
The ALP Algorithm itself, shown in Figure 3.7, is only capable of handling
continuous and Boolean variables. However, in recent work by Lewis (Lewis, 1996,
Lewis and Mistree, 1996), the ALP Algorithm has been extended to handle discrete and
integer variables using an intelligent search engine. The new solution algorithm to solve
mixed discrete/continuous design problems is called the Foraging-directed Adaptive Linear
Programming (FALP) Algorithm. FALP contains three primary constructs: the ALP
Algorithm, a search engine, and the compromise DSP. The ALP Algorithm uses gradients
to move through the continuous design space, while the search engine, based on the notion
of foraging of animals in the wild, intelligently searches the discrete solution space for
promising regions. Information is passed from the discrete solver to the continuous solver
using the common mathematical construct of the compromise DSP.
In the preceding sections an in-depth discussion has been provided for how
multiobjective, nonlinear, mixed discrete/continuous decisions can be formulated
mathematically and solved on computer within the DSP Technique. An important
extension of these capabilities is the Robust Concept Exploration Method (RCEM), which
addresses the issue of adding robustness to such decision formulations. Accompanying this development has been the shift from developing point solutions to identifying
ranged sets of specifications. The notion of robustness is central to the methods for
handling interdependence between decisions addressed in Section 3.4.5, and the notion of
ranged sets of specifications has become a standard component of DSP solution as
discussed in Section 3.4.7. The RCEM is introduced in the next section.
3.3.5 The Robust Concept Exploration Method
Recent work within the DSP Technique has seen the development of a Robust
Concept Exploration Method (RCEM) (Chen, 1995; Chen, et al., 1995). Central to this
development is the integration of robust design techniques, design of experiment
techniques, and response surface methodology within the framework of the compromise
DSP. The RCEM facilitates an efficient and effective concept exploration process as robust
top-level design specifications are identified for the design of complex systems. In this
context, robustness of specifications is measured in terms of sensitivity to changes in requirements -- thus the focus is to minimize the effects of downstream design changes on the conceptual design.
[Figure 3.8 appears here. It shows the RCEM computer infrastructure: overall design requirements enter at A (Factors and Ranges), where control factors x, noise factors z, and responses y are defined for the product or process; B (Point Generator) designs experiments using, for example, Plackett-Burman, full or fractional factorial, Taguchi orthogonal array, or central composite designs; C (Simulation Programs) holds the rigorous analysis tools behind an input and output processor; D (Experiments Analyzer) eliminates unimportant factors, reduces the design space to the region of interest, and plans additional experiments; E (Response Surface Model) fits y = f(x, z), with mean μ̂y = f(x, μz) and variance σ̂²y = Σi=1..k (∂f/∂zi)² σ²zi + Σi=1..l (∂f/∂xi)² σ²xi; and F (the Compromise DSP) finds the control variables that satisfy the constraints, the goals (“mean on target”, “minimize deviation”, “maximize independence”), and the bounds while minimizing the deviation function, yielding robust, top-level design specifications.]

Figure 3.8 The Robust Concept Exploration Method (Chen, 1995)
The computer infrastructure for implementing the RCEM is composed of four
generic processors surrounding a central ‘slot’ for inserting existing, domain-dependent
analysis tools as simulation programs, as shown in Figure 3.8. The simulation programs
(existing analysis programs) are used to evaluate the performance of a minimum number of
conceptual designs. The RCEM processors increase computational efficiency and facilitate
the generation of top-level design specifications. The point generator (processor B) is used
to design the necessary screening experiments. The experiments analyzer (processor D) is
used to evaluate the results of the screening and to plan additional experiments. The
response surface model processor (E) is used to create response surface models, and the
compromise DSP processor (F) is used to develop robust top-level design specifications.
A Factors and Ranges: Design variables are classified following the
terminology and principles used in Taguchi’s robust design to define the initial concept
exploration space. Design variables are defined as either control factors (under the designer's control) or noise factors (not under the designer's control), and the appropriate range of values
for each is specified. The responses (performance measures) are also identified, along with
the performance goals (signals). The range of interest for each response is also determined
for use in reducing the problem. The means for predicting the performance must also be
identified. The focus in robust design is to reduce both the effect of the noise factors on performance and the effect of variations in control factor values on performance.
B, C, D Sequential Experimentation: A low order experiment is
designed, the experiments simulated (conceptual designs generated), and the results
analyzed. Significant design variables are identified (design drivers), and insignificant
parameters are fixed. Higher order experiments are designed and conducted as necessary
and the results analyzed. Thus the number and order of the experiments are gradually increased while the size of the problem is gradually reduced.
E Elaborate Response Surface Models: Response surface models are
created to replace the original analysis tools when exploring concepts to generate top-level
specifications. The response surface equations map the factor-response relationship.
When the order of experimentation is satisfactory, the results are analyzed using regression
analysis and analysis of variance to determine the significance of the fit. When the fit is
significant, the final response surface models are defined.
F Determine the Top-Level Specifications: The response surface
models and overall design requirements are formulated within the compromise DSP to
generate the top-level design specifications. The values of control factors identified in this
step become the top-level design specifications. Different design scenarios can be rapidly
explored by changing the priority levels of the goals.
Using the RCEM a design space can be quickly and efficiently populated in the
early stages of design. Simulations are run at a set of design points, and response surfaces
are generated that relate product performance to design variable values. These fast analysis
modules are then integrated into the compromise DSP and the best regions of design
solutions are determined based on multiple measures of merit.
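Processors B through E can be suggested in miniature, assuming a cheap algebraic function standing in for a rigorous simulation program: a small factorial experiment is run, a second-order response surface is fit by least squares, and the fitted surface then substitutes for the simulation during exploration. The function and its coefficients below are invented for the example.

    # Processors B-E of Figure 3.8 in miniature: design points, "simulate",
    # fit a second-order response surface, and reuse it as a fast analysis
    # module.  simulate() is a stand-in for a rigorous analysis tool.
    import itertools
    import numpy as np

    def simulate(x1, x2):
        return 5 + 2 * x1 - 3 * x2 + 0.5 * x1 * x2 + x1 ** 2

    # B. Point generator: a 3-level full factorial over the region of interest.
    levels = [-1.0, 0.0, 1.0]
    X = np.array(list(itertools.product(levels, levels)))
    y = np.array([simulate(*p) for p in X])       # C. run the "simulations"

    # E. Response surface y ~ b0 + b1 x1 + b2 x2 + b3 x1 x2 + b4 x1^2 + b5 x2^2
    def basis(p):
        x1, x2 = p
        return [1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

    B = np.array([basis(p) for p in X])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)  # least-squares regression

    # The fitted surface now replaces simulate() during concept exploration.
    surrogate = lambda p: float(np.array(basis(p)) @ coef)
    test = (0.3, -0.7)
    print("simulation:", simulate(*test), " surrogate:", round(surrogate(test), 4))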
Armed with these elements of the DSP Technique a method for implementing
enterprise design can be constructed, and this method is laid out in the next section.
3.4 ENTERPRISE DESIGN AS A NETWORK OF SUPPORT PROBLEMS
In this section a formal method is presented for implementing enterprise design in
terms of formulating and solving decision support problems. This method is itself
composed of a series of tasks and decisions connected by flows of information, so the
formalism of the DSPT Palette is employed to bring the method to life. The overall method
is laid out in Section 3.4.2, and each of its tasks is fleshed out in more detail in the
subsections that follow. However, because of its importance in this method, we must first
return to the formulation of task support problems. This is done in the next section.
3.4.1 Refining the Notion of a Task Support Problem
Tasks play a central role in this method for enterprise design, as they do for perhaps
all prescriptive and descriptive design methods. Following the DSP Technique, it is
important that a designer is supported in performing these tasks. Returning to the
formulation of the task support problem in Section 3.3.2, there is only the “given”
keyword and no active verbs of support. This warrants further consideration.
Recall from the introduction of the DSPT Palette in Section 3.3.2 the complexity of the relationship between tasks and decisions. As discussed in
(Bras and Mistree, 1991), a task itself may contain other tasks and decisions, even phases
and events, as in the design task. Similarly, a decision may contain other phases, events,
tasks and decisions. For instance, “design a ship” is clearly a task, and this task can be
partitioned into a complex design process, only a part of which is illustrated in Figure 3.2.
However, if we add to this formulation (in the “given” part) the task “satisfy the client’s
requirements” then the task SP becomes a compromise DSP because the compromise DSP
keyword “satisfy” is used. The expression “design a ship” thus becomes a compromise
DSP with an incomplete formulation. By the same logic, “make a decision” could also be
classified as a task.
It is clear that these potentially overlapping definitions of task and decision support
problems must be refined to resolve any ambiguity, and it is equally clear that the work
falls in the arena of refining the definition of a task support problem. The formulation of
the compromise DSP has received a considerable amount of attention and refinement and is
very well established, whereas the task support problem has not been hardened.
The heart of the solution lies in the definition of “decisions” in a keyword sense.
As discussed in Section 1.3, this definition is primarily focused on complex, high-impact
decisions. This should be clearly evident given the procedure for formulating and solving
DSPs -- the compromise DSP math formulation of Section 3.3.3 and the DSIDES software of Section 3.3.4. The power of such tools would be squandered on decisions that require
little thought.
The spectrum that arises for classifying decisions is the level of thought required for
their solution. If a decision is rote or routine then in effect the solution is known a priori,
and the actual decision-making may require almost no thought. Decision and action
become nearly inseparable. In the extreme, then, a decision that requires almost no thought
is equivalent to a task. Just as the keyword definition of a decision applies to the subset of
all decisions that requires a greater level of thought, the keyword definition of a task
addresses the other end of the spectrum. Of course, this spectrum is not sketched in
absolute terms of black and white; it is instead described in shades of grey. The applications of the task support problem and the decision support problem thus extend to cover the whole spectrum and overlap somewhere in the middle. This overlap accounts for
the intimate relationship between tasks and decisions discussed earlier in this section and is
expressed by the revised word formulation for a task support problem in Figure 3.9.
If the applications of task and decision support problems overlap to some degree,
then in this grey area one task or decision could easily become the other. So how can a
task become a decision? As shown in Figure 3.9, after a task is performed it is prudent to
check to ensure that the proper outcome has occurred. If for some reason it hasn’t then
perhaps the task can be redone or another task employed. If these also fail, then perhaps
new alternatives need to be generated. As this level of thought increases, we move from a
task support problem to a decision support problem.
TASK SP (REVISED)
Given      • The task.
           • The object of the task (the transmission from the task).
Develop    Plan of action, i.e., the palette entities necessary for describing and performing the task.
Implement  Plan of action (perform the task).
Verify     Actual creation of the task object.

Figure 3.9 Revised Task SP Word Formulation
Similarly, how can a decision become a task? Consider the universal and familiar
process of “trial and error” for making decisions. Each “trial” is really assumed to be the
answer, so in effect this process is little more than a series of tasks. A decision becomes a
task when its solution is known (or believed to be known) ahead of time. One only needs
to verify when done and fix errors if necessary. In other words, a “decision” whose
outcome is known a priori or whose course of solution is an established pattern may be
supported by a task support problem. This relationship also illustrates well the difference
between problem solving and decision making.
The task support problem word formulation of Figure 3.9 allows a true process of
task support to be developed. A task requires certain inputs before it can be implemented,
and its intent is to generate specific outputs. This generation or transformation is
performed by implementing a given plan of action. A task is therefore supported by:
• listing the inputs needed to perform the task,
• specifying the outputs required from the task,
• describing how the task may be performed, and
• identifying the exit criteria used to verify task completion.
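This four-part structure lends itself to a simple computable record. The sketch below is one hypothetical encoding of the revised Task SP of Figure 3.9 (the field names and the example task data are invented) and is not part of any DSPT software.

    # A hypothetical encoding of the revised Task SP: given inputs and an
    # intended output, develop and implement a plan of action, then verify.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class TaskSP:
        name: str
        inputs: List[str]                       # transmissions required
        outputs: List[str]                      # the object(s) of the task
        plan: Callable[[Dict], Dict]            # "Develop" / "Implement"
        exit_criteria: Callable[[Dict], bool]   # "Verify"

        def perform(self, given: Dict) -> Dict:
            missing = [i for i in self.inputs if i not in given]
            if missing:
                raise ValueError(f"{self.name}: missing inputs {missing}")
            result = self.plan(given)               # implement the plan
            if not self.exit_criteria(result):      # verify the task object
                raise RuntimeError(f"{self.name}: exit criteria not met")
            return result

    # Example modeled loosely on the "Extract Goals and Constraints" task of
    # Figure 3.2 (the data values are invented).
    extract = TaskSP(
        name="Extract Goals and Constraints",
        inputs=["NSR"],
        outputs=["goals_and_constraints"],
        plan=lambda g: {"goals_and_constraints": g["NSR"]["requirements"]},
        exit_criteria=lambda r: len(r["goals_and_constraints"]) > 0,
    )
    print(extract.perform({"NSR": {"requirements": ["range", "speed"]}}))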
With this spectrum and overlap between tasks and decisions established, it is easy to see
how tasks can contain decisions and decisions can contain tasks. It has also become
explicit how tasks can be supported. These final two elements complete the foundation for
the method for enterprise design; this method can now be constructed.
3.4.2 Method for Enterprise Design
Based on the philosophy of empowerment described in Section 3.2, this method for
enterprise design is intended to be applied to any design decision throughout the enterprise
by the local decision makers themselves. Through applying this method the enterprise-
wide effects of each decision are modeled and assessed, and interdependencies among
decisions are resolved. Each decision can thus be made in the best interests of the
enterprise as a whole. This method is described in broad strokes as a series of tasks in
Figure 3.10. These tasks correspond to the development, formulation and solution of an
enterprise design decision according to the decision support paradigm of Section 1.3.3;
this progression is illustrated down the left side of Figure 3.10. The method is represented
as a relatively straight sequence of tasks, but the “verify” keyword of the task support
problem captures the potential for iteration. Iteration will likely occur within each task and
across tasks, but for simplicity these arrows are not shown.
[Figure 3.10 appears here. Starting from a baseline decision, information flows through Task 1 (Expand Scope of Decision), Tasks 2a (Identify Modeling Techniques) and 2b (Determine Input Needed for Analysis), Task 3 (Define Boundaries of Decision), Task 4 (Transform and Integrate Models), and Task 5 (Generate Potential Solutions), ending in a recommendation.]

Figure 3.10 Decision-Based Method for Enterprise Design
In Figure 3.10 a designer begins with a decision represented in terms of design
variables, goals and constraints, and models for analysis. This decision is likely geared
toward local measures of merit and may not capture its enterprise-wide implications. Task
1 of the method is employed to explore the potential impact of this baseline decision. A
wide range of enterprise-wide effects are generated by brainstorming both with the design
variables and with an awareness of overall customer requirements. From this process
additional goals are identified to expand the scope of the decision. (If none are identified,
then the baseline decision can be dropped from consideration and left unchanged.) This
task is described in detail in Section 3.4.3.
During Tasks 2a and 2b modeling schemes are identified to quantify the
relationships between the baseline design variables and the additional goals from Task 1.
This process is described in Section 3.4.4. The two tasks are represented separately to
imply a process of iteration; modeling techniques are identified both by examining the
goals (from the top down) and by examining the design variables (from the bottom up).
Iteration may be required to resolve inconsistencies. At the conclusion of these tasks a
modeling scheme is selected for each goal and constraint in the decision. These modeling
schemes are tentative and require further hardening and so are represented with dashed
lines. Also, these modeling schemes may require additional input information, represented
by additional clouds. The decision is expanding!
The additional goals and input information from Tasks 2a and 2b may overlap with
other decisions made by different people or at different times throughout the enterprise.
Thus the interdependencies between decisions become explicit. Because of empowerment
it is not the intent to gather all decisions under some centralized control, and because of
bounded rationality the limitations on the size of any one decision are recognized, so at some
point the boundaries of the decision must be drawn. This is described by Task 3. These
boundaries may be forced to slice through interdependencies, so they may be resolved
through collaboration with other decision makers or through formulating a decision that is
robust to external issues. The information that remains inside the decision boundary is then
classified into constants, design variables, and noise factors. These options are explored in
Section 3.4.5.
As Task 4 is initiated, all of the elements of the decision are potentially in place. Modeling techniques have been identified for each goal and constraint, and the input needed for each model is captured by the identified sets of design variables, noise factors, and constants. The issues that remain are whether the models are efficient enough, whether models to quantify robustness need to be created, and whether the models are in a format that allows integration into a decision formulation. During Task 4 statistical metamodeling techniques are employed to resolve these issues. This task is described in detail in Section 3.4.6, and a review and discussion of a range of metamodeling techniques forms the basis for Chapter 4.
During Task 5 the enterprise design decision is formulated mathematically and solved; this process is described in Section 3.4.7. “Solution” occurs through an in-depth process of exercising the formulation -- changing parameters, testing assumptions, and exploring “what-if” scenarios -- thereby generating sets of potential solutions for consideration. In the end one solution is selected, one that satisfies as many of the enterprise-wide goals as possible and one that is as robust as possible to the effects of external decisions outside the designer's control. This solution is the recommendation submitted for implementation.
In the sections that follow each of these tasks is explored in greater detail.
Following the task support problem formulation their inputs and outputs are identified, a
task description is given, and the exit criteria that define task completion are discussed.
3.4.3 Expand Scope of Decision
This task is fleshed out into a network of support problems in Figure 3.11. The
basic purpose in executing this task is to explore and identify the enterprise-wide effects of
a given design decision. Input to this task is a baseline decision, represented in terms of
design variables, goals, and constraints, and also a general awareness of the customers of
the enterprise, both internal and external.
[Figure 3.11 appears here. The design variables, the goals and constraints, and an awareness of internal and external customers feed two tasks -- brainstorming potential effects and generating additional goals and constraints -- whose output is a raw list of potential goals and constraints; a preliminary selection decision pares this down to the final list.]

Figure 3.11 Task 1: Expanding the Scope of the Decision
The terminology for “goal” and “constraint” is taken directly from the context of compromise DSP math formulations, as discussed in Section 3.3.3. These terms serve well to capture
the broad range of customer requirements that originate from both within the enterprise and
from without. Constraints are “hard” requirements that must be met; these determine the
feasible region of the design space. Examples are constraints due to the laws of physics
and the constraints imposed by scarce resources. As such, constraints are embodied by
some relationship between design variables that must either be less than, greater than, or
equal to a constraint value. Goals, on the other hand, embody more “soft” requirements;
“wishes” instead of “demands”. Examples span the entire range of performance metrics,
cost measures, quality indices, and so on for any system being designed. Goals are represented by relationships among the design variables that should ideally hit a “target” value,
and there is a penalty associated with how far a solution comes from meeting these targets.
What is the motivation for performing this task? Quite often a design decision is made considering only a limited number of goals. Product design decisions are often concerned primarily with product performance or functionality; manufacturing investment decisions are often made by calculating a return on investment based on given forecasts of demand; and order fulfillment processes are commonly redesigned by focusing primarily on reducing costs. Just as often, however, these decisions may have additional enterprise-wide effects that are not considered. Many product design decisions are made without computing their impact on production cost. The implications of manufacturing investments on the potential for new product development often go unexplored, and the interface between a customer and the order fulfillment process may significantly determine the image of the enterprise.
As shown in Figure 3.11, this exploration of the enterprise-wide effects of a decision is performed in two parts. The first part stems from an enlightened focus on customer satisfaction. This perspective is captured well by the following statement (Hanson, 1992):
The successful manufacturer will need to view itself from a new perspective. It will need to view itself in the broader context of a manufacturing enterprise and to understand that the factors that contribute to its manufacturing effort go far beyond the traditional production cycle. Developing this worldview begins by recognizing that even though all the company's internal organizations -- sales, marketing, engineering, and manufacturing -- operate interdependently, they have a common focus and are committed to delivering customer satisfaction.
Once the varied effects that the decision may have on customer satisfaction are identified, they can be added to a raw list of potential goals and constraints to be included in the decision.
This process is made more complex by the recognition that today's manufacturing enterprises have many customers, both external to the organization and internal as well. The
satisfaction of all of these customers should be considered (Heim and Compton, 1992):
A manufacturing organization serves a variety of customers. In addition to the customers who expect to purchase high-quality products and services, the owners or stockholders may also be thought of as customers in that they expect a reasonable return on the investment that they have made in the company. The employees are customers in that they expect an employer to recognize their contribution to the success of the company and to provide them with a reasonable reward for their efforts. These are the stakeholders in the organization in that each has made a personal commitment to its success. The stakeholders have special expectations and needs that must be met.
We see that the span of these potential goals and constraints is wide-ranging: it can include product performance, cost, and quality; costs and schedules for both development and production; measures of job enrichment, worker fulfillment, and motivation; the strategic position of the company; and so on.
The second part of the exploration of the enterprise-wide effects of a decision is
performed by considering the design variables of the existing decision and ideating
(brainstorming is an example of this) all of their potential effects on the enterprise. The results from both explorations are combined into a raw list of potential goals and constraints
for the decision, and this list is then examined and pared down to a final list of potential
goals and constraints. This paring down can be accomplished by a Preliminary Selection
DSP, or alternatively the goals and constraints can be evaluated on a case-by-case basis.
Issues for such selection are whether the decision at hand has sufficient impact on the goal
under consideration and also whether the overall importance of the goal to the enterprise
itself is sufficient.
The output of this task is a final list of potential goals and constraints to be added
to the baseline decision. For each constraint a hard value should be specified, and each
goal requires a target value. Exit criteria are the successful generation of such a list, ensuring that each goal and constraint on the list has an effect on customer satisfaction, and
verifying that the design variables of the decision do indeed have an impact on each goal
and constraint. These goals and constraints still may or may not be included in the decision
formulation depending, for example, on whether models exist to quantify such effects of
the design variables. These issues are addressed by the tasks to follow.
3.4.4 Identify Modeling Techniques and Determine Input Needed for Analysis
These tasks are fleshed out into a network of support problems in Figure 3.12.
Inputs to these tasks are the design variables of the baseline decision, the list of potential
goals and constraints generated for the decision in Task 1, and a general awareness of the
existing modeling techniques available within the enterprise. The basic purpose in executing these tasks is to identify a modeling option for each of the potential goals and constraints for the decision, so the tasks are applied to each goal and constraint from the list
generated in Task 1. In the system modeling context of Section 2.3, Task 1 is used to
identify the “why” for building system models, and Tasks 2a and 2b are used to determine
“how” the models will be built and “what” input data will be required. This idea is illustrated in Figure 3.13.
What is the motivation for performing these tasks? Identifying the enterprise-wide
effects of a decision is an empty task unless these effects can be explicitly quantified in
terms of the decision’s design variables. In the tasks described here, these effects are made
explicit by building models that capture the effects and implications of the design variables.
In Figure 3.12 the generation of modeling options is accomplished in two parallel phases: from the top down and from the bottom up. From the top down, the realm of existing modeling techniques is explored to identify options that address the effects desired for the goal or constraint. From the bottom up, modeling options are identified that operate on the given design variables. Ideally these two phases converge to identify common modeling options, but even if they do not, the options are retained for further consideration.
[Figure 3.12 appears here. For a given goal or constraint, modeling alternatives are generated from the existing modeling techniques and from the design variables; each alternative on the list is evaluated on model accuracy, cost, input data, and the like, and a modeling option is selected and the model created. If the model is not satisfactory, new modeling techniques may be created. The outputs are a model for the goal or constraint and the additional input data needed.]

Figure 3.12 Tasks 2a and 2b: Identifying Modeling Techniques and Determining Input Needed for Analysis
There are multiple criteria that drive the selection of a modeling option for a goal or
constraint, and in broad strokes these criteria encompass the model’s accuracy, its
efficiency both in terms of the cost to create the model and the cost of using the model, its
portability, and importantly the amount and availability of input data required. Additional
criteria can of course exist, and the actual list of criteria used to evaluate each option will
vary from application to application.
What makes the choice of modeling option particularly difficult is the differing amount of input information required for alternative models. (It is unlikely that any modeling option will be a function only of the design variables; inevitably additional input data is required.) For example, in the early stages of product design one requirement may specify a desired system cost. A legitimate modeling option is then a bill-of-materials estimator that provides costs for each element or component of the product. Clearly, however, this kind of detailed information is just not available in design's early stages. Based on what information actually is available, an appropriate modeling option can be identified, but then this model may not support the type of output information required. Dealing with these interdependencies through time is addressed in Section 3.4.5 and is also the focus of Chapter 5.
[Figure 3.13 appears here. In the system modeling context, Task 1 addresses WHY system models are built, Task 2a addresses HOW they will be built, and Task 2b addresses WHAT input data is required.]

Figure 3.13 Tasks 1, 2a and 2b in System Modeling Context
Armed with these evaluations of each modeling option, one option is then selected
for model creation. This selection may be a foregone conclusion if one modeling option is
clearly better than the rest, but if the situation is more complex then a Selection DSP can be
applied to determine the best alternative given multiple potentially conflicting attributes.
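When a Selection DSP is used here, the core computation is a weighted merit function over rated alternatives. The sketch below illustrates that calculation with invented attribute names, weights, and ratings for two of the modeling options mentioned in this section; it is a hedged illustration, not the rigorous selection DSP procedure.

    # A weighted merit function for ranking modeling options, in the spirit
    # of the selection DSP word formulation of Figure 3.4.
    weights = {"accuracy": 0.4, "cost_to_create": 0.2,
               "cost_to_use": 0.2, "input_data_available": 0.2}

    # Ratings normalized to a 0-1 scale (higher is better); values invented.
    alternatives = {
        "bill-of-materials estimator":
            {"accuracy": 0.9, "cost_to_create": 0.8,
             "cost_to_use": 0.7, "input_data_available": 0.2},
        "regression on historical data":
            {"accuracy": 0.6, "cost_to_create": 0.6,
             "cost_to_use": 0.9, "input_data_available": 0.8},
    }

    merit = {name: sum(weights[a] * rating[a] for a in weights)
             for name, rating in alternatives.items()}
    for name, m in sorted(merit.items(), key=lambda kv: -kv[1]):
        print(f"{m:.2f}  {name}")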
A desirable situation is when specific models have already been constructed for the
decision at hand; the cost of creating these models is therefore zero, and the accuracy of the
models is usually well-established. In this case the actual task of creating the model
becomes a formality. There are times, however, when the capability of creating a model is
recognized but the model itself has not yet been created. Examples of such cases are when
historical data is available and can be used to build regression models, and when expert opinions are known to exist but have not yet been represented formally. In these cases
the model itself must be constructed before continuing.
Once a model has been selected and constructed for a given goal or constraint, it is
prudent to check to ensure that the model is itself satisfactory in terms of the evaluation
criteria already discussed. Of primary concern is the accuracy of the model; while
efficiency and portability are important, they can be addressed separately in Task 4. If the
model is not satisfactory then it is back to the drawing board, and perhaps a new modeling
technique can be created. (This is also equivalent to the case where no modeling options
are identified for the given goal or constraint.) This is not as dire as it may sound; in actuality much of engineering research is devoted to the creation and application of
such modeling techniques. If the benefit of generating a new modeling technique is
believed to offset the costs of creation, then such activities can be pursued as illustrated in
Figure 3.12. If, however, the costs outweigh the benefits then the goal or constraint may
be dropped from the formulation, or a more approximate model may be substituted.
The outputs of these tasks are a model for each goal and constraint that captures
the effects of the design variables and a list of additional input data needed for each model.
Exit criteria are the successful creation of such models of satisfactory accuracy and the
successful creation of lists of all of the input data needed to evaluate each model. Two
primary concerns are then whether such input data can be obtained and whether such
models are of sufficient efficiency and portability. These are addressed in Task 3 and Task 4,
respectively; these tasks are discussed next.
3.4.5 Define Boundaries of Decision
This task is fleshed out into a network of support problems in Figure 3.14. The
basic purpose in executing this task is to resolve the interdependencies between decisions,
both across enterprise domains and through time. These interdependencies occur through
the additional input data required to evaluate the models for each goal and constraint; this
data may actually be design variables in external decisions made by other parties or made at
different times along a design timeline. Inputs to this task are the lists of additional input
data required for the models for each goal and constraint.
What is the motivation for performing this task? Simply put, this task is required to
draw a realistic boundary around a decision. A balance must be struck between integration
and empowerment. On the one hand, a designer must have a complete and internally consistent description of a decision in order to proceed with its formulation and solution. All
design variables must be under the designer’s control, and all models should be complete.
On the other hand, however, attempting to formulate these complete models may encroach
upon the authority of external decision makers and thus conflict with their desire for
empowerment. In a broad sense this task is employed to redesign the decision-making
processes of an enterprise.
[Figure 3.14 appears here. The additional input data is gathered and classified, then routed by two questions -- “Controlled by someone else?” and “Known now in timeline?” -- into four options: (1) incorporate; (2) incorporate or make robust; (3) transform or make robust; (4) make robust. The outcomes are fixed parameters, additional design variables (from potential design variables), and additional noise factors.]

Figure 3.14 Task 3: Defining the Boundaries of the Decision
The initial element of Figure 3.14 is the task of gathering and classifying the
additional input data. The classification scheme is shown in Figure 3.15. Fixed
parameters are any data that help describe the system or its performance but are constant
throughout the design process. Examples are physical constants, constrained design
choices, and fixed operating conditions. Noise factors are all data that are out of the
designer’s control, such as operating environments and perhaps manufacturing capabilities.
Design variables are the focus of the design effort; each variable embodies a choice to be
made about the system design. The notion of independence is critical, because routines for
solving these decision formulations must be able to vary each design variable
independently. Any correlation between design variables must be resolved by choosing
one as an independent variable. The other is then a dependent variable and must be defined
as a function of the independent variables. Discrete design variables are used to represent
discrete design choices such as the number of lenses in an optics layout or the number of
teeth in the design of a gear. Continuous design variables can be varied over a continuous
range, such as lengths or voltage values.
[Figure 3.15 appears here. All input data needed for analysis is partitioned into fixed parameters, noise factors, and design variables; design variables divide into independent and dependent variables, and independent variables into discrete and continuous.]

Figure 3.15 Classification Scheme for Input Information
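This classification maps naturally onto a tagged record. The following sketch is one hypothetical encoding (the names, roles, and bounds are invented for illustration) that carries the classification forward to the later tasks.

    # One hypothetical encoding of the Figure 3.15 classification.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class InputDatum:
        name: str
        role: str                     # "fixed", "noise", or "design"
        kind: Optional[str] = None    # for design variables: "discrete"/"continuous"
        bounds: Optional[Tuple[float, float]] = None

    data = [
        InputDatum("gravity", role="fixed"),
        InputDatum("operating_temperature", role="noise", bounds=(250.0, 320.0)),
        InputDatum("gear_teeth", role="design", kind="discrete", bounds=(12, 60)),
        InputDatum("shaft_length", role="design", kind="continuous",
                   bounds=(0.1, 0.5)),
    ]
    design_variables = [d for d in data if d.role == "design"]
    print([d.name for d in design_variables])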
Following this classification of input information, the issue of interdependencies
between decisions is tackled. Before delving into a method for resolving them, however, a
precise definition of interdependence between decisions is warranted. Succinct definitions
for interdependence and integration are developed in a decision making context by applying
the decision support paradigm presented in Section 1.3.3. This concept is illustrated in
Figure 3.16. Interdependence, or mutual dependency, can exist both between design
variables and between decisions.
• Two design variables are interdependent if they both affect a common objective
or constraint in a decision.
• Two decisions are interdependent if the output of one decision serves as input to
the other, and vice versa.
The existence of interdependencies between design variables may not be
immediately clear, because it is also true that each design variable is independent of the
others and can be controlled individually. In an unconstrained problem each design
variable can be set to any value desired regardless of the values of the other design
variables. However, interdependence is embodied in the process of setting these values.
Because the variables all affect common goals, it is prudent to consider them all at once
instead of individually. Therefore in Figure 3.16 interdependencies exist between variables
x1, x2, and x4; x1 and x3; x4 and y2; x3 and y1; y1 and y3; and y2 and y3. If in a decision
there are no interdependencies between any design variable and the others, then perhaps
this variable would be better served by addressing it in a separate decision.
Interdependence exists between Decision 1 and Decision 2 in Figure 3.16 because
model f3 of decision 1 requires a value for y2, so in effect the output of decision 2 is needed
as input for decision 1. Similarly model g1 of decision 2 requires a value for x3, so in
effect the output of decision 1 is needed as input for decision 2. Although two
interdependent decisions may in fact be made separately, the interdependence still arises in
the resulting iteration required to achieve mutually shared goals.
[Figure 3.16 appears here. Decision 1 has design variables {x1, x2, x3, x4}, whose models f1(x1, x2, x4), f2(x1, x3), and f3(x4, y2) feed its goals and constraints (requirements) through tasks T1-T3. Decision 2 has design variables {y1, y2, y3}, whose models g1(x3, y1), g2(y1, y3), and g3(y2, y3) feed its requirements through tasks T4-T6. The appearance of y2 in f3 and of x3 in g1 couples the two decisions.]

Figure 3.16 Two Interdependent Decisions
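Both definitions can be checked mechanically from a map of models to the variables they use. The sketch below encodes the structure of Figure 3.16 and recovers the variable interdependencies and the decision coupling described above.

    # Detecting interdependence mechanically from the structure of Figure 3.16.
    from itertools import combinations

    decision_1 = {"f1": {"x1", "x2", "x4"}, "f2": {"x1", "x3"},
                  "f3": {"x4", "y2"}}
    decision_2 = {"g1": {"x3", "y1"}, "g2": {"y1", "y3"}, "g3": {"y2", "y3"}}

    def interdependent_pairs(models):
        # Two variables are interdependent if they affect a common model.
        return {frozenset(pair) for variables in models.values()
                for pair in combinations(sorted(variables), 2)}

    pairs = interdependent_pairs({**decision_1, **decision_2})
    print(sorted(tuple(sorted(p)) for p in pairs))

    # Two decisions are interdependent if each needs the other's output.
    own_1, own_2 = {"x1", "x2", "x3", "x4"}, {"y1", "y2", "y3"}
    used_1 = set().union(*decision_1.values())
    used_2 = set().union(*decision_2.values())
    print("decisions coupled:", bool(used_1 & own_2) and bool(used_2 & own_1))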
As described in Section 1.3.4, interdependencies between decisions can exist both
across enterprise domains and through time. In a practical sense, the true barrier with
interdependencies across enterprise domains is that the decisions are made by separate
people, and this decision-making power can be something that is held dear. Simon (1977)
draws a parallel between authority and decision making by stating that “authority is
exercised whenever a person allows his decisions to be guided by decision premises
provided to him by some other person.” Similarly Dressler (1976) notes that “acceptance
of authority denies participation in the decision making process. Participation in decision,
however, is essential to achieve understanding and enthusiasm in carrying out decisions.”
The method illustrated in Figure 3.14 reflects these issues. As illustrated in the
upper left corner of the figure, the expanded scope of a design decision requires additional
input data represented by additional clouds of information. Each piece of data can be
grouped into a few general categories. It can be known and available at the current point of
a design timeline, or it may not be able to be determined until some point in the future.
(For example, the list of customer requirements and a general system architecture are often
known from the very beginning of a design project, while specific subsystem parameters
and characteristics are commonly not determined until well into the design effort.) Similarly
it can be under the control of the makers of the current decision, or perhaps it is dictated by
the efforts of other decision makers. Depending on the answers to these questions, each
piece of additional input data can be placed into one of four categories as shown in Figure
3.14, and each category is addressed by a separate task.
Option 1: The simplest case is when the data is under the control of the local
decision maker and is known at the current point in time. It is then simply incorporated
into the formulation without further ado.
Option 2: It becomes more complicated when the piece of data is controlled by
another decision maker. Perhaps the data can be incorporated through the creation of a
cross-functional team, where each team member comes to the table with the responsibility
(and authority) of making at least one of a set of interdependent decisions. This method for
enterprise design could then be followed by the team as a whole, creating a single
formulation for the set of interdependent decisions. Alternatively, it is also possible that
another form of group decision-making could be pursued.
At some point, however, for reasons of efficiency, expedience, logistics, or otherwise, it will not be feasible to incorporate data controlled by external decision makers. In this
case the concept of robustness is employed. Although the data are most likely the design
variables in an external decision, they are treated as noise factors in the local decision
formulation. The local design variables are then set so as to reduce (or minimize) the
variation of the goals as the noise factors are varied across their ranges of feasible values.
This concept is developed in more detail in Section 4.5.4.
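As a rough illustration, ahead of the full development in Chapter 4, the sketch below treats an externally controlled variable z as a noise factor; the goal model is hypothetical, and the local design variable is chosen to minimize the variation of the goal as z sweeps its feasible range.

    import random

    def goal(x, z):
        # Hypothetical goal model: x is the local design variable,
        # z is an externally controlled variable treated as a noise factor.
        return (x - 2.0) ** 2 + x * z

    def goal_variation(x, z_low, z_high, samples=200):
        # Estimate the variance of the goal as the noise factor is
        # varied across its range of feasible values.
        values = [goal(x, random.uniform(z_low, z_high)) for _ in range(samples)]
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    # Choose the candidate x whose goal varies least across the noise range.
    candidates = [0.0, 0.5, 1.0, 1.5, 2.0]
    best = min(candidates, key=lambda x: goal_variation(x, -1.0, 1.0))

In this toy model the nominal optimum is x = 2, but x = 0 is the most robust choice; capturing exactly this trade-off is the role of the deviation function.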
Option 3: Both of the previous categories apply when the data is known at the current point in time. The other intriguing possibility arises when the data cannot be known with certainty until much later in the design timeline. (This can occur when modeling
schemes are selected (in Tasks 2a and 2b) based primarily on their accuracy in computing a
given goal or constraint and only secondarily on whether they are functions of the current
design variables.) Again if the data can be controlled by the local decision maker, it may be
possible to construct transformations that map or correlate the potential values of down-
stream design variables with the values of the current design variables. This technique is
discussed in Section 5.4. However, if such transformations can not be constructed with
confidence, then the concept of robustness can be employed, and the downstream design
variables can be treated as noise factors in the current decision formulation.
This option thus falls into the domain of decision making under uncertainty. In a
general sense, decision making under uncertainty is required when any of the input pa-
rameters to the decision are stochastic or probabilistic to some degree and are therefore rep-
resented by probability distributions (as in Vadde, et al., 1991), fuzzy numbers (as in
Allen, et al., 1990 and Wood, et al., 1989) and so on. While a review of these techniques
is beyond the scope of this work, the concept of robustness allows a fair amount of flexi-
bility in treating these uncertain parameters; this is developed in more detail in Section
4.5.4.
Option 4: The final category is when the data is controlled by someone else and
can not be known with certainty at the current point in time. The only option identified in
this method is to treat the data as noise factors and to formulate the decision to be as robust
as possible to changes in their values. This is a worst-case scenario, assuming neither
rationality nor cooperation by the external decision makers. If, however, assumptions
could be made as to the intentions and authority of the external decision makers, then there
is the potential to create game-theoretic decision formulations based on the possibilities of
competitive, collaborative, or leader-follower behavior, similar to work presented in
(Lewis, 1996). This is left as an interesting possibility for further research.
The outputs from this task are: a value assigned to each fixed parameter, a prob-
ability distribution and its characteristics specified for each noise variable, a function de-
fined for each dependent variable, a set of possible values given for each discrete design
variable, and maximum and minimum values given for each continuous design variable.
Exit criteria are simply the completion of this classification and gathering of additional
input data.
At this point all of the elements of the decision formulation are potentially in place.
The remaining issues are whether the models are efficient enough, whether models to
quantify robustness need to be created, and whether the models are in a format that allows
integration into a decision formulation. These issues are addressed in the next section.
3.4.6 Transform and Integrate Models
This task is fleshed out into a network of support problems in Figure 3.17. The
basic purpose in executing this task is to improve the efficiency and portability of any
modeling options that are too costly or cumbersome and to create robustness models if they
are called for in the decision formulation. This task is applied to each goal and constraint in
the decision, so its inputs are the selected modeling option from Tasks 2a and 2b and the
lists of design variables, noise factors, and fixed parameters from Task 3. Because this
task is accomplished through the application of statistical metamodeling techniques, a
general knowledge of these techniques is also required as a task input.
What is the motivation for performing this task? This task basically embodies a
trade-off between model efficiency, compatibility, and accuracy. Through application of
metamodeling techniques, approximations of a given modeling option are created, and the
format of these approximations can be specified to achieve a given level of efficiency in
calculation and can also be easily integrated into a decision formulation. The cost for these
gains in efficiency and portability is a potential loss in model accuracy, which is carefully
considered in the verification and validation portion of this task.
[Figure: network of support problems for Task 4 -- the selected modeling option, design variables, noise factors, fixed parameters, and metamodeling techniques feed a flow of screening for significant factors; deciding whether efficiency, integration, or robustness is an issue (if not, the existing model is used as is); selecting a metamodeling technique; building the mean response metamodel (ŷ model) and robustness metamodel (σ̂ model); and verifying and validating the results]
Figure 3.17 Task 4: Transforming and Integrating Models
This task begins by examining the selected modeling option for a given goal or
constraint in terms of efficiency and ease of integration. The modeling options that exist in
an enterprise can be grouped into a handful of general categories. Focusing on
quantification, these mathematical models can be:
• analytical relationships (equations) gathered from textbooks and such,
• computer analysis routines or simulations with specified inputs and outputs,
• relationships constructed from historical data,
• relationships constructed by running experiments and gathering data, and
• relationships constructed using expert opinions.
Some of these options, especially existing computer analysis routines, can be fairly
expensive to run in terms of computing time. Single evaluations of aerodynamic or finite-
element codes can take from minutes to hours, if not longer. In some cases approximations
of these codes may need to be created to reduce their computing times to reasonable values.
Similarly these modeling options must be able to be integrated into the decision
formulation. As discussed in Section 3.3.4, modeling options in DSIDES are represented
either within user-provided Fortran routines or as separate computer analysis codes. If a
modeling option can not be represented in these formats, then perhaps approximations can
be created that themselves exist in an amenable format.
The final question in the examination of a given modeling option is whether
robustness models are needed. Most often a model has been constructed to predict certain
behavior (output) given specific inputs and does not automatically yield measures of
robustness to variations in the inputs. Such robustness models are also created through the
application of metamodeling techniques.
In summation, if a given modeling option is too expensive to run, or if it exists in a
format that is not amenable to integration into a decision formulation, or if measures of its
robustness need to be created, then statistical metamodeling techniques can be employed to
build model approximations that address these issues. As shown in Figure 3.17 such
metamodeling techniques are applied through a process of screening for significant factors,
selecting a metamodeling technique, and building mean response metamodels and
robustness metamodels as necessary. These models are then validated to ensure a
satisfactory level of accuracy. This process owes its foundation to the RCEM as discussed
in Section 3.3.5 and illustrated in Figure 3.8. It will be elaborated in detail in Section 4.5.
The outputs from this task are efficient and portable analysis models for each goal
and constraint that warrants them and the set of robustness models required for the decision
formulation. Exit criteria are whether the model for each goal and constraint has attained
sufficient levels of accuracy, portability, and efficiency.
At the conclusion of this task the decision formulation is complete and ready to be
implemented on computer. This activity is addressed in the next section.
3.4.7 Generate Potential Solutions
This task is fleshed out into a network of support problems in Figure 3.18. The
basic purpose in executing this task is to generate a range of potential “solutions” to the
enterprise design decision and then to select the best one for implementation. Inputs to
this task are the list of goals and constraints from Task 1, the selected modeling options
from Task 2a that have perhaps been transformed through Task 4, and the design variables,
noise factors, and additional input information gathered in Task 3. With this information a
C-DSP math formulation (or C-DSP template) is then constructed; in addition to the above
input data each template requires the specification of relative priorities between goals.
These templates follow the basic structure illustrated in Figure 3.6.
Solutions are generated by feeding the C-DSP template as input into the software
package DSIDES, but of course it would be incongruous to assume that the richness and
complexity of the design space could be captured in only one run and only one solution.
Therefore a process of exercising the C-DSP is initiated, which can be the heart of a
detailed trade study activity. Starting from the baseline solution, different scenarios are
run. Goal priorities are changed, target values are moved, constraint values are relaxed,
and constraints and goals are added or removed. Different starting points for the design
variables are specified, so that the design space is more completely covered and the chances
are increased of finding all of the local optima. For each change in the model the
optimization code is re-run and a new solution found, and the trends in these resulting
solutions are used to identify the truly “best” solution.
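The flavor of this exercising loop is captured in the sketch below, where a toy two-goal problem and a crude local search stand in for a full C-DSP template and a DSIDES run (both stand-ins are hypothetical):

    def deviation(x, weights):
        # Toy deviation function for two goals with targets at x = 1 and x = 3.
        return weights[0] * abs(x - 1.0) + weights[1] * abs(x - 3.0)

    def run_solver(weights, start, step=0.1):
        # Stand-in for one solver run: crude local search from a start point.
        x = start
        for _ in range(1000):
            x = min((x - step, x, x + step), key=lambda m: deviation(m, weights))
        return x, deviation(x, weights)

    solutions = []
    for weights in [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]:   # goal priorities
        for start in [0.0, 2.0, 4.0]:                      # starting points
            solutions.append((weights, start, *run_solver(weights, start)))
    # Trends across the whole set of runs, not any single run, point to the
    # truly "best" (satisficing) solution.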
[Figure: network of support problems for Task 5 -- goals and constraints, models, design variables, noise factors, and additional input feed the creation of a math formulation (the C-DSP template); the C-DSP is exercised to produce many potential solutions, which undergo verification and validation; if the solutions are acceptable they become the final competing solutions, from which the best solution is selected]
Figure 3.18 Task 5: Generating Potential Solutions
Of course, the word “best” deserves some attention here. Because multiple goals
are employed in most C-DSP formulations, it is unlikely that any one “best” solution is an
optimum for any one of the goals. Instead, it is best in terms of the deviation function --
the differences between all of the goal targets and the values actually achieved by the
solution. Similarly, it is nearly certain that in any real problem the information for the
decision formulation will be incomplete. Limitations and assumptions are inherent in
almost every system model of tractable size, and even when factors are known to be
important, they can’t always be controlled. (Thus robustness is central to this method.)
Therefore, as noted in Section 1.3.1, with this method satisficing solutions are
sought instead of optimizing solutions. Instead of seeking the one true peak in
performance for a single objective, solutions are sought that are “good enough” for a set of
multiple goals. And when robustness enters into the formulation, the idea is to seek more
flat and stable regions in the solution space that perform well enough, as opposed to
unstable points that are highly sensitive to changes in input information. Again, with
robust solutions “best” is defined in terms of the deviation function, where one or more
goal targets call out ideal levels of solution variability.
As shown in Figure 3.18, the results from this process of exercising are a set of
many potential solutions, one for each DSIDES run performed. These solutions are each
represented by a specific vector of values for the design variables, and measures as to how
well each goal target was reached and whether any constraints were violated. With this set
of solutions a process of verification and validation can be performed. Verification is
accomplished by ensuring that each solution “makes sense” according to engineering
judgment and that the trends that appear between solutions are justifiable. Validation is
accomplished by comparing the actual solution values against external sources of
information, such as historical data, consensus engineering opinion, or separate computer
analysis codes.
Through this process of verification and validation some or all of the solutions may
be found acceptable; if not then perhaps the math formulation can be revised, or perhaps
iteration farther back into the enterprise design method is required. The attractive solutions
that result from validated formulations are passed on to a final set of competing solutions,
each of which likely has particular advantages and disadvantages over the others. Only
one solution can be carried through to implementation, so somehow a selection must be
performed with this set. There are a wide range of options for performing this selection,
ranging from an informal discussion and reaching of consensus to a rigorous procedure of
performing value trade-offs. It is also possible to generate hybrid solutions that combine
the attractive features of a subset of competing solutions. Unfortunately it is beyond the
scope of this dissertation to delve into the varied treatments of values and preferences in
making multi-objective decisions; an excellent treatment of the subject is found in (Keeney
and Raiffa, 1976).
Is there a way to verify that the solution chosen was the best choice? The answer
here is most likely not. Mistakes are still possible. Through the philosophy of bounded
rationality (Section 2.3.1) we recognize that there are limitations and assumptions inherent
in nearly all reasoning, but yet, decisions must still be made. Thus we do the best we can
given the best estimates of current and forecasted situations and given the best information
available. After all this effort, a "wrong" solution could still be reached. It is hoped, however, that by executing such a structured method the probabilities of such failures are reduced.
Regardless of the particular solution selected, it is intended that this set will include solutions that offer improvements over the status quo, and that these improvements will encompass a wider range of enterprise effects than previously
considered. These solutions are offered in a common language of mathematical decision
formulation that is uniformly applicable across the enterprise, so it is our vision that if this
approach is implemented at all levels and across all domains of an enterprise, the enterprise
as a whole will be designed, and improved, in a holistic and internally consistent manner.
3.5 COMPUTER INFRASTRUCTURE FOR ENTERPRISE DESIGN
The enterprise design method of the previous sections has been described in fairly
general terms, so that a fair amount of flexibility is maintained. This generality is
intentional, thereby allowing a number of different implementations (in terms of tools and
math constructs) to potentially be pursued. However, the cost of such generality is that the
road to any one implementation may not be obvious. In this section the particular
implementation of enterprise design used in this dissertation is given, based upon the
computer infrastructure of the RCEM as introduced in Section 3.3.5.
[Figure: on the left, the tasks of the enterprise design method (Task 1: expand scope of decision; Task 2a: identify models; Task 2b: determine input; Task 3: define boundaries of decision; Task 4: transform and integrate models; Task 5: generate potential solutions) leading from a baseline decision to a recommendation; on the right, the RCEM processors (A: factors and ranges; B: point generator; C: simulation programs; D: experiments analyzer; E: response surface model; F: the compromise DSP with keywords Given, Find, Satisfy, Minimize) leading from overall design requirements to robust, top-level design specifications]
Figure 3.19 The RCEM as a Particularization of Enterprise Design
One legitimate way to view this method for enterprise design is as an expansion of
RCEM; in this case RCEM becomes one potential realization of enterprise design for a
given class of design decisions. This relationship is shown in Figure 3.19. On the left
side of Figure 3.19 the tasks of the enterprise design method are shown (extracted from
Figure 3.10), and on the right side of Figure 3.19 the computer infrastructure of the RCEM
is shown (extracted from Figure 3.8). Dashed lines indicate particular relationships
between enterprise design tasks and RCEM processors. Processor C of the RCEM
represents domain-dependent computer simulation programs, which are just one type of
system modeling scheme addressed in enterprise design Task 2a.
Similarly, in defining the boundaries of a decision in enterprise design Task 3, one
activity that may occur is identifying noise factors in the formulation. This activity is
captured by Processor A of the RCEM. Processors B, D, and E of the RCEM are used for
creating response surface approximations of simulation programs, and are thus one way to
build statistical metamodels. These activities are therefore contained within Task 4 of the
enterprise design method and may be invoked if the need arises.
In sum, it may be possible to implement enterprise design using only the activities
contained within the RCEM, if a number of conditions hold true. Enterprise design may
reduce to robust concept exploration if:
• all of the overall design requirements can be classified as 1) design variables
under the designer’s control, 2) design variables that may vary over time
according to given probability distributions, and 3) noise factors that can not be
controlled,
• computer simulation packages exist as functions of these design variables, and
• sufficiently accurate approximations of these simulation packages can be built
with response surfaces.
Clearly, though, these conditions will not always hold true. The RCEM is not intended to
resolve interdependencies between decisions through time, so if a design variable is not
known with certainty at the current point in time, then the only option is to classify it as a
noise factor. The idea of creating variable transformations or correlations to predict the
downstream effects of design variables is not considered. Similarly, the RCEM is intended
as a shell to wrap around computer simulation tools, which limits its applicability to design
decisions throughout an enterprise. The notion of approximating and integrating the entire
realm of system modeling techniques is not addressed. Finally, only one metamodeling
technique is offered in the RCEM, that of creating response surfaces. A much wider
selection of metamodeling techniques is in fact possible, and these options are incorporated
into the enterprise design approach.
The differences between enterprise design and RCEM can also be seen by looking
back to Figure 3.19. Tasks 1, 2a, and 2b of the enterprise design method have no real
counterpart in the RCEM, and exercising them may result in the identification of computer
simulation programs, but a variety of other modeling options are available. These tasks
address the issue of building quantitative models for the decision, which is taken as given
in the RCEM. Similarly in Processor A of the RCEM robust design is pursued as a matter
of course, while in Task 3 of the enterprise design method robust design is just one option
for resolving interdependencies between decisions across enterprise domains and through
time. Finally in Processors B, D, and E of the RCEM only response surface
approximations are constructed, when in fact a wide range of metamodeling techniques are
available. Guidelines and recommendations for choosing among these metamodeling
techniques are given as part of Task 4.
The strongest similarity between the enterprise design method and the RCEM is that
both detail the creation of decision formulations in terms of compromise DSPs. This is
shown by Task 5 of the enterprise design method and Processor F of the RCEM.
However, because of the extensions of RCEM discussed in the preceding paragraphs, the
C-DSP formulations generated from each differ in significant ways. These potential
differences are shown in Figure 3.20.
In the “Given” keyword, response surfaces are created in RCEM whereas a wide
range of system models and statistical models are available by pursuing enterprise design.
In both formulations the “Find” keyword indicates the identification of particular values for
each of the design variables, but as is shown in the “Satisfy” keyword the DSPs are
tailored for different goals. The RCEM is primarily intended for the early stages of product
design, where top-level specifications are generated that bring the mean of product
performance on target while minimizing the variance due to noise factors. Decision
formulations generated with the enterprise design method are intended to be more wide-
ranging, and thus the goals address the resolving of interdependencies between the local
decision and external decisions across the enterprise. Robustness is just one option for
achieving such resolution.
[Figure: side-by-side C-DSP formulations. RCEM compromise DSP -- Given: RSE's of computer analysis codes; Find: design variables; Satisfy: constraints, goals (mean on target, minimize variance, maximize independence), bounds; Minimize: deviation function. Enterprise design compromise DSP -- Given: system models and statistical metamodels (RSE's, regression, neural nets, kriging); Find: design variables; Satisfy: constraints, goals (enterprise-wide effects, resolution of interdependencies across enterprise domains and through time), bounds; Minimize: deviation function]
Figure 3.20 C-DSP Formulations for Enterprise Design and RCEM
3.6 PLACING THIS CHAPTER IN CONTEXT
[Figure: contributions developed in this chapter (the method for enterprise design, the timeline procedure, the revised Task SP, the definition of interdependence between decisions, the integration of models into decisions, the integration of decisions and design processes, and the overall philosophy for implementing the method) shown against foundations developed in previous chapters (empowerment and bounded rationality; engineering design / DBD / OR / systems theory; the hybrid paradigm for decision support; the decision-based approach to enterprise design; metamodeling, system modeling, and the Decision Support Problem Technique) and material developed in chapters to come (guidelines for metamodeling; categorization of system modeling)]
Figure 3.21 Pictorial Representation of Chapter 3 Context
This context is illustrated in Figure 3.21. In this chapter the primary elements of a
decision-based approach to enterprise design have been presented. A task-based method is
elaborated throughout Section 3.4, and an overall philosophy for implementing the method
is given in Section 3.2. The method is based solidly in the foundation of the DSP
Technique as presented in Section 3.3. A definition of interdependence between decisions
is offered in Section 3.4.5, and the revised Task SP, used to establish the formalism of the
method itself, is presented in Section 3.4.1.
CHAPTER 4
METAMODELING TECHNIQUES FOR MODEL APPROXIMATION AND INTEGRATION
The concept of enterprise design is inextricably linked with the notion of
integration. The driver for this integration is the inevitable interdependence of design
activities as discussed in Section 3.2. Further, adopting the engineering design perspective
of modeling, analysis, and synthesis, the issue of enterprise integration is framed in terms
of either integrating existing models or building integrated modeling schemes. As noted in
Section 2.3.3, the path pursued is one of integrating models into a decision formulation.
This perspective is implicit in the approach to enterprise design developed in the previous
chapter; specifically such integration is accomplished by applying Step 4 of the method as
discussed in Section 3.4.6.
Model integration is achieved in this research through the application of statistical
metamodeling techniques. Because of the wealth of established research in this domain,
and because of the complexity and nuances of applying these techniques to enterprise
design, this chapter is devoted to expanding the description of Step 4 into sufficient detail.
The potential scenarios for model integration are established in Section 4.2, and the concept
of metamodeling is introduced in Section 4.3. A survey of metamodeling techniques is
given in Section 4.4, and guidelines and recommendations for their uses in enterprise
design are presented in Section 4.5.
4.1 WHAT IS PRESENTED IN THIS CHAPTER
[Figure: roadmap locating this chapter's contributions -- the survey of metamodeling techniques (Section 4.4) and the guidelines for metamodeling (Section 4.5) -- within the hybrid paradigm for decision support (Hypothesis 1), the decision-based approach to enterprise design (Hypothesis 2), the research questions on unified and quantifiable design (Q 1.1, Q 1.2) and on integration across domains and through time (Q 2.1, Q 2.2), and the existing methods, tools, and techniques (system modeling; metamodeling; the DSP Technique: DSPT palette, compromise DSP, DSIDES, RCEM)]
In Section 4.4 statistical metamodeling techniques are reviewed and in Section 4.5
these techniques are incorporated into Task 4 of the enterprise design method. Guidelines
are presented for screening (Section 4.5.1), selecting metamodeling techniques (Section
4.5.2), building mean response metamodels (Section 4.5.3) and robustness metamodels
(Section 4.5.4), and metamodel validation (Section 4.5.5).
4.2 SCENARIOS FOR MODEL INTEGRATION
The implicit philosophy embedded in the approach to enterprise design developed in
this work is that of design using available assets. In other words, an ideal scenario is one
in which existing modeling schemes are adapted and utilized in an enterprise design
decision formulation. (This is as opposed to the scenario of creating new modeling
schemes for each application.) By adopting such an approach the wealth of existing system
modeling techniques and software tools, the results of the concerted efforts of engineers
and analysts, opens up as a vast resource for capturing and quantifying enterprise behavior.
Building such decision formulations requires the integration of disparate models, which may exist across varied enterprise domains, into a single formulation. In this section the requirements for such
integration are described in some detail. These requirements stem from two sources -- the
particular format of the existing model, and the format required for decision formulation
and solution.
As introduced in Section 3.4.6, the modeling techniques that exist in an enterprise
can be grouped into a handful of general categories. The particular categories shown below
are the result of collaboration and discussion with an industrial partner and are offered
axiomatically as a spanning set of modeling scenarios:
Scenario 1: model exists in the form of an analytical relationship (equation)
gathered from textbooks and such.
Scenario 2: model exists in the form of a computer analysis routine or simulation
with specified inputs and outputs.
Scenario 3: model exists implicitly, embedded in historical data.
Scenario 4: model exists implicitly, embodied by the behavior of test equipment.
Scenario 5: model exists in the form of expert opinion.
To be integrated into a decision formulation, all of these models must be able to be
represented mathematically on computer, ideally in a format that requires very little
computing time. The models described by Scenario 1 and Scenario 2 are by definition
amenable to integration into a decision formulation, but actually invoking them for
computation may involve significant computational expense. However, for Scenarios 3
through 5, the models themselves may not exist mathematically; the implicit models need
to be made explicit. Therefore, in each scenario models may potentially need to be
transformed or approximated. Such approximation or transformation can uniformly be
achieved by the application of statistical metamodeling techniques; the idea of
metamodeling is introduced in the next section.
4.3 THE CONCEPT OF METAMODELING
In this section the concept of metamodeling is introduced and defined in terms of its
origins in statistics for creating approximations of computer codes. This concept is then
applied to the modeling scenarios of the previous section, thus developing the relationship
between metamodeling and model integration in enterprise design.
Much of today's engineering analysis work consists of running complex computer
codes -- supplying a vector of design variables (inputs) x and receiving a vector of
responses (outputs) y . Despite continual advances in computing power and speed, the
expense of running many of these codes remains non-trivial. Single evaluations of
aerodynamic or finite-element codes can take from minutes to hours, if not longer. What's
more, this mode of query-and-response can often lead to a trial and error approach to
design, where a designer may never uncover the functional relationship between x and y
and therefore may never identify the best settings for the input values.
Statistical techniques are widely used in engineering design to address these
concerns. The basic approach is to construct approximations of the analysis codes that are
much more efficient to run and that yield insight into the functional relationship between x
and y . If the true nature of a computer code is represented as
y = f(x), [4.1]
then a "model of the model" or metamodel (Kleijnen, 1987) of the code is taken to be
ŷ = g(x), and so y = ŷ + ε [4.2]
where ε represents both the error of approximation and measurement or random errors.
The most common metamodeling approach is to apply the Design of Experiments (DOE) to
identify an efficient set of computer runs (x1, x2, . . . , xn) and then employ regression
analysis to create a polynomial approximation of the computer code. These approximations
then can replace the existing code while providing:
• a better understanding of the relationship between x and y (this understanding
is typically lost when running computer codes "blindly"),
• facilitated integration of domain dependent computer codes (we no longer work
with the codes themselves but rather simple approximations of them), and
• fast analysis tools for optimization and exploration of the design space (work
becomes more efficient because approximations are used rather than the
computationally expensive codes themselves).
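A minimal sketch of this approach, with a cheap analytic function standing in for an expensive analysis code, might look as follows; the nine-point design and the quadratic model are illustrative choices only.

    import numpy as np

    def expensive_code(x):
        # Stand-in for a costly simulation y = f(x); in practice a single
        # evaluation might take minutes to hours.
        return np.sin(2.0 * x) + 0.5 * x ** 2

    # (a) a small designed set of runs, (b) a quadratic model, (c) a fit.
    x_design = np.linspace(-2.0, 2.0, 9)
    y_obs = expensive_code(x_design)
    X = np.column_stack([np.ones_like(x_design), x_design, x_design ** 2])
    b, *_ = np.linalg.lstsq(X, y_obs, rcond=None)

    def metamodel(x):
        # Cheap approximation g(x); y_hat = g(x) replaces further calls
        # to the expensive code during optimization and exploration.
        return b[0] + b[1] * x + b[2] * x ** 2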
Although the term “metamodeling” owes its origins to the approximation of computer
codes, the range of statistical techniques that it encompasses is broad enough to address all
of the scenarios for model integration highlighted in Section 4.2. This becomes clear by
viewing each modeling scenario from a common perspective -- each model is in effect a
function y = f(x), whether it be a computer analysis code, an explicit equation, or a
relationship implied by historical data.
Scenario 1: It is quite likely, perhaps probable, that models existing in the form
of analytical relationships (equations) will not require any additional effort to integrate in a
decision formulation. However, it is possible that the equations themselves might be
overly complex or require excessive computation; in these cases metamodeling can be
applied as described above to create approximations at an appropriate level of complexity.
Scenario 2: If an existing computer analysis routine is easily portable and
requires little computational time, then perhaps metamodeling is not required for its
integration into a decision formulation. As discussed in Section 3.2.4, the analysis routine
can be linked directly to the solution software. If, however, the routine is slow or if its
availability is constrained, then metamodeling can be applied to create approximations as
described above.
Scenario 3: Statistical regression techniques are by definition employed to create
mathematical representations of behavior embedded in historical data. In this case
regression, or metamodeling, is used to make such implicit models mathematically explicit.
Scenario 4: This scenario describes the classic situation for the application of the
design of experiments to generate data, to which regression techniques are applied to yield
mathematical representations of the test equipment behavior. Again, in this case
metamodeling is employed to make such implicit models mathematically explicit.
Scenario 5: Expert opinions may already exist in mathematical format, in which
case this scenario reduces to that of Scenario 1. However, opinions may also exist in
verbal, symbolic, or qualitative format, or perhaps they can only be elicited in a case-by-
case method in a mode of query and response. The representation of expert opinion on
computer is in the domain of knowledge engineering and artificial intelligence and is
therefore not a focus in this research, but if a query and response method is indeed used to
generate a set of data, then metamodeling techniques (specifically inductive learning of
Section 4.4.2.3) can be employed to create a mathematical representation.
In the next section a review of metamodeling techniques is presented that
encompasses regression, neural networks, inductive learning, and the more advanced
approach of kriging. The section is concluded with an introduction to the general statistical
approaches of Response Surface Methodology (RSM) and Taguchi's robust design.
Armed with this general knowledge of metamodeling techniques, the task of transforming
and integrating models is explored in more detail in Section 4.5.
4.4 SURVEY OF METAMODELING TECHNIQUES
In its most basic sense, metamodeling involves (a) choosing an experimental design
for generating data, (b) choosing a model to represent the data, and then (c) fitting the
model to the observed data. There are a variety of options for each of these steps as shown
in Figure 4.1, and a few of the more prevalent combinations have been highlighted. For
example, building a neural network involves fitting a network of neurons by means of backpropagation to data that is typically hand-selected, while response surface methodology usually employs central composite designs, second-order polynomials, and least squares regression analysis.
In the remainder of this section a brief overview is provided of many of the options
listed in Figure 4.1. In Section 4.4.1 experimental designs are discussed, with particular
emphasis on (fractional) factorial designs, central composite designs, and orthogonal
arrays. Measures of merit are also discussed for the evaluation of such experimental
designs. In Section 4.4.2 model choice and model fitting are introduced, focusing on
response surfaces, neural networks, inductive learning and kriging. An overview of two
of the more prevalent metamodeling techniques, those of response surface methodology
and Taguchi's robust design, is given in Section 4.4.3, and finally in Section 4.4.4 the
pitfalls of applying metamodeling to deterministic applications are discussed.
[Figure: metamodeling options organized by the three steps. Experimental design: (fractional) factorial, central composite, Box-Behnken, D-optimal, G-optimal, orthogonal array, Plackett-Burman, hexagon, hybrid, Latin hypercube, uniform shell, Hoke, Pesotchinsky, Box-Draper, random selection, select by hand. Model choice: polynomial (linear, quadratic, etc.), splines (linear, cubic), network of neurons, rulebase or decision tree, radial basis functions, frequency-domain, kernel smoothing. Model fitting: least squares regression, weighted least squares regression, backpropagation, best linear unbiased predictor (BLUP), best linear predictor (BLP), entropy (information-theoretic), log-likelihood. Highlighted combinations: response surface methodology, kriging, neural networks, inductive learning]
Figure 4.1 Techniques for Metamodeling
4.4.1 Experimental Design
Properly designed experiments are essential for effective metamodel creation. The
traditional approach in engineering is to vary one parameter at a time and observe the
effects, or to randomly assign different combinations of factor settings to be used as
alternative analyses for comparison. Experimental design techniques developed for
effective physical experiments are being applied for the design of engineering computer
experiments to increase the efficiency (computer time) and effectiveness (appropriate effects
included) of these analyses. In this section an overview of different types of experiment
designs is provided, along with measures of merit for selecting or comparing different
designs.
4.4.1.1 A Survey of Experimental Designs
An experimental design formally represents a sequence of experiments to be
performed, expressed in terms of factors (design variables) set at specified levels, or
predefined values. An experimental design is represented mathematically by a matrix X
where the rows denote experimental runs and the columns denote the particular factor
setting for each run.
Factorial Designs: The most basic experimental design is a full factorial design
(Box, et al., 1978). The number of design points dictated by a full factorial design is the
product of the number of levels for each factor (a two factor experiment with 2 levels for
factor A and 3 levels for factor B has 2x3=6 design points, for example). The most
common designs are the 2^k (for evaluating main effects and interactions) and 3^k designs (for evaluating main and quadratic effects and interactions) for k factors at 2 and 3 levels, respectively. A 2^3 full factorial design is shown graphically in Figure 4.2(a).
[Figure: (a) the 2^3 full factorial, (b) the 2^(3-1) fractional factorial, and (c) a composite design, each shown as points on a cube with axes x1, x2, x3]
Figure 4.2 Basic Three-Factor Designs
The size of a full factorial experiment increases exponentially with the number of factors, which may lead to an unmanageable number of experiments; e.g., 10 factors each with 2 levels requires 2^10 = 1024 experiments. Fractional factorial designs can be used when experiments are costly and the number of design points for a full factorial design is large (Montgomery, 1997). A fractional factorial design consists of a fraction of a full factorial design; the most common fractional factorial designs are 2^(k-p) designs, where the fraction is equal to 1/2^p. A half fraction (2^(k-1)) of the 2^3 full factorial design is shown in Figure 4.2(b).
The reduction of the number of design points associated with a fractional factorial design, however, does not come without a price. The 2^3 full factorial design shown in Figure 4.2(a) allows estimation of all main effects (x1, x2, x3), all two-factor interactions (x1x2, x1x3 and x2x3), as well as the three-factor interaction (x1x2x3). For the 2^(3-1) fractional factorial indicated by the solid dots in Figure 4.2(b), however, the main effects are aliased (or biased) with the two-factor interactions. Aliased effects cannot be estimated independently unless all but one of the confounded effects are known or assumed not to exist. The resolution of an experiment design is defined by the confounding of effects: a design of resolution R is one in which no p-factor effect is confounded with any other effect containing less than R-p factors (Box, et al., 1978). The 2^(3-1) design in Figure 4.2(b) can be at most a Resolution III design, denoted 2_III^(3-1).
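Generating such designs is mechanical; the sketch below constructs a 2^k full factorial in coded -1/+1 units and a half fraction whose k-th factor is aliased with the product of the others (defining relation I = x1 x2 ... xk), which is what yields the Resolution III structure just described.

    import itertools

    def full_factorial(k):
        # All 2^k combinations of the coded levels -1 and +1.
        return [list(run) for run in itertools.product((-1, 1), repeat=k)]

    def half_fraction(k):
        # 2^(k-1) runs: the k-th factor is set to the product of the first
        # k-1 factors, aliasing it with their interaction.
        runs = []
        for run in full_factorial(k - 1):
            xk = 1
            for level in run:
                xk *= level
            runs.append(run + [xk])
        return runs

    print(len(full_factorial(3)))   # 8 runs: the 2^3 design of Figure 4.2(a)
    print(len(half_fraction(3)))    # 4 runs: the 2^(3-1) design of Figure 4.2(b)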
Often the 2^k and 2^(k-p) designs are used for identifying or screening important
factors when the number of factors is too large to evaluate higher order effects. In these
cases the sparsity of effects principle (Montgomery, 1997) is invoked, in which the system
is assumed to be dominated by the main effects and their low-order interactions. Based on
this principle, two level fractional factorial designs can be used to "screen" the factors to
determine which have the largest effect. The sparsity of effects principle may not always
hold true, however. As noted in (Hunter and Naylor, 1970), every design provides
confounded estimates (quadratic and cubic effects, if present, will confound the estimates
of the mean and main effects respectively when a two level factorial design is used). "Any
phenomenon omitted from a fitted model will confound certain estimated parameters in the
model regardless of the design used. Good fractional factorial designs are carefully
arranged so that estimates of the effects thought to be important are confounded by effects
thought to be unimportant."
One specific family of fractional factorial designs widely used for screening is that of the two-level Plackett-Burman designs. These resolution III designs are constructed to study k = n-1 factors in n = 4m design points. Plackett-Burman designs in which n is a power of two are called geometric designs and are identical to 2^(k-p) fractional factorials. If n is strictly a multiple of four, the Plackett-Burman designs are referred to as non-geometric designs and have very messy alias structures. Their use in practical applications is problematic, particularly if the design is saturated (i.e., the number of factors is exactly n-1). If interactions are negligible, however, these designs allow unbiased estimation of all
main effects, and require only one more design point than the number of factors; they also
give the smallest possible variance (Box, et al., 1978). Myers and Montgomery (1995)
present a more complete discussion of factorial designs and aliasing of effects, and
minimum variance and minimum size designs will be discussed in Section 4.4.1.2. For a
more complete discussion of factorial designs, confounding, and design resolution, see
(Box, et al., 1978).
Central Composite and Box-Behnken Designs: For estimating quadratic effects, three-level full factorial or fractional factorial designs (3^k or 3^(k-p), respectively) can be used, but they often lead to an unmanageable number of design points; e.g., 7 factors at three levels requires 3^7 = 2187 experiments. The most common second-order designs,
configured to reduce the number of design points, are central composite and Box-Behnken
designs.
A central composite design (CCD) is a two-level (2^k or 2^(k-p)) factorial design, augmented by n0 center points and two "star" points positioned at ±α for each factor. This design, shown for three factors in Figure 4.2(c), consists of 2^(k-p) + 2k + n0 total design points (142 + n0 points for 7 factors) to estimate 2k + k(k-1)/2 + 1 coefficients. It should be noted that for values of α other than 1, each factor is evaluated at five levels. Setting α = 1 locates the star points on the centers of the faces of the cube (for three factors), giving a face-centered central composite design (CCF).
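A central composite design is equally easy to assemble programmatically; in this sketch the axial distance α = 1.682 (the rotatable value 8^(1/4) for k = 3) and the four center points are illustrative choices.

    import itertools

    def central_composite(k, alpha=1.682, n_center=4):
        # Factorial portion: the 2^k cube points at coded levels -1/+1.
        cube = [list(p) for p in itertools.product((-1.0, 1.0), repeat=k)]
        # Star portion: two axial points at -alpha and +alpha on each axis.
        star = []
        for i in range(k):
            for sign in (-alpha, alpha):
                point = [0.0] * k
                point[i] = sign
                star.append(point)
        # Center portion: n_center replicated center points.
        center = [[0.0] * k for _ in range(n_center)]
        return cube + star + center

    print(len(central_composite(3)))   # 2^3 + 2*3 + 4 = 18 design points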
Often it is desirable to employ the smallest number of factor levels when creating an experimental design. One common, economical class of such designs is the Box-Behnken designs. These designs are formed by combining 2^k factorials with incomplete
block designs (see Box, et al., 1978). These designs do not contain points at the vertices
of the hypercube defined by the upper and lower limits for each factor, which is desirable if
these extreme points are expensive or impossible to test. More information about central
composite and Box-Behnken designs can be found in (Montgomery, 1997).
Orthogonal Arrays: The experiment designs used by Taguchi, called orthogonal
arrays, are most often simply fractional factorial designs in two or three levels (2^(k-p) and 3^(k-p) designs). These arrays are constructed to reduce the number of design points necessary to evaluate the required effects; the two-level L4, L12, and L16 arrays, for
example, allow 3, 11, and 15 factors/effects to be evaluated in 4, 12, and 16 design points
respectively. In many cases, these designs are identical to those given by Plackett and
Burman. Taguchi's three level L9, for example, was given by Plackett and Burman in
1946 (Lucas, 1991). The definition of orthogonality for these arrays and other experiment
designs is given in Section 4.4.1.2. An overview of Taguchi's approach to parameter
design, within which the orthogonal arrays are implemented, is given in Section 4.4.3.2.
4.4.1.2 Measures of Merit for Evaluating Experimental Designs
Selecting the appropriate design is essential for effective experimentation.
Experimenters must balance the desire to gain as much information as possible about the
response-factor relationships with the cost of experimentation and need for efficiency
(measured in number of runs). Several measures of merit are available for evaluating and
comparing experimental designs to ensure the appropriate experiment is designed.
Orthogonality, Rotatability, Minimum Variance, and Minimum Bias: Four desirable
characteristics of an experimental design, to facilitate efficient estimates of the parameters,
are orthogonality, rotatability, minimum variance, and minimum bias. A design is
orthogonal if for every pair of factors xi and xj the sum of the crossproducts for the N design points,

    Σ_{u=1}^{N} x_iu x_ju , [4.3]

is zero. For a first-order model, the estimates of all coefficients will have minimum variance if the design can be configured so that

    Σ_{u=1}^{N} x_iu^2 = N ; [4.4]

the predictions ŷ will then also have constant variance at a fixed distance from the center of the design, and the design will be rotatable.
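Both conditions are straightforward to check for any candidate design matrix in coded units; a brief sketch:

    def is_orthogonal(X, tol=1e-9):
        # Equation 4.3: for every pair of factor columns, the sum of
        # crossproducts over the N design points must be zero.
        N, k = len(X), len(X[0])
        return all(abs(sum(X[u][i] * X[u][j] for u in range(N))) < tol
                   for i in range(k) for j in range(i + 1, k))

    def satisfies_eq_4_4(X, tol=1e-9):
        # Equation 4.4: each column's sum of squares equals N.
        N = len(X)
        return all(abs(sum(row[i] ** 2 for row in X) - N) < tol
                   for i in range(len(X[0])))

    design = [[-1, -1], [-1, 1], [1, -1], [1, 1]]   # a 2^2 full factorial
    print(is_orthogonal(design), satisfies_eq_4_4(design))   # True True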
For second order modeling, Hunter (1985) suggests that orthogonality loses its
importance. "If the objective of the experimenter is to forecast a response at either present
or future settings of x , then an unbiased minimum variance estimate of the forecast y is
required. In the late 1950's Box and his coworkers demonstrated that rotatability ... and
the minimization of bias from higher order terms ... were the essential criteria for good
forecasting." A design is rotatable if n•Var[ŷ(x)]/σ^2 has the same value at any two
locations that are the same distance from the design center. The requirements for minimum
variance and minimum bias designs are beyond the scope of this work; more information
can be found in (Myers and Montgomery, 1995).
Unsaturated/Saturated and Supersaturated Designs: In many cases the primary
concern in the design of an experiment is its size, measured in number of runs. Most
experiment designs are unsaturated in that they contain at least two more design points than
the number of factors. An experiment design is said to be saturated if the number of design
points is equal to one more than the number of factor effects to be estimated. Saturated
fractional factorial designs allow unbiased estimation of all main effects (resolution III
designs) with the smallest possible variance and size (Box, et al., 1978). The most
common examples of saturated designs are the Plackett-Burman two level designs
discussed in the previous section (when total number of design points is a multiple of
four), and the related (or identical) orthogonal arrays of Taguchi.
For estimating second-order effects, small composite designs have been developed to reduce the number of design points necessary for a composite design. A small composite design is saturated if the number of design points is 2k + k(k-1)/2 + 1 (the number of coefficients to be estimated for a full quadratic model). Myers and Montgomery (1995) note that recent work has suggested that these designs may not be good choices in all applications; additional discussion on small composite designs can be found in (Box and Draper, 1987) and (Lucas, 1974). Finally, in a supersaturated design the number of design points is less than or equal to the number of factors. Additional aspects of supersaturated designs are found in (Draper and Lin, 1990a) and (Draper and Lin, 1990b).
It is most desirable to use unsaturated designs for predictive models, unless running
the necessary experiments is prohibitively expensive. When comparing experiments based
on the number of design points and the information obtained, the D-optimal and D-
efficiency statistics are often used.
D-Optimality and D-Efficiency: A design is said to be D-optimal if |X'X|/n^p is maximized, where X is the expanded design matrix, which has n rows (one for each design setting) and p columns (one column for each coefficient to be estimated plus one column for the overall mean). The D-efficiency statistic for comparing designs, given in Equation 4.5, compares a design against a D-optimal design, normalized by the size of the matrix in order to compare designs of different sizes.

    D-efficiency = ( |X'X|_design / |X'X|_D-optimal )^(1/t) [4.5]

Other statistics for comparing designs include G-efficiency, Q-efficiency, and A-optimality. The G-efficiency statistic is given by Equation 4.6, where d_max multiplied by 2/n is the maximum prediction variance over the experimental region (Lucas, 1974).

    G-efficiency = t / d_max [4.6]
An in-depth discussion of these statistics can be found in (Myers and Montgomery, 1995).
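Both the D-criterion and the D-efficiency ratio of Equation 4.5 reduce to determinant computations on the expanded design matrix; a sketch, using a 2^2 factorial expanded with an intercept column as the example:

    import numpy as np

    def d_criterion(X):
        # |X'X| / n^p, the quantity maximized by a D-optimal design; X is the
        # expanded design matrix with n rows and p estimated coefficients.
        n, p = X.shape
        return np.linalg.det(X.T @ X) / n ** p

    def d_efficiency(X_design, X_optimal, t):
        # Equation 4.5: ratio of determinants, taken to the 1/t power so
        # that designs of different sizes can be compared.
        num = np.linalg.det(X_design.T @ X_design)
        den = np.linalg.det(X_optimal.T @ X_optimal)
        return (num / den) ** (1.0 / t)

    X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], dtype=float)
    print(d_criterion(X))   # 1.0 -- orthogonal two-level designs score well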
Having completed this review of experimental design alternatives and measures of merit,
we can now turn to the issues of model choice and model fitting.
4.4.2 Model Choice and Model Fitting
After selecting an appropriate experimental design and performing the necessary
runs to gather experimental data, the next step involves choosing an approximating model
and fitting method. Many alternative models and methods exist, but for practicality only
the four techniques found to be most prevalent in the literature are reviewed here: response
surfaces, neural networks, inductive learning and kriging.
4.4.2.1 Response Surfaces
Given a response of interest, y, and a vector of independent factors x thought to
influence y, the relationship between y and x can be written as follows:
y = f(x) + ε , [4.7]
where ε represents random error which is assumed to be normally distributed with mean
zero and standard deviation σ. Since the true response surface function f(x) is usually
unknown, a response surface g(x) is created to approximate f(x). Predicted values are then obtained using ŷ = g(x).
The most widely used response surface approximating functions are simple low-
order polynomials. If little curvature appears to exist, the first-order polynomial given in
Equation 4.8 can be employed. If significant curvature exists, the second-order polynomial
in Equation 4.9, including all two-factor interactions, can be used.
    y = b0 + Σ_{i=1}^{k} b_i x_i [4.8]

    y = b0 + Σ_{i=1}^{k} b_i x_i + Σ_{i=1}^{k} b_ii x_i^2 + Σ Σ_{i<j} b_ij x_i x_j [4.9]
The parameters of the polynomials in Equations 4.8 and 4.9 are usually determined
using a least squares regression analysis to fit these response surface approximations to
existing data. These approximations are normally used for prediction within response
surface methodology. A more complete discussion of response surfaces and least squares
fitting can be found in (Myers and Montgomery, 1995). An overview of RSM is given in
Section 4.4.3.1.
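Fitting Equation 4.9 amounts to expanding each design point into its polynomial terms and solving a least-squares problem; the sketch below does so for two factors (the design points follow a small central composite pattern, and the responses are illustrative numbers only).

    import numpy as np

    def quadratic_basis(X):
        # Columns of Equation 4.9: intercept, linear, pure quadratic,
        # and two-factor interaction terms.
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(k)]
        cols += [X[:, i] ** 2 for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
        return np.column_stack(cols)

    X_runs = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0],
                       [1.41, 0], [-1.41, 0], [0, 1.41], [0, -1.41]])
    y_obs = np.array([3.1, 4.2, 5.0, 7.9, 4.0, 6.1, 2.7, 5.2, 3.3])

    # Least-squares estimates of b0, bi, bii, and bij in Equation 4.9.
    b, *_ = np.linalg.lstsq(quadratic_basis(X_runs), y_obs, rcond=None)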
4.4.2.2 Neural Networks
A neural network is composed of neurons (single-unit perceptrons) which are
multiple linear regression models with a nonlinear (typically sigmoidal) transformation on
y. This idea is illustrated in Figure 4.3(a), where the inputs to each neuron are denoted
{x1, x2, . . . , xn}, the regression coefficients are denoted by the weights, wi, and the
output, y, is given by

    y = 1 / (1 + e^(-η/T)) . [4.10]

As shown in Figure 4.3(a), η = Σ w_i x_i + β (where β is the "bias value" of a neuron), and
T is the slope parameter of the sigmoid defined by the user. A neural network is then
created by assembling the neurons into an architecture; the most prevalent is the multi-layer
feedforward architecture shown in Figure 4.3(b).
[Figure: (a) a single-unit perceptron with inputs x1 ... xn, weights w1 ... wn, η = Σ w_i x_i + β, and output y = 1 / (1 + e^(-η/T)); (b) a feedforward three-layer architecture of input units, a hidden layer, and output units mapping x to y]
Figure 4.3 Typical Neuron and Architecture
There are two main issues in building a neural network: 1) specifying the
architecture, and 2) training the network to perform well with reference to a training set.
"To a statistician, this is equivalent to (i) specifying a regression model, and (ii) estimating
the parameters of the model given a set of data." (Cheng and Titterington, 1994) If the
architecture is made large enough, a neural network can be a nearly universal approximator
(Rumelhart, et al., 1994). "Training" a neural net refers to determining the proper values
of all the weights (wi) in the architecture, and is accomplished most commonly through
backpropagation (Rumelhart, et al., 1994). First, a set of p training data points is
assembled {(x1,y1), (x2,y2), ..., (xp,yp)}. For a network with a single output y, the
performance of the network is then defined as
    E = Σ_p (ŷ_p - y_p)^2 , [4.11]

where ŷ_p is the output that results from the network given input x_p, and E is defined as the total error of the system. The weights are then adjusted in proportion to

    (∂E/∂y)(∂y/∂w_ij) . [4.12]
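The sketch below implements these relations for a single neuron; a real network adds hidden layers, and the two-point training set is an illustrative toy.

    import math, random

    def neuron(x, w, beta, T=1.0):
        # Equation 4.10: eta = sum(w_i x_i) + beta; y = 1 / (1 + exp(-eta/T)).
        eta = sum(wi * xi for wi, xi in zip(w, x)) + beta
        return 1.0 / (1.0 + math.exp(-eta / T))

    def train_step(data, w, beta, rate=0.5):
        # One backpropagation pass: adjust each weight in proportion to
        # dE/dy * dy/dw_i (Equations 4.11 and 4.12).
        for x, target in data:
            y = neuron(x, w, beta)
            dE_dy = 2.0 * (y - target)      # from E = sum of squared errors
            dy_deta = y * (1.0 - y)         # sigmoid derivative with T = 1
            for i in range(len(w)):
                w[i] -= rate * dE_dy * dy_deta * x[i]
            beta -= rate * dE_dy * dy_deta
        return w, beta

    data = [([0.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]   # toy training pairs
    w, beta = [random.uniform(-1, 1) for _ in range(2)], 0.0
    for _ in range(1000):
        w, beta = train_step(data, w, beta)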
Neural networks are best suited to approximate deterministic functions in regression-type
applications. Cheng and Titterington (1994) note that "In most applications of neural
networks that generate regression-like output, there is no explicit mention of randomness.
Instead, the aim is function approximation." Typical applications are speech recognition
and handwritten character recognition; very often the data is complex and of high
dimensionality. Networks with tens of thousands of parameters are not unheard of, and
the amount of training data is similar. Gathering the training data and determining the
model parameters is a process that can be very computationally expensive.
4.4.2.3 Inductive Learning
Rule induction is one of five main paradigms of machine learning that also include
neural networks, case-based learning, genetic algorithms, and analytic learning (Langley
and Simon, 1995). Of the five, inductive learning is most akin to regression and
metamodeling and is therefore the focus here. An inductive learning system induces rules
from examples, so the fundamental modeling constructs are condition-action rules. These
rules in essence partition the data into discrete categories, and can be combined into
decision trees for ease of interpretation. Many of the applications of inductive learning
have been in process control and diagnostic systems, and inductive learning approaches can
be used to automate the knowledge-acquisition process of building expert systems.
Excellent examples can be found in (Evans and Fisher, 1994), (Langley and Simon, 1995),
and (Leech, 1986).
The training data collected are in the form of {(x1,y1), (x2,y2), ..., (xn,yn)} where
each xi represents a vector of attribute values (such as processing parameters and
environmental conditions), and each yi represents a corresponding observed output value.
Although attributes and outputs can be real-valued, the method is better suited to discrete-
valued data. Real values must often be transformed into discrete representations (Evans
and Fisher, 1994). Once the data set has been collected, training algorithms build a
decision tree by selecting the "best" divisive attribute and then recursively calling the
resulting data subsets. Although trees can be built by selecting attributes randomly, a more
efficient approach is to select the attribute that minimizes the amount of information needed
for category membership. The mathematics of such an information-theoretic approach are
given in (Evans and Fisher, 1994).
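The heart of such an information-theoretic approach can be sketched compactly: compute the entropy of the class labels, then choose the attribute whose split leaves the least remaining information (the discrete-valued data set below is an illustrative toy).

    import math
    from collections import Counter

    def entropy(labels):
        # Bits of information needed to identify the class of an example.
        total = len(labels)
        return -sum((c / total) * math.log2(c / total)
                    for c in Counter(labels).values())

    def best_attribute(examples, labels, n_attributes):
        # Pick the attribute whose split minimizes the information still
        # needed for category membership (maximum information gain).
        def remaining_info(a):
            info = 0.0
            for value in set(x[a] for x in examples):
                subset = [lab for x, lab in zip(examples, labels)
                          if x[a] == value]
                info += (len(subset) / len(labels)) * entropy(subset)
            return info
        return min(range(n_attributes), key=remaining_info)

    X = [("hot", "dry"), ("hot", "wet"), ("cold", "dry"), ("cold", "wet")]
    y = ["fail", "fail", "pass", "pass"]
    print(best_attribute(X, y, 2))   # 0: the first attribute separates classes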
Although decision trees seem best-suited for applications with discrete input and
output values, there are also applications with continuous variables that have met with
greater success than standard statistical analysis. For example, Leech (1986) reports a
process-control application where, "Standard statistical analysis methods were employed
with limited success. Some of the data were non-numerical, the dependencies between
variables were not well understood, and it was necessary to simultaneously control several
characteristics of the final product while working within system constraints. The results of
the statistical analysis, a set of correlations for each output of interest, were difficult for
people responsible for the day-to-day operation to interpret and use."
4.4.2.4 Kriging
Since most computer analyses are deterministic and are therefore not subject to
measurement error, the usual measures of uncertainty derived from least-squares residuals
have no obvious statistical meaning (Sacks, et al., 1989b). Consequently, some
statisticians (e.g., Sacks, et al., 1989a; Welch, et al., 1992) have suggested modeling the
response as a combination of a polynomial model (usually linear) plus departure, i.e.,
Y(x) = \sum_{j=1}^{k} \beta_j f_j(x) + Z(x)   [4.13]
where Z(x) is the systematic departure from the assumed model. Z(x) represents the
realization of a stochastic process and is assumed to have mean zero and covariance, V,
given by
V(w,x) = \sigma^2 R(w,x)   [4.14]
between Z(w) and Z(x), where σ² is the process variance and R(w,x) is the correlation. The
covariance structure of Z relates to the smoothness of the approximating surface.
One method of analysis for such models is kriging (Matheron, 1963) which works
as follows (Sacks, et al., 1989b). Given a design S = {s_1, s_2, ..., s_n} and data y_S =
{y(s_1), ..., y(s_n)}', we are concerned with estimating y(x) at an untried x using the
linear (or polynomial) predictor

\hat{y}(x) = c'(x) y_S .   [4.15]
If we consider y(x) as random, y_S becomes Y_S = [Y(s_1), ..., Y(s_n)]', and we can compute
the mean squared error of the predictor averaged over the random process. Thus, the best
linear unbiased predictor (BLUP) is found by picking the n × 1 vector c(x) to minimize the
mean squared error (MSE) of \hat{y}(x) under the expectation operator E[·]:

MSE[\hat{y}(x)] = E\left[ (c'(x) Y_S - Y(x))^2 \right]   [4.16]

subject to the constraint for unbiasedness

E[c'(x) Y_S] = E[Y(x)] .   [4.17]
R(w,x) must first be specified in order to compute Equation 4.17 and should reflect the
characteristics of the computer code. For a smooth response, a covariance function with
some derivatives might be adequate, whereas an irregular response might call for a function
with no derivatives. Sacks, et al. (1989b) discuss some correlation functions which result
in a linear or cubic spline interpolation of the data (Mitchell, et al., 1988). Consequently,
kriging is extremely flexible due to the wide range of correlation functions R(w,x) which
may be chosen. Depending on the choice of the correlation function, kriging can either
"honor the data", providing an exact interpolation of the data, or "smooth the data",
providing a nonexact interpolation of the data (Cressie, 1988). More information about
kriging and the notion of modeling deterministic computer experiments as the realization of
a stochastic process can be found in (Cressie, 1989; Journel, 1986; Laslett, 1994;
Matheron, 1963; Sacks, et al., 1989b). Cressie (1988) deals specifically with predicting
noiseless versions of Z, as is the case for deterministic computer experiments. As a final
note, keep in mind that kriging and splines (nonparametric regression models) are not the
same. In fact, in several comparative studies, kriging has been seen to outperform splines
and never performs worse than them (Laslett, 1994).
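To make the predictor of Equation 4.15 concrete, the following minimal sketch assumes ordinary kriging with a constant regression term, a Gaussian correlation function R(w,x) = exp(-θ(w − x)²), and a fixed θ; a practical implementation would instead estimate θ and σ² by maximum likelihood:

# Minimal one-dimensional kriging sketch (assumed illustration).
import numpy as np

def kriging_predict(s, ys, x, theta=10.0):
    R = np.exp(-theta * (s[:, None] - s[None, :]) ** 2)  # correlations among sites
    r = np.exp(-theta * (s - x) ** 2)                    # correlations, sites to x
    ones = np.ones(len(s))
    Rinv = np.linalg.inv(R)
    beta = (ones @ Rinv @ ys) / (ones @ Rinv @ ones)     # generalized LS constant
    return beta + r @ Rinv @ (ys - beta * ones)          # BLUP; honors the data

s = np.array([0.0, 0.3, 0.6, 1.0])      # design sites
ys = np.sin(2 * np.pi * s)              # responses from a deterministic "code"
yhat = kriging_predict(s, ys, 0.45)     # prediction at an untried point

At any design site the predictor reproduces the observed value exactly; the choice of correlation function governs how it behaves between sites.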
4.4.3 Strategies for Experimentation and Metamodeling
Two widely used methods incorporating experimental design, model building, and
prediction are response surface methodology and Taguchi's robust design or parameter
design. In this section a brief overview of these two approaches is provided.
4.4.3.1 Response Surface Methodology
Different authors describe response surface methodology, or RSM, differently.
Myers, et al. (1989) define RSM as "a collection of tools in design or data analysis that
enhance the exploration of a region of design variables in one or more responses." Box
and Draper (1987) state that, "Response surface methodology comprises a group of
statistical techniques for empirical model building and model exploitation. By careful
design and analysis of experiments, it seeks to relate a response, or output variable to the
levels of a number of predictors, or input variables, that affect it." Finally, Biles (1984)
defines RSM as the "body of techniques by which one experimentally seeks an optimum
set of system conditions".
RSM thus encompasses the design of experiments (Section
4.4.1), response surface model building (Section 4.4.2.1), and 'model exploitation' for
exploring a factor space and seeking optimum factor settings. The general RSM approach
includes all or a subset of the following steps:
i) screening: when the number of factors is too large for a comprehensive exploration
and/or when experimentation is expensive, screening experiments are used to
reduce the set of factors to those that are most important to the response(s) being
investigated;
ii) first-order experimentation: when the starting point is far from an optimum (or in
general when knowledge about the space being investigated is sought), first-order
models and an approach such as steepest ascent are employed to "rapidly
and economically move to the vicinity of the optimum" (Montgomery and Evans,
1975);
iii) second-order experimentation: after the best solution using first-order methods is
obtained, a second-order model is fit in the region of the first-order solution to
evaluate curvature effects and attempt to improve the solution.
A more detailed description of RSM techniques and tools can be found in (Myers
and Montgomery, 1995) and a comprehensive review of RSM developments and
applications from 1966-1988 is given in (Myers, et al., 1989).
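As a minimal sketch of step (iii), a second-order polynomial can be fit by ordinary least squares over a face-centered central composite design; the response values below are invented purely for illustration:

# Minimal second-order response surface fit in two coded factors.
import numpy as np

def quadratic_basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial points
              [-1, 0], [1, 0], [0, -1], [0, 1],     # star points (alpha = 1)
              [0, 0]], dtype=float)                 # center point
y = np.array([5.2, 6.1, 5.8, 7.9, 5.0, 6.8, 5.3, 6.6, 6.0])  # illustrative data
beta, *_ = np.linalg.lstsq(quadratic_basis(X), y, rcond=None)
# beta holds the intercept, linear, interaction, and pure quadratic coefficients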
4.4.3.2 Taguchi's Robust Design
Genichi Taguchi developed an approach for industrial product design that is built on
the foundation of statistically designed experiments. Taguchi's robust design for quality
engineering includes three steps: system design, parameter design, and tolerance design
(Byrne and Taguchi, 1987). The key element of the approach is the parameter design step,
within which statistical experimentation is incorporated.
Rather than simply improving or optimizing a response value, the focus in
parameter design is to identify the factor settings (design variable settings) that minimize
variation in performance as well as adjust the mean performance to a desired target.
The factors included in experimentation comprise control factors and noise factors. Control
factors are those which can be set and held at specific values (and are denoted as design
variables throughout this dissertation), while noise factors are those which cannot be
controlled, e.g., temperature on a shop floor. The evaluation of mean performance and
performance variation is accomplished through "crossing" two orthogonal arrays (Section
4.4.1). Control factors are varied according to an inner array, or "control" array, and for
each run of the control array, the noise factors are varied according to an outer array, or
"noise" array. For each control factor experiment, a response value is obtained for each
noise factor design point, and the mean and variance of the response (measured across the
noise design points) can be calculated. The performance characteristic used by Taguchi is a
signal-to-noise (S/N) ratio defined in terms of the mean and variance of the response.
Several alternate S/N ratios are available based on whether lower, higher, or nominal
response values are desired; specific S/N ratio formulations can be found in (Ross, 1988).
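The computation just described can be sketched as follows (the response values are invented; the nominal-the-best S/N form shown is one of the alternatives catalogued in (Ross, 1988)):

# Crossed-array bookkeeping: rows are control (inner array) runs, columns are
# noise (outer array) conditions. Values are invented for illustration.
import numpy as np

y = np.array([[9.8, 10.3, 10.1, 9.9],
              [10.9, 12.2, 11.4, 10.5],
              [10.0, 10.1, 9.9, 10.0]])
mean = y.mean(axis=1)                   # mean response per control run
var = y.var(axis=1, ddof=1)             # variance across the noise array
sn = 10.0 * np.log10(mean ** 2 / var)   # nominal-the-best S/N ratio
best_run = int(np.argmax(sn))           # control settings least sensitive to noise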
The Taguchi approach does not explicitly include model building and optimization.
Analysis of experimental results is used to identify factor effects, plan additional
experiments, and to set factor values for improved performance. A comprehensive
discussion of the Taguchi approach is given in (Ross, 1988) and (Phadke, 1989). Taguchi
methods have been used extensively in engineering design and are often incorporated
within traditional RSM for efficient, effective, and robust design (see Myers and
Montgomery, 1995).
The Taguchi approach and RSM have been applied extensively in engineering
design. It is commonly accepted that the principles associated with the Taguchi approach
are both useful and very appropriate for industrial product design. Ramberg, et al. (1991)
suggest that "the loss function and the associated robust design philosophy provide fresh
insight into the process of optimizing or improving the simulation's performance." Two
aspects of the Taguchi approach are often criticized: the choice of experimental design
(orthogonal arrays, inner and outer) and the loss function (signal-to-noise ratio). It has
been argued and demonstrated that the use of a single experiment combining control and
noise factors is more efficient (Shoemaker, et al., 1991; Welch, et al., 1990). The
drawbacks of combining response mean and variance into a single loss function (signal-to-
noise ratio) are well-documented. Many authors advocate measuring the response directly
and separately tracking mean and variance (Chen, et al., 1995; Ramberg, et al., 1991;
Welch, et al., 1990). However, Shoemaker, et al. (1991) warn that "A potential drawback
of the response-model approach is that it depends more critically than the loss-model
approach on how well the model fits."
Given the wide acceptance of Taguchi robust design principles and the criticisms,
many advocate a combined Taguchi-RSM approach or simply using traditional RSM
techniques within the Taguchi framework (Lucas, 1994; Myers, et al., 1989; Myers and
Montgomery, 1995; Ramberg, et al., 1991). This issue is carried further in Section 4.5.4.
4.4.4 A Closer Look at Metamodeling for Deterministic Applications
Since engineering design commonly involves exercising deterministic computer
codes, the use of statistical experimentation in creating metamodels of these codes warrants
a closer look. Given a response of interest, y, and a vector of independent factors x
thought to influence y, the relationship between y and x (Equation 4.2) includes the
random error term ε. To apply least squares regression, the error values for each data point
are assumed to have identical and independent normal distributions (IID) with means of
zero and standard deviations of σ, or εi IID N(0,σ2). This scenario is shown in Figure
4.4(a). The least squares estimator (LSE) then minimizes the sum of the squared
differences between the actual data points and the values predicted by the model. It is
acceptable if no data point actually lies on the predicted model, because it is assumed that
the model "smoothes out" the random error.
Of course, it is likely that the regression model itself is only an approximation of the
true behavior between x and y, so that the final relationship is
y = g(x) + \epsilon_{bias} + \epsilon_{random}   [4.18]
where ε_bias represents the error of approximation. However, for deterministic computer
experiments as illustrated in Figure 4.4(b), ε_random has mean zero and variance zero, so after
model fitting we have the relationship
y = g(x) + \epsilon_{bias}   [4.19]
[Figure: panel (a), the non-deterministic case -- a second-order least squares fit y = g(x) smoothes across data with ε ~ N(0, σ²), minimizing L.S.E. = Σ(y_i − ŷ_i)²; panel (b), the deterministic case -- a spline fit y = g(x) passes through every point, with ε = 0 and Σ(y_i − ŷ_i)² = 0.]
Figure 4.4 Deterministic and Non-Deterministic Curve Fitting
The deterministic case of Equation 4.19 conflicts sharply with the methods of least squares
regression. First, unless ε_bias is IID N(0, σ²), the assumptions for statistical inference
using least squares regression are violated. Even further, because there is no random
error there is little justification for smoothing across data points; instead the model should
hit each point exactly and interpolate between them as shown in Figure 4.4(b). Finally,
most standard tests for model and parameter significance are based on computations using
ε_random (the mean squared error) and are therefore impossible to compute. These
observations are supported by literature in the statistics community; as Sacks, et al.
(1989b) carefully point out, because deterministic computer experiments lack random error:
• response surface model adequacy is determined solely by systematic bias,
• the usual measures of uncertainty derived from least-squares residuals have no
obvious statistical meaning (deterministic measures of uncertainty exist, e.g.,
max |ŷ(x) - y(x)| over x and a class of y's, but they may be very difficult to
compute), and
• the classical notions of experimental blocking, replication and randomization are
irrelevant.
Similarly, according to Welch and his co-authors (1990), current methods for the design
and analysis of physical experiments (for example, Box, et al., 1978; Box and Draper,
1987) are not ideal for complex, deterministic computer models. "In the presence of
systematic error rather than random error, statistical testing is inappropriate." (Welch, et
al., 1990) Finally, a discussion of how the model should interpolate the observations can
be found in (Sacks, et al., 1989a).
So where can these methods go wrong? Unfortunately it is very easy to
unthinkingly classify the ε_bias term from a deterministic model fit as ε_random and then
proceed with standard statistical testing. Several authors have reported statistical measures
such as the F-statistics and root MSE for verification of model adequacy, e.g., (Healy, et
al., 1975; Koch, et al., 1996; Unal, et al., 1994; Venter, et al., 1996; Welch, et al., 1990).
These measures have no statistical meaning since they assume the observations include an
error term which has a mean of zero and a non-zero standard deviation. Consequently, the
use of stepwise regression for polynomial model fitting is not appropriate since it utilizes F-
statistic values when adding/removing model parameters.
R-Squared (defined as the model sum of squares divided by the total sum of
squares and thus varying from 0 to 1) is really the only measure available for verifying model
adequacy for deterministic computer experiments, and often this measure is not sufficient;
a high R-Squared value can often be deceiving. Consequently, confirmation testing of
model validity through use of additional (different) data points becomes essential. Residual
plots may also be extremely helpful when verifying model adequacy for identifying trends
in data, examining outliers, etc.
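A minimal sketch of these two checks -- computing R-Squared for the fitted metamodel and confirming it on additional points -- follows; metamodel and code are placeholder callables, not names from this research:

# R-squared and confirmation testing for a deterministic metamodel.
import numpy as np

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def confirmation_errors(metamodel, code, x_new):
    """Relative errors at additional (different) points not used in fitting."""
    y_true = np.array([code(x) for x in x_new])
    y_pred = np.array([metamodel(x) for x in x_new])
    return np.abs(y_pred - y_true) / np.abs(y_true)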
Some researchers (e.g., Giunta, et al., 1994; Giunta, et al., 1996; Venter, et al.,
1996) have also employed metamodeling techniques such as RSM for modeling
deterministic computer experiments which contain numerical noise. This numerical noise is
used as a surrogate for random error, thus allowing the standard least-squares approach to
be applied. However, the assumption of equating numerical noise to random error is
questionable, and the appropriateness of their approach warrants further investigation.
Given the potential problems of applying least-squares regression to deterministic
applications, the trade-off then becomes one of appropriateness vs. practicality. If a
response surface is created to model data from a deterministic computer code using
experimental design and least-squares fitting, and if it provides very good agreement
between predicted and actual values (a situation that often occurs when the computer model
is "well-behaved"), then there is no practical reason to discard it. It should be used, albeit
with caution. However, it is important to be aware of the fundamental assumptions of the
statistical techniques employed, so that we avoid incorrect or misleading statements about
model significance.
How can a design engineer efficiently apply the metamodeling tools of Section 4.4
while avoiding the pitfalls described in this section? There are two ways to answer this
question: from the bottom up (tools -> applications) and from the top down (motives ->
tools). The bottom-up approach is presented in Sections 4.5.2.1 and 4.5.2.2 and the top-
down approach is described in Section 4.5.2.3.
4.5 APPLYING METAMODELING TO ENTERPRISE DESIGN
In this section the task of transforming and integrating models, introduced in
Section 3.4.6, is explored in greater detail. For this discussion it is assumed that at least
one of the questions of model efficiency, integration, or robustness has been answered in the
affirmative so that the process of metamodeling is to be employed. (The scenarios for
answering “yes” to these questions are described in Section 4.2 and Section 4.3.) This
section is thus focused on the lower portion of Figure 4.5. (Note the dashed line added
from the robustness question to the task of building robustness metamodels; this is
discussed in Section 4.5.4.)
[Figure: flowchart for Task 4. Given a selected modeling option, if neither efficiency/integration nor robustness is an issue, the existing model is used as is. Otherwise the flow proceeds: screen for significant factors; select a metamodeling technique; build a mean response metamodel (ŷ); build a robustness metamodel (σ̂) if needed; and verify and validate the resulting models. Design variables, noise factors, fixed parameters, and the available metamodeling techniques feed these tasks.]
Figure 4.5 Task 4: Transforming and Integrating Models
In Section 4.5.1 the process of screening for significant factors is described, and in
Section 4.5.2 guidelines and recommendations are presented for selecting metamodeling
techniques based on the review in Section 4.4. In Section 4.5.3 a process for building
mean response metamodels is outlined, and in Section 4.5.4 a similar process is presented
for building robustness metamodels. Finally in Section 4.5.5 issues for metamodel
verification and validation are discussed.
4.5.1 Screening for Significant Factors
Screening is an explicit element of RSM as described in Section 4.4.3.1 but can
also be applied independently or as a preface to any other metamodeling technique.
Screening is employed when the number of factors is too large for a comprehensive
exploration and/or when experimentation is expensive, so its goal is to reduce a large set of
factors to a smaller set of those that are most important to the response(s) being
investigated. As such, screening is a cornerstone of efficient metamodel building.
As noted in Section 4.4.1, experimental designs often employed for screening
include two-level fractional factorial and Plackett-Burman designs. These designs allow a
large number of factors to be investigated in relatively few runs; the trade-off
is that these designs are commonly of low resolution, so that only the main (linear) effects
can be estimated. Analysis of variance is employed to compute the “contribution” of each
factor to the overall variation in response, and if the response includes random or
experimental error then these contributions are compared to the magnitude of the random
error. Using statistical testing, only the factors that are statistically significant are retained
for subsequent model building. If the response does not include random error then
statistical testing cannot be performed, but the factor contributions can still be compared to
one another and, using engineering judgment, the least contributors can perhaps be dropped
from subsequent model building.
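For a two-level orthogonal design in coded -1/+1 units, these factor contributions can be sketched as follows (a deliberately simplified, main-effects-only computation):

# Percent contribution of each factor's main effect to the response variation.
import numpy as np

def contributions(X, y):
    n, k = X.shape
    effects = np.array([y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
                        for j in range(k)])
    ss = n / 4.0 * effects ** 2       # sum of squares of a two-level main effect
    return 100.0 * ss / ss.sum()      # interactions and error are ignored here

# With random error present, each contribution would instead be tested against
# the error mean square; without it, the smallest contributors are dropped by
# engineering judgment.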
4.5.2 Selecting a Metamodeling Technique
As shown in Figure 4.1, selecting a metamodeling technique requires selecting an
experimental design, a form for the model, and a model-fitting
technique. In the subsections to follow these selections are addressed in turn. In Section
4.5.2.1 a bottom-up approach is presented for selecting model choice and model fitting
alternatives, and in Section 4.5.2.2 an evaluation of experimental designs is given. Finally
in Section 4.5.2.3 a set of initial recommendations of modeling techniques is generated.
4.5.2.1 Evaluation of Model Choice and Model Fitting Alternatives
In this section a bottom-up approach to selecting metamodeling techniques is
presented -- that of selecting model choice and model fitting alternatives. In Section 4.4.2
four metamodeling techniques are described; here some brief guidelines are given for their
evaluation.
Response Surfaces: This technique is primarily intended for applications with
random error, though applying it to deterministic data is not necessarily incorrect. Building
higher-order models with greater than ten factors becomes difficult. (Regression model
building itself is tractable for fifteen to twenty factors, but actually obtaining the data is the
limiting factor.) It is the most well-established metamodeling technique, and is probably
the easiest to use.
Neural Networks: This nonlinear regression approach is best suited for
deterministic (non-noisy) applications. A neural network can act as a nearly universal
approximator, and so is able to handle highly nonlinear or extremely large (~10,000 parameter) problems,
as long as enough data points are provided and enough computer time is allocated to fit the
model. Neural networks therefore lend themselves to large problems, so that (Cheng and
Titterington, 1994), "Now the procedure is to toss the data directly into the NN software,
use tens of thousands of parameters in the fit, let the workstation run 2-3 weeks grinding
away doing the gradient descent, and voilà, out comes the result."
Inductive Learning: This modeling technique is most appropriate where the input
and output factors are primarily discrete-valued or can be grouped into categories. The
predictive model, in the form of condition/action rules or a decision tree, may perhaps lack
the mathematical insight desired for engineering design applications, but it is ideal for
applications such as process control and diagnosis.
Kriging: This interpolation method, created to handle deterministic data, is not based
on the same assumptions as standard regression analysis and appears to be better suited for
deterministic computer experiments, as long as the number of factors is small (fewer than
ten, according to Welch and coworkers (Welch, et al., 1992)). Kriging is extremely
flexible due to the wide scope of correlation functions R(w,x) which may be chosen
(Laslett, 1994). Depending on the choice of the correlation function, kriging can either
"honor the data", providing an exact interpolation of the data by means of a linear or cubic
spline, or "smooth the data", providing a nonexact interpolation of the data (Cressie,
1988). Additional advantages include (Welch, et al., 1992):
• it provides a basis, via the likelihood, for a stepwise algorithm to determine the
important factors;
• it is flexible, allowing nonlinear and interaction effects to emerge without
explicitly modeling such effects;
• the same data can be used for screening and building the predictor, so expensive
runs are efficiently used.
The complexity of the method, however, coupled with a lack of computer software
support, may make the learning curve of this technique prohibitive in the near term.
The second component of a bottom-up approach is choosing an experimental
design, which has more direct applicability to response surface methods but also applies to
the remaining metamodeling techniques. (The explorations of what designs are most
appropriate for these other techniques are, as yet, open research areas.) An evaluation of
experimental design alternatives is given in the next section.
4.5.2.2 Evaluation of Experimental Design Alternatives
There are many voices in the discussion of the relative merits of different
experimental designs, and it is therefore unlikely that all of them have been captured here.
However, broad trends appear, and in general the central composite designs have been
found to consistently perform and compare favorably. Other second order designs worth
investigating are the hexagonal (Montgomery and Evans, 1975) and Box-Behnken and
hybrid (Giovannitti-Jensen and Myers, 1989) designs. In the remainder of this section the
body of recommendations uncovered in the review of the literature is captured.
Lucas (1976) compares CCD, Box-Behnken, uniform shell, Hoke, Pesotchinsky,
and Box-Draper designs, using the D-efficiency and G-efficiency statistics. The
conclusion drawn from these comparisons is the following: "Taken together, these results
indicate that a composite design having a star point distance larger than the hypercube's star
point distance of 1.0 but less than the sphere's star point distance of √k will perform well
for symmetrical experimental regions, often arising in practice, whose shape is between a
hypercube and a sphere."
Montgomery and Evans (1975) compare six second-order designs: a) 3² factorial,
b) rotatable orthogonal CCD, c) rotatable uniform precision CCD, d) rotatable minimum
bias CCD, e) rotatable orthogonal hexagon, and f) rotatable uniform precision hexagon.
The criteria used for comparison are average response achievement and distance from true
optimum (over the response surface created from the designs). Overall, the 3² factorial
design did poorly, while the central composite designs provided a good average response
achievement, with the minimum-bias doing best. On average, the two hexagon designs
performed similarly to the central composite designs in their average response achievement,
while generally requiring fewer observations than the CCD.
Lucas (1974) compares symmetric and asymmetric composite and smallest
composite designs for different numbers of factors using the |X'X| criterion, and the D-
efficiency and G-efficiency statistics. Significant observations drawn in this study include
the following: "a finite composite design whose |X'X | value is very close to the |X'X |
value of the optimum composite design can be obtained conveniently"; "The Box-Draper
designs included for comparison are the best possible designs for the given number of
observations, but their D- and G-efficiencies are not as high as the best composite
designs." Lucas concludes that if the experimental region is a hypercube, composite
designs are near optimal. "They are difficult to beat in practice." In a more recent
publication, Lucas (1991) states: "Smallest composite designs have very low efficiencies.
I now virtually never recommend using such designs."
Giovannitti-Jensen and Myers (1989) discuss several first and second order
designs, observing that the performance of rotatable CCD and Box-Behnken designs are
nearly identical, and that "using α considerably smaller than the rotatable value for the CCD
results in poor prediction performance close to the design perimeter in the case of a
spherical region." However, "when the region of interest is strictly cuboidal rather than
spherical," a CCD with α = 1 should be used. Finally, they note that "hybrid designs
appear to be very promising".
In sum, while any particular design may be most appropriate for a given
application, the overall pervasiveness of central composite designs appears warranted.
With this review of experimental designs completed, a more high-level set of
recommendations for selecting metamodeling techniques can be developed. This is done in
the next section.
4.5.2.3 Initial Recommendations for Metamodeling Uses
In this section a top-down approach is elaborated for selecting a metamodeling
technique. The majority of metamodeling applications are built around the creation of low-
order polynomial metamodels using central composite designs and least squares regression.
The nearly universal popularity of this approach is due, at least in part, to the maturity and
well-established nature of response surface methodology, the ease and simplicity of the
method, and easily accessible software support tools. However, the RSM approach starts
to break down when there are a large (> 10) number of factors to be modeled, or when the
relationship to be modeled is highly nonlinear. And as is shown in Section 4.4.4, there are
also dangers of applying the RSM approach blindly to deterministic applications. The
alternative approaches to metamodeling described in Section 4.5.2.1 address these
limitations, each in its own way. Recommendations are summarized as follows:
• If a large number of factors must be modeled in a deterministic application, then
neural networks may be the best metamodeling choice despite their tendency to
be computationally expensive to create.
• If the underlying function to be modeled is deterministic and highly nonlinear in
a small (≤ 10) number of factors, then kriging may be the best choice despite its
statistical complexity.
• However, in deterministic applications that have a small number of factors and
are fairly well behaved, another option is presented for exploration: applying
the standard RSM approach, augmented with a Taguchi outer array.
This third recommendation, that of implementing a mixed RSM/Taguchi approach, is an
avenue that has been a focal point in this research and therefore is explored in detail in the
next section as an example of how mean response metamodels can be created.
4.5.3 Building Mean Response Metamodels
There are countless ways to build metamodels that predict the mean response of a
given modeling technique; in fact this is the primary definition of “metamodeling” itself
that is offered in Section 4.3. It is therefore clearly beyond the scope of this section to
illustrate the processes for implementing RSM, neural networks, inductive learning and
kriging; the comprehensive set of references given throughout Section 4.4 serves better to
perform this task.
Instead, in this section a description of the preferred method of implementing a
mixed RSM/Taguchi approach is given in some detail. The benefits of this approach are
that it overcomes the limitations of traditional RSM for deterministic applications, and it
also allows both a mean response metamodel and a robustness metamodel to be built from
the same experiment.
How are these problems with traditional RSM for deterministic applications
overcome? The fundamental problem with applying least-squares regression to
deterministic applications is the absence of ε_random in Equation 4.19. However, if some of
the input parameters for the computer code can be classified as noise factors, and if these
noise factors are varied across an outer array for each setting of the control factors, then in
essence a series of replications is generated that can be used to approximate ε_random. This
approximation can be justified if it is reasonable to assume that, were the experiments
performed on an actual physical system, the random error observed would have been
assumption is reasonable, then statistical testing of model and parameter significance can be
performed. This is accomplished by using a product array design as shown in Figure 4.6.
[Figure: a control (inner) array crossed with a noise (outer) array. Each control-array row i, run against every noise-array row j (noise settings z_1, z_2, ...), yields responses y_i1, y_i2, ..., from which a mean μ_y and variance σ_y² are computed for that row; response surface models μ̂_y = f(x) and σ̂_y² = f(x) are then fit over the control settings.]
Figure 4.6 Product Array Design for Creating Mean Response and Robustness Metamodels
A product array, shown in Figure 4.6, consists of an inner control array and an
outer noise array. Each array is a designed experiment, the inner array designed to dictate
the settings of the control factors, and the outer array designed for the noise factors. The
control and noise arrays can each be any experimental design; generally, however, the
control array is a CCD, and the noise array is a 2-level factorial design. For each row of
the control array (one set of values of the control factors), response values are generated for
each noise factor combination (each row of the noise array). For example, control array
row 1, run with noise array row 1, leads to the response value y_11 of Figure 4.6; control
row 1 run with noise row 2 leads to response value y_12, and so on. This experimentation
strategy then leads to multiple response values for each set of control factor settings, from
which a response mean and variance can be computed as shown in the figure. Given these
response mean and variance values, response surface models can be fit for the mean and
variance of each response as functions of the control factors. This idea of fitting a model to
the variance values is one option for building robustness metamodels, as is discussed in the
next section.
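A minimal sketch of this crossed experimentation strategy follows; the code function is an assumed stand-in for the actual computer analysis:

# Product array: run every control row against every noise row, then compute
# the per-row response mean and variance for subsequent model fitting.
import numpy as np

def product_array(control_rows, noise_rows, code):
    means, variances = [], []
    for x in control_rows:                       # inner (control) array
        ys = [code(x, z) for z in noise_rows]    # outer (noise) array runs
        means.append(np.mean(ys))
        variances.append(np.var(ys, ddof=1))
    return np.array(means), np.array(variances)

# Response surfaces for the mean and variance are then fit by least squares
# to these values as functions of the control factors only.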
4.5.4 Building Robustness Metamodels
The idea in building “robustness” metamodels is to capture mathematically the
variability of a given response with respect to changes in the values for its identified noise
factors. Two different scenarios arise for building robustness metamodels; the second
scenario gives rise to the dashed line added in Figure 3.17. Fundamentally, the
distinction is this: the selected modeling option either exists in equation form, or it exists in
some other form.
If the modeling option exists in equation form, the dashed-line path in Figure 3.17
can be pursued if desired, thus bypassing all of the screening and statistical experimentation
activities. An equation for response variability can be constructed using the Taylor
expansion -- taking the partial derivative of the model with respect to each noise factor, and
multiplying the square of each partial by the variance of its associated noise factor. This is
presented in more detail later in this section.
On the other hand, if the selected modeling option does not exist in equation form,
then statistical experimentation must be employed in order to build both mean and
robustness metamodels. Here again we have two options for robustness metamodel
creation: 1) “modeling the noise” and applying Taylor’s expansion approximations, or 2)
employing a product array approach. These two options are discussed next.
4.5.4.1 “Modeling the Noise” and Taylor’s Expansion
One approach for estimating and modeling the response variation caused by noise
variation is to treat the noise factors as additional control factors and construct a design
varying this entire set. A resulting response equation can then be postulated as a single,
formal model of the type
y = f(x, z) ,   [4.20]
where y is the estimated response and x and z represent the settings of the control and
noise variables. Based on the surface model, it is possible to estimate the mean of the
response using the statistical expected value function and to estimate the variability of the
response using the Taylor expansion (Phadke, 1989):
Mean of the response:   \mu_y = f(x, \mu_z)   [4.21]

Variance of the response:   \sigma_y^2 = \sum_{i=1}^{k} \left( \frac{\partial f}{\partial z_i} \right)^2 \sigma_{z_i}^2 ,   [4.22]

where μ represents the mean values, k is the number of noise factors in the response model,
and σ_{z_i} is the standard deviation associated with each noise factor.
The underlying assumption for this approach is that all noise factors vary
approximately normally. The mean of each noise factor (µz) is approximated using the
midpoint of the range over which that factor is expected to vary, and the variance is
determined by defining the expected range of each noise factor as ± 3σ (6σ total over each
range). Given these approximations, and the second order polynomial response models in
x and z, the robustness response surface models for mean and variance for each response
can be derived directly.
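Equations 4.21 and 4.22 can be sketched as follows; here finite differences stand in for the partial derivatives, and the expected range of each noise factor is taken as ±3σ (all names are assumptions for illustration):

# Taylor-expansion estimate of response mean and variance (Eqs. 4.21-4.22).
import numpy as np

def taylor_robustness(f, x, z_mean, z_range, h=1e-5):
    z_mean = np.asarray(z_mean, dtype=float)
    z_sigma = np.asarray(z_range, dtype=float) / 6.0  # +/- 3 sigma over each range
    mu_y = f(x, z_mean)                               # Eq. 4.21
    var_y = 0.0
    for i in range(len(z_mean)):
        dz = np.zeros_like(z_mean)
        dz[i] = h
        dfdz = (f(x, z_mean + dz) - f(x, z_mean - dz)) / (2.0 * h)
        var_y += dfdz ** 2 * z_sigma[i] ** 2          # Eq. 4.22 term for factor i
    return mu_y, var_y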
4.5.4.2 Product Array Approach
The second approach presented here for creating response surface equations for the
mean and variance of each response is the product array approach. As shown in Figure
4.6, values for both mean and variance are computed for each run of the control array, and
models can be fit using least-squares regression to these variance values just as easily as
fitting models to the mean values. (However, there is no statistical measure of error for
these variance models and therefore no statistical testing can be performed.) The variability
of the response is thus represented as a function of only the control factors.
The advantages of using the Taylor expansion for creating the response variance
models are that the experiment may require fewer runs overall, and if a second-order
response surface is used for model fitting then the equation for σ_y² is very easy to create.
However, the disadvantages of this approach are that it is based on the underlying
assumption that all noise factors are normally distributed, which may or may not be true,
and that σ_y² is based on the mean response approximation and thus is an approximation
of an approximation. In contrast, with the product array approach both μ_y and σ_y are
estimated directly from “real” data, and no assumptions about the noise distributions are
necessary. For these reasons the product array approach is the one implemented in this
research.
The final task in Figure 3.17 is that of metamodel verification and validation; this
task is discussed in the next section.
4.5.5 Metamodel Verification and Validation
The primary concern after building a metamodel to approximate either the mean
response or robustness behavior of a modeling technique is the accuracy to which the
metamodel represents the actual model’s behavior. A representative process for evaluating
a metamodel’s validity is as follows:
• Select a set of test cases that represent the range of potential input values to
which the metamodel may be subjected. These test cases should NOT be
chosen from the set of runs used for building the metamodel; ideally these
cases represent combinations of factor settings that have not yet been tested.
• Compute output values for these test cases using both the metamodel and the
original modeling scheme it is intended to approximate.
• Compare the output values from the metamodel to those from the original
modeling scheme. Perform a risk analysis on the differences in output values
by weighing the costs and probabilities of improving the metamodel’s accuracy
versus using the metamodel with a given margin of error. As a general rule of
thumb, in the early stages of a design process (conceptual design or preliminary
design), errors of approximation on the order of one or two percent are often
acceptable.
The above process is intended only as a general guide to metamodel validation; a more in-
depth discussion of model validity can be found in (Sargent, 1994).
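The process above can be sketched as follows; the function names are placeholders, and the tolerance would come from the risk analysis described in the last step:

# Confirmation of a metamodel against the original model on fresh test cases.
import numpy as np

def validate(metamodel, original_model, test_cases, tolerance=0.02):
    errors = []
    for x in test_cases:                  # cases NOT used to build the metamodel
        y_true = original_model(x)
        errors.append(abs(metamodel(x) - y_true) / abs(y_true))
    errors = np.array(errors)
    # one to two percent is often acceptable early in a design process
    return bool(errors.max() <= tolerance), errors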
4.6 PLACING THIS CHAPTER IN CONTEXT
The context of this chapter is illustrated in Figure 4.7. In this chapter a review of statistical
metamodeling techniques is presented in Section 4.4, and these techniques are applied to
the method for enterprise design to yield guidelines for metamodeling in Section 4.5.
In the next chapter the final elements of this decision-based approach to enterprise
design are developed -- those of establishing the context of integrating decisions and design
processes, and of developing a procedure for resolving interdependencies between
decisions along a design timeline.
[Figure: the method for enterprise design built up in layers -- the guidelines for metamodeling (developed in this chapter) join the categorization of system modeling, the revised task SP, the integration of models into decisions, the hybrid paradigm for decision support, and the decision-based approach to enterprise design (developed in previous chapters), while the definition of interdependence between decisions, the integration of decisions and design processes, and the timeline procedure remain to be developed in chapters to come.]
Figure 4.7 Pictorial Representation of Chapter 4 Context
CHAPTER 5
IMPLEMENTING ENTERPRISE DESIGN ALONG DESIGN TIMELINES
The enterprise design approach developed in the previous four chapters can be used
to integrate models from diverse enterprise domains into unified decision formulations.
Although these decision models have been called out as the pathway to integration in
Section 2.3.3, it has not yet been made explicit how these models relate to the integration of
design processes.
In this chapter an argument is presented for how this approach to enterprise design
can be used to integrate the processes of product design, manufacturing process design,
and organization design as called for in Research Question 2. In Section 5.2 the
relationship between decisions and design processes is made explicit through the notion of
design timelines, and the implications of design timelines for enterprise integration are
explored. In Section 5.3 a broad range of potential enterprise design applications is
presented, spanning across both engineering and management domains. These applications
highlight the particular nature of integration across design processes and establish the
potential of enterprise design to be a uniformly applicable design approach. Finally in
Section 5.4 a procedure is presented for resolving decision interdependencies through time;
this procedure becomes necessary when decisions are interdependent with others along the
same timeline and therefore cannot be made concurrent. (This type of interdependence is
captured by some of the examples in Section 5.3.)
5.1 WHAT IS PRESENTED IN THIS CHAPTER
[Figure: chapter roadmap. The method for enterprise design rests on the decision-based approach to enterprise design (Hypothesis 2) and the hybrid paradigm for decision support (Hypothesis 1), addressing research questions of unification (Q1.1), quantifiability (Q1.2), integration across domains (Q2.1), and integration through time (Q2.2). Existing methods, tools, and techniques include system modeling, statistical metamodeling, and the DSP Technique (DSPT palette, compromise DSP, DSIDES, RCEM); the contributions of this chapter are the design timeline concept (Section 5.2) and the timeline procedure (Section 5.4).]
In this chapter the decision-based approach to enterprise design is developed across
the dimension of time. In Section 5.2 the concept of design timelines is presented as a
vehicle for reconciling interdependencies between decisions through time. Examples are
cited throughout Section 5.3, and in Section 5.4 a procedure for resolving these
interdependencies is offered as a facet of the method for enterprise design.
5.2 DESIGN TIMELINES AND INTEGRATION
Harking back to the discussion in Section 1.1, recall that one of the motives for
developing an approach to enterprise design is to foster the integration of product design,
manufacturing process design, and organization design. Recall also that in Section 1.3.4
two different types of integration arise from adopting a decision-based perspective -- that of
integration across enterprise domains and that of integration through time. While the
concept and importance of integration across enterprise domains is relatively
straightforward, the need for integration through time may be less clear and is therefore the
focus of this section.
5.2.1 The Concept of Design Timelines
It is inarguable that most design processes progress somewhat sequentially through
time. This is an unavoidable consequence of designing complex systems -- designing such
systems takes time, and not everything can be done at once. Therefore the concept of a
design timeline arises, evident in both industry and academia and across varied design
domains. Although progress along a timeline may be measured in either calendar-based
time or event-based time, the examples that follow uniformly adhere to an event-based time
mindset. (Calendar-based time represents the steady progression of hours, months, and
years, whereas event-based time is flexible and elastic, measured in the occurrences of
significant events.)
Most prescriptive design processes both in academia and in industry imply design
timelines. An example of this is illustrated back in Figure 1.2 where generic design
processes are described for both organization design and product design; organization
design is depicted as the sequential development of goals, strategies, organization form and
organization culture (Hanna, 1988) whereas product design is described as a series of
transformations from need to function to behavior to embodiment to configuration to
instance (Dixon, et al., 1988). Similarly in an operations management context a
representative timeline for manufacturing process design is given in Figure 5.1.
[Figure: a sequential timeline -- macro process design, process studies, production procedures, facility studies, and design modifications, culminating in the final design of the manufacturing process.]
Figure 5.1 Manufacturing Process Design (Gaither, 1994)
Design timelines are also evident in industry, where policies and guidelines are
created to help formalize and standardize design processes. An example of such a
standardized design process is illustrated in Figure 5.2. This type of design process is
fairly common in some segments of the defense industry, where a structured process is
required in order to meet specific milestones set by the contracting government agency. In
Figure 5.2 these milestones occur after the generation of the system’s functional
requirements, its system implementation requirements, and its operational requirements.
[Figure: source documents feed mission and performance analyses, which generate functional requirements; implementation mapping and operations analysis trades generate system implementation requirements and operational requirements; functional, environmental, and operator/operational analyses together with hardware and software analysis trades then generate hardware and software specifications.]
Figure 5.2 System Development Process in the Defense Industry
What do design timelines imply for enterprise design? There are two important
points to take away from this discussion. The first point is that most often, prescriptive
design processes move from the general to the specific. The activities at the beginning of a
design process are most often at a higher level of abstraction, and each subsequent activity
generates increasing detail. This first point raises interesting issues for interdependencies
along a single design timeline such as are illustrated in Section 5.3.7. The second point to
note is that quite often, each design process in an enterprise is defined separately and
proceeds along its own design timeline. This has interesting implications for integration
across multiple design timelines, as is discussed in the next section.
5.2.2 Viewing an Enterprise from a Design Timeline Perspective
Recall that in Figure 1.7 a decision-based view of an enterprise is presented in
terms of a two-dimensional grid of interdependent decisions. Along one dimension
decisions are made across different design domains, and along the other dimension
decisions are made along the steady progression of time. This figure serves well to
illustrate the types of interdependencies between decisions, but it does not explicitly
address the interdependencies between design processes. Because the integration of the
processes of product design, manufacturing process design, and organization design is
called for in Research Question 2 (Section 1.4), it is important to conceptualize how design
processes can be interdependent. This is accomplished by viewing an enterprise from a
design timeline perspective, as is shown in Figure 5.3.
[Figure: five design timelines -- Product A, Product B, Manufacturing Process C, Manufacturing Process D, and Business Process E -- run left to right with the passage of time, each a stream of decisions along its design timeline.]
Figure 5.3 Design Timelines in an Enterprise
Figure 5.3 is not radically different from Figure 1.7; in each the various decisions
within an enterprise are represented across two dimensions, with time being a dimension
common to both. The difference is that in Figure 5.3 the decisions relevant to various
design processes within an enterprise are linked together into individual streams. This
representation is somewhat idealistic; in actuality there will be countless decisions in
each design process, some occurring concurrently and some causing iteration. These
details are left out of the figure because they obscure the main point -- that at any given point
in a design timeline, other relevant design processes may have already concluded, while
others may not even have begun. The final element missing from Figure 5.3, the fact that
some of the decisions are interdependent, is shown in Figure 5.4.
[Figure: the same design timelines as in Figure 5.3, now with dashed links connecting interdependent decisions that lie on different timelines and occur at different points in time.]
Figure 5.4 Interdependencies Between Design Timelines
In Figure 5.4 the complexities inherent in enterprise design are evident. The
method for enterprise design developed in Section 3.4 can be applied to any one of these
decisions along these design timelines, and quite possibly interdependencies will be
identified between it and other decisions made at other points in time in an altogether
different design process. (These links are shown by the dashed lines in Figure 5.4.) And
yet, it is a goal within this research to “integrate” different design processes. The
approach to integration taken in this research is introduced well in Section 3.2, but with the
concept of design timelines now established, the details of the approach can be explored in
more depth. This is done in the next section.
5.2.3 Implications of Design Timelines for Enterprise Integration
Because of the decision-based foundation of this research, integrating design
processes translates into the integration of decisions. This point is inherent in Figure 5.4.
However, what is also clear from the figure is that not all interdependent decisions occur at
a synchronized moment in time. This gives the options for integration, introduced in
Section 3.2, added significance.
The heretofore mentioned options for achieving integration are either to enforce
coordination or to promote empowerment. These options come into play when defining the
boundaries of a given decision as discussed in Section 3.4.5. As shown in Figure 3.14,
the options for dealing with an additional piece of input data when creating a decision
formulation are incorporation, transformation, or applying robustness. Integration through
enforcing coordination allows incorporation to be implemented, whereas promoting
empowerment offers the additional options of transformation and robustness. In this
section each of these options is explained in terms of design timelines, utilizing the context
established in Figure 5.4.
To explore these options for integration let us take a smaller portion of Figure 5.4 to
investigate in more detail. In Figure 5.5 portions of just two design processes are shown,
those of a product design process (denoted Product “A”) and of a manufacturing system
design process (arbitrarily denoted Manufacturing Process “C”). In the left half of the
figure a current decision along the manufacturing process timeline is seen to be
interdependent with a decision occurring later along the product design timeline. In this
configuration there is nothing that intrinsically prevents the decisions from being “forced” into
concurrency; the situation after such a shift is shown in the right half of the figure.
Coordination between design processes has been enforced, and the interdependent
decisions can be formulated and solved in a truly joint and concurrent fashion, perhaps by a
cross-functional team from both design efforts.
[Figure: at left, a decision on the Manufacturing Process C timeline is interdependent with a later decision on the Product A timeline; at right, after enforcing coordination, the timelines have been shifted so the two decisions occur concurrently.]
Figure 5.5 Integration Through Enforcing Coordination
Enforcing coordination is certainly an effective option for achieving integration, but
the added effort and control needed to align the two processes in time may not always be
advisable, especially if the joint decision grows to unmanageable size or complexity.
Furthermore, there are potential configurations among decisions that make enforcing
coordination literally impossible, so another avenue to integration is needed. Such a
configuration is illustrated in Figure 5.6.
A situation is illustrated in the left half of Figure 5.6 in which a product design
decision is interdependent with a manufacturing process design decision intended to occur
some time in the future. However, this manufacturing process decision is also
interdependent with another product design decision set to occur even later in time. In this
instance, no amount of “shifting” of the design processes will allow all interdependent
decisions to be executed concurrently. The approach to integration, then, of promoting
empowerment is detailed in the right half of the figure. When the time comes to make each
decision, the links between it and other decisions are recognized and incorporated into the
formulation, but no attempt is made to control the “external” design variables. This is
an important point to digest, because it brings a new twist to the idea of “integration”.
Integration between decisions is achieved by taking into account the effects of external
decisions and not by forcing the external decision to be executed within a joint formulation.
The interdependent links between decisions are addressed in a weaker form than by
enforcing coordination, but this weaker form is seen to be better than none at all. In this
case “integration” means awareness, modeling and robustness (transformation). Decisions
may not strictly be “interdependent” anymore.
[Figure: at left, a Product A decision is interdependent with a future Manufacturing Process C decision, which is itself interdependent with a still later Product A decision; at right, promoting empowerment, each decision is made at its own point in time with the external links modeled rather than controlled.]
Figure 5.6 Integration Through Promoting Empowerment
In terms of implementing this concept of promoting empowerment, if values for the
external variables have already been set in a prior decision then it is a straightforward case
to use their set values as input for analysis. The more interesting case is when the external
decision will occur at a later point in time. In this case perhaps transformations can
be identified that map or correlate the values of the current decision’s variables with the
values of the downstream decision. (This is more likely if the downstream decision is a
member of the same design timeline as the current decision.) Similarly, it is also possible
to identify potential ranges or probability distributions for the downstream decision’s
variables and then treat those variables as noise factors in the current formulation. This
method of achieving integration through time by promoting empowerment is set forth in
detail in Section 5.4, but first a range of examples of decision interdependence between
design timelines and through time will be presented, ideally to bring life and added meaning
to the preceding figures and to make the method of Section 5.4 more intuitive. These
examples are provided in the next section.
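Before turning to those examples, the noise-factor treatment just described can be sketched as follows; the names, ranges, and uniform sampling are all assumptions for illustration:

# Treat a downstream decision's variables z as noise: sample them from
# postulated ranges and judge the current settings x by mean and spread.
import numpy as np

def evaluate_under_downstream_noise(performance, x, z_ranges, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    lows, highs = np.asarray(z_ranges, dtype=float).T
    z_samples = rng.uniform(lows, highs, size=(n, len(lows)))
    ys = np.array([performance(x, z) for z in z_samples])
    return ys.mean(), ys.std(ddof=1)  # favor settings robust to downstream choices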
5.3 APPLICATIONS OF ENTERPRISE DESIGN ACROSS ENGINEERING AND MANAGEMENT
In this section a range of examples are presented to illustrate how enterprise design
could be applied across decisions in engineering and management. The purposes for such
a presentation are twofold and tie back to the two fundamental research questions
established in Section 1.1. The first purpose is to explore the boundaries of applicability
for this method of enterprise design. Can it be uniformly applied to the design of products,
manufacturing processes, and the organization itself, as called for in Research Question 1?
The second purpose is to examine the details of how integration is achieved across design
processes. Could this method of enterprise design succeed in integrating the design of
products, processes, and the organization itself into a common design timeline, as called
for in Research Question 2? The details of such integration, accrued through repeated
example, are distilled into a general method for handling decision interdependencies
through time in Section 5.4.
In the following sections each application is described in terms of a baseline
decision with its design variables and goals. Links are then identified that bridge to
external decisions; these interdependencies are embodied by common goals and by related
design variables. Finally, in each subsection the approaches for handling decision
integration are addressed.
5.3.1 Designing Organization Strategy
A central activity in strategic management is the act of defining the set of goals,
strategies and policies that embody both the vision of the organization and how it is to be
implemented. In this case let us consider a sales strategy that encompasses the selection
and development of a product line to target a given market. This could be a manufacturer
deciding to introduce a high-end or luxury line of products to complement its “economy”
line, or a regional manufacturer with local success deciding to expand into other regional or
national markets.
The design variables for these decisions would include the exact mix of products
selected for the product line, along with their different ranges of features, quality, and
price. Goals for such strategy development would be targets for increased growth or
profitability of the organization. Such goals could be mapped to the range of product
features and prices by conducting customer surveys and correlating the strength of their
preferences to different combinations of product features and price. However,
implementing such a strategy may often entail the redesign of products and the
reconfiguring of manufacturing processes, and its success is dependent on the eventual
results of these design efforts. And it is unlikely that these design efforts can be completed
within the same time frame as the setting of strategy.
Therefore it is quite possible that these strategy decisions are interdependent with
other decisions occurring later in various product design and manufacturing process design
timelines. The design variables for these downstream decisions would be the dimensions,
materials, and configurations of product components and assemblies, as well as the
sequences of manufacturing operations, process parameters, tooling and so on required for
each. These details will ultimately determine the actual measures of product cost,
functionality and quality. On the other hand, it would not be prudent to begin such design
efforts without a clear enunciation of the strategy guiding their development.
Thus, in this instance we have a decision (or set of decisions) that is interdependent
with other decisions that cannot be forced into concurrency. Enforcing coordination does
not appear to be an option; however, the avenue of promoting empowerment developed in
Section 5.4 could be pursued.
5.3.2 Designing Organization Structure
Another activity of high visibility in strategic management is the act of defining, or
redesigning, the structure of the organization. The design variables in these decisions
include the levels of hierarchy, the reporting relationships, and the spans of control in an
organization.
The goals for such an activity can encompass a desire to reduce costs and increase
efficiencies (to make the organization more “lean” and “mean”) by reducing headcount and
payroll. Additional goals may also be concerned with increasing the effectiveness of the
organization in an information-processing sense. Because the flows of information within
an organization are most often between local coworkers (span of control) and between
worker and supervisor (reporting relationship), it is important to ensure that the right
information gets to the right place at the right time, without causing information overload.
Of course, these design decisions are interdependent with other decisions across the
organization. For example, if layoffs are invoked then it is quite possible that company
morale would dip, thus potentially leading to decreased productivity per worker. This
would dampen the positive financial effects of reduced payroll and in the extreme could
overwhelm these cost savings, leading to an overall drop in profits. In this case it would
be important to understand the other drivers of employee morale such as financial
incentives, a reasonable workload, and an understanding of the company’s direction.
Other decisions in human resources would probably need to be coupled with these
restructuring activities.
Similarly from an effectiveness standpoint, the overall functioning of an
organization can be abstracted to the execution of business processes, which can be
represented as series of tasks, the people who perform them, and the resulting flows of
material and information. For an effective organization it is critical to have consistency
between the work to be done, the incentives that motivate the employees, and the
information flows implied by organization structure. Therefore, designing the organization
structure is interdependent with the design of its business processes (tasks and flows) and
the design of its incentives policies.
Can these decisions all be rolled into one large design effort? Perhaps so, but
probably not. The design and improvement of business processes is an iterative and
continuing effort that spans most facets of an organization, so even if coordination between
design efforts were achieved for a time, the ever-changing nature of business processes
would erode its effects. Instead, integration would be best achieved by
designing an organization structure that is flexible and robust to changes in the tasks
and information processing requirements of business processes.
5.3.3 Outsourcing and Core Competencies
With the ever-increasing speed by which market forces are changing, and with the
escalating standards for product quality, customer service, and competitive pricing, many
organizations are exploring the common theme of identifying “core competencies.” The
idea is to identify and strengthen those unique functions within an organization that have
the most market value, whether they be product design, manufacturing quality, or
responsive customer service. These functions are strengthened by focusing greater
attention on their improvement, while “outsourcing”, or contracting out, other activities that
can be performed more efficiently by outside firms without deteriorating the organization’s
strategic position.
Some outsourcing decisions are relatively self-contained, such as the contracting
out of cafeteria (food service) and janitorial services. These
decisions can be pursued within relatively well-defined boundaries of financial metrics and
a few performance measures, such as the quality of work and the impact on internal
workers’ activities. However, other outsourcing decisions rapidly become very complex.
For example, consider the decision of whether or not to manufacture a family of
components in-house. A representative example of this is a computer manufacturer
considering whether to fabricate DRAM chips internally or buy them from external
suppliers. The goals for such a decision are the overall cost, quality, lead times, and
strategic position that would result from each design alternative.
First let us consider a purely “make vs. buy” decision. Whether or not to
manufacture a family of components really determines a portion of the overall scheduling of
the shop floor, and it is this schedule as a whole that determines the costs, throughputs,
and lead times of the production process. Outsourcing is thus interdependent with
production scheduling.
However, an equally likely scenario is one in which the manufacture of this family of
components is recognized to be a potential core competency. In this case the decision
grows from a binary yes/no to a wide range of possibilities: perhaps the manufacturing
process itself can be redesigned, and perhaps the family of components can be redesigned
as well. All of these variables contribute to the overall cost, quality, lead times, and
strategic position of the organization. Thus outsourcing decisions, in addition to being
interdependent with production scheduling decisions, can also easily be interdependent
with manufacturing process design and product design as well.
In the extreme, it cannot be expected that all of these varied decisions will be gathered into one
concurrent formulation. Instead, integration can be achieved by understanding the effects
of each decision's variables on the common enterprise goals -- cost, lead time,
and so on. Each decision maker can then formulate the local decision to be robust to the
external decision variables that will vary over time.
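As a minimal numerical illustration (all figures invented), consider how the preferred make-vs-buy alternative can flip as an external variable such as annual demand moves across its plausible range:

    # Hypothetical make-vs-buy comparison; the external variable (annual
    # demand) is swept over a range rather than fixed at a point estimate.
    FIXED_MAKE = 400_000    # tooling and overhead for in-house production, $/yr
    UNIT_MAKE = 12.0        # marginal cost per unit made in-house, $
    UNIT_BUY = 18.0         # supplier price per unit, $

    for demand in (20_000, 40_000, 60_000, 80_000, 100_000):
        make = FIXED_MAKE + UNIT_MAKE * demand
        buy = UNIT_BUY * demand
        choice = "make" if make < buy else "buy"
        print(f"demand {demand:>7}: make ${make:>9,.0f}  buy ${buy:>9,.0f}  -> {choice}")

    # If the preference flips within the plausible range, the decision is
    # sensitive to the external variable, and a robust formulation (or one
    # that defers commitment) is warranted.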
5.3.4 Business Process Reengineering
Consider the reengineering of an order fulfillment process (the handling and
tracking of customer orders, invoices, and billing). The design variables for this effort are
the tasks and task sequences of this business process; the goal is to increase process
efficiency in terms of costs and duration. However, there are other variables that drive
process efficiency. The actual mix of orders received (large or small customer, size of each
order) will affect the task durations and perhaps the best task sequence. This stream of
orders is driven by marketing efforts and also by the types of products that are offered.
And in the larger scheme of things, the value of process efficiency is open to
question. True efficiencies are measured in terms of the number of orders actually shipped,
and this throughput will only increase if the order fulfillment process is the bottleneck. In
actuality, the rate of orders shipped is most likely dependent on production capacity, so that
increasing the efficiency of order fulfillment may only cause the orders to wait in longer
queues before production, or it may leave the workers with more idle time. In either case a
gain in process efficiency is translated into neither lowered costs nor increased sales.
A system-level perspective must be taken when undertaking such process
reengineering efforts. The effects of both the local variables and the external decision
variables (of marketing, product design, and production capacity for example) on the
efficiency goals must be understood; and if indeed the local variables are significant drivers
then the business process can be reengineered with this broader perspective in mind.
Recommendations can also be generated for marketing policies, product design features,
and production capacity levels; regardless, perhaps the best policy is to design an order
fulfillment process that will be robust to changes in these variables that will doubtless occur
over time.
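The bottleneck argument above can be captured in a few lines: the number of orders shipped is limited by the slower of the two stages, so improving order fulfillment beyond the production rate buys nothing. The rates below are invented.

    # Toy two-stage serial system: orders pass through order fulfillment
    # and then production; throughput is set by the slower stage.
    def shipped_rate(fulfillment_rate, production_rate):
        return min(fulfillment_rate, production_rate)

    production = 100.0    # orders per day, assumed fixed by capacity
    for fulfillment in (80.0, 100.0, 120.0, 200.0):
        print(f"fulfillment {fulfillment:6.1f}/day -> "
              f"shipped {shipped_rate(fulfillment, production):6.1f}/day")

    # Above 100/day the external variable (production capacity) caps the
    # enterprise-level goal; further local efficiency gains only lengthen
    # queues or create idle time.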
5.3.5 Manufacturing Facility Location
Choosing a location for a manufacturing facility is perhaps the most high-level
decision pursued within manufacturing process design, perhaps extending into the domain
of organization design as well. Facility location decisions have extremely long-term
implications for an organization, ideally spanning multiple decades. The design
variables include where to locate the facilities (factories, distribution centers, et cetera), the
functionalities assigned to each facility (manufacturing strengths, capacities and
capabilities), and the share of given markets that they are intended to cover. Goals for
these decisions are to meet specific targets for sales volume, percentage growth, increased
profits, and so on.
However, the actual success of a given facility depends on its particular realization:
the details of manufacturing processes, the amount of skilled labor and the actual mix of
skills available, the layout of the facility, and so forth. The facility should also be flexible enough to
adapt to changes in product designs, as it is ideally intended to span multiple product life
cycles. All of these variables affect the above goals. Therefore, manufacturing facility
locations are interdependent with other decisions across product design and organization
design that will be made at later points in time.
The span of design decisions that are technically interdependent with manufacturing
facility location is immense and reaches well beyond the time frames established for the
majority of design efforts. It is not conceivable to draw all of these decisions into a single
joint formulation; instead integration must be pursued by fostering empowerment and
attempting to promote robust solutions to reduce the strength of the interdependencies.
Forecasting is a well-established activity that fits within this idea of promoting
empowerment -- predictions are constructed for the long-term effects of the current
decision, and these predictions are, in effect, estimates of the outcomes of the downstream
interdependent decisions. Armed with these predictions a decision formulation can then be
pursued in order to achieve a solution that is as robust as possible to the uncertainties of the
future.
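One simple way to be as robust as possible to the uncertainties of the future is a minimax-regret comparison over forecast scenarios, sketched below with an invented payoff table:

    # Hypothetical profits ($M/yr) of candidate sites under three
    # demand-forecast scenarios; all values are illustrative only.
    profits = {
        "Site A": {"low": 2.0, "medium": 5.0, "high": 7.0},
        "Site B": {"low": 3.5, "medium": 4.5, "high": 5.0},
        "Site C": {"low": 1.0, "medium": 4.0, "high": 9.0},
    }
    scenarios = ("low", "medium", "high")
    best = {s: max(p[s] for p in profits.values()) for s in scenarios}

    # Minimax regret: prefer the site whose worst-case shortfall against
    # the best achievable profit is smallest across the scenarios.
    for site, p in profits.items():
        worst_regret = max(best[s] - p[s] for s in scenarios)
        print(f"{site}: worst-case regret ${worst_regret:.1f}M")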
5.3.6 Manufacturing Process Redesign
There are many facets to manufacturing process redesign; common goals are
improving process quality, increasing process capacity, reducing costs, shortening lead
times, and so on. Often these redesign efforts involve the question of whether to purchase
new technology, whether it be automated equipment, material transporters, or
information technology; thus different alternatives are generated for cost/benefit analysis.
Similarly, traditional process modeling techniques such as simulation and queuing
theory are often used to predict the impact of these technology variables as well as a wealth
of additional processing parameters under control of the design team. However, for the
goals of the redesign efforts to be realized, the shop floor must operate as a holistic entity.
Thus the skill levels and training of the work force come into play for determining task
durations and probabilities of rework. In addition, details of the given product designs
released to the floor will test the actual capabilities of the equipment to a greater or lesser
degree, thereby affecting the ultimate achievement of cost and quality goals.
Once again we see that design decisions within a given domain (in this case the
manufacturing process design domain) are interdependent with decisions across other
enterprise domains. These interdependent decisions are likely to be made at different points
in time, but in this case it may be possible to enforce their coordination if the potential
benefits warrant it. However, problem size will still be a factor; it is always an issue to
keep decisions from expanding to an unmanageable size. Therefore the alternative of
promoting empowerment in Section 5.4 still has appeal.
5.3.7 Integrating Cost Estimates into Product Design
A critical industry need is understanding the enterprise-wide effects of early-stage
product design decisions, fostering the ability to make trade-offs between product
performance, product cost, and time to market while the product is still in its formative
stages. While product design variables certainly drive the product cost and time to market,
these measures are in the end embodied by other enterprise activities -- manufacturing,
development, sales, marketing, support, and so on. Clearly, to make the best product
design decisions the integration of all these other enterprise activities is needed.
These enterprise issues are embodied well by the concepts of product cost and
quality. Main drivers for product cost are its development cost and schedule and
manufacturing cost and schedule, and the modeling and design of the manufacturing
processes and development processes are traditionally in the realms of manufacturing
process design and organization design. Thus we see interdependencies developing with
decisions occurring in other design timelines.
What makes integration difficult is the differing amount of input information
required for these external models. For example, in the early stages of product design one
requirement may call out a desired system cost. A legitimate analysis procedure is then a
bill-of-materials estimator that provides costs for each element and component of the
product. Clearly, however, this kind of detailed information is just not available in
design’s early stages. Based on what information actually is available, an appropriate
analysis procedure can be identified, but then this model may not support the type of output
information required.
This is a fundamental paradox in trying to address affordability issues in product
design: on the one hand, building a truly accurate cost model requires a complete and
detailed description of the product. On the other hand, by the time such information is
available most of the product cost is already determined and there is little opportunity for
cost improvement. This idea is illustrated in Figure 5.7, in the context of the design
timeline introduced in Figure 5.2.
There are two fundamentally different approaches, then, for integrating affordability
issues into product design decisions, as shown in Figure 5.7. The first approach
(traditional cost modeling) is to compute cost estimates from a bill of materials, which
means postponing affordability issues until enough product details are known. This is the
conventional way of doing things, and it is safe and justifiable, but by waiting for a bill of
materials it may miss the greatest leverage points for reducing cost. The second approach
(proposed cost modeling) is to compute cost estimates from the very start, using whatever
information is available (performance requirements, technical parameters, etc.). This
requires approximating the actual cost of the product because of incomplete information.
The key then lies in the method of approximation.
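One common form of such approximation is a parametric cost estimating relationship (CER) fit to past programs, in which cost scales with an early-stage technical parameter. The power-law form and coefficients below are invented for illustration.

    # Hypothetical cost estimating relationship: production cost as a
    # power law in an early-stage parameter known before detail design.
    def estimated_cost(aperture_cm, a=1.8, b=1.4):
        return a * aperture_cm ** b      # notional cost, $K per unit

    for aperture in (5.0, 10.0, 15.0):
        print(f"aperture {aperture:4.1f} cm -> estimated cost "
              f"${estimated_cost(aperture):6.1f}K")

    # The estimate is crude, but it is available at the point of greatest
    # leverage, before the bill of materials exists.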
Figure 5.7 Integrating Cost Estimates Into Product Design (along the design timeline from mission analysis through operations analysis, implementation mapping, and detail design, traditional cost modeling produces a cost estimate from the bill of materials late in the process, while the desired cost modeling produces an approximate cost estimate from early information such as source documents, system requirements, functional requirements, system implementation requirements, and hardware and software specifications)
Again, this issue reduces to interdependencies between decisions occurring at
different times along different design timelines; a method for resolving these
interdependencies is now offered in the next section.
5.4 PROCEDURE FOR HANDLING DECISION INTERDEPENDENCIES ALONG A DESIGN TIMELINE
The two scenarios for decision interdependence through time, illustrated in Figure
5.5 and Figure 5.6, are supported by the examples presented in the previous sections. The
examples where enforcing coordination is a possibility are those of designing organization
structure (Section 5.3.2), of outsourcing and core competencies (Section 5.3.3), of
business process reengineering (Section 5.3.4), and of manufacturing process redesign
(Section 5.3.6). Similarly, the examples where enforcing coordination is not possible are
those of designing organization strategy (Section 5.3.1), of manufacturing facility location
(Section 5.3.5), and of integrating cost estimates into product design (Section 5.3.7).
An approach to implementing coordination is discussed briefly in Section 3.4.5, but
it is not really the focus of this research effort. Although it is an effective way to handle
interdependent decisions, it is not applicable in all situations and is at odds with the
philosophies of bounded rationality and empowerment that are at the heart of this
dissertation. Alternatively, the approach based on empowerment is indeed applicable in all
of the situations of decision interdependence described in Section 5.3. This procedure is
developed in detail next.
The procedure for resolving interdependencies between decisions through time is
illustrated in Figure 5.8. This procedure spans Tasks 2, 3, and 4 of the enterprise design
method developed in Section 3.4; these boundaries are illustrated at the top of the figure.
In terms of formulating decisions, timeline interdependencies materialize in a
handful of different ways. Recall from Section 3.4 that enterprise design begins with a
baseline decision described in terms of design variables, goals, and models for analysis.
Task 1 of the method entails generating an expanded list of potential goals for the decision,
and Tasks 2a and 2b entail generating and selecting modeling options for each potential
goal and identifying the additional input information needed for each.
Figure 5.8 Procedure for Resolving Timeline Interdependencies (a flowchart spanning Tasks 2, 3, and 4: if the selected modeling option is a function of the design variables, noise factors are identified and either coordination is enforced or the factors are incorporated; if it is not, the decision maker either looks to the past and builds a model using historical data, or specifies transformations or correlations and builds mean response and robustness metamodels)
Timeline interdependencies surface when the additional input information needed
for a modeling option “belongs” to other design decisions occurring later in time, and also
may surface if the selected modeling option is not a function of the design variables at all.
(This usually indicates that the modeling option was generated by examining the goal from
“the top down.”) These two situations are illustrated in Figure 5.9. In the left half of the
figure the modeling scheme takes as input the values for both a current decision and a
downstream decision. The model is indeed a “function of the design variables” in the
context of Figure 5.8. In this case enforcing coordination may be an option as shown in
the figure, or alternatively the downstream decision’s variables {y} can be classified as
noise factors and a robust solution constructed. (A third option, and one that is left for
continuing research, is the application of game theoretic protocols to investigate
cooperative, leader/follower, or competitive solutions.)
However, when the selected modeling option itself is not a function of the design
variables, depicted by the right hand side of Figure 5.9, there are two fundamental options
for dealing with this interdependency as shown in Figure 5.8: looking to the past, or
looking to the future. Looking to the past assumes that the current system being designed
is predominantly similar to systems designed previously. If historical data has been
collected on these past systems, then it becomes possible to (for example) build regression
models to correlate the values of any upstream design parameters with any downstream
product or process characteristics. For example, consider design for manufacture -- if a
low-cost system is to be designed, then historical data can be used to correlate the final
manufacturing cost of the system with the system characteristics set in the early stages of
design. The benefit of using historical data is that it is inherently validated; the
relationships fit by the regression model are factual. The disadvantage is that the causality
of the data may be suspect.
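A minimal version of "looking to the past" is an ordinary least-squares fit correlating an early-stage design parameter with a downstream outcome; the historical records below are invented.

    # Historical records: (early design parameter, final mfg. cost in $K).
    history = [(1.0, 12.1), (1.5, 14.8), (2.0, 18.2), (2.5, 21.1), (3.0, 24.3)]

    n = len(history)
    sx = sum(x for x, _ in history)
    sy = sum(y for _, y in history)
    sxx = sum(x * x for x, _ in history)
    sxy = sum(x * y for x, y in history)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    print(f"cost ~= {intercept:.2f} + {slope:.2f} * parameter")

    # The fit interpolates real outcomes, but it presumes the new system
    # resembles the old ones and that the correlation reflects causality.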
Figure 5.9 Timeline Interdependencies in a Decision Formulation Context (left: a model f(x,y) takes as input both the current decision's variables {x} and the downstream decision's variables {y}; right: a model f(y) is not a function of {x} at all, so a transformation or correlation between {x} and {y} must be found)
Looking to the future assumes that the current system will differ in significant ways
from systems designed previously, so that historical data is no longer enough. Models
must then be built to capture the issues of the downstream decision. One difficulty of this
approach is building models of tractable size; a balance must be struck between model
accuracy and ease of use. The other difficulty is relating the upstream design parameters to
the downstream models; often the upstream variables do not appear in the downstream
modeling schemes. In these cases it may be possible to construct transformations that map
values of the upstream decision’s variables to values of the downstream decision’s
variables. The downstream decision’s models can then be exercised over their range of
input values, and the resulting output can be correlated with the values of the upstream
decision’s variables. Several examples of this are illustrated throughout the case studies of
Chapter 6.
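In code, "looking to the future" amounts to composing an assumed transformation with the downstream model and then correlating the output with the upstream variable; both functions below are invented stand-ins.

    # T: maps the upstream decision's variable x to the downstream
    # decision's variable y (an assumed transformation).
    def transform(x):
        return 0.8 * x + 1.0

    # Downstream analysis model, e.g. a process cost model (notional form).
    def downstream_model(y):
        return 5.0 + 2.5 * y ** 1.2

    for x in (1.0, 2.0, 3.0, 4.0):
        y = transform(x)
        print(f"x = {x:.1f} -> y = {y:.2f} -> output = {downstream_model(y):.2f}")

    # The (x, output) pairs generated this way can themselves be fit with
    # a metamodel, making the downstream effect available when the
    # upstream decision is made.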
In sum, then, if historical data is available, if causality is likely, and if the new
system being designed is similar to those designed previously, then “looking to the past”
makes the most sense, and regression models are built to establish correlation. However,
if historical data is not available, or if the new system being designed differs significantly
from those designed previously, or if downstream models are readily available and easily
transformable, then “looking to the future” is the option of choice. With the offering of this
timeline procedure all of the elements of the decision-based approach to enterprise design
are in place. This approach will now be implemented in a suite of case studies in the next
chapter.
5.5 PLACING THIS CHAPTER IN CONTEXT
The context of this chapter is shown in Figure 5.10. The final element of the overall philosophy,
that of defining the integration of decisions and design processes, is offered in Section
5.2.3. Similarly, the final element of the method for enterprise design, the procedure for
resolving interdependencies between decisions through time, is given in Section 5.4.
Therefore at this point the decision-based approach to enterprise design is complete. It can
now be tested on a case study; this is done throughout the next chapter.
Figure 5.10 Pictorial Representation of Chapter 5 Context (developed in this chapter: the timeline procedure, the definition of interdependence between decisions, and the integration of decisions and design processes; developed in previous chapters: the revised Task SP, guidelines for metamodeling, the categorization of system modeling, the integration of models into decisions, the hybrid paradigm for decision support, and the philosophy of empowerment and bounded rationality, all within the decision-based approach to enterprise design)
CHAPTER 6
CASE STUDY: DESIGN OF A FORWARD-LOOKING INFRARED RADAR SYSTEM
In this chapter a case study suite is presented based upon the design of a Forward-
Looking Infrared Radar (FLIR) system. The “suite” is constructed by applying the
enterprise design approach to the case and first considering only its product design aspects.
This baseline is then sequentially expanded to include manufacturing process design issues
in the second phase and then organization design issues as well in the third phase. This
third phase of the case study suite provides a detailed example of how the integration of
product design, manufacturing process design, and organization design can be achieved.
This case study suite serves as an in-depth application of the enterprise design
method laid out throughout Section 3.4 and is used to give concrete illustrations of
applying the metamodeling guidelines of Section 4.5 and the timeline procedure of Section
5.4. The system models identified and integrated also follow the categorization established
in Section 2.4.1. The overall structure of the case study suite is given in Section 6.1, and
then in Section 6.2 the basics of FLIR system design are described. The three phases of
the case study suite are given in Sections 6.3, 6.4, and 6.5, and a critical evaluation of the
entire case study suite is given in Section 6.6.
6.1 STRATEGY AND OVERVIEW OF CASE STUDY SUITE
In this chapter an example of implementing enterprise design is described in detail,
thus illustrating how the integration of product design, manufacturing process design, and
organization design can be accomplished. This case study suite is centered around the
design of a Forward-Looking Infrared Radar (FLIR) system, and as such it is an example
of applying enterprise design from the product design domain. At this point it is important
to reflect back on a key set of concepts which define the context, or mindset, for this case
study suite.
• The input for the enterprise design method is a given local design decision to be
made, as illustrated in Figure 3.10. Therefore, this case study suite is focused
on a local design decision and then expanded to include additional issues across
the enterprise.
• Designing an entire enterprise results from the cumulative effects of local
decision makers, as discussed in Section 3.2. Therefore, this case study suite
does not result in a completely redesigned enterprise. Such an effort is well
beyond the scope of any one dissertation.
• The central goal in this research, that of integrating product design,
manufacturing process design, and organization design, is achieved by
integrating design issues across these domains into a single, unified decision
formulation.
• Although this case study is based on actual engineering practice and on real
engineering data, it has not been developed to meet the needs of a specific
design effort or customer. Therefore its true substance is embodied less by the
actual numbers generated than by its persuasive power of establishing how a
decision-based approach to enterprise design can indeed be implemented.
Returning to the discussion of research priorities given in Section 1.5.2, the
primary focus in this dissertation is in the area of overlap between the processes of product
design, manufacturing process design, and organization design. Therefore the progression
of the three phases of the case study suite, denoted Phase A, Phase B, and Phase C,
proceeds as illustrated in Figure 6.1. The path pursued in this case study suite begins in
the realm of product design (Phase A) for a baseline, then extends to the overlap between
product design and manufacturing process design (Phase B). The final step along this path
takes us into the area where issues from all three design processes intersect (Phase C).
Notice, however, that the realm of product design is never truly exited.
Figure 6.1 Strategy for the Three Phases of the Case Study Suite (within the set of all decisions made across an enterprise, Phase A lies inside product design, Phase B in the overlap of product design and manufacturing process design, and Phase C in the overlap of product design, manufacturing process design, and organization design)
This progression (from Phase A to B to C) of the case study suite is also used to
support the verification and validation of the enterprise design approach. Phase A is
employed for approach verification: the solutions that are generated with the approach are
checked to ensure that they are reasonable according to the judgment of a group of
experienced FLIR system designers. Phase B and Phase C are used to support the
validation of the enterprise design approach, from the perspective of answering whether the
results are useful, and whether they are generated at a reasonable cost.
Why is a FLIR system design example chosen for this case study? In truth, there
are a theoretically limitless number of examples with which to illustrate (and test) the
application of the approach for enterprise design. Any of the examples in Section 5.3 are
legitimate candidates. (The FLIR design example fits this mold, illustrating an application
of the scenario developed in Section 5.3.7.) In selecting the example for the case study
suite three primary criteria are employed. Firstly, the example must support the testing of
research hypotheses. It must illustrate the applicability and mathematical rigor of the
enterprise design approach, and it must illustrate the integration of product design,
manufacturing process design, and organization design issues. Secondly, the example
should embody a real engineering problem driven by industrial interest. Such interest
fosters the provision of actual data, modeling techniques, and engineering experience to
make the example as lifelike as possible, and it also enhances the potential for the example
achieving industrial implementation. Thirdly and finally, the example must satisfy the
assumptions on which the enterprise design approach is grounded. (These assumptions are
discussed in Section 6.6.1.) The example must have system modeling options available for
quantifying its enterprise-wide impacts, the design alternatives must be able to be codified
mathematically, and the decision formulation, once constructed, must be of tractable size.
(In essence, these assumptions move us more into the area of parametric design, where
decision alternatives are represented by specific numeric values for design variables, and
away from configuration design, where design alternatives may be ill-defined, discrete-
valued, or even non-numeric.) The FLIR system design example is chosen because it
indeed satisfies these three criteria.
What class of engineering problems does the FLIR example represent? Is this class
of problems interesting or useful? Applying the criteria of industrial interest to the FLIR
example results in a significant narrowing in problem scope. Because there is a keen
industrial interest in modeling and assessing the cost and schedule impacts of system
design decisions, the manufacturing process design issues integrated into the formulation in
Phase B are those of production costs, and similarly, the organization design issue
integrated into the formulation in Phase C is that of design process duration. Thus in this
instance the FLIR example represents a concurrent engineering problem, albeit with a
significant twist. As discussed in Section 5.3.7, production cost impacts and design
process duration impacts are commonly estimated only well after system design has
concluded, but through enterprise design a measure of concurrent engineering is achieved
without requiring true concurrency. (Instead, the procedure of resolving decision
interdependencies along a design timeline is invoked.) The value of this class of problems
often appears during contract talks for a design project, where terms are negotiated with the
customer for levels of product performance, the per-unit cost of each system to be
delivered, and the length of the contract schedule. The FLIR example developed in this
case study suite is therefore not a complete formulation, but it is instead used to
demonstrate the general behaviors of this problem class.
In searching the realm of available system modeling techniques for FLIR design in
industry, the scope of the FLIR example is narrowed even further. Production cost models
are identified for the FLIR afocal optics subsystem and the focal plane array detector, and a
design process model is made available for estimating the duration of the focal plane array.
In fairness, however, there is a wealth of avenues in which the FLIR example could, and
perhaps should, be expanded. Are the afocal optics cost and focal plane array detector cost
the best measures for system cost? May not other measures be more appropriate? Is the
design of the focal plane array the most important driver for design process duration?
Might not the design of other subsystems be more important? And on an even larger scale,
there is reason to question the selection of production cost impact and design process
duration impact as the most important enterprise-wide effects of this decision. Why not
assembly costs or maintenance costs? What about the potential effects for manufacturing
outsourcing, for sales forecasts, or for competitive strategy? These are all valid questions,
and in fact the design decision could indeed be expanded in these directions. Because the
emphasis in this chapter is on illustrating the enterprise design approach and on testing its
hypotheses, the FLIR example is found to be adequate as is. In further work, however,
such expansion of the example should surely receive consideration. Before turning to the
development of the case study suite, it is important to establish the background of FLIR
system design; such an introduction is provided in the next section.
6.2 INTRODUCTION TO FLIR SYSTEM DESIGN
Forward-Looking Infrared Radar (FLIR) systems are a specific example of a more
general category of electro-optical systems that include cameras, imaging systems, machine
vision systems, and televisions. The purpose of a FLIR system, or “thermal imaging” or
“infrared imaging” system, is simply to see in the dark. This is accomplished (Hopper,
1993) by the “detection and appropriate processing of the natural radiation emitted by all
material bodies.” This radiation is described by Planck’s radiation law and at 300 K the
peak of this radiation occurs at a wavelength of about 10 µm.
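The quoted peak follows directly from Wien's displacement law, as the short check below shows:

    # Wien's displacement law: lambda_max = b / T, with b = 2897.8 um*K.
    b = 2897.8      # Wien's displacement constant, um*K
    T = 300.0       # ambient temperature, K
    print(f"peak wavelength at {T:.0f} K: {b / T:.2f} um")   # about 9.66 um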
Thermal imaging applications appear across both military and commercial uses.
Military applications include reconnaissance, target acquisition, fire control and navigation,
whereas commercial applications span the categories of civil, environmental, industrial, and
medical uses. Common civil applications include law enforcement and border patrol uses,
while in industry thermal imaging systems can be used in manufacturing and non-
destructive testing. The FLIR system designs considered in this research are drawn from
the military applications of reconnaissance and target acquisition; an overall diagram of
FLIR system architecture and functionality is illustrated in Figure 6.2.
Figure 6.2 Overview of FLIR System (photons emitted by the target are gathered by the optics and focused onto the detector; the detector output is processed by the electronics to produce an image)
The purpose of a FLIR system is to produce an image of a target either for human
viewing or machine pattern recognition. The image is produced by gathering and focusing
emitted photons from the target onto either a single detector or detector grid (a focal plane
array). The detector is fabricated from a photosensitive material which generates a voltage
proportional to the amount of radiant energy received. These voltages are then transformed
through signal processing electronics to generate a real-time image of the scene.
In actuality, the FLIR system structure illustrated in Figure 6.2 is fairly idealized
and omits much detail and complexity. A more detailed, but still high-level,
description of a specific FLIR system is shown in Figure 6.3. The incoming photons enter
at the upper left of the figure; after the infrared afocal they are scanned by an
electromechanical scanner. This scanner is a small mirror actuated by a scan motor driven
by a digital signal processor. The scanned image reflects off an interlacer mirror and is
focused onto the detector by an infrared imager.
Figure 6.3 Schematic of CVTTS FLIR (Anderson, et al., 1996)
At any given instant in time a horizontal slice of the image is incident on a single
row of 240 detector elements; through precise control of the scan mirror angle, the vertical
dimension of the image is created by assembling these rows in real time onto a CRT
display. The detector uses mercury cadmium telluride detector elements in a photovoltaic
mode which must be maintained at a temperature of 77 K; this is accomplished by a 0.65
watt linear cooler. Each of the voltages from the 240 detector elements varies slightly
according to variations in materials, so analog processing is employed to compensate for
these irregularities. These analog signals are then multiplexed and converted to digital
format and stored column-wise in a reformatter memory. Data is then read from the
reformatter memory in a row-wise format for TV compatibility.
From both Figure 6.2 and Figure 6.3 a basic architecture of FLIR systems
emerges. The system can be partitioned into an optics subsystem, a detector subsystem,
and the supporting electronics (signal processing subsystem). In Figure 6.3 the optics
subsystem encompasses the afocal, scanner, interlacer and imager, while the detector
subsystem includes the detector and cooler. The electronics subsystem makes up the bulk
of the remaining functions, and a display subsystem can be broken out if desired. Each of
these subsystems is fairly complex and in practice each is normally addressed by separate
teams of designers.
Common design practice in industry is to first perform a system-level FLIR design,
in which specific customer requirements are transformed into specifications (lower-level
requirements) for each subsystem. For example, to achieve a given target for system
resolution or sensitivity, specifications may be generated that require a given level of
efficiency from the afocal optics subsystem and a basic configuration and sensitivity of the
detector subsystem. The subsystems are then designed to meet these requirements; lens
configurations are proposed and analyzed, detector materials and signal processing are
explored, and so on. Finally the individual components (lenses, integrated circuits and
circuit card assemblies, enclosures, software, etc.) are designed to meet the subsystem
requirements. Therefore a design timeline appears as discussed in Section 5.2.1.
What are the enterprise design implications for FLIR system design? As discussed
in Section 5.3.7, it would be very beneficial to be able to estimate system costs and
schedules in the early stages of a design effort, so that trade studies could be performed
trading off system performance with cost and schedule. However, estimating these costs
takes us out of the domain of product design. For example, computing the production cost
for a given product component commonly requires the specification of manufacturing
operations, tooling, processing parameters, and so on. Specifying these parameters is in
the domain of manufacturing process design. Similarly, computing the cost of the design
process itself requires specifying the sequence of design tasks, the task durations and
probabilities of iteration, the number of employees assigned to each task, their wage levels,
and so on. Specifying these parameters is in the domain of organization design. Thus we
see that component design decisions are interdependent with other detailed decisions in
manufacturing process design and project management. And what’s more, these
component design decisions are directly influenced by the upstream decisions made in
system design. This general scenario is illustrated in Figure 6.4.
Figure 6.4 System Design from a Timeline Perspective (system design, subsystem design, and component design proceed with the passage of time; the traditional approach links product design, manufacturing process design, and organization design only at component design, while the proposed approach radiates outward from system design into the other two domains)
Traditionally, computing the impact of product design decisions on manufacturing
process cost and design process cost is postponed until detailed (component) design.
Cross-functional teams are formed to design the components and their manufacturing
processing, and project managers keep the design teams on schedule. This traditional
approach is shown in Figure 6.4 by heavy two-way arrows. Applying the enterprise
design method to decisions in system design results in the dashed lines radiating outward
from system design into areas within manufacturing process design and organization
design. In effect, these system design decisions are interdependent with others that occur
at different points in time in different design domains. To make this structure clearer,
Figure 6.4 is fleshed out in the context of FLIR system design in Figure 6.5.
All of the decisions in Figure 6.5 are interdependent to some degree, and the cause
of this interdependence is the common goal of reducing overall system cost. Decisions
made in FLIR system design are represented by the circle in the upper left of Figure 6.5.
These high-level decisions have significant impact on the downstream component design
decisions as shown, and these component design decisions are interdependent with
manufacturing process design decisions and project management decisions. These
downstream decisions drive both the production costs and the design process duration of
the project, so taking these cost impacts into account during FLIR system design creates a
web of interdependent decisions. This web spans: system design decisions in which high-
level system requirements are set; component design decisions in which component shapes,
features and tolerances are set; manufacturing process design decisions in which the
sequences of operations are set; project planning decisions where the structure of design
tasks is set; and project management decisions where the actual task durations are
monitored and controlled.
Figure 6.5 Decision Interdependencies for FLIR System Design (with the passage of time, FLIR system design decisions influence downstream FLIR component design decisions, which are interdependent with manufacturing process design decisions that drive production costs and with project planning and project management decisions, structured by a standard design process, that drive design process duration; Phase A, Phase B, and Phase C successively widen this scope)
How can integration be achieved? In this situation these interdependent decisions
cannot be forced together into a synchronized moment in time, and we are left only with
the option of promoting empowerment as discussed in Section 5.2.3. “Integration” in this
context is therefore achieved by taking into account the effects of external decisions and not
by forcing the external decisions to be executed within a joint formulation at the same point
in time. The procedure for handling decision interdependencies along a design timeline will
be invoked as discussed in Section 5.4.
For each phase of the case study, the enterprise design method of Section 3.4 is
used to create and solve a compromise DSP formulation of a FLIR system design decision.
The set of design variables remains relatively constant for each decision formulation, but
through the progression from Phase A to Phase B to Phase C additional goals and
constraints are added to the formulation to capture its enterprise-wide effects. This
sequential augmentation is shown in Table 6.1. The design variables in each of the
formulations in Table 6.1 are system-level product design parameters, and in essence it is
the constancy of these design variables that binds the formulations together. The problem
statements for each phase of the case study therefore are:
Phase A: Find values for system-level FLIR variables that define a high-performance
system, both in terms of its noise equivalent temperature difference
(NETDmean) and the variability in this performance (NETDstdev).
Phase B: Recognize that the setting of these system-level FLIR variables has
significant impact on the production costs of the system. Model the
production cost impacts of these variables, and define alternate system
configurations that illustrate the trade-offs between product performance and
production cost.
Phase C: Recognize that the setting of these system-level FLIR variables also has
significant impact on the duration of the remaining design process for the
system. Model the design process duration impacts of these variables, and
define alternate system configurations that illustrate the trade-offs between
product performance, production cost, and design process duration.
Table 6.1 Sequential Augmentation of C-DSP Formulations

Phase A
    Given: Equations for noise equivalent temperature difference (NETDmean and NETDstdev)
    Find: Values for optics f number, optics transmission, detector element size, detector detectivity, and number of time delay and integration stages in the detector
    Satisfy: Bounds (on system variables); Constraints: 1. NETDmean; Goals (Product Performance): 1. NETDmean, 2. NETDstdev
    Minimize: 2-level deviation function

Phase B
    Given: Equations for noise equivalent temperature difference (NETDmean and NETDstdev), afocal optics production cost (LNSCST), and focal plane array production cost (FPACST)
    Find: The Phase A variables plus optics focal length
    Satisfy: Bounds (on system variables); Constraints: 1. NETDmean, 2. LNSCST, 3. FPACST; Goals (Product Performance): 1. NETDmean, 2. NETDstdev; (Production Cost): 3. LNSCST, 4. FPACST
    Minimize: 4-level deviation function

Phase C
    Given: The Phase B equations plus focal plane array design process duration (DPmean and DPstdev)
    Find: The same variables as Phase B
    Satisfy: Bounds (on system variables); Constraints: 1. NETDmean, 2. LNSCST, 3. FPACST, 4. DPmean; Goals (Product Performance): 1. NETDmean, 2. NETDstdev; (Production Cost): 3. LNSCST, 4. FPACST; (Design Process Duration): 5. DPmean, 6. DPstdev
    Minimize: 6-level deviation function
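A minimal sketch of how the multi-level (preemptive) deviation function in the last row of Table 6.1 ranks alternatives: deviations at a higher priority level dominate those below, so tuples of level deviations can simply be compared lexicographically. The achieved values and the NETDstdev target below are invented; the 0.05 °C NETDmean target is taken from the text.

    # Normalized overachievement of a smaller-is-better target.
    def dev(achieved, target):
        return max(0.0, (achieved - target) / target)

    # Two hypothetical Phase A alternatives under a 2-level deviation
    # function: NETDmean at priority level 1, NETDstdev at level 2.
    alt1 = (dev(0.060, 0.05), dev(0.010, 0.005))
    alt2 = (dev(0.070, 0.05), dev(0.002, 0.005))

    # Python's tuple comparison is lexicographic: level 1 is minimized
    # first, and level 2 breaks ties.
    print("preferred deviation vector:", min(alt1, alt2))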
It is through the defining of these alternate system configurations in Phase B and
Phase C that enterprise design is realized. As stated in Section 1.3.4, these decisions are
made by:
• Identifying and assessing the enterprise-wide effects of each local decision,
• Identifying and assessing the effects of other external decisions on the local
decision at hand, and
• Making each decision to satisfy as many of the enterprise-wide goals as
possible, while being as robust as possible to external decisions that are beyond
the decision-maker’s control.
Thus this FLIR system design example captures the issues both of integration
across design domains and integration through time. The first phase of the case study
suite, focused on only the product design concerns, is developed in the next section.
6.3 CASE STUDY PHASE A: ISSUES OF PRODUCT PERFORMANCE
There are many decisions inherent in the process of FLIR system design described
in the previous section. In this phase of the case study a representative decision is
identified and then re-formulated in terms of the decision support paradigm at the heart of
this research. In other words, a mathematical and multi-objective formulation of the
decision is created and solved. This transformation is accomplished by applying the
approach of enterprise design.
There are two primary purposes for developing this phase of the case study. The
first purpose is verification of the enterprise design approach; this is accomplished by
testing to ensure that the generated solutions are reasonable. This testing is accomplished
by comparing these solutions against the experiences and engineering judgment of a
community of FLIR system designers and managers in industry. The second purpose is to
develop a baseline for the case study phases to come, so that apples can be compared to
apples. Validation of the enterprise design approach is supported if “better” solutions are
found by integrating issues from manufacturing process design and organization design,
and this value is determined by comparison. In the subsections to follow Phase A of the
case study is developed by applying each of the five tasks of enterprise design in turn.
6.3.1 Expand Scope of Decision
(Based on Section 3.4.3)
In this section, enterprise design Task 1 is applied to FLIR system design. Because
this phase of the case study serves as a baseline for further comparisons, the scope of this
decision is not expanded. Instead, in this section the baseline decision is developed in more detail.
The decision selected for this case
study is that of specifying system-level
requirements to achieve given targets for
system performance. The performance goal selected for this phase of the case study is the
system’s Noise Equivalent Temperature Difference (NETD). Noise is inherent in any
photon-detection process; permeating every image are random or scattered photons that
create an underlying band of noise. NETD measures the system’s sensitivity with respect
to this noise. In broad terms, picture the FLIR system aimed at a blank wall of uniform
temperature. Then picture gradually increasing the temperature of a point source within that
wall. At some point, this temperature signal will rise in magnitude above the noise
threshold and become visible. NETD is a measure of the temperature difference at which
this occurs. Ideally system NETD should be as small as possible. Some high-performance
FLIR systems can achieve a NETD of 0.05 °C, so this value is selected as our goal target.
The effectiveness of FLIR systems degrades rapidly as NETD rises above 0.2 °C, so this is
used as a constraint value. The goals and constraints used for this baseline decision
formulation are shown in Table 6.2.
Table 6.2 Goals and Constraints for Phase A Decision

System Performance: Noise Equivalent Temperature Difference (°C)
    Constraint value: ≤ 0.2 °C
    Objective target: 0.05 °C
A range of design variables for computing system NETD is possible depending on
the modeling technique employed, but a fairly common set of variables includes the optics
aperture diameter, optics focal length, optics transmission, scan efficiency, number of
detector elements, detector element dimensions (height, width), detector detectivity, and the
spectral region for consideration (Hopper, 1993). At this stage, because a particular
modeling technique for NETD has not yet been selected, the set of design variables for this
decision is left in this vague state of definition; a more concrete set of design variables
will be defined in Tasks 2a and 2b in the next section. Again, because in this phase of the
case study the enterprise-wide effects of this decision are not considered, the remainder of
this task is skipped and no additional goals or constraints are generated. Thus the
abbreviated output from this task is the baseline decision roughly defined in terms of design
variables, goals, and constraints, and we are now ready to move to Tasks 2a and 2b.
These tasks are addressed in the next section.
6.3.2 Identify Modeling Techniques and Determine Input Needed for Analysis
(Based on Section 3.4.4)
In this section enterprise design Tasks 2a and 2b are applied to FLIR system
design, and this involves delving more deeply into the details of modeling and analyzing
FLIR performance. Because both the one
goal and the one constraint in this decision
formulation require the calculation of NETD,
these tasks can be executed jointly for both.
In this instance there are two commonly
applied and well-established options for
calculating system NETD. The first option is to compute NETD using equations found in
most IR texts, and the second option is to use a computer analysis code entitled FLIR92,
created by the Army Night Vision Laboratories and widely distributed free of charge.
These are the two modeling options generated for consideration.
How are these modeling alternatives evaluated? Because both of these options are
widely available, have negligible computing costs and are free of charge, the two primary
criteria for evaluating them are their accuracy and the amount of input information required.
Textbook equations require only a reduced number of system parameters, but their
accuracy can be questionable. On the other hand, FLIR92 is much more accurate (as
documented in Ratches, et al., 1975) but requires more data for input. Because accuracy is
much more important than the amount of input data required, FLIR92 is selected. (As a
point of interest, in previous iterations of this phase of the case study, decision
formulations were generated using these textbook equations and their predictive accuracy
was indeed unsatisfactory.)
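For reference, one simplified textbook form of NETD can be written in a few lines. Variants of this expression appear across IR texts; the noise bandwidth and the exitance derivative below are typical illustrative values, not FLIR92 inputs, and the result should be read only as an order-of-magnitude sketch.

    import math

    def netd_textbook(fnum, t0, dx_um, dy_um, dstar_e11, tdi,
                      bandwidth_hz=30_000.0, dM_dT=2.6e-4):
        # Simplified textbook NETD in degrees C. dM_dT is the derivative
        # of blackbody exitance with temperature in the 8-12 um band,
        # W/(cm^2 K); the value here is a commonly quoted figure.
        area_cm2 = (dx_um * 1e-4) * (dy_um * 1e-4)   # detector area, cm^2
        dstar = dstar_e11 * 1e11                     # cm Hz^0.5 / W
        dstar_eff = dstar * math.sqrt(tdi)           # TDI averaging gain
        return (4.0 * fnum ** 2 * math.sqrt(bandwidth_hz)
                / (math.sqrt(area_cm2) * t0 * dstar_eff * dM_dT))

    # Mid-range values from the feasible regions discussed in Section 6.3.3:
    print(f"NETD ~ {netd_textbook(2.25, 0.5, 25.0, 25.0, 1.1, 2):.3f} C")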
The next step of these tasks is to create the actual FLIR92 model to describe the
system designs under consideration in this design effort. It is during this process that the
set of potential design variables for this decision is determined concretely. FLIR92
supports nearly 100 possible parameters to define an IR system, but not all of them are
required to specify any particular system. In this instance the class of FLIR system designs
of interest can be described using only 44 of these parameters. A sample FLIR92 input
file containing these 44 parameters is given in Appendix A.1. The exclusion of the
remaining parameters embodies a set of assumptions invoked by experienced FLIR system
designers about the class of FLIR system designs being considered for this application, and
documenting these assumptions is beyond the scope of this discussion. Suffice it to say
that the input file given in Appendix A.1 results in the generation of system designs that are
reasonable for the application at hand.
With the creation of this FLIR92 model, Tasks 2a and 2b are in essence complete;
this FLIR92 model is satisfactory in terms of both efficiency and accuracy, and therefore
no new modeling techniques need to be created. The outputs from these tasks are the list of
input data (embodied by Appendix A.1) and the FLIR92 code itself. Task 3 can now be
addressed, and this is done in the next section.
6.3.3 Define Boundaries of Decision
(Based on Section 3.4.5)
In this section enterprise design Task 3 is applied to FLIR system design. The first
activity in executing this task is to gather and classify all of the input data required for
FLIR92 according to the classification scheme of Figure 3.16. Although 44 parameters are
used to define our system, experienced FLIR designers indicate that 28 of them are
assumed constant for the class of FLIR system designs being considered. Of the 16
remaining parameters, two are identified as noise factors, leaving 14 potential design
variables. All of these 14 can be set independently of each other, so none are classified as
dependent variables. These 16 design variables and noise factors are listed in Table 6.3.
Variables that define the optics
subsystem are the optics f number (FNUM),
the optics focal length (FL), and the optics
transmission (T0). In simple terms, f number
determines the “speed” of an optics system; a
lower f number is “faster” and therefore yields
increased performance. In terms of the classification scheme of Figure 3.16, f number is a
continuous design variable, and using the judgment of FLIR system designers its feasible
region is known to be between 1.5 and 3. (This knowledge is based on a wealth of
experience designing FLIR systems with similar requirements.) Focal length is a rough
measure of the overall magnification of the system; a higher focal length indicates less
magnification. It is also a continuous design variable and its feasible region is specified as
between 10 and 25 centimeters. Finally, optics transmission is a measure of the efficiency
of the optics system in terms of the fraction of exiting photons to incident photons. It is a
cumulative measure of the efficiencies of all of the lenses and mirrors in the optics system.
Perfect efficiency would yield an optics transmission value of 1, and zero is the worst
transmission value possible. Again relying on the experience of FLIR designers, the
feasible region for optics transmission is set between 0.3 and 0.7.
Variables that define the detector subsystem are the horizontal and vertical sizes of
each detector element (DX and DY), the detectivity of each detector element (DSTAR), and
the number of time delay and integration stages (TDI) in the focal plane array. These
detector elements are fairly small, and so the feasible region for both their horizontal and
vertical sizes is set from 20 micro-meters to 30 micro-meters.
[Inset: Task 3 flowchart. Each item of additional input data is gathered and classified by asking whether it is controlled by someone else and whether it is known now in the timeline: (1) incorporate; (2) incorporate or make robust; (3) transform or make robust; (4) make robust. Noise factors are incorporated accordingly, and the remaining items become additional design variables or fixed parameters.]
Table 6.3 NETD Design Variables, Noise Factors and Feasible Regions

DESIGN VARIABLES          DESCRIPTION                                      FEASIBLE REGION
Optics       1) FNUM      optics f number                                  continuous; 1.5 ≤ x ≤ 3
             2) FL        optics focal length, cm                          continuous; 10 ≤ x ≤ 25
             3) T0        optics transmission                              continuous; 0.3 ≤ x ≤ 0.7
Detector     4) DX        detector element horizontal size, µm             continuous; 20 ≤ x ≤ 30
             5) DY        detector element vertical size, µm               continuous; 20 ≤ x ≤ 30
             6) DSTAR     detector detectivity, 10^11 cm·Hz^(1/2)/W        continuous; 0.7 ≤ x ≤ 1.5
             7) TDI       number of Time Delay and Integration stages      discrete; values of 1, 2, or 4
                          (# of 240-detector rows in FPA)
Electronics  8) ORDHPF    order of high pass filter                        discrete; values of 1, 2, or 4
             9) CUTLPF    low pass filter 3 decibel cutoff, Hz             continuous; 10000 ≤ x ≤ 20000
             10) ORDLPF   order of low pass filter                         discrete; values of 1, 2, or 4
Display      11) CRTBRT   CRT brightness, milli-Lamberts                   continuous; 8 ≤ x ≤ 12
             12) CRTX     horizontal CRT spot size, milli-radians          continuous; 0.01 ≤ x ≤ 0.04
             13) CRTY     vertical CRT spot size, milli-radians            continuous; 0.01 ≤ x ≤ 0.04

NOISE FACTORS
             14) SCEFF    scan efficiency                                  continuous; 0.6 ≤ x ≤ 0.8
             15) SPCTN    spectral cut on, µm                              continuous; 7.65 ≤ x ≤ 8.65
             16) EYEINT   eye integration time, s                          continuous; 0.1 ≤ x ≤ 0.3
Detector detectivity is a measure that compares the detector’s responsivity relative to
the detector noise current. “Detectivity” and “responsivity” are both measures of the
sensitivity of each detector; the higher the detectivity value, the more sensitive the detector
and the higher performance for the FLIR system as a whole. Again using the judgment of
experienced FLIR designers, the feasible region for DSTAR is set from 0.7 to 1.5,
measured in 10^11 cm·Hz^(1/2)/W. Finally, the number of time delay and integration
stages is used to specify the configuration of the focal plane array grid. At a bare
minimum, focal plane arrays for systems of this type contain one row of 240 detectors.
Other feasible configurations, however, include 2 and 4 rows of 240 detectors. These
rows are called Time Delay and Integration (TDI) stages, and the more rows the higher the
performance of the detector. Therefore the feasible region for this discrete variable is
specified as the set of values {1, 2, 4}.
Variables that define the electronics subsystem are the order of the high pass filter
(ORDHPF), the order of the low pass filter (ORDLPF), and the 3-decibel low pass cutoff
frequency (CUTLPF). These variables all feed into the determination of the system’s
modulation transfer function, which is in essence a high-level description of the
subsystem’s signal processing characteristics. Recalling the basics of electronics system
design, each filter can either be designed in a first-order, second-order, or fourth-order
configuration, so the feasible regions for these discrete variables are the set {1, 2, 4}.
Again relying on the judgment of FLIR designers, the frequency range for the low pass
filter cutoff is set from 10,000 Hz to 20,000 Hz.
Variables that define the display subsystem are the brightness of the cathode ray
tube display (CRTBRT) and the horizontal and vertical spot sizes of the display (CRTX
and CRTY). The brightness of CRT displays, measured in milli-Lamberts, is known to
vary from 8 to 12, and the spot size, an indicator of the resolution of the CRT, is known to
vary from 0.01 to 0.04 milli-radians.
An astute reader will note that only 13 design variables are listed in Table 6.3, while
initially 14 potential design variables were identified earlier in this section. The rogue
variable is the efficiency of the scanner (SCEFF). This section begins with scan efficiency
classified as a potential design variable: it represents the efficiency of the scanner assembly
and is similar in nature to the optics transmission; it can take on values from 0 to 1 with 1
representing perfect efficiency. However, when the questions within Task 3 are executed it
is found that scan efficiency can not be controlled with any certainty by the FLIR system
designers; it will in actuality be determined by an external group of designers at some point
later in the design timeline, and these designers operate outside the authority of the FLIR
system designers for which this formulation is intended. Therefore scan efficiency falls
into category (4) of Task 3, and thus the approach selected is to make system NETD robust
to changes in scan efficiency; this is done by categorizing it as an additional noise factor.
So what begins as 14 design variables and 2 noise factors ends with 13 and 3, respectively.
Implicit in the preceding discussions is that the other 13 design variables are under the
control of the system designers and can be specified at the current point in the design
timeline, so they are incorporated into the formulation with no additional concerns.
The noise variables for this decision formulation are thus the scan efficiency
(SCEFF), spectral cut on (SPCTN), and eye integration time (EYEINT). The scan
efficiency is defined previously, and the spectral cut on represents the lower bound of the
infrared radiation that the system must detect. Recall from Section 6.2 that the peak of this
radiation occurs at roughly 10 micro-meters, and the FLIR system detects radiation in a
narrow spectrum around this peak. The lower bound required for this spectrum varies
depending on the particular environments in which the FLIR system is used, so in essence
this factor is uncontrollable. Based on previous experience it is known that the range of
potential environments corresponds to a spectral cut on somewhere in between 7.65 and
8.65 micro-meters. Finally, the eye integration time is a measure of the visual acuity of the
FLIR system operator. An eye integration time of 0.1 seconds represents the upper limit in
visual capabilities, while a time of 0.3 seconds represents the lower realm of typical
operator performance.
Thus the output for Task 3 is generated; all of the input data for the FLIR92 model
have been classified as fixed parameters, design variables, and noise factors. The design
variables and noise factors are listed in Table 6.3, and the remaining fixed parameters are
found in the sample FLIR input file in Appendix A.1.
6.3.4 Transform and Integrate Models
(Based on Section 3.4.6)
In this section enterprise design Task 4 is applied to FLIR system design. Because
FLIR92 runs execute in a matter of seconds, computational efficiency is not really an issue
here. Furthermore, because it exists on UNIX platforms with text file input and output, it
would be possible to link FLIR92 to DSIDES
directly, thus making integration not an issue.
However, in implementing this case study
practical limitations to computer access and
networking arise, and in parallel there is
considerable industrial interest in building
FLIR92 approximations. Therefore, in this instance sufficient reason is found to pursue
FLIR92 approximation and integration. In addition, because robustness is also needed in
this case, metamodeling is pursued for both the mean response and the variation of NETD.
The first step of building metamodels, of critical importance for efficient model
building, is to determine the sets of design variables and noise factors that actually have a
significant effect on the response. (It is a wasteful exercise to build predictive models as
functions of twenty design variables if in actuality only three of the variables affect the
response. In either case the accuracy of the model will be the same, but substantially
differing amounts of effort are required for model construction.) As discussed in Section
4.5.1 this determination of significance is accomplished through the activity of factor
screening.
[Inset: Task 4 flowchart. Starting from the selected modeling option, the design variables, noise factors, and fixed parameters are screened for significant factors; if efficiency, integration, or robustness is an issue, a metamodeling technique is selected and mean response and robustness metamodels are built, verified, and validated; otherwise the existing model is used as is.]
From Table 6.3 sixteen factors, 13 design variables and 3 noise factors, may have
significant effects on NETD. To determine which of these factors are the most significant a
two-level, 33-run experiment is designed, consisting of a 32-run, resolution III, 2^(16-11)
fractional factorial design and a center point. The design matrix is given in Appendix A.2.
Each “-1” and “+1” in the design matrix corresponds to setting the design variable or noise
factor to its lower bound or upper bound, respectively; these values are used to create 33
input files for FLIR92, one file for each run. (The FLIR92 input file in Appendix A.1 is
actually the file used for the first screening run.) The 33 output files from FLIR92 are
collated and the specific NETD values extracted; these NETD results from each run are
also given in Appendix A.2. To determine the significance of the relative contributions
from each of the 16 factors a Pareto plot is used; this plot is shown in Figure 6.6.
[Pareto plot rendered as a table: factors in decreasing order of contribution]

Term        Scaled Estimate
FNUM         0.17059375
T0          -0.1215937
DSTAR       -0.1119062
TDI         -0.092099
SPCTN        0.07915625
DX          -0.0579062
FL           0.03084375
CRTBRT       0.02828125
SCEFF       -0.0277812
EYEINT      -0.0257812
CUTLPF       0.02103125
CRTY         0.01809375
CRTX         0.01740625
ORDHPF      -0.0155955
ORDLPF       0.01276457
DY          -0.0127188
Figure 6.6 Pareto Plot of NETD Factor Significance
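For readers without access to JMP, the scaled estimates behind such a plot can be computed directly from the coded design matrix. The sketch below (in Python) is illustrative rather than the JMP procedure itself; the design matrix X and response vector y are random stand-ins for the actual screening data of Appendix A.2.

    import numpy as np

    # Stand-ins for the 32 factorial runs of the screening experiment:
    # X holds coded factor settings (-1/+1), y the NETD value parsed from
    # each FLIR92 output file.
    rng = np.random.default_rng(0)
    X = rng.choice([-1.0, 1.0], size=(32, 16))
    y = rng.normal(0.15, 0.05, size=32)

    factors = ["FNUM", "FL", "T0", "DX", "DY", "DSTAR", "TDI", "ORDHPF",
               "CUTLPF", "ORDLPF", "CRTBRT", "CRTX", "CRTY",
               "SCEFF", "SPCTN", "EYEINT"]

    # For a two-level orthogonal design the scaled estimate of each factor
    # is X'y / n, i.e. half the difference between the mean response at
    # the factor's high and low settings.
    estimates = X.T @ y / len(y)

    # Rank by magnitude and report each factor's fraction of the total
    # contribution, mirroring the bars and cumulative line of Figure 6.6.
    fractions = np.abs(estimates) / np.abs(estimates).sum()
    cumulative = 0.0
    for i in np.argsort(-np.abs(estimates)):
        cumulative += fractions[i]
        print(f"{factors[i]:7s} {estimates[i]: .5f}  "
              f"{fractions[i]:5.1%} (cum {cumulative:5.1%})")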
The Pareto plot in Figure 6.6 is generated using the JMP statistical software
package, taking as input the screening experiment and results as given in Appendix A.2.
The bars in the figure represent the relative contribution (fraction of total) of a given factor
for the given response. The factors are listed in order of contribution size, with the largest
contributor (longest bar) listed first. (In this case optics f number provides the largest
contribution, roughly 20%, of the NETD response). The line cascading down from the
first bar represents the cumulative contribution of effects, and thus sums to one by the time
the bottom of the figure is reached. What Figure 6.6 shows clearly is that not all of the 16
factors contribute equally to predicting system NETD, and therefore there is legitimate
reason to remove some of these factors from further metamodel building. The hard
question is what subset of the 16 factors should be selected. There are many options for
selecting this significant set of factors, but all of them are in essence rules of thumb. The
brute-force approach is to retain all factors and plow ahead in building large metamodels.
The benefit of this approach is that the predictive accuracy of these metamodels will be as
high as possible, with the cost of substantial computational effort. Another option is to
arbitrarily select a cumulative percentage (such as 80%, 90%, or 95%), look to the Pareto
plot, see where the cascading cumulative line intersects these vertical percentage markers,
and then draw a horizontal line back to the left from this intersection point. All of the
factors above this line are then included, and the factors below the line are dropped from
further consideration. For example if in Figure 6.6 the cumulative percentage of 80% is
selected, the top seven factors up to and including optics focal length (FL) would be
retained for further metamodel building, and the remaining nine would be dropped from
consideration.
Another valid option is to peruse down the Pareto plot and look for natural
groupings or break-points in the contribution bars. If there is a group of factors all with
similar magnitudes of contribution, and the remaining factors are clustered in a group of
much smaller contributions, then perhaps this group of remaining factors should be
dropped from consideration. For example in Figure 6.6 there is an argument for keeping
the first six factors down to detector horizontal size (DX) and dropping the remaining
factors from further metamodel building.
Yet another legitimate option is to consider the limitations of the particular statistical
software package used for metamodel creation. For example the software package used in
this dissertation, JMP version 3.1.5 for the Macintosh, will only generate central composite
designs (CCD’s) in two to eight factors, with the eight factor CCD allowing a maximum of
324 runs. Although there are often ways around such limitations (such as designing a
custom experiment in this case), it is certainly a valid approach to count down the number
of factors from the top of the Pareto plot and stop including factors when the chosen
statistical software package reaches the limits of its capabilities.
To reiterate, all of these options are in essence rules of thumb. The true test is
the predictive accuracy of the metamodels that are created from each set of factors, so in
practice one cannot escape the potentially iterative process of building, testing, and revising
metamodels. The moral of the story is to use whatever rules of thumb are most appropriate
for selecting the set of significant factors, to press on and pursue metamodel creation, and
then be prepared for the possibility of having to iterate.
Engineering judgment can also play a role in determining the set of significant
factors, and such is the case here for NETD. The judgment of experienced FLIR designers
indicates that in general, the parameters relating to the signal processing electronics and
display are of lesser importance for predicting NETD than those relating to the optics,
scanner, and detector. Coupling this engineering judgment with the idea of identifying a
natural break point in the Pareto plot, the result is the identification of five design variables
and two noise factors as the most significant: optics f number (FNUM), optics
transmission (T0), detector detectivity (DSTAR), the number of TDI stages (TDI), spectral
cut on (SPCTN), detector width (DX), and scan efficiency (SCEFF). As a further element
of the engineering judgment invoked, it is decided to link detector width with detector
height so that only square elements are investigated, and thus detector width (DX) and
detector height (DY) are combined into a single factor (DXY). These seven factors are
listed in Table 6.4 along with their original feasible regions, and the remaining eight factors
are held constant for the remaining metamodel building.
Table 6.4 NETD Design Variables and Noise Factors Remaining after Screening

DESIGN VARIABLES         DESCRIPTION                                      FEASIBLE REGION
Optics      1) FNUM      optics f number                                  continuous; 1.5 ≤ x ≤ 3
            2) T0        optics transmission                              continuous; 0.3 ≤ x ≤ 0.7
Detector    3) DXY       detector element size, µm                        continuous; 20 ≤ x ≤ 30
            4) DSTAR     detector detectivity, 10^11 cm·Hz^(1/2)/W        continuous; 0.7 ≤ x ≤ 1.5
            5) TDI       number of Time Delay and Integration stages      discrete; values of 1, 2, or 4
                         (# of 240-detector rows in FPA)

NOISE FACTORS
            6) SCEFF     scan efficiency                                  continuous; 0.6 ≤ x ≤ 0.8
            7) SPCTN     spectral cut on, µm                              continuous; 7.65 ≤ x ≤ 8.65
Thus we move in enterprise design Task 4 from screening to selecting a
metamodeling technique, the recommendations for which are found in Section 4.5.2.
Screening has reduced the size of the problem so that nearly any metamodeling technique
could be applied to build approximations of NETD. Certainly RSM, kriging, or neural
networks could be applied. However, as discussed in Section 4.5.2.3, there are so many
advantages to choosing response surface methods that selecting another technique is
generally only warranted if there would be significant problems with RSM. Such
indications are not evident here. Therefore, because RSM is well-established, embraced by industry, and supported by ubiquitous software, it is selected for this metamodeling
effort. An additional benefit is that since noise factors are present in this formulation,
applying the product array RSM approach of Section 4.5.3 allows both a mean response
model and a robustness (variance) model to be built from the same experiment.
To build the mean response and robustness models a product array is constructed
using a central composite design for the five design variables, with the two noise variables
varied across a two-level (full factorial design) outer array. Thus for each run of the CCD a
set of four “replications” are generated, and from this sample a mean and standard deviation
can be computed. (This structure is illustrated in Figure 4.6.) The five-factor CCD
selected comprises a full 2^5 factorial experiment plus two star points for each factor
plus one center point, yielding 43 runs in all. Recall that four replications are generated for
each of these 43 runs, so in all 172 FLIR92 input files are created and executed. Running
these 172 FLIR92 files takes on the order of ten minutes for a 486/33 PC, so the
computational effort is far from intensive. The design matrix and the NETD results are
given in Appendix A.3.
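The structure of this product array can be sketched in a few lines of Python. Here run_flir92 is a hypothetical placeholder for the actual loop of writing a FLIR92 input file, executing the code, and parsing NETD from the output; a synthetic response is returned only so the sketch runs end to end.

    import itertools
    import numpy as np

    ALPHA = 2.354  # star point distance for the continuous factors

    # Inner array: five-factor CCD in coded units, i.e. a full 2^5
    # factorial, two star points per factor, and a center point (43 runs).
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=5)))
    stars = np.vstack([v * ALPHA * np.eye(5)[i]
                       for i in range(5) for v in (-1, 1)])
    ccd = np.vstack([factorial, stars, np.zeros((1, 5))])

    # Outer array: two-level full factorial in the noise factors
    # SCEFF and SPCTN (four "replications" per inner-array run).
    noise = np.array(list(itertools.product([-1.0, 1.0], repeat=2)))

    def run_flir92(x, z):
        # Placeholder for writing the input file, running FLIR92, and
        # extracting NETD; synthetic response for illustration only.
        return 0.125 + 0.04 * x[0] - 0.025 * x[1] + 0.01 * z[0] * (1 + x[0])

    # For each of the 43 inner-array runs, the four outer-array samples
    # yield the NETD mean and standard deviation for that design point.
    means, stdevs = [], []
    for x in ccd:
        sample = [run_flir92(x, z) for z in noise]
        means.append(np.mean(sample))
        stdevs.append(np.std(sample, ddof=1))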
Note that in this appendix the values for each design variable and noise factor are
given in a coded scale, with the levels of the two factorial points denoted as “-1” and “+1”,
respectively. The alpha value for the continuous factors in the CCD is selected as 2.354,
so each of the star points is set at either “-2.354” or “+2.354”. (The discrete factor TDI
uses an alpha value of “2”.) This coding is in effect a normalization of the feasible regions
of each factor, which aids in both the orthogonality of the design matrix itself and the ease
of interpretation of the resulting metamodel. When employing the design of experiments
for metamodel building, the coding of factor levels is universally recommended. This
coding adds an extra step to the metamodeling process; these coded values must be
transformed into the “uncoded” values of each feasible region before being used as input
for FLIR92. To this end linear transformations are employed; these transformations are
given for each design variable and noise factor in Appendix A.4.
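As an illustration of such a linear transformation (the actual mappings are those of Appendix A.4), the sketch below assumes that the star points at ±2.354 fall on the edges of each feasible region, an inscribed-CCD convention; under that assumption decoding is a simple scale and shift.

    ALPHA = 2.354

    # Feasible regions of the continuous factors from Table 6.4.
    bounds = {"FNUM": (1.5, 3.0), "T0": (0.3, 0.7),
              "DXY": (20.0, 30.0), "DSTAR": (0.7, 1.5)}

    def decode(factor, coded):
        # Map a coded level to engineering units, assuming the star
        # points span the feasible region (an assumption; the actual
        # transformations are given in Appendix A.4).
        lo, hi = bounds[factor]
        mid, half = (lo + hi) / 2.0, (hi - lo) / 2.0
        return mid + coded * half / ALPHA

    print(decode("FNUM", -ALPHA))  # 1.5, the lower bound
    print(decode("FNUM", 1.0))     # about 2.57, a factorial point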
Least-squares regression is then applied to fit predictive models of both this mean
response and the standard deviation. The pooled standard deviations can be used as a
measure to support statistical testing of parameters, but this avenue of statistical testing is
not exercised here. Because both a mean and standard deviation are computed for these
runs, this experiment serves jointly for building a mean response metamodel and a
robustness metamodel. As a first-pass effort second-order polynomials are selected as the
model structure, and model coefficients are fit to the mean and standard deviation data
using least-squares regression. These models are fit with the JMP software package, using the design matrix and NETD results of Appendix A.3 as input. Note that these models are functions of the CODED design variables. The resulting models are shown in Equation 6.1 and Equation 6.2:
NETD mean = 0.1254697+ (-0.011832*DXY) + (-0.022123*DSTAR) +
(0.0018516*DXY*DSTAR) + (0.0385671*FNUM) +
(-0.003367*DXY*FNUM) + (-0.006086*DSTAR*FNUM) +
(-0.024524*T0) + (0.0021172*DXY*T0) +
(0.0037734*DSTAR*T0) + (-0.006695*FNUM*T0) +
(-0.025606*TDI) + (0.0020391*DXY*TDI) +
(0.0037578*DSTAR*TDI) + (-0.006773*FNUM*TDI) +
(0.0041797*T0*TDI) + (0.0007129*DXY*DXY) +
(0.0032171*DSTAR*DSTAR) + (0.0022696*FNUM*FNUM) +
(0.0040744*T0*T0) + (0.0074483*TDI*TDI) [6.1]
NETD stdev = 0.0302713 + (-0.00286*DXY) + (-0.005271*DSTAR) +
(0.0004286*DXY*DSTAR) + (0.009203*FNUM) +
(-0.000776*DXY*FNUM) + (-0.00143*DSTAR*FNUM) +
(-0.005902*T0) + (0.0005213*DXY*T0) +
(0.0008875*DSTAR*T0) + (-0.001562*FNUM*T0) +
(-0.006079*TDI) + (0.0004655*DXY*TDI) +
(0.0009766*DSTAR*TDI) + (-0.001618*FNUM*TDI) +
(0.0009553*T0*TDI) + (0.0001428*DXY*DXY) +
(0.0007025*DSTAR*DSTAR) + (0.0004885*FNUM*FNUM) +
(0.0009593*T0*T0) + (0.0016884*TDI*TDI) [6.2]
If accurate, these approximations carry significant impact for the design of a FLIR system.
The first approximation, that of NETD mean, gives a functional form to the overall character
of the design space, thus adding valuable insight in the search for solutions. The second
approximation, that of NETD stdev, has equivalent import. It allows a system design to be
pursued irrespective of the ultimate value for scan efficiency and helps in identifying
regions of the design space that are robust to changes in scan efficiency and spectral cuton.
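The fitting step itself is ordinary least squares on a full second-order model in the five coded design variables. A minimal sketch follows, reusing the ccd, means, and stdevs names from the product-array sketch above; it reproduces the term structure of Equations 6.1 and 6.2 (intercept, linear, two-factor interaction, and pure quadratic terms) and computes the R-square adequacy measure discussed next.

    import numpy as np
    from itertools import combinations

    def quadratic_design_matrix(X):
        # Columns: intercept, linear terms, two-factor interactions, and
        # pure quadratic terms, matching Equations 6.1 and 6.2.
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
        cols += [X[:, i] ** 2 for i in range(k)]
        return np.column_stack(cols)

    A = quadratic_design_matrix(ccd)          # 43 runs x 21 model terms
    y_mean, y_std = np.array(means), np.array(stdevs)
    b_mean, *_ = np.linalg.lstsq(A, y_mean, rcond=None)
    b_std, *_ = np.linalg.lstsq(A, y_std, rcond=None)

    # R-square of the mean response fit (about 0.996 for the actual data).
    pred = A @ b_mean
    r2 = 1.0 - ((y_mean - pred) ** 2).sum() / ((y_mean - y_mean.mean()) ** 2).sum()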
Are these approximations sufficiently accurate? There are a number of measures to
aid in this evaluation. First, the R-square value is high for both models, on the order of
0.996 for each. (A value of one would indicate a perfect fit.) Second, plots of the
predicted values versus the actual values for both NETD mean and standard deviation
(generated in JMP) show that there is a close correlation; these plots are shown in Figure
6.7. Third, the residuals are plotted against each model factor, and no significant trends
appear that would indicate the necessity of higher-order terms in the model.
In the plots of Figure 6.7 the horizontal axis indicates the actual NETD values
computed from FLIR92 for each of the 172 experimental runs, and the vertical axis
indicates the values predicted from the response surface equations (Equations 6.1 and 6.2).
The angled line represents the ideal fit (actual and predicted values being equal) around
which the predicted data is scattered, and the horizontal dashed line in each plot represents
the response mean value. The dashed lines bounding each angled line are confidence bands
for the predicted responses, indicating whether the model is significant at the 5% level. If
these confidence bands completely contain the horizontal dashed line, the model is not
significant. The bands are very tight in Figure 6.7, which is desirable. Since the
confidence curves cross the horizontal line in both cases, the models are significant at the
5% confidence level.
[Two panels: predicted vs. actual values for NETD mean (axes from 0.00 to 0.35) and NETD standard deviation (axes from 0.01 to 0.08), each with the ideal-fit line, confidence bands, and response mean line described below.]

Figure 6.7 Plots of Predicted vs. Actual Values for NETD Mean and Standard Deviation
As a final test of model accuracy, additional runs are performed with FLIR92 and
these actual values are checked against the values predicted by the models above. The
points selected for verification are fairly stringent; an 8-run fractional factorial design is
created using the alpha values of the CCD, which tests the extreme corners of the model
space, and another 8-run fractional factorial design is created at a cube in an untested region
within the modeling space. For each of these 16 runs the two noise factors are again varied
in their outer array, and values for NETD mean and standard deviation are calculated. The
design matrix in coded values for this verification and the FLIR92 results are given in
Appendix A.5. (The transformations for mapping these coded values back to the actual
feasible regions of Table 6.4 are again given in Appendix A.4.)
These verification points are then plugged into Equations 6.1 and 6.2, and the
comparison between these calculated values and the actual values from FLIR92 are also
shown in Appendix A.5. The difference between actual and calculated values is very small
for most of these points, on the order of 0.02° or less. (This level of accuracy is on the
same order of magnitude as the measurement error for the most accurate testing equipment
for FLIR systems.) There are three points that are areas of concern for predicting NETD
mean: runs 2, 5 and 6. These error values are highlighted in bold in the appendix; run 2
has an error of -0.135, run 5 has an error of 0.043, and run 6 has an error of -0.299. Are
these errors a problem? As it turns out, they most likely are not. The actual values of mean
NETD for runs 2 and 6 are much higher than 0.2°, so these points are not in feasible
regions of the design space. Similarly, run 5 is at a coded value of “2” for TDI, which
equates to five TDI stages. This value is also not feasible for our design space.
All these measures support the conclusion that these second-order approximations
of NETD mean and standard deviation are sufficiently accurate. With these models created
and verified, they can now be integrated into a unified decision formulation and used to
explore the design space. This is done in the next section.
6.3.5 Generate Potential Solutions
(Based on Section 3.4.7)
In this section Task 5 of the enterprise design method is applied to FLIR system
design. The outputs of the previous four
enterprise design tasks are used to create a
compromise DSP math formulation,
formulated through the keywords given, find,
satisfy, and minimize. This formulation is
shown in Figure 6.8.

[Inset: Task 5 flowchart. The goals and constraints, models, design variables, noise factors, and additional input feed the creation of a math formulation from the C-DSP template; exercising the compromise DSP yields many potential solutions, which pass through verification and validation; if acceptable, the best solution is selected from the final competing solutions.]

A set of potential solutions is generated by exercising this compromise DSP, using DSIDES to perform a
multi-objective search of the product performance design space. Representative DSIDES
input files, a data file and FORTRAN file, are given in Appendix A.6 and A.7,
respectively.
Given
    Approximations (equations) for:
    • NETD, mean and standard deviation (FLIR92 product array)
Find
    Values for the design variables FNUM, T0, DXY, DSTAR, TDI
    Values for the deviation variables di+, di-
Satisfy
    System constraints:
        NETDmean(FNUM, T0, DXY, DSTAR, TDI) ≤ 0.2                [6.1]
    System goals:
        NETDmean + d1- - d1+ = Target                            [6.1]
        NETDstdev + d2- - d2+ = 0                                [6.2]
    Bounds:
        1.5 ≤ FNUM ≤ 3        0.3 ≤ T0 ≤ 0.7       20 ≤ DXY ≤ 30
        0.7 ≤ DSTAR ≤ 1.5     TDI = {1, 2, 4}      di+, di- ≥ 0 ; di+ • di- = 0
Minimize
    Preemptive deviation function
        Z = [ f1(di+, di-), f2(di+, di-) ]    (varies by scenario)

Figure 6.8 C-DSP Math Formulation for Phase A (Product Design: System Performance)
This formulation incorporates all the FLIR system information of the previous
sections: the goals and constraints given in Table 6.2, the five design variables and their
ranges as given in Table 6.4, and the two response surface equations given in the previous
section (Equation 6.1 and 6.2). (These equations are used for computing the left hand side
of both the constraints and goals as shown in Figure 6.8.) Recall that a detailed description
of the compromise DSP in terms of goal formulations, deviation variables, the deviation
function and so on is given in Section 3.3.3. The objective of this formulation is to find
the values of the five design variables that satisfy the mean NETD constraint and the
bounds on the design variables and ultimately minimize the deviation function to achieve as
closely as possible the goals for NETD mean and NETD standard deviation. (Recall that
with the scan efficiency and spectral cuton varied over the outer “noise” array, the standard
deviation model is included to address robustness.) In this case the goal target for NETD
standard deviation is set at zero, indicating a desire to minimize variability in the same
vein as the RCEM illustrated in Figure 3.8.
The design space is explored by “exercising” the compromise DSP, and in this case
this exercising is accomplished by using different goal priority scenarios and different
starting points for the design variables. The two scenarios selected are that of 1) NETD
mean at highest priority and NETD standard deviation at lower priority, and 2) NETD
standard deviation at highest priority and NETD mean at lower. The deviation function
formulations for each scenario are shown in Table 6.5. In both cases the preemptive
approach is used, placing the different goals in separate priority levels. For both goals only
the overachievement of each goal target is penalized; coming in under the target value is
completely acceptable. This is shown in Table 6.5 by the existence of only the di+ terms in
each priority level. Because the standard deviation of NETD is known to be fairly small, in
the deviation function d2+ is multiplied by 100 to ensure that small improvements are
recognized. Scenario 1 corresponds to a FLIR system customer stating, “Achieving the
target of 0.05 °C for mean NETD is of the utmost importance. Values less than 0.05 °C are
equally preferable. It would also be nice to minimize the variability of NETD, but this
improvement should not be made at the expense of mean NETD.” In contrast, Scenario 2
is just the reverse; it corresponds to a FLIR system customer stating, “Minimizing the
variability of NETD is of the utmost importance. It would also be nice to hit the target of
0.05 °C for mean NETD, but this improvement should not be made at the expense of
NETD variability.”
Table 6.5 Deviation Function Scenarios for Phase A

Deviation Function Scenario              Priority Level 1    Priority Level 2
1. NETD Mean Dominant                    d1+                 100 • d2+
2. NETD Standard Deviation Dominant      100 • d2+           d1+
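The preemptive structure of these scenarios can be made concrete with a small sketch: each candidate solution is scored as a tuple of priority-level values, and tuples compare lexicographically, so Level 2 is consulted only to break ties at Level 1. The targets and the factor of 100 are those described above; the candidate values are illustrative.

    def deviation_function(netd_mean, netd_std, scenario=1,
                           target_mean=0.05, target_std=0.0):
        # Only overachievement of each goal target is penalized.
        d1p = max(netd_mean - target_mean, 0.0)
        d2p = max(netd_std - target_std, 0.0)
        if scenario == 1:                    # NETD mean dominant
            return (d1p, 100.0 * d2p)
        return (100.0 * d2p, d1p)            # NETD standard deviation dominant

    # Python tuples compare lexicographically, mirroring the preemptive
    # approach: priority level 1 strictly dominates level 2.
    candidates = [(0.044, 0.010), (0.060, 0.008)]
    best = min(candidates, key=lambda s: deviation_function(*s, scenario=1))
    print(best)  # (0.044, 0.010): it meets the mean target, so it wins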
For each of these two scenarios, high and low starting points are explored in order
to establish the convergence of solutions. Recall from Section 3.3.4 that the ALP
algorithm iterates through synthesis cycles and analysis cycles in its search of the design
space; it starts with an initial design and improves it as much as possible until convergence
is reached. This procedure only results in local optima for nonlinear design spaces, so
different starting points should be used in the search for solutions. In an attempt to “cover”
the design space the different starting points for each scenario are set at the upper and lower
bounds of the feasible regions of each design variable.
Because there are both continuous and discrete design variables in the formulation,
this is a mixed discrete / continuous problem; the FALP algorithm of Section 3.3.4 has
been developed expressly to deal with this class of problems. However, because there is
only one discrete design variable (TDI) with only three values, employing the power of
FALP would be overkill. Instead standard DSIDES is employed rather than the foraging
version, and an “exhaustive search” is performed for the TDI values. Therefore each of the
two scenarios at both their high and low starting points are replicated three times, and in
each replicate the value of TDI is hard-wired at 1, 2, or 4. In all, (2 scenarios)(3 TDI
runs)(2 starting points) = 12 DSIDES runs are performed. The outputs from all twelve
runs are given in Appendix A.8, and the best solutions for each of the two scenarios are
given in Table 6.6. For both scenarios feasible, converged solutions are obtained.
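The run structure itself is compact enough to enumerate directly; in the sketch below the actual DSIDES invocation is left as a placeholder.

    import itertools

    # 2 goal scenarios x 2 starting points x 3 hard-wired TDI values
    # = the 12 DSIDES runs of this phase.
    runs = []
    for scenario, start, tdi in itertools.product(
            (1, 2), ("low", "high"), (1, 2, 4)):
        # Placeholder: write the DSIDES data file with TDI fixed, launch
        # the solver from the given starting point, and record the
        # converged design variables and deviation function values.
        runs.append({"scenario": scenario, "start": start, "TDI": tdi})
    print(len(runs))  # 12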
[Four panels: FNUM, T0, DXY, and DSTAR plotted against synthesis cycles (0 to 50), with the trajectories from the high and low starting points converging to common values.]

Figure 6.9 Design Variable Convergence from High and Low Starting Points, Phase A, Scenario 1, TDI = 1
Before discussing these best solutions, it is important first to ensure that they are
verified. Verification is performed by ensuring the trends in solutions are as expected and
by checking these best solution values against values computed from FLIR92. An
important trend to support solution verification is if both the high and low starting points
converge to the same region for each design variable, and to this end convergence plots are
generated for the DSIDES runs in both scenarios and across all three values of TDI. The
convergence plots for Scenario 1 are illustrated in Figure 6.9, Figure 6.10, and Figure 6.11
for each of the three values of TDI. As can be seen in the figures, each of the four
continuous design variables do indeed converge to the same regions from both starting
points across all three values of TDI. Similar plots are also generated for Scenario 2, but
for the sake of brevity these plots are not shown. The relative smoothness of these
convergence plots indicates a fairly well-behaved design space; the more jagged plots for
optics transmission (T0) and detector detectivity (DSTAR) are still fairly benign.
[Four panels: FNUM, T0, DXY, and DSTAR plotted against synthesis cycles (0 to 50), with the trajectories from the high and low starting points converging to common values.]

Figure 6.10 Design Variable Convergence from High and Low Starting Points, Phase A, Scenario 1, TDI = 2
As another trend to support the reasonableness of these solutions we observe that as
the number of time delay and integration stages increases from Figure 6.9 to Figure 6.10 to
Figure 6.11, the DSTAR values converge to correspondingly smaller values. This makes
sense because increasing TDI in effect enhances detector performance, which then reduces
the level of DSTAR needed to compensate.
[Four panels: FNUM, T0, DXY, and DSTAR plotted against synthesis cycles (0 to 50), with the trajectories from the high and low starting points converging to common values.]

Figure 6.11 Design Variable Convergence from High and Low Starting Points, Phase A, Scenario 1, TDI = 4
Table 6.6 Best Solutions for Each Scenario of Phase A

                                       SCENARIO 1    SCENARIO 2
GOAL         Level 1                   Mean          Std Dev
PRIORITIES   Level 2                   Std Dev       Mean
GOAL         NETD Mean (°C)            0.044         0.044
VALUES       NETD Std Dev (°C)         0.010         0.010
             FNUM                      1.50          1.50
DESIGN       T0                        0.546         0.546
VARIABLE     DXY                       25.61         25.61
VALUES       DSTAR                     1.168         1.168
             TDI                       4             4
DEVIATION    Level 1                   0.000         1.018
FUNCTION     Level 2                   1.018         0.000
Run                                    a14l          a24l
As the final and most stringent test of solution verification, the design variable
values for the best solutions for both scenarios are plugged back into FLIR92 and actual
values for NETD mean and NETD standard deviation are computed. If the errors of
approximation between the response surface equations and FLIR92 are sufficiently small,
then the verification of these solutions is confirmed. The design variable values from Table
6.6 are used to create eight FLIR92 input files: for each scenario a set of four replications
is generated by varying scan efficiency (SCEFF) and spectral cuton (SPCTN) in an outer
array identical to that illustrated in Appendix A.5. The mean and standard deviation of
these FLIR92 values are then computed, and these values are shown in the “FLIR92
Actual” column of Table 6.7. The design variable values are also plugged into Equation
6.1 and Equation 6.2, and these results are shown in the “Metamodel Predicted” column of
Table 6.7. The absolute errors between the actual and predicted values are shown at the
right of the table, and the difference between FLIR92 and the metamodels at this solution
point is very small, on the order of 0.004 for NETD mean and 0.001 for NETD standard
deviation.
Table 6.7 Verification of Best Solutions for Phase A Scenarios

                    FLIR92 Actual      Metamodel Predicted    Error
                    Mean     Stdev     Mean     Stdev         Mean     Stdev
Scenario 1 Best     0.0475   0.0111    0.0435   0.0102        0.0040   0.0009
Scenario 2 Best     0.0475   0.0111    0.0435   0.0102        0.0040   0.0009
Thus these solutions are verified. We can feel very comfortable with the accuracy
of the solutions generated from this exploration of the design space, and it therefore
becomes reasonable to draw a set of implications and recommendations from them. These
implications and recommendations are discussed in the next section.
6.3.6 Phase A Results and Recommendations: Designing a High-Performance System
Looking to Table 6.6 we see an unusual circumstance: the solutions are exactly
equivalent to the number of significant digits shown. Selecting the best solution is
therefore in this instance redundant; both the solutions in Table 6.6 represent the best
solution. For both scenarios, the optics f number (FNUM) is driven to its lower bound
and the number of detector time delay and integration stages (TDI) is driven to its upper
bound. The solution values for the remaining design variables are close to the middles of
their feasible regions.
Is this solution reasonable? Results similar to these were used in a presentation
given to several groups of system designers and engineering managers in industry, and
there was uniform agreement that these solutions corresponded with their previous
experience. FLIR systems with similar characteristics to the solutions shown in Table 6.6
have indeed been designed to meet the same general targets for product performance. Thus
this case study phase supports the verification of the enterprise design approach; when
applied with the same set of (limited) goals, it results in solutions that are similar to those
generated by the other traditional design approaches existing in industry.
Although the primary interests in this case study are the integration of product
design, manufacturing process design and organization design, there are still some
significant implications for just this initial phase of the case study. FLIR system design is
not commonly pursued in the manner described in this phase; more often than not different
designs are explored in an iterative process of trial and error and one-factor-at-a-time
analyses. Thus these notions of statistical metamodeling, decision formulations, multi-
objective optimization and robust design were met with considerable interest and
enthusiasm from engineers and managers in industry, and these techniques alone embody a
significant area of improvement for engineering practice.
In terms of the case study phases to come, perhaps the most useful implication of
these results is the fact that both goal scenarios push the design variables in the same
direction. We can therefore establish a partial ordering of these goals which will simplify
further explorations: NETD mean will always be put ahead of NETD standard deviation.
The results from this phase of the case study can now be used as a reference point to
determine the relative merit of integrating additional enterprise concerns into the decision
formulation. These comparisons are developed in the sections to follow.
6.4 CASE STUDY B: INTEGRATING MANUFACTURING PROCESS DESIGN ISSUES
In Phase A of this case study a FLIR system design is identified based only on
measures of product performance (NETD mean and NETD standard deviation). However,
this system design decision has other enterprise-wide implications that are important to
consider, as shown in Figure 6.4. In the subsections to come Phase B of the case
study is developed with the focus of integrating manufacturing process design issues into
the decision formulation. This integration is accomplished by applying the method of
enterprise design.
6.4.1 Expand Scope of Decision
(Based on Section 3.4.3)
In this section enterprise design Task 1 is applied to the FLIR system design
decision of the previous phase in order to identify its potential impacts on manufacturing
process design. What are these manufacturing process design impacts? As discussed in
Section 6.2 the external customers for this FLIR system have become acutely cost-
conscious in recent years. Thus an awareness of these external customers would without
doubt result in the generation of additional goals to address system cost in almost any
product design decision.
The notion of “cost” to the external customers has many different facets, including
the cost of acquisition, maintenance costs, repair costs, upgrade costs, and so on. All of
these costs are affected to some degree by
each product design decision, but the cost
that receives perhaps the most customer
attention is that of acquisition. The typical
customer wants to pay a minimum amount to
receive a system that meets a basic
performance threshold. And in most instances a primary driver of acquisition cost is the
per-unit production cost of the system, so in this section additional goals associated with
production cost are generated.
Under the general heading of production cost there are a large number of cost
measures that contribute to the total system cost, including the production costs of the
afocal optics assembly, the detector subsystem, the signal processing electronics, and the
display subsystem. All of these measures are legitimate aspects of production cost and so
are added to a raw list of potential goals and constraints.
Recalling the design variables employed in Phase A of the case study, they are
primarily focused on the optics and detector subsystems of the overall FLIR system.
Variables associated with the signal processing electronics and the display subsystem are
found to be of lesser significance in predicting the system NETD. Therefore,
brainstorming the potential effects of the design variables in terms of production costs
produces a smaller list of potential goals - those of afocal optics assembly production cost
and detector production cost.
Thus in this case reducing the raw list of potential goals and constraints to the final
list of goals and constraints is a simple matter of selecting the goals that are common to
both sets - those that drive production cost and that also are primarily affected by the design
variables at hand.

[Inset: Task 1 flowchart. Awareness of internal and external customers, together with the design variables, feeds the brainstorming of potential effects; additional goals and constraints are generated into a raw list, and a preliminary selection produces the final list of potential goals and constraints.]

The two measures selected for measuring the production cost impact of
this system design decision are those of afocal optics production cost and focal plane array
(FPA) production cost, and for each a constraint value and a goal target are identified.
These goals and constraints are added on to the baseline formulation established in Phase
A, so the cumulative list of goals and constraints is shown in Table 6.8.
Table 6.8 Goals and Constraints for Phase B Decision

                                                   Constraint Value    Objective Target
System Performance
    Noise Equivalent Temperature Difference (°C)   ≤ 0.2 °C            0.05 °C
Production Cost
    Afocal Optics (NMU)                             ≤ 80                1
    Focal Plane Array (NMU)                         ≤ 70                1
In effect, the information in Table 6.8 serves as a general problem statement for this
phase of the case study. It is desired to model and quantify the production cost impact of
the FLIR system design decision carried over from Phase A of the case study. This
decision has the design variables of optics f number, optics transmission, detector size,
detector detectivity, and the number of time delay and integration stages, and their
production cost impact is couched in terms of the overall costs for the afocal optics
assembly and the focal plane array detector. Once these manufacturing cost issues are
integrated into the decision formulation, the best solution can be pursued considering
measures of both product performance and production cost.
Although the cost values in Table 6.8 are based on actual production costs, they
have been transformed to a dimensionless scale from 0 to 100 to protect the interests of
their industry sources. (Values of “1” therefore translate into the lowest production costs
that can be achieved.) Given these cost measures as our manufacturing process design
issues of concern, we can now identify how they are to be computed and what input
information is required for analysis. These tasks are performed in the next section.
6.4.2 Identify Modeling Techniques and Determine Input Needed for Analysis
(Based on Section 3.4.4 and Section 5.4)
In this section enterprise design Tasks 2a and 2b are applied to FLIR system design
in order to quantify the measures of manufacturing cost impact identified in the previous
section. Integrating production cost measures into system design is a prime example of
decision interdependence through time as discussed in Section 5.3.7. Therefore in this
section, in addition to following the enterprise design tasks of Section 3.4.4, the procedure
for resolving timeline interdependencies is invoked as discussed in Section 5.4. In this
section the enterprise design Tasks 2a and 2b are applied to the measures of FPA
production cost and afocal optics production cost in turn. These two applications also lead
us down different paths in the timeline interdependency method of Figure 5.8, so care will
be taken to avoid potential confusion.
6.4.2.1 Focal Plane Array Production Cost
In terms of estimating the production cost of a focal plane array of detectors based
on system-level design variables, no existing modeling techniques are found to be
available. However, in discussion with cost estimation experts it is noted that there is a
strong correlation between the wafer size of the FPA (in terms of the number of detectors)
and production cost, and it is also noted that historical data exists to quantify the
relationship. For our decision formulation the FPA size is determined by the number of
TDI stages; a TDI of one corresponds to a 240 x 1 array; a TDI of two corresponds to a
240 x 2 array, and so on.
Because this FPA will be similar to
those used in previous systems, the decision is
made (in terms of Figure 5.8) to “look to the
past,” and so a regression model will be built
to correlate FPA production cost with the
number of TDI stages. Returning to the context
of Figure 3.12, there are no existing modeling
techniques that appear other than regression on
historical data, so only one modeling alternative
is generated and the “selection” of a modeling
option is just a formality. However, the task of
creating the model does have meaning in this
case and is discussed next.
Building FPA Production Cost Regression Model: Historical data is provided that
documents 240 x 1, 240 x 2, and 240 x 4 detectors as costing 11.58, 29.74, and 78.42,
respectively in today’s dollars. (Again, these cost numbers are normalized for reasons of
confidentiality.) A linear regression model is fit to the data, resulting in the equation of:
FPACST = -12.76 + 22.574 * TDI [6.3]
This linear model has an R-square value of 0.995 and results in predictions that are within
an acceptable tolerance from the historical data values. This model is thus found to be
satisfactory, and enterprise design Tasks 2a and 2b are complete. The model itself is
supplied as task output, and no additional data is needed.
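This fit is easy to reproduce; the sketch below recovers the coefficients of Equation 6.3 and the reported R-square directly from the three normalized data points.

    import numpy as np

    # Normalized historical costs for 240 x 1, 240 x 2, and 240 x 4 FPAs.
    tdi = np.array([1.0, 2.0, 4.0])
    cost = np.array([11.58, 29.74, 78.42])

    # Ordinary least-squares line through the three points.
    slope, intercept = np.polyfit(tdi, cost, 1)
    pred = intercept + slope * tdi
    r2 = 1.0 - ((cost - pred) ** 2).sum() / ((cost - cost.mean()) ** 2).sum()

    print(f"FPACST = {intercept:.2f} + {slope:.3f} * TDI  (R^2 = {r2:.3f})")
    # FPACST = -12.76 + 22.574 * TDI  (R^2 = 0.995)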
[Insets: the timeline interdependency method of Figure 5.8, in which the selected modeling option is checked for being a function of the design variables and the designer chooses among enforcing coordination, looking to the past to build a model from historical data, or specifying transformations or correlations, spanning Tasks 2, 3, and 4; and the Task 2a/2b modeling flowchart of Figure 3.12, repeated from Section 6.3.2.]
6.4.2.2 Afocal Optics Production Cost
Again for this production cost measure the parallel frameworks of Figure 3.12 and
Figure 5.8 will be followed; we begin with Figure 3.12. For the development of this
model the aid of cost estimation experts is again brought in. It is noted that a commercially
available software package entitled PRICE-H is applicable for estimating the cost of
mechanical assemblies; this package requires the specification of a system architecture and
the assignment of weights (as in mass of material) and complexities (a dimensionless
factor) for each element of the architecture. PRICE-H then processes the architecture in
reference to a large database of existing products and computes a production cost figure
based on similarity. (These relationships in PRICE-H have been determined by statistically
analyzing thousands of completed projects where the product characteristics and project
costs and schedules are known.) PRICE-H is a member of a suite of costing tools
developed by General Electric; this suite consists of PRICE-H for hardware, PRICE-S for
software, PRICE-M for microcircuits, and so on. This suite is among the most widely used
by government and industry for parametric cost estimating. As with the FPA cost model
this is the only modeling alternative identified, so model selection is a formality.
In order to create the PRICE-H model for this afocal optics assembly, the aid of
cost estimation experts is heavily enlisted. Using their specific knowledge of these optics
assemblies a baseline PRICE-H model is created that yields reasonable results compared to
their general knowledge of the costs and characteristics of FLIR systems designed
previously. Neither the PRICE-H software nor this afocal optics model were made
available to the author, so they are not included in this dissertation. Will PRICE-H be
satisfactory? This is a question that is only partially answered at this stage because
additional input data is needed to implement PRICE-H, and addressing this data is the
focus of Task 3 in the next section. However, factors in favor of PRICE-H are its
impressive record of accurate cost estimation when used correctly, and the existence of
expert users at our disposal.
Following along in Figure 5.8, because this afocal lens assembly is assumed to be
similar to those built previously it would be the most desirable to “look to the past”, but
unfortunately in this case no production cost data is available. Therefore we now move to
Task 3, where transformations will be identified to map system architecture, weights and
complexities to the system design variables. Thus Tasks 2a and 2b conclude for the afocal
optics production cost model with the following output: the PRICE-H software package
and a list of additional input data needed -- the system architecture, weights, and
complexities.
6.4.3 Define Boundaries of Decision
(Based on Section 3.4.5 and Section 5.4)
In this section enterprise design Task 3 is applied to the FLIR system design
decision in order to determine how the links to downstream decisions in manufacturing
process design can be resolved. Because building the FPA detector production cost model
terminates with creating a regression model
based on historical data, this measure does not
require any more attention through Task 3 or
Task 4, as is shown in Figure 5.8. Therefore
in this section we turn our focus completely to
that of building the afocal lens assembly
production cost model.
[Inset: Task 3 flowchart, repeated from Section 6.3.3. Additional input data are gathered and classified by asking whether each item is controlled by someone else and whether it is known now in the timeline.]
[Figure: along the passage of time, three interdependent groups of decisions lead to production costs: FLIR System Design (optics transmission and f number, FPA size and configuration, etc.), FLIR Component Design (number of lenses, dimensions, number of coatings, etc.), and Manufacturing Process Design (processes for lens polishing, coatings, etc.).]

Figure 6.12 Decision Interdependence for Afocal Optics Production Cost
The issue confronted in this section is that of resolving interdependencies between
FLIR system design decisions, downstream FLIR design decisions, and downstream
manufacturing process decisions. Based on Figure 6.5, a general description of this
situation is offered in Figure 6.12. In this figure a triangle of three interdependent groups
of decisions is represented, each with a list of potential design variables. From an afocal
optics perspective, the FLIR system design decisions drive the overall requirements for
optics transmission, optics f number, and so on, which will dictate the number of lenses
required, the materials and dimensions for each, the coatings required to achieve given
transmission levels, and so on in FLIR component design. These component design
decisions are heavily linked with the manufacturing processes that are designed for the
afocal optics assembly, including the operations and parameters for lens fabrication,
polishing and coating, and so on. Only given the output of these manufacturing process
design decisions can truly accurate measures of production cost be constructed.
If manufacturing simulation models were available for predicting the costs and
schedules of the lens fabrication process, then perhaps cost impact models could be
constructed by varying these processing parameters, tallying the results, and creating
correlations or transformations to map the FLIR system design variables to these
processing parameters. However, the existence of the PRICE-H software package
simplifies the situation in Figure 6.12, effectively removing the manufacturing process
design decisions from the mix. Again based on a wealth of historical data, production cost
estimates can be generated based primarily on product design parameters (system
architectures, weights, and complexities). The issue of decision interdependence through
time still exists, though, because even these fairly general product design parameters are
not yet known with certainty during FLIR system design.
Proceeding from the upper left of Figure 3.14, the additional input data required for
PRICE-H are a system architecture with associated weights and complexities. In this case
the afocal lens assembly can be treated as a single element, so only single measures of
weight and complexity are needed.
Computing accurate measures of the weight and complexity of the assembly would
require a fair amount of detailed information; the sizes and materials for each lens plus the
configuration of the supporting framework would be needed to compute system weight,
and similarly the surface roughnesses of the lenses and the number of lens coatings would
need to be specified to compute system complexity. Although this information could be
under the “control” of the system designers, it can not be known with any certainty until
well downstream in the design timeline. Therefore we move into box 3 of Figure 3.14,
where our choices are to perform variable transformations or to make the formulation
robust to these factors. In this case robustness is clearly not an option because no control
factors would remain to give any design meaning to the model. Therefore we look again to
the context of Figure 5.8. The PRICE-H model is not a function of our system design
variables and historical data is not available to allow looking to the past, so we must look to
the future and attempt to construct transformations. This situation is shown in a decision
formulation context in Figure 6.13.
[Figure: the current decision, with design variables {TDI, f#, FL, T0, D*, DXY}, must be transformed or correlated to the downstream PRICE-H decision, whose inputs are {architecture, weights, complexities}.]
Figure 6.13 Afocal Lens Assembly Cost Model Timeline Interdependency
Examining the system design variables at our disposal in Table 6.3, we note that
both the focal length and the f number of the optics have legitimate implications for system
weight. In optics design the diameter of a lens can be computed by dividing its focal length
by its f number, and if this diameter is coupled with a lens profile (thickness) and material
density then lens weight can be computed. Relying on the expertise of optics designers and
cost estimation experts, the following transformation is constructed: assuming that the lens
assembly would scale similarly as the sum of its component lenses, the lens “diameter” for
the system as a whole is computed and calibrated to a known lens assembly of given
weight. Adding in a fixed amount of weight for the lens support structure we arrive at the
following transformation:
WEIGHT (lbs.) = 2.5 + (FL / (8.05 · FNUM))^2 , [6.4]
where the value of 2.5 captures the weight of the support structure and the squared term in
optics focal length and f number is an approximation of the total weight of all of the
lenses in the afocal assembly. This transformation is obviously fairly approximate and so it
is not intended to be used as an accurate predictor; instead its purpose is to capture general
trends that quantify in rough terms how afocal optics weight varies as a function of system-
level parameters.
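This transformation is simple enough to express directly in code. The following Python sketch of Equation 6.4 is illustrative only; the function name and the spot check are ours, not part of the cost model itself:

def afocal_weight_lbs(fl_cm, fnum):
    # Equation 6.4: 2.5 lbs. of fixed support structure plus a squared
    # term in lens diameter (focal length / f number), calibrated by the
    # 8.05 constant to a known lens assembly of given weight.
    return 2.5 + (fl_cm / (8.05 * fnum)) ** 2

# Spot check against run 9 of Table 6.10 (FNUM = 1.5, FL = 17.5):
print(afocal_weight_lbs(17.5, 1.5))  # approximately 4.600 lbs.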
Similarly the optics transmission can be linked to system complexity -- complexity
in the context of PRICE-H is taken to be a function of the number of finishing operations
required for each component, their surface roughnesses, and so on. Lenses may have to be
polished to achieve given levels of surface roughness, and various coatings may be
required to yield the desired optical characteristics. These processing details are strongly
linked with the transmission of the optics assembly as a whole, because these polishings
and coatings are applied to enhance the component transmission of each lens. Therefore,
similarly to system weight, again a transformation is created and calibrated. In this case the
transformation takes the form of the piecewise linear function given in Figure 6.14
constructed from the four data points shown.
[Figure: piecewise linear plot of CMPLX versus T0 through the data points (T0, CMPLX) = (0.3, 7.705), (0.42, 7.761), (0.5, 7.830), (0.58, 7.918), (0.7, 8.032).]
Figure 6.14 Transformation from Optics Transmission (T0) to System Complexity (CMPLX)
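Because the mapping is piecewise linear through tabulated points, it reduces to a one-line interpolation in code; the following minimal Python sketch is illustrative only (numpy.interp clamps at the endpoints, which is consistent with the bounded feasible region for T0):

import numpy as np

# Data points underlying Figure 6.14 (optics transmission to complexity)
T0_PTS    = [0.30, 0.42, 0.50, 0.58, 0.70]
CMPLX_PTS = [7.705, 7.761, 7.830, 7.918, 8.032]

def complexity(t0):
    # Piecewise linear transformation from optics transmission to the
    # system complexity value expected by PRICE-H.
    return float(np.interp(t0, T0_PTS, CMPLX_PTS))

print(complexity(0.5))  # 7.83, the tabulated mid-range value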
The transformation of Figure 6.14 was constructed directly by PRICE-H experts
given a knowledge of the range of complexity values appropriate for the afocal optics
architecture and a knowledge of the feasible region for optics transmission values. In
essence this transformation embodies engineering judgment, but the positive relationship
between optics transmission and system complexity is, at the very least, not incompatible
with the physics of the problem. With these two transformations created we complete the
activities of Task 3; in terms of Figure 3.14 all of the additional input data required for
computing production cost have been transformed into the existing system design
variables. (Note that the design variable of optics focal length, previously
dropped as the result of NETD screening, is picked up again for building these models of
production cost impact.) We can now proceed to transforming and integrating the models;
this is pursued in the next section.
6.4.4 Transform and Integrate Models
(Based on Section 3.4.6)
In this section Task 4 of the enterprise design method is applied to FLIR system
design in order to create efficient and portable models to quantify the production cost
impact of the system design variables. The regression model for focal plane array
production cost created in Section 6.4.2.1 is already efficient, portable, and sufficiently
accurate, so it is passed on as an output of this
task as is. However, the afocal optics
production cost model developed with PRICE-
H is another matter. PRICE-H is only available
on the Windows platform, and its structure is
such that it is intended for individual
explorations of “What if?” scenarios. Therefore the integration of the afocal lens assembly
production cost model into a decision formulation is definitely an issue; in this section
metamodeling is applied to build a more portable approximation. No noise factors have
been identified for this model, however, so robustness is not needed and only an
approximation for the mean response itself needs to be constructed.
As we consider whether screening is necessary in this case to identify a smaller set
of design variables, an additional benefit of creating the transformations in the previous
section appears. Although PRICE-H in general requires a fair amount of detailed
information, through the variable transformations we are able to construct a model that is a
function of only our system-level design variables. Screening is therefore not a
requirement here. The list of system design variables, along with their descriptions and
feasible regions, is given in Table 6.9. Thus we move in enterprise design Task 4 from
screening to selecting a metamodeling technique, the recommendations for which are found
in Section 4.5.2.
Table 6.9 Design Variables for Afocal Optics Production Cost Metamodel

DESIGN VARIABLES    DESCRIPTION                 FEASIBLE REGION
Optics  1) FNUM     optics f number             continuous; 1.5 ≤ x ≤ 3
        2) FL       optics focal length, cm     continuous; 10 ≤ x ≤ 25
        3) T0       optics transmission         continuous; 0.3 ≤ x ≤ 0.7
Similarly to Section 6.3.4 this problem size is small enough that nearly any
metamodeling technique could be applied to build approximations of lens assembly
production cost, but again the advantages of selecting response surfaces are strong enough
that an alternative method would be selected only if there would be significant problems
with RSM. In this case no such problems appear. Therefore a central composite design is
constructed with the factors of optics f number (FNUM), optics focal length (FL), and
optics transmission (T0). This 15-run design, consisting of an 8-run 2^3 full factorial
experiment with 6 star points and one center point, is given in Table 6.10. Lens assembly
production cost is predicted for each run by transforming these values into the associated
values for weight and complexity and then entering these values into PRICE-H. In this
case the design matrix is given in the real values of the design variables, and recall that just
as with FPA detector cost the cost figures are normalized. The production cost results were
gathered by working with an experienced PRICE-H user, feeding in each value for afocal
optics weight and complexity, and noting down the result. Despite the inefficiencies of this
process it was completed in less than an hour’s time.
Table 6.10 Central Composite Design for Afocal Optics Production Cost Metamodel

RUN    F#      FL      T0     WEIGHT   CMPLX    Production Cost
 1     1.93    14.31   0.42   3.348    7.761    28.67
 2     1.93    14.31   0.58   3.348    7.918    57.56
 3     1.93    20.69   0.42   4.273    7.761    61.44
 4     1.93    20.69   0.58   4.273    7.918    97.22
 5     2.57    14.31   0.42   2.978    7.761    15.22
 6     2.57    14.31   0.58   2.978    7.918    41.33
 7     2.57    20.69   0.42   3.500    7.761    34.11
 8     2.57    20.69   0.58   3.500    7.918    64.11
 9     1.5     17.5    0.5    4.600    7.83     89.11
10     3       17.5    0.5    3.025    7.83     28.22
11     2.25    10      0.5    2.805    7.83     19.44
12     2.25    25      0.5    4.405    7.83     81.67
13     2.25    17.5    0.3    3.433    7.705    21.89
14     2.25    17.5    0.7    3.433    8.032    85.00
15     2.25    17.5    0.5    3.433    7.83     44.33
Armed with the design matrix and results in Table 6.10 it is possible to build
metamodel approximations of PRICE-H using response surface methodology. The
statistical software package used for this metamodel creation is again JMP version 3.1.5 for
the Macintosh. As a first-pass effort a second-order polynomial is selected for the model
structure, and model coefficients are fit to the PRICE-H cost data using least-squares
regression. Since this is a one-replication design there is no measure of mean squared error
and therefore no statistical testing of model terms is performed. Noting that this model is a
function of the REAL values for the design variables, the model is:
LNSCST = 33.85384 + (-83.68089*FNUM) + (9.1531936*FL) +
(-47.15439*T0) + (24.736707*FNUM*FNUM) +
(-3.768831*FL*FNUM) + (0.1031586*FL*FL) +
(217.00534*T0*T0) [6.5]
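Although JMP performs the fit here, the computation itself is easy to reproduce. The sketch below builds a regressor matrix containing exactly the terms retained in Equation 6.5 and solves for the coefficients by least squares; it is an illustration in Python rather than the JMP workflow, and its coefficients should match Equation 6.5 only up to the rounding of Table 6.10:

import numpy as np

# Table 6.10: FNUM, FL, T0, normalized production cost
runs = np.array([
    [1.93, 14.31, 0.42, 28.67], [1.93, 14.31, 0.58, 57.56],
    [1.93, 20.69, 0.42, 61.44], [1.93, 20.69, 0.58, 97.22],
    [2.57, 14.31, 0.42, 15.22], [2.57, 14.31, 0.58, 41.33],
    [2.57, 20.69, 0.42, 34.11], [2.57, 20.69, 0.58, 64.11],
    [1.50, 17.50, 0.50, 89.11], [3.00, 17.50, 0.50, 28.22],
    [2.25, 10.00, 0.50, 19.44], [2.25, 25.00, 0.50, 81.67],
    [2.25, 17.50, 0.30, 21.89], [2.25, 17.50, 0.70, 85.00],
    [2.25, 17.50, 0.50, 44.33]])
fnum, fl, t0, cost = runs.T

# Regressor matrix with the terms of Equation 6.5, in real (uncoded) values
X = np.column_stack([np.ones_like(cost), fnum, fl, t0,
                     fnum * fnum, fl * fnum, fl * fl, t0 * t0])
coef, _, _, _ = np.linalg.lstsq(X, cost, rcond=None)
print(coef)  # intercept, FNUM, FL, T0, FNUM^2, FL*FNUM, FL^2, T0^2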
Is this approximation sufficiently accurate? The R-square value for the model is 0.993,
which is encouraging. Further, a plot of the predicted cost values versus the actual PRICE-
H cost values from Table 6.10 (generated in JMP) shows that there is a close correlation;
this plot is given in Figure 6.15. Plots of the residuals are also examined against each
factor, and no significant trends appear that would indicate the necessity of higher-order
terms in the model.
In the plot of Figure 6.15 the horizontal axis indicates the actual afocal optics
production cost values computed from PRICE-H for each of the 15 experimental runs, and
the vertical axis indicates the values predicted from the response surface Equation 6.5. The
angled line represents the ideal fit (actual and predicted values being equal) around which
the predicted data is scattered, and the horizontal dashed line in the plot represents the
response mean value. The dashed lines bounding each angled line are confidence bands for
the predicted responses, indicating whether the model is significant at the 5% level. If
these confidence bands completely contain the horizontal dashed line, the model is not
significant. The bands are relatively tight in Figure 6.15, which is desirable. Since the
confidence curves cross the horizontal line, the metamodel is significant at the 5%
confidence level.
[Figure: scatter of predicted versus actual COST over the range 0 to 100, clustered tightly around the ideal-fit line.]
Figure 6.15 Plot of Predicted vs. Actual Values for Afocal Optics Production Cost
As a final test of model accuracy, additional confirmation runs are performed with
PRICE-H and these actual values are checked against the values predicted by the
polynomial approximation of Equation 6.5. These verification runs are selected arbitrarily
in an attempt to evenly cover much of the feasible regions for the three design variables.
The design matrix in real values, the PRICE-H results, and the differences between the
actual and predicted values are given in Table 6.11. Throughout the entire range of cost
values the predictions of the metamodel of Equation 6.5 track the actual PRICE-H values
closely. This is not immediately apparent in Table 6.11, because the normalization of the
cost values inflates the error values shown in the right-hand column; on the non-normalized
cost scale the differences between PRICE-H values and predicted values are on the order of
only a few percentage points.
Table 6.11 Confirmation Runs for Afocal Optics Production Cost Metamodel

RUN    F#      FL      T0      Cost (actual)   Cost (pred)   ERROR
 1     3       33.29   0.465   74.67           73.08          1.59
 2     2.25    24.97   0.465   74.67           76.92         -2.25
 3     1.5     16.65   0.465   74.67           75.83         -1.16
 4     3       30.55   0.614   97.44           88.79          8.65
 5     2.25    22.91   0.614   97.44           93.24          4.20
 6     1.5     15.27   0.614   97.44           94.38          3.07
 7     3       27.54   0.536   66.89           61.44          5.45
 8     2.25    20.65   0.536   66.89           65.78          1.11
 9     1.5     13.77   0.536   66.89           68.81         -1.92
10     3       24.15   0.666   81.33           78.46          2.88
11     2.25    18.11   0.666   81.33           81.69         -0.36
12     1.5     12.08   0.666   81.33           86.15         -4.81
All of these tests support the conclusion that this second-order approximation of
afocal lens assembly production cost is sufficiently accurate. The next question would be
to investigate whether the actual PRICE-H values were themselves accurate. This sort of
investigation is not possible here; for the purposes of this example we rely on the strong
history that PRICE-H has had at producing accurate results, but in practice this model
would warrant further verification against historical cost data. With this model created and
verified it can now be integrated into a unified decision formulation; this is done in the next
section.
6.4.5 Generate Potential Solutions
(Based on Section 3.4.7)
In this section Task 5 of the enterprise design method is applied to FLIR system
design in order to capture and quantify the trade-offs between system performance and
production costs. The outputs of the previous four enterprise design tasks are used to
augment the compromise DSP math formulation generated in Phase A, formulated through
the keywords given, find, satisfy, and minimize. This formulation is shown in Figure
6.16. A set of potential solutions is generated by exercising this compromise DSP, using
DSIDES to perform a multi-objective search of the product design space considering both
product performance and production cost issues. Representative DSIDES input files, a
data file and FORTRAN file, are given in Appendix B.1 and B.2, respectively.
The formulation of Figure 6.16 incorporates all of the FLIR system information
generated in Phase A of the case study and in
the previous sections. Comparing Figure 6.16
to the Phase A compromise DSP math
formulation in Figure 6.8, the cumulative
nature of the formulation is visible. All of the
design variables, goals, constraints, and
bounds are retained from the Phase A formulation. In addition, the execution of the
previous four enterprise design tasks for Phase B have augmented the formulation.
Additions include the production cost goals given in Table 6.8, the design variable for
optics focal length given in Table 6.9, the regression metamodel created for focal plane
array production cost (Equation 6.3), and the response surface metamodel created for
afocal optics production cost (Equation 6.5). (These equations are used for computing the
left-hand side of both the constraints and goals, as shown in Figure 6.16.) Recall that a
detailed description of the compromise DSP in terms of goal formulations, deviation
variables, the deviation function and so on is given in Section 3.3.3. The objective of this
formulation is to find the values of the six design variables that satisfy the mean NETD
constraint, the afocal optics production cost constraint, the focal plane array production cost
constraint, and the bounds on the design variables and ultimately minimize the deviation
function to achieve as closely as possible the goals for NETD mean, NETD standard
deviation, afocal optics production cost and focal plane array production cost. (Recall that
with the scan efficiency and spectral cuton varied over the outer “noise” array, the NETD
standard deviation model is included to address robustness.)
Given
   Approximations (equations) for:
   • NETD, mean and standard deviation
   • FPA production cost
   • Afocal lens assembly production cost
Find
   Values for the design variables
      FNUM, FL, T0, DXY, DSTAR, TDI
   Values for the deviation variables
      di+, di-
Satisfy
   System constraints
      NETDmean(FNUM, T0, DXY, DSTAR, TDI) ≤ 0.2   [6.1]
      LNSCST(FNUM, FL, T0) ≤ 80   [6.5]
      FPACST(TDI) ≤ 70   [6.3]
   System goals
      NETDmean + d1- - d1+ = Target   [6.1]
      NETDstdev + d2- - d2+ = 0   [6.2]
      LNSCST + d3- - d3+ = 1   [6.5]
      FPACST + d4- - d4+ = 1   [6.3]
   Bounds
      1.5 ≤ FNUM ≤ 3      10 ≤ FL ≤ 25
      0.3 ≤ T0 ≤ 0.7      20 ≤ DXY ≤ 30
      0.7 ≤ DSTAR ≤ 1.5   TDI = {1, 2, 4}
      di+, di- ≥ 0 ; di+ • di- = 0
Minimize
   Preemptive deviation function
      Z = [ f1(di+, di-), ..., f4(di+, di-) ]   (varies by scenario)

[Figure annotations tie each model to its source: FLIR92 and the product array (NETD), regression on historical data (FPA cost), and a CCD with response surface equation built on PRICE-H (afocal lens cost), spanning product design (system performance) and manufacturing process design (production cost).]
Figure 6.16 C-DSP Math Formulation for Phase B
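To make the mechanics of the goal formulations concrete: with the condition di+ • di- = 0, each goal equation of the form response + d- - d+ = target resolves to d+ = max(0, response - target) and d- = max(0, target - response). A minimal Python sketch follows; it is not the DSIDES implementation, and the sample cost value is simply a representative solution value from Table 6.13:

def deviations(response, target):
    # Solve response + d_minus - d_plus = target with d_plus * d_minus = 0.
    d_plus = max(0.0, response - target)
    d_minus = max(0.0, target - response)
    return d_plus, d_minus

# Afocal optics cost goal (target 1), with the system constraint
# LNSCST <= 80 checked first:
lnscst = 47.44
assert lnscst <= 80
print(deviations(lnscst, 1.0))  # approximately (46.44, 0.0): only overachievement remains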
The design space is explored by “exercising” the compromise DSP, and in this case
this exercising is accomplished by using different goal priority scenarios, different goal
targets for NETD, and different starting points for the design variables. The six different
scenarios are broken into three groups of two. The first group of scenarios represents
aggressive preferences for product performance with production cost secondary, and the
second group corresponds to more relaxed preferences for product performance with
production cost again secondary. The third and final group of scenarios corresponds to
placing production cost goals at higher priorities than product performance. The deviation
function formulations for each of these six scenarios are given in Table 6.12.
Table 6.12 Deviation Function Scenarios for Phase B

Deviation Function Scenarios             Priority    Priority    Priority    Priority
                                         Level 1     Level 2     Level 3     Level 4
1. Aggressive Product Performance I      d1+         100 • d2+   d4+         d3+
2. Aggressive Product Performance II     d1+         100 • d2+   d3+         d4+
3. Relaxed Product Performance I         d1+         100 • d2+   d4+         d3+
4. Relaxed Product Performance II        d1+         100 • d2+   d3+         d4+
5. Afocal Optics Production Cost         d3+         d4+         d1+         100 • d2+
6. Focal Plane Array Production Cost     d4+         d3+         d1+         100 • d2+
In all of the scenarios in Table 6.12 the preemptive approach is used, placing the
different goals in separate priority levels. For all four goals only the overachievement of
each goal target is penalized; coming in under the target value is completely acceptable.
This is shown in Table 6.12 by the existence of only the di+ terms in each priority level.
Because the standard deviation of NETD is known to be fairly small, in each deviation
function d2+ is multiplied by 100 to ensure that small improvements are recognized.
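Since tuples in Python compare lexicographically, the preemptive approach can be sketched directly. In the sketch below, solution a carries the Scenario 1 goal values of Table 6.13, while solution b is hypothetical:

def dev_plus(response, target):
    # Only overachievement of the goal target is penalized.
    return max(0.0, response - target)

def scenario1_Z(netd_mean, netd_stdev, optics_cost, fpa_cost):
    # Scenario 1 deviation function: Z = [d1+, 100*d2+, d4+, d3+]
    return (dev_plus(netd_mean, 0.05),
            100.0 * dev_plus(netd_stdev, 0.0),
            dev_plus(fpa_cost, 1.0),
            dev_plus(optics_cost, 1.0))

a = scenario1_Z(0.049, 0.011, 47.44, 32.38)  # Scenario 1, Table 6.13
b = scenario1_Z(0.060, 0.005, 40.00, 25.00)  # hypothetical competitor
print(a)               # close to (0.0, 1.110, 31.380, 46.437) of Table 6.13
print(min(a, b) == a)  # True: a meets the level-1 target, so it wins outright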
Scenario 1 represents an aggressive target of 0.05 °C for NETD mean, with the NETD
mean goal placed at highest priority. NETD standard deviation is placed at second priority,
followed by focal plane array production cost and finally afocal optics production cost.
Scenario 2 is nearly identical, except that the priorities for afocal optics production cost
and focal plane array production cost are reversed. (Placing the cost goals in separate
priority levels is warranted because the cost values have been normalized; the cost values
thus can not be added together to yield total cost measures of any meaning.) Both Scenario
1 and Scenario 2 correspond to a FLIR customer saying, “Achieving the target of 0.05 °C
for mean NETD is of the utmost importance. Values less than 0.05 °C are equally
preferable. It would also be nice to minimize the variability of NETD, but this
improvement should not be made at the expense of mean NETD. Finally, a low cost
system would be appreciated, but cost improvements should not be made at the expense of
product performance.” Scenario 3 and Scenario 4 map nearly identically to Scenarios 1
and 2; the only difference is that the aggressive target of 0.05 °C for NETD mean is relaxed
to a more reasonable value of 0.1 °C. The remaining goal priorities are identical.
In contrast, in Scenario 5 and Scenario 6 a different tack is taken. In Scenario 5
the goal of reducing afocal optics production cost is placed at highest priority, followed by
the goal of reducing focal plane array production cost in priority level 2. Achieving a target
of 0.1 °C for mean NETD is placed at priority level 3, followed by the goal for reducing
NETD variability in priority level 4. Scenario 5 corresponds to a FLIR customer saying,
“Producing a low-cost FLIR system is of the utmost importance (where the primary driver
of system cost is the production cost of the afocal optics). Achieving the target of 0.1 °C
for mean NETD is of much less importance, and may even be sacrificed in attempts to
achieve the lowest cost system possible. Reducing NETD variability is also nice, but it is
far from critical.” Similarly in Scenario 6 the goal of reducing focal plane array
production cost is placed at highest priority, followed by the goal of reducing afocal optics
production cost in priority level 2. Achieving a target of 0.1 °C for mean NETD is placed at
priority level 3, followed by the goal for reducing NETD variability in priority level 4.
Scenario 6 corresponds to a FLIR customer saying, “Producing a low-cost FLIR system is
of the utmost importance (where the primary driver of system cost is the production cost of
the focal plane array). Achieving the target of 0.1 °C for mean NETD is of much less
importance, and may even be sacrificed in attempts to achieve the lowest cost system
possible. Reducing NETD variability is also nice, but it is far from critical.”
For each of these six scenarios, high and low starting points are explored in order
to establish the convergence of solutions. In an attempt to “cover” the design space the
different starting points for each scenario are set at the upper and lower bounds of the
feasible regions of each design variable. Because there is only one discrete design variable
(TDI) with only three values, standard DSIDES is employed rather than the foraging
version, and an exhaustive search is performed for the TDI values. For each DSIDES run
the value of TDI is hard-wired at 1, 2, or 4. However, there is an important fact that serves
to distance the scenarios in Table 6.12 from the actual DSIDES runs performed: the focal
plane array production cost goal (Goal 3 in Figure 6.16 and equation 6.3 developed in
Section 6.4.2.1) is a function only of the number of TDI stages. Because TDI is hard-
wired into the formulation, neither the FPA goal nor the FPA constraint have to be
processed in each DSIDES run. Therefore, in generating the DSIDES runs for the six
scenarios in Table 6.12 the following approach is taken:
• The FPA cost goal is removed from the deviation function, thus reducing the
six scenarios to three.
• Each of the three scenarios is exercised for different (high and low) starting
points and different values of TDI.
• A value of “4” for TDI violates the FPA production cost constraint in Figure
6.16, so only two values for TDI are possible: the set of {1, 2}.
• In all, (3 scenarios)(2 starting points)(2 TDI values) = 12 DSIDES runs are
performed. The outputs from all 12 runs are given in Appendix B.3. For each
run the FPA cost is recorded along with the remaining three goals.
By examining the output from each of these 12 DSIDES runs and adding the FPA cost goal
back into the mix, the results for each of the six scenarios of Table 6.12 are obtained.
These best solutions are given in Table 6.13. For all six scenarios feasible, converged
solutions are obtained.
Before discussing these best solutions, it is important first to ensure that they are
verified. This verification is performed by ensuring the trends in solutions are as expected,
and an important trend to support solution verification is if both the high and low starting
points converge to the same region for each design variable. To this end convergence plots
are generated for the DSIDES runs in all scenarios and across both values of TDI. The
convergence plots for Scenarios 3 and 4 of Table 6.12 are illustrated in Figure 6.17 and
Figure 6.18 for both values of TDI. As can be seen in the figures, each of the five
continuous design variables does indeed converge to the same region from both starting
points. Similar plots are also generated for Scenarios 1 and 2 and Scenarios 5 and 6, but
for the sake of brevity these plots are not shown. The relative smoothness of these
convergence plots indicates a fairly well-behaved design space; even the more jagged plots
for optics f number (FNUM), optics focal length (FL) and optics transmission (T0) are still
fairly benign.
The plots in Figure 6.18 show somewhat less conventional behavior than any
of the previous convergence plots. The DSIDES run for the high starting point converges
quickly, in around 20 cycles. The DSIDES run for the low starting point, however,
continues on until the maximum of 100 cycles is reached. Although the same solution
region is identified, the stationarity criteria for run termination are never satisfied. This is a
function of the parameter settings used in the DSIDES run itself and holds no significant
consequences for the actual solutions achieved.
[Figure: convergence histories over 100 DSIDES cycles for FNUM, FL, T0, DXY, and DSTAR, each converging to a common region from high and low starting points.]
Figure 6.17 Design Variable Convergence from High and Low Starting Points, Phase B, Scenarios 3 and 4, TDI = 1
[Figure: convergence histories over 100 DSIDES cycles for FNUM, FL, T0, DXY, and DSTAR; the high starting point converges in roughly 20 cycles, while the low starting point runs to the 100-cycle limit.]
Figure 6.18 Design Variable Convergence from High and Low Starting Points, Phase B, Scenarios 3 and 4, TDI = 2
Thus these solutions are verified. The encouraging nature of these convergence
plots lends confidence in the reasonableness of the solutions generated from this
exploration of the design space, and it therefore becomes justifiable to draw a set of
implications and recommendations from them. These implications and recommendations
are discussed in the next section.
Table 6.13 Best Solutions for Phase B

SCENARIO                             1              2              3
GOAL        Level 1                  NETD=0.05      NETD=0.05      NETD=0.1
PRIORITIES  Level 2                  NETD Std Dev   NETD Std Dev   NETD Std Dev
            Level 3                  FPA Cost       Optics Cost    FPA Cost
            Level 4                  Optics Cost    FPA Cost       Optics Cost
GOAL        NETD Mean (°C)           0.049          0.049          0.100
VALUES      NETD Std Dev (°C)        0.011          0.011          0.023
            Optics Production Cost   47.44          47.44          48.53
            FPA Production Cost      32.38          32.38          9.81
DESIGN      FNUM                     1.50           1.50           2.05
VARIABLE    FL                       10.00          10.00          10.20
VALUES      T0                       0.542          0.542          0.633
            DXY                      29.92          29.92          30.00
            DSTAR                    1.275          1.275          1.500
            TDI                      2              2              1
DEVIATION   Level 1                  0.000          0.000          0.000
FUNCTION    Level 2                  1.110          1.110          2.280
            Level 3                  31.380         46.437         8.810
            Level 4                  46.437         31.380         47.529

SCENARIO                             4              5              6
GOAL        Level 1                  NETD=0.1       Optics Cost    FPA Cost
PRIORITIES  Level 2                  NETD Std Dev   FPA Cost       Optics Cost
            Level 3                  Optics Cost    NETD=0.1       NETD=0.1
            Level 4                  FPA Cost       NETD Std Dev   NETD Std Dev
GOAL        NETD Mean (°C)           0.100          0.146          0.146
VALUES      NETD Std Dev (°C)        0.023          0.034          0.034
            Optics Production Cost   48.53          1.085          1.085
            FPA Production Cost      9.81           9.81           9.81
DESIGN      FNUM                     2.05           1.86           1.86
VARIABLE    FL                       10.20          10.02          10.02
VALUES      T0                       0.633          0.301          0.301
            DXY                      30.00          30.00          30.00
            DSTAR                    1.500          1.500          1.500
            TDI                      1              1              1
DEVIATION   Level 1                  0.000          0.085          8.81
FUNCTION    Level 2                  2.280          8.81           0.085
            Level 3                  47.529         0.046          0.046
            Level 4                  8.810          3.35           3.35
6.4.6 Phase B Results and Recommendations: Trading Off Product Performance with Production Cost
The first point of interest in Table 6.13 is the redundancy of some of the scenarios.
The solutions for Scenarios 1 and 2 are equivalent, as are the solutions for Scenarios 3 and
4 and Scenarios 5 and 6. In essence, both production cost goals push the solution in
similar directions, so breaking them out and treating them separately in this instance was
unnecessary. Looking to the goal values in Table 6.13, as the target value for NETD mean
is first relaxed and then as the priority level of NETD mean is reduced, we see a
concomitant decrease in both production cost measures. Thus with this phase of the case
study we have made a clear shift in perspective from a narrow, product-performance focus
to a broader, integrative focus. Substantial complexities appear in the trade-offs
between product performance and manufacturing cost. It still remains possible to achieve
aggressive targets for product performance, but as this target is relaxed substantial gains
can be made in production costs. Therefore the scenarios in Table 6.13 represent a range
of potential solutions, any of which may be the “best” solution depending on the actual
customer preferences. This range of solutions may provide a fruitful ground for
discussions with given customers, allowing them to tailor their exact requirements and
priorities.
Is implementing this enterprise design approach worthwhile? An important
comparison can be made between the solution of Scenario 1 and Scenario 2 in Table 6.13
and either solution from Phase A in Table 6.6. In both cases the aggressive target of 0.05 °C
is met for NETD mean, and in both comparably low values are achieved for NETD
standard deviation. However, plugging the Phase A solutions into the production cost
models yields an afocal lens assembly production cost of 48.33 and an FPA production
cost of 77.54, both of which are higher than the costs achieved in Phase B. Therefore, by
applying the approach to enterprise design we identify a system design that yields
substantially lower production costs without sacrificing anything in performance. These
cost savings are achieved through added design effort that did not exceed one person-week
and resulted in decision formulations that were not appreciably different in computational
efficiency. It is fair to say that significantly more useful results were achieved with a
marginal increase in design effort. Thus, in context of Section 1.5.1, these solutions
support the validation of the approach to enterprise design.
Has the “integration” of product design and manufacturing process design been
achieved? Manufacturing process design variables have not been explicitly included in the
formulation, because they will actually be set by other designers at some point later in time.
However, with the information available at the current point in the design timeline,
solutions have been identified that hold the greatest potential for reducing production cost.
These solutions themselves, while in the traditional product design domain, have implicit
but significant effects on the downstream manufacturing design decisions. The system-
level specifications set by this decision will drive the die size needed for the FPA as well as
influence the number of lenses, coatings and lens materials needed. These component-level
specifications will then ultimately drive the setting of specific manufacturing processing
parameters. Thus integration has indeed been achieved, but not in a traditional manner.
Although this phase of the case study serves as a valid example of how enterprise
design and integration can be achieved across different design domains and through time,
there are further extensions that can be made to the decision formulation that draw it into the
domain of organization design as well. These extensions are developed in the sections to
follow.
6.5 CASE STUDY C: INTEGRATING ORGANIZATION DESIGN ISSUES
In Phase B of this case study suite a range of FLIR system designs are identified
based on measures of product performance (NETD mean and standard deviation) and
measures of production cost (afocal lens assembly and focal plane array costs). However,
there are even further enterprise-wide implications for this system design decision, as
shown in Figure 6.4. In the subsections to come Phase C of the case study is developed,
with the aim of integrating organization design issues into the decision formulation. This
integration is accomplished by applying the approach of enterprise design.
6.5.1 Expand Scope of Decision
(Based on Section 3.4.3)
In this section enterprise design Task 1 is applied to the FLIR system design
decision of the previous phase in order to identify its potential impacts on the duration of
the remaining design process. For this phase of the case study we are again focused on the
FLIR system-level design decision at the
heart of both Phase A and Phase B.
Therefore our baseline decision has the
design variables of optics f number
(FNUM), optics transmission (T0), optics
focal length (FL), detector detectivity
(DSTAR), the number of TDI stages (TDI), and detector size (DXY). There are goals for
NETD mean and NETD standard deviation and a constraint for mean NETD that have
grown from Phase A, and there are goals and constraints for afocal lens assembly
production cost and focal plane array production cost that are identified in Phase B. In this
section we return to Task 1 of the enterprise design method, and again consider expanding
the scope of the decision by generating additional enterprise-wide effects from the
organization design domain.
In nearly every contract received for the design and delivery of a FLIR system the
customer calls for a specific delivery date. Although schedule overruns were tolerated for
the most part in times past, more recent contracts call out specific penalties for late delivery.
In other words, in nearly every case the external customers are both cost-conscious and
schedule-conscious. Therefore, an awareness of the external customers results in the
generation of additional goals for meeting delivery dates.
A primary driver that influences the ultimate delivery date of a FLIR system is the
duration of the product development process. It is not uncommon for the design phase to
account for a significant portion of the contract period. Therefore it is important to plan,
monitor and manage the development process to ensure that milestones are met on time.
This activity is usually the responsibility of the project manager, and as it entails components
of job design and work design it is in the traditional realm of organization design.
Although product design and project management are most often pursued
separately, in fact the decisions made during product design have substantial implications
for the actual course that the development process takes. If, for example, new or untested
technology is selected to fulfill a particular system requirement, there is a much greater
chance that additional testing, development and rework will be required during system
integration than if an off-the-shelf, standard technology is used. Therefore the goals of
meeting specific targets for development process duration are added to a raw list of
potential goals for consideration.
We now examine our six design variables to determine if they might have any
bearing on the ultimate duration of the product development process. The focal plane array
must operate at extremely low temperatures as discussed in Section 6.2 and therefore has a
dedicated refrigeration subsystem for heat dissipation. The focal plane array itself includes
signal processing and amplification circuitry for the voltage signals for each of the detector
elements, and this circuitry generates a significant amount of heat that must be accounted
for. The more amplification that is required, the more heat is generated, and thus the
chances increase that either the refrigeration subsystem or the circuitry itself would have to
be redesigned. The amount of amplification required for each detector element is driven by
the element’s physical size; a larger element collects a larger number of photons during a
set time, thus yielding a greater voltage that requires less amplification. Therefore we see
that a given value for DXY can easily affect the duration of the focal plane array design
process.
Table 6.14 Goals and Constraints for Phase C Decision

                                                       Constraint Value   Objective Target
System Performance
   Noise Equivalent Temperature Difference (°C)        ≤ 0.2 °C           0.05 °C
Production Cost
   Afocal Optics (NMU)                                 ≤ 80               1
   Focal Plane Array (NMU)                             ≤ 70               1
Design Process Schedule
   Focal Plane Array Design Process Duration (hours)   ≤ 150 hours        110 hours
Similarly, detector detectivity (DSTAR) is primarily a function of the detector
material, but it is measured at the output of the FPA signal processor. Therefore the
amount of effort that is expended on the FPA signal processing design is increased as the
requirement for DSTAR increases. Although there are many drivers that contribute to the
overall product development process duration, the specific measure that is selected for this
phase of the case study is that of the duration of the FPA design process. It clearly exists
in the overlap between the issues of concern from the external customers and the range of
effects captured by the design variables of the decision at hand. Thus the final list of
potential goals and constraints for the decision formulation is given in Table 6.14.
Table 6.14 serves as the output for this enterprise design task; we can now move to
identifying how these goals and constraints are to be computed and what input information
is required for analysis. These tasks are performed in the next section.
6.5.2 Identify Modeling Techniques and Determine Input Needed for Analysis
(Based on Section 3.4.4 and Section 5.4)
In this section enterprise design Tasks 2a and 2b are applied to quantify the impact
of FLIR system design on the duration of the focal plane array design process. There are a
fair number of modeling techniques available for estimating the duration of a task-based
process, as evidenced by the number of
applicable modeling schemes reviewed in
Section 2.4.4. Both discrete-event and
system dynamic simulation could be applied,
as well as queuing theory, critical path
analysis, and a design structure matrix
approach. These modeling options are all
functions of a similar set of design variables --
given a structure and sequence of tasks, the
durations of each task, and the probabilities of
iteration between tasks, the overall process
duration can be computed. It would also be possible to gather historical data and build
regression models to correlate the values of our system-level design variables with process
duration.
In this instance of modeling the FPA development process, it is known ahead of
time that a standard design process will be invoked, similarly to the situation illustrated in
Figure 6.4. Therefore, since this process will proceed similarly to those in the past, it would
be desirable to “look to the past” in terms of Figure 5.8 and build regression models for
correlation. However, in this case such historical data is not available, so the only feasible
options that remain are to build models that operate on the task structure itself. Fortunately
this task structure is known with some certainty and is illustrated in Figure 6.19.
[Figure: network of tasks. Technical Meetings, Breadboard, and Configuration/Process Definition proceed in series; Worst Case Analysis, Electrical Analysis, Thermal Analysis, and Design Calculations proceed in parallel; Checklists and Design Review then lead to END, with "Redo Analysis" and "Redo Design" iteration loops.]
Figure 6.19 Task Structure for the FPA Development Process
In the structure of tasks shown in Figure 6.19 the durations for each task are
known in a broad sense, and variability is captured by assuming that the durations are each
distributed normally. The three initial tasks of technical meetings, breadboarding and
configuration and process definition are performed by a team of four designers, who then
branch off to address each of four analysis tasks in parallel. The team then comes back
together to evaluate the FPA design in terms of generating checklists and ultimately by
engaging in a design review. In each of these two concluding tasks there is a given
probability that problems will be uncovered, requiring the iteration of either reworking the
analysis or the design itself. This task structure is based on a documented design process
regularly implemented in industry, but modifications to the structure have been made to
protect the interests of the industry sources.
Particular aspects of this structure help to narrow down the number of feasible
modeling options. The existence of multiple “paths” through the structure, as well as the
stochastic and iterative nature of the task durations, make critical path analysis and the
design structure matrix approach unsuitable. Further, these details shift this application
outside the traditional areas that are amenable to system dynamic simulation. Realistically,
then, the two modeling options that rise to the fore are those of queuing theory and
discrete-event simulation. For this relatively simple example the accuracies of each
approach would likely be equivalent, but for processes with other than exponentially
distributed arrivals the queuing theory formulations become cumbersome. In addition, in
this case a discrete-event simulation package is readily available, so discrete-event
simulation is selected as the modeling option. The package SIMAN is employed and a
baseline model is created; the input files for this model are given in Appendix C. (There
are two separate input files: a model file denoted “.MOD” in Appendix C.1 and an
experiment file denoted “.EXP” in Appendix C.2.) This model yields task durations that
are found to be satisfactory compared to given knowledge of the existing process, so these
tasks of the enterprise design method are drawn to a close. The SIMAN model file and
experiment file are passed as output of the tasks, and contain by default the additional input
data required for analysis.
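Although SIMAN is the package actually employed here, the logic of the baseline model can be conveyed by a plain Monte Carlo sketch in Python. Every numeric value below is a placeholder: the thesis fixes the task structure, the 24 to 48 hour range for Design Calculations, and iteration probabilities on the order of 0.02 to 0.14, but the remaining task parameters are invented for illustration:

import random

# Hypothetical (mean, std. dev.) task durations in hours; placeholders only.
SERIAL = [("Technical Meetings", 8, 1), ("Breadboard", 16, 2),
          ("Configuration/Process Definition", 12, 2)]
PARALLEL = [("Worst Case Analysis", 20, 3), ("Electrical Analysis", 22, 3),
            ("Thermal Analysis", 18, 3), ("Design Calculations", 36, 4)]
CHECKLISTS, REVIEW = (6.0, 1.0), (4.0, 0.5)

def draw(mean, sd):
    # Normally distributed task duration, truncated at zero.
    return max(0.0, random.gauss(mean, sd))

def one_trial(p_redo_analysis=0.08, p_redo_design=0.08):
    # One pass through the task structure of Figure 6.19.
    t = sum(draw(m, s) for _, m, s in SERIAL)
    while True:                      # "Redo Design" iteration loop
        while True:                  # "Redo Analysis" iteration loop
            # Four designers work the analysis tasks in parallel, so the
            # phase lasts as long as its slowest task.
            t += max(draw(m, s) for _, m, s in PARALLEL)
            t += draw(*CHECKLISTS)
            if random.random() > p_redo_analysis:
                break
        t += draw(*REVIEW)
        if random.random() > p_redo_design:
            return t

# One replication: the mean behavior of a sample of one hundred trials.
print(sum(one_trial() for _ in range(100)) / 100)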
6.5.3 Define Boundaries of Decision
(Based on Section 3.4.5 and Section 5.4)
From the previous enterprise design tasks a model has been created to estimate the
duration of the FPA design process. However, as discussed in Section 6.5.1, FLIR
system design decisions have legitimate impacts on this duration, and these impacts are as
yet unaddressed in the model. As in Section 6.4.3, we again have an interdependent
triangle of decisions; this relationship is
shown in Figure 6.20. FLIR system design
decisions have direct influence on downstream
component design decisions -- in this case, the
detector detectivity, element size, and number
of TDI stages will influence the actual circuit
diagrams, materials, and layout of the focal
plane array itself. These details of FPA design drive the actual task durations and
probabilities of iteration between tasks in the design process itself. Thus the issue is how
to make the links explicit between FLIR system design and project management.
[Figure: a triangle of interdependent decisions arranged along the passage of time: FLIR system design (optics transmission and f number, FPA size and configuration, etc.), FLIR component design (materials, circuit diagrams, FPA layout, etc.), and project management (task durations, probabilities of iteration), culminating in design process duration.]
Figure 6.20 Decision Interdependence for FPA Design Process Duration
In terms of Figure 3.14, the “additional input data” for estimating the FPA design
process duration (the task durations and probabilities of iteration) is recognized to be
somewhat under the control of FLIR system designers but is also recognized to be fairly
uncertain until later in the product design timeline. Thus we move into option 3, which is
either to invoke robust design or to pursue variable transformations. In terms of Figure
5.8, the selected modeling option (SIMAN) is not a function of the FLIR system design
variables, and historical data is not available to allow “looking to the past”, so once again
we turn to the option of specifying variable transformations or correlations. This situation
is shown in Figure 6.21.
[Figure: the current decision, with design variables {TDI, f#, FL, T0, D*, DXY}, must be transformed or correlated to the downstream SIMAN decision, whose inputs are task durations and probabilities of iteration.]
Figure 6.21 Focal Plane Array Design Process Duration Model Timeline Interdependency
Examining the system design variables at our disposal, we note that the detector
size (DXY) affects the amplification required for the FPA, which affects the amount of heat
dissipation required, thus increasing the probability that the refrigeration system may be
overtaxed and causing the FPA design to be reworked. Thus a direct transformation can be
drawn from the value of DXY to the probability of having to redo the FPA design. This
probability is known from experience to vary from roughly one chance in fifty to one
chance in seven, so a transformation is created to map this probability linearly to the
possible range of values for DXY. This transformation is as follows:
Probability of Design Iteration = 0.02 + (30E-6 - DXY) * 0.12/10E-6 [6.6]
Similarly, detector detectivity (DSTAR) is primarily a function of the detector material, but
it is measured at the output of the FPA signal processor. Therefore the amount of effort
that is expended on the FPA signal processing design is increased as the requirement for
DSTAR increases. This is felt to be a primary driver of the duration of the Design
Calculations task in Figure 6.19, which can vary from 24 to 48 hours. Thus the following
transformation is created:
Design Calculations Task Duration = 24+(DSTAR - 7E10)*24/8E10 [6.7]
These transformations are both created using engineering judgment; it would be ideal to
verify them against historical data, but again if historical data were available the path of
building regression models would have been selected instead in Tasks 2a and 2b. It is felt
that these transformations capture legitimate trends in the FPA design process durations, so
they should be adequate for performing comparative studies. Their value as true
predictors of process duration is not emphasized.
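Expressed in code, with DXY in meters and DSTAR in cm Hz^(1/2)/W to match the constants of Equations 6.6 and 6.7 (the function names are ours):

def p_design_iteration(dxy):
    # Equation 6.6: maps detector size (30e-6 m down to 20e-6 m) linearly
    # onto the one-in-fifty to one-in-seven iteration probability range.
    return 0.02 + (30e-6 - dxy) * 0.12 / 10e-6

def design_calc_hours(dstar):
    # Equation 6.7: Design Calculations duration, 24 hours at DSTAR = 7e10
    # rising linearly to 48 hours at DSTAR = 15e10.
    return 24 + (dstar - 7e10) * 24 / 8e10

print(p_design_iteration(20e-6), design_calc_hours(15e10))  # approximately 0.14, and 48.0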
Again similarly to Phase B of this case study, the creation of these transformations
has resulted in a self-contained decision formulation that is purely a function of the system-
level design variables, noise factors, and specified fixed parameters. Modeling schemes
have been identified and created for every goal and constraint, but again the issues remain
as to whether model transformations or approximations are required. These issues are
addressed in the next section.
6.5.4 Transform and Integrate Models
(Based on Section 3.4.6)
In this section Task 4 of the enterprise design method is applied to FLIR system
design in order to create efficient and portable models to quantify the impact of the system
design variables on the duration of the focal
plane array design process. Although the
computational efficiency of SIMAN is fairly
high for this relatively small model of FPA
development process duration, the package is
only available on the Windows platform and as
such warrants the creation of a more portable metamodel. In addition the stochastic nature
of the simulation calls for the creation of models both of the mean response and its standard
deviation.
Table 6.15 Design Variables for FPA Design Process Duration Metamodel

DESIGN VARIABLES       DESCRIPTION                                   FEASIBLE REGION
Detector  1) DXY       detector element size, µm                     continuous; 20 ≤ x ≤ 30
          2) DSTAR     detector detectivity, 10^11 cm Hz^(1/2)/W     continuous; 0.7 ≤ x ≤ 1.5
As we consider whether screening is necessary in this case to identify a smaller set
of design variables, an additional benefit of creating the transformations in the previous
section appears. Although SIMAN in general requires a fair amount of detailed
information, through the variable transformations we are able to construct a model that is a
function of only our system-level design variables. Screening is therefore not a
requirement here. The list of system design variables, along with their descriptions and
feasible regions, is given in Table 6.15. Thus we move in enterprise design Task 4 from
screening to selecting a metamodeling technique, the recommendations for which are found
in Section 4.5.2.
Table 6.16 Central Composite Design for Focal Plane Array Design Process Duration Metamodel

                            Design Calc's   Probability of     Design Process Duration
       DXY       DSTAR      Task            Design             (Ten Replications)
RUN    (coded)   (coded)    Duration        Iteration          MEAN      STDEV
 1     -1        -1         42              0.05               120.4     2.90
 2     -1         1         30              0.05               127.3     3.69
 3      1        -1         42              0.11               134.3     3.36
 4      1         1         30              0.11               142.0     4.16
 5     -2         0         36              0.02               118.1     2.61
 6      2         0         36              0.14               146.5     3.41
 7      0        -2         48              0.08               124.7     3.20
 8      0         2         24              0.08               142.2     5.33
 9      0         0         36              0.08               132.1     3.00
Similarly to Section 6.4.4 this problem size is small enough that nearly any
metamodeling technique could be applied to build approximations of the focal plane array
design process duration, but again the advantages of selecting response surfaces are strong
enough that an alternative method would be selected only if there would be significant
problems with RSM. In this case no such problems appear. Therefore a central composite
design is constructed using the two factors of detector size (DXY) and detectivity
(DSTAR). This 9-run design, consisting of a 4-run 2^2 full factorial experiment with 4 star
points and one center point, is given in Table 6.16. Values for DXY and DSTAR are
transformed into values for the probability of design iteration and the design calculations
task duration using Equation 6.6 and Equation 6.7, respectively, and then these values are used in
SIMAN input files for each run. Because random number streams are used for task
duration variability and probabilities of iteration, the simulation is stochastic and sets of ten
replications are generated for each run in the CCD, where each replication captures the
mean behavior of a sample of one hundred trials. Values for sample mean and sample
standard deviation are then computed for each of the runs of the CCD. These values are
shown in Table 6.16, and the complete sets of replications are given in Appendix C.4.
Armed with the design matrix and results in Table 6.16 it is possible to build
metamodel approximations of SIMAN using response surface methodology. As a first-
pass effort second-order polynomials are selected for model structure, and model
coefficients are fit to the SIMAN data using least-squares regression. Models are built for
both the mean duration of the design process (DPMEAN) and the standard deviation of the
process (DPSDEV). Noting that the models are functions of the CODED values for the
design variables, the models are:
DPMEAN = 130.74422 + (-7.132*DSTARC) + (4.14*DXYC) +
(-0.201*DSTARC*DXYC) + (0.3078333*DSTARC*DSTARC) +
(0.5948333*DXYC*DXYC) [6.8]
DPSDEV = 3.1867535 + (-0.2123987*DSTARC) + (0.4876791*DXYC) +
(-0.0029388*DSTARC*DXYC) + (-0.032226*DSTARC*DSTARC) +
(0.2811089*DXYC*DXYC) [6.9]
Are these approximations sufficiently accurate? There are a number of measures to aid in
this evaluation. First, the R-square value is high for both models, on the order of 0.994
for DPMEAN and 0.976 for DPSDEV. Second, plots of the predicted values versus the
actual values for both development process mean and standard deviation show that there is
a close correlation; these plots are shown in Figure 6.22. Third, the residuals are plotted
against each model factor, and no significant trends appear that would indicate the necessity
of higher-order terms in the model.
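Because Equations 6.8 and 6.9 are closed-form polynomials in the coded variables, they can be evaluated anywhere in the region; a minimal sketch using the coefficients above:

def dpmean(dstarc, dxyc):
    # Equation 6.8: mean FPA design process duration (hours), coded inputs.
    return (130.74422 - 7.132 * dstarc + 4.14 * dxyc
            - 0.201 * dstarc * dxyc + 0.3078333 * dstarc ** 2
            + 0.5948333 * dxyc ** 2)

def dpsdev(dstarc, dxyc):
    # Equation 6.9: standard deviation of the process duration (hours).
    return (3.1867535 - 0.2123987 * dstarc + 0.4876791 * dxyc
            - 0.0029388 * dstarc * dxyc - 0.032226 * dstarc ** 2
            + 0.2811089 * dxyc ** 2)

# At the center point the smoothed surfaces sit near the observed
# center-run values of Table 6.16 (132.1 and 3.00):
print(dpmean(0, 0), dpsdev(0, 0))  # 130.74422 3.1867535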
[Figure: two scatter plots of predicted versus actual values, DPSDEV over 2.5 to 5.5 and DPMEAN over 115 to 150, each clustered tightly around the ideal-fit line.]
Figure 6.22 Plots of Predicted vs. Actual Values for FPA Development Process Duration Mean and Standard Deviation
In the plots of Figure 6.22 the horizontal axis indicates the actual design process
durations computed from SIMAN for each of the 9 experimental runs, and the vertical axes
indicate the values predicted from the response surface Equations 6.8 and 6.9,
respectively. The angled line represents the ideal fit (actual and predicted values being
equal) around which the predicted data is scattered, and the horizontal dashed line in the
plot represents the response mean value. The dashed lines bounding each angled line are
confidence bands for the predicted responses, indicating whether the model is significant at
the 5% level. If these confidence bands completely contain the horizontal dashed line, the
model is not significant. The bands are relatively tight in Figure 6.22, which is desirable.
Since the confidence curves cross the horizontal line, both metamodels are significant at the
5% confidence level.
As a final test of model accuracy, additional confirmation runs are performed with
SIMAN by varying both the design calculations task duration and the probability of redoing
the FPA design over the full ranges of their values. These values are then transformed and
plugged into the polynomial approximations above. The design matrix in real values, the
SIMAN results, and the differences in the predicted values are given in Table 6.17.
Table 6.17 Confirmation Runs for FPA Design Process Duration Metamodel

       Design Calc's   Probability of     SIMAN Actual       Metamodel Predicted   Error
RUN    Task Duration   Design Iteration   MEAN      STDEV    MEAN      STDEV       MEAN     STDEV
 1     24              0.02               120.4     2.90     112.61    2.79        -1.06    -0.13
 2     48              0.02               127.3     3.69     139.53    3.62        -1.25     0.13
 3     48              0.14               134.3     3.36     157.70    5.59         0.12     0.29
 4     24              0.14               142.0     4.16     127.57    4.72        -0.42     0.09
 5     27              0.07               118.1     2.61     119.53    2.67         0.14     0.17
 6     38              0.04               146.5     3.41     128.60    3.10        -0.58     0.03
 7     44              0.09               124.7     3.20     142.34    3.61         0.40     0.25
 8     35              0.11               142.2     5.33     134.27    3.92        -0.85    -0.04
Throughout the entire range of design process durations the predicted values are
less than one percent away from the actual SIMAN values. All of these tests support the
conclusion that these second-order approximations of FPA development process duration
mean and standard deviation are sufficiently accurate. The next question would be to
investigate whether the actual SIMAN values were accurate. This sort of investigation is
not possible here; for the purposes of this example we rely on the fact that the simulation
model behaves in a justifiable fashion. In practice, of course, this model would warrant
further verification against historical data. With this model created and verified it can now
be integrated into a unified decision formulation; this is done in the next section.
6.5.5 Generate Potential Solutions
(Based on Section 3.4.7)
In this section Task 5 of the enterprise design method is applied to FLIR system
design in order to capture and quantify the trade-offs between system performance,
production costs, and design process duration.
The outputs of the previous four enterprise
design tasks are used to augment the
compromise DSP math formulation generated
in Phase B, formulated through the keywords
given, find, satisfy, and minimize. This
formulation is shown in Figure 6.23. A set of potential solutions is generated by
exercising this compromise DSP, using DSIDES to perform a multi-objective search of the
product design space considering product performance issues (NETD) as well as
manufacturing process design (FPA and afocal lens assembly production cost) and
organization design (FPA design process duration) issues. Representative DSIDES input
files, a data file and a FORTRAN file, are given in Appendices C.6 and C.7, respectively.
The formulation of Figure 6.23 incorporates all of the FLIR system information
generated in Phase B of the case study and in the previous sections. Comparing Figure
6.23 to the Phase B compromise DSP math formulation in Figure 6.16, the cumulative
nature of the formulation is visible. All of the design variables, goals, constraints, and
bounds are retained from the Phase B formulation. In addition, the execution of the
previous four enterprise design tasks for Phase C have augmented the formulation.
Additions include the design process schedule goal given in Table 6.14, the response
surface metamodel created for FPA design process duration mean (Equation 6.8), and the
response surface metamodel created for FPA design process duration standard deviation
(Equation 6.9). (These equations are used for computing the left-hand sides of both the
constraints and goals, as shown in Figure 6.23.)

[Figure: solution generation process for this task -- goals and constraints, models, design variables, noise factors, and additional input are combined through the C-DSP template to create the math formulation; exercising the C-DSP yields many potential solutions, which pass through verification and validation to become final competing solutions; if the solutions are acceptable, the best solution is selected.]
Given
    Approximations (equations) for:
    • NETD, mean and standard deviation
    • FPA production cost
    • Afocal lens assembly production cost
    • FPA design process duration, mean and standard deviation
Find
    Values for the design variables
        FNUM, FL, T0, DXY, DSTAR, TDI
    Values for the deviation variables
        di+, di-
Satisfy
    System constraints
        NETDmean(FNUM, T0, DXY, DSTAR, TDI) ≤ 0.2    [6.1]
        LNSCST(FNUM, FL, T0) ≤ 80                    [6.5]
        FPACST(TDI) ≤ 70                             [6.3]
        DesProcmean(DXY, DSTAR) ≤ 150                [6.8]
    System goals
        NETDmean + d1- - d1+ = Target                [6.1]
        NETDstdev + d2- - d2+ = 0                    [6.2]
        LNSCST + d3- - d3+ = 1                       [6.5]
        FPACST + d4- - d4+ = 1                       [6.3]
        DesProcmean + d5- - d5+ = 110                [6.8]
        DesProcstdev + d6- - d6+ = 0                 [6.9]
    Bounds
        1.5 ≤ FNUM ≤ 3        10 ≤ FL ≤ 25
        0.3 ≤ T0 ≤ 0.7        20 ≤ DXY ≤ 30
        0.7 ≤ DSTAR ≤ 1.5     TDI = {1, 2, 4}
        di+, di- ≥ 0 ; di+ • di- = 0
Minimize
    Preemptive deviation function
        Z = [ f1(di+, di-), ..., f6(di+, di-) ]      (varies by scenario)
[Figure annotations: the NETD approximations derive from FLIR92 exercised over a product array (product design: system performance); the FPA cost approximation derives from regression on historical data, and the afocal lens assembly cost approximation derives from PRICE-H with a CCD and RSE (manufacturing process design: production cost); the design process duration approximations derive from SIMAN exercised over a product array (organization design: design process duration).]
Figure 6.23 C-DSP Math Formulation for Phase C
Recall that a detailed description of the compromise DSP in terms of goal
formulations, deviation variables, the deviation function and so on is given in Section
3.3.3. The objective of this formulation is to find the values of the six design variables that
satisfy the mean NETD constraint, the afocal optics production cost constraint, the FPA
production cost constraint, the FPA design process duration constraint, and the bounds on
the design variables, and ultimately to minimize the deviation function to achieve as closely as
possible the goals for NETD mean, NETD standard deviation, afocal optics production cost,
FPA production cost, FPA design process duration mean, and FPA design process duration
standard deviation.
The design space is explored by “exercising” the compromise DSP, and in this case
this exercising is accomplished by using different goal priority scenarios, different goal
targets for NETD, and different starting points for the design variables. Eight different
scenarios are constructed, broken into four groups of two; the deviation function
formulations for each of these eight scenarios are given in Table 6.18. The first pair of
scenarios represents aggressive preferences for product performance, with production cost
secondary and design process schedule tertiary; the second pair corresponds to more
relaxed targets for product performance. The third pair places the FPA design process
duration at first priority, followed by production costs and then by product performance.
The fourth and final pair places the variability of design process duration at first priority,
followed by mean process duration, production costs, and finally product performance.
In all of the scenarios in Table 6.18 the preemptive approach is used, placing the
different goals in separate priority levels. For all six goals only the overachievement of
each goal target is penalized; coming in under the target value is completely acceptable.
This is shown in the table by the existence of only the di+ terms in each priority level.
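To make the mechanics concrete, a minimal sketch of evaluating such a preemptive, one-sided deviation function is given below, in illustrative Python. The goal targets follow Figure 6.23 (with 0.05 °C as the Scenario 1 NETD target), and the two candidate solutions use the Scenario 1 and Scenario 3 goal values reported later in Table 6.19.

    def d_plus(achieved, target):
        # Only overachievement is penalized; coming in under target costs nothing.
        return max(0.0, achieved - target)

    def scenario1_levels(s):
        # Priority levels for Scenario 1: d1+, 100*d2+, d4+, d3+, d5+, d6+.
        return (d_plus(s["netd_mean"], 0.05),
                100.0 * d_plus(s["netd_stdev"], 0.0),
                d_plus(s["fpa_cost"], 1.0),
                d_plus(s["optics_cost"], 1.0),
                d_plus(s["dp_mean"], 110.0),
                d_plus(s["dp_stdev"], 0.0))

    # Goal values of the Scenario 1 and Scenario 3 solutions (Table 6.19).
    sol_1 = {"netd_mean": 0.051, "netd_stdev": 0.011, "optics_cost": 56.11,
             "fpa_cost": 32.38, "dp_mean": 130.4, "dp_stdev": 3.48}
    sol_3 = {"netd_mean": 0.100, "netd_stdev": 0.023, "optics_cost": 72.66,
             "fpa_cost": 9.81, "dp_mean": 112.6, "dp_stdev": 2.79}

    # Python tuples compare lexicographically, which matches the preemptive
    # ordering: a solution wins at the highest priority level that differs.
    preferred = sol_1 if scenario1_levels(sol_1) < scenario1_levels(sol_3) else sol_3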
Table 6.18 Deviation Function Scenarios for Phase C

Deviation Function               Priority   Priority    Priority   Priority   Priority   Priority
Scenarios                        Level 1    Level 2     Level 3    Level 4    Level 5    Level 6
1. Aggressive Performance I      d1+        100 • d2+   d4+        d3+        d5+        d6+
2. Aggressive Performance II     d1+        100 • d2+   d3+        d4+        d5+        d6+
3. Relaxed Performance I         d1+        100 • d2+   d4+        d3+        d5+        d6+
4. Relaxed Performance II        d1+        100 • d2+   d3+        d4+        d5+        d6+
5. Design Process Duration I     d5+        d4+         d3+        d1+        d6+        100 • d2+
6. Design Process Duration II    d5+        d3+         d4+        d1+        d6+        100 • d2+
7. Design Process Std. Dev. I    d6+        d5+         d4+        d3+        d1+        100 • d2+
8. Design Process Std. Dev. II   d6+        d5+         d3+        d4+        d1+        100 • d2+
Because the standard deviation of NETD is known to be fairly small, in each
deviation function d2+ is multiplied by 100 to ensure that small improvements are
recognized. Scenario 1 represents an aggressive target of 0.05 °C for NETD mean, with
the NETD mean goal placed at highest priority. NETD standard deviation is placed at
second priority, followed by focal plane array production cost, afocal optics production
cost, FPA design process mean, and finally FPA design process standard deviation.
Scenario 2 is nearly identical, except that the priorities for afocal optics production cost
and focal plane array production cost are reversed. (Placing the cost goals in separate
priority levels is warranted because the cost values have been normalized; the cost values
thus cannot be added together to yield a meaningful total cost.) Scenario 3
and Scenario 4 map nearly identically to Scenarios 1 and 2; the only difference is that the
aggressive target of 0.05 °C for NETD mean is relaxed to a more reasonable value of 0.1
°C. The remaining goal priorities are identical.
In contrast, in Scenarios 5 through 8 a different tack is taken. In Scenario 5 the
goal of reducing FPA design process duration is placed at highest priority, followed by the
production cost goals in priority levels 2 and 3. Achieving a target of 0.1 °C for mean
NETD is placed at priority level 4, followed by the goal for reducing FPA design process
duration variability in priority level 5 and the goal of reducing NETD variability in priority
level 6. Scenario 6 follows similarly, with the priorities of the two production cost goals
reversed. Finally in Scenario 7 and Scenario 8 the goal of reducing the variability of
FPA design process duration is moved to first priority, followed by the goal of reducing its
mean duration. Production cost goals are placed in levels 3 and 4, and product
performance goals are placed at the bottom.
For each of these eight scenarios, high and low starting points are explored in order
to establish the convergence of solutions. In an attempt to “cover” the design space the
different starting points for each scenario are set at the upper and lower bounds of the
feasible regions of each design variable. Because there is only one discrete design variable
(TDI) with only three values, standard DSIDES is employed rather than the foraging
version, and an exhaustive search is performed for the TDI values. For each DSIDES run
the value of TDI is hard-wired at 1, 2, or 4. However, one important fact separates the
scenarios in Table 6.18 from the actual DSIDES runs performed: the focal
plane array production cost goal (Goal 3 in Table 6.18 and equation 6.3 developed in
Section 6.4.2.1) is a function only of the number of TDI stages. Because TDI is hard-
wired into the formulation, neither the FPA goal nor the FPA constraint has to be
processed in each DSIDES run. Therefore, in generating the DSIDES runs for the eight
scenarios in Table 6.18 the following approach is taken:
• The FPA cost goal is removed from the deviation function, thus reducing the
eight scenarios to four.
• Each of the four scenarios is exercised for different (high and low) starting
points and different values of TDI.
• A value of “4” for TDI violates the FPA production cost constraint in Figure
6.16, so only two values for TDI are possible: {1, 2}.
• In all, (4 scenarios)(2 starting points)(2 TDI values) = 16 DSIDES runs are
performed. The outputs from all 16 runs are given in Appendix C.8. For each
run the FPA cost is recorded along with the remaining three goals.
By examining the output from each of these 16 DSIDES runs and adding the FPA cost goal
back into the mix, the results for each of the eight scenarios of Table 6.18 are obtained.
These best solutions are given in Table 6.19 and Table 6.20. For all eight scenarios
feasible, converged solutions are obtained.
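The bookkeeping implied by this reduction can be sketched as follows, in illustrative Python (this is not DSIDES itself, only the enumeration of its runs):

    from itertools import product

    # With the FPA cost goal removed (it depends only on TDI), the eight
    # scenarios collapse to four distinct deviation function orderings.
    scenarios = ["Aggressive Performance", "Relaxed Performance",
                 "Design Process Duration", "Design Process Std. Dev."]
    starts = ["low", "high"]     # starting points at the variable bounds
    tdi_values = [1, 2]          # TDI = 4 violates the FPA cost constraint

    runs = list(product(scenarios, starts, tdi_values))
    assert len(runs) == 16       # (4 scenarios)(2 starting points)(2 TDI values)

    for scenario, start, tdi in runs:
        # One DSIDES run per tuple; the FPA cost is recorded afterward and
        # folded back in to recover the eight scenarios of Table 6.18.
        pass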
Before discussing these best solutions, it is important first to ensure that they are
verified. This verification is performed by checking that the trends in the solutions are as
expected; an important indicator supporting solution verification is whether both the high
and low starting points converge to the same region for each design variable. To this end
convergence plots are generated for the DSIDES runs in all scenarios and across both
values of TDI. The convergence plots for Scenarios 3 and 4 of Table 6.18 are illustrated in
Figure 6.24 and Figure 6.25 for both values of TDI. As can be seen in the figures, each of
the five continuous design variables does indeed converge to the same region from both
starting points. Similar plots are also generated for Scenarios 1 and 2, Scenarios 5 and 6,
and Scenarios 7 and 8, but for the sake of brevity these plots are not shown.
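A minimal sketch of such a convergence check is given below, in illustrative Python; the two final design-variable vectors are hypothetical, and the variable ranges are taken from the bounds of Figure 6.23.

    # Final values reached from the low and high starting points (hypothetical).
    final_low = {"FNUM": 1.50, "FL": 10.00, "T0": 0.66, "DXY": 30.0, "DSTAR": 0.70}
    final_high = {"FNUM": 1.52, "FL": 10.03, "T0": 0.65, "DXY": 29.9, "DSTAR": 0.71}

    # Ranges of the continuous variables from the bounds in Figure 6.23.
    ranges = {"FNUM": 1.5, "FL": 15.0, "T0": 0.4, "DXY": 10.0, "DSTAR": 0.8}

    def converged(a, b, rel_tol=0.05):
        """True if each variable agrees to within rel_tol of its range."""
        return all(abs(a[v] - b[v]) <= rel_tol * ranges[v] for v in a)

    print(converged(final_low, final_high))  # both starts reach the same region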
[Figure: convergence histories of the five continuous design variables (FNUM, FL, T0, DXY, DSTAR) over 100 search cycles, from both high and low starting points.]
Figure 6.24 Design Variable Convergence from High and Low Starting Points, Phase C, Scenarios 3 and 4, TDI = 1
The relative smoothness of these convergence plots indicates a fairly well-behaved
design space; even the more jagged plots for optics f-number (FNUM), optics focal length
(FL), and optics transmission (T0) are fairly benign. Thus these solutions are verified.
The encouraging nature of these convergence plots lends confidence in the reasonableness
of the solutions generated from this exploration of the design space, and it therefore
becomes justifiable to draw a set of implications and recommendations from them. These
implications and recommendations are discussed next.
[Figure: convergence histories of the five continuous design variables (FNUM, FL, T0, DXY, DSTAR) over 100 search cycles, from both high and low starting points.]
Figure 6.25 Design Variable Convergence from High and Low Starting Points, Phase C, Scenarios 3 and 4, TDI = 2
Scenarios 1 through 4, along with their resulting solutions in terms of goal values,
design variable values, and deviation function values, are given in Table 6.19. Scenarios 5
through 8 are given similarly in Table 6.20. In the identical solutions to Scenario 1 and
Scenario 2 we can see that the aggressive product performance goal is indeed met,
contrasted with relatively high figures for both production costs and design process
duration. If, however, concessions are made in product performance then a range of
different improvements can be made in both production costs and in FPA design process
duration. These solutions are represented by Scenarios 3 and 4.
Table 6.19 Results from Phase C, Scenario 1 through Scenario 4

SCENARIO                              1             2             3             4
GOAL        Level 1                   NETD=0.05     NETD=0.05     NETD=0.1      NETD=0.1
PRIORITIES  Level 2                   NETD StDev    NETD StDev    NETD StDev    NETD StDev
            Level 3                   FPA Cost      Optics Cost   FPA Cost      Optics Cost
            Level 4                   Optics Cost   FPA Cost      Optics Cost   FPA Cost
            Level 5                   FPA DP Mean   FPA DP Mean   FPA DP Mean   FPA DP Mean
            Level 6                   FPA DP StDev  FPA DP StDev  FPA DP StDev  FPA DP StDev
GOAL        NETD Mean (°C)            0.051         0.051         0.100         0.100
VALUES      NETD StDev (°C)           0.011         0.011         0.023         0.023
            Optics Production Cost    56.11         56.11         72.66         32.19
            FPA Production Cost       32.38         32.38         9.81          32.38
            FPA Duration Mean (h)     130.4         130.4         112.6         112.6
            FPA Duration StDev (h)    3.48          3.48          2.79          2.79
DESIGN      FNUM                      1.51          1.51          1.50          1.50
VARIABLE    FL                        10.02         10.02         10.00         10.00
VALUES      T0                        0.587         0.587         0.660         0.451
            DXY                       30.00         30.00         30.00         29.97
            DSTAR                     1.259         1.259         0.700         0.700
            TDI                       2             2             1             2
DEVIATION   Level 1                   0.001         0.001         0.000         0.000
FUNCTION    Level 2                   1.140         1.140         2.340         2.350
            Level 3                   31.380        55.113        8.810         31.185
            Level 4                   55.113        31.380        71.657        31.380
            Level 5                   20.384        20.384        2.618         2.643
            Level 6                   3.480         3.480         2.793         2.786
In the solutions to Scenarios 5 and 6 we see that as NETD is moved lower in the list
of priorities, even greater reductions are accomplished in production cost, at the cost of a
substantial decrease in product performance. Finally we see in Scenarios 7 and 8 that focusing on
design process duration variability does not decrease this measure by a great degree, and so
these solutions probably would not be pursued. There are some larger conclusions that can
be drawn from these results; these and other issues are discussed in the next section.
Table 6.20 Results from Phase C, Scenario 5 through Scenario 8

SCENARIO                              5             6             7             8
GOAL        Level 1                   FPA DP Mean   FPA DP Mean   FPA DP StDev  FPA DP StDev
PRIORITIES  Level 2                   FPA Cost      Optics Cost   FPA DP Mean   FPA DP Mean
            Level 3                   Optics Cost   FPA Cost      FPA Cost      Optics Cost
            Level 4                   NETD=0.1      NETD=0.1      Optics Cost   FPA Cost
            Level 5                   FPA DP StDev  FPA DP StDev  NETD=0.1      NETD=0.1
            Level 6                   NETD StDev    NETD StDev    NETD StDev    NETD StDev
GOAL        NETD Mean (°C)            0.200         0.200         0.200         0.200
VALUES      NETD Std Dev (°C)         0.047         0.047         0.047         0.048
            Optics Production Cost    14.00         3.98          17.87         7.81
            FPA Production Cost       9.81          32.38         9.81          32.38
            FPA Duration Mean (h)     112.6         112.6         115.0         115.0
            FPA Duration Std Dev (h)  2.79          2.79          2.43          2.43
DESIGN      FNUM                      1.69          1.79          1.51          1.75
VARIABLE    FL                        10.00         10.01         10.00         10.00
VALUES      T0                        0.374         0.311         0.341         0.339
            DXY                       30.00         30.00         27.12         27.12
            DSTAR                     0.700         0.700         0.700         0.700
            TDI                       1             2             1             2
DEVIATION   Level 1                   2.615         2.626         2.427         2.427
FUNCTION    Level 2                   8.810         2.985         4.965         4.965
            Level 3                   13.003        31.380        8.810         6.808
            Level 4                   0.100         0.100         16.873        31.380
            Level 5                   2.794         2.795         0.100         0.100
            Level 6                   4.740         4.740         4.740         4.760
6.5.6 Results and Recommendations
In this, Phase C of the FLIR example, the enterprise-wide effects of an early-stage
product design decision are generated, modeled, and integrated into a unified decision
formulation; these effects span issues traditionally addressed in manufacturing process
design (reducing production costs) and organization design (project management and
designing the product development process). As described in Section 1.1 the decision is
ultimately made by satisfying as many of the enterprise-wide goals as possible, while being
as robust as possible to external decisions that are beyond the decision-maker’s control. In
this vein any one of the solutions from each of the eight scenarios may be the best choice,
depending on the actual priorities for the design effort.
Notice that in this example integration is achieved not by forcing the other
interdependent decisions of manufacturing process design and organization design into a
joint formulation; instead a more “loose” form of integration is pursued based on
approximation, prediction and robust design. This approach is particularly relevant when
interdependencies arise between decisions along the same design timeline that intrinsically
cannot be resolved by strict enforcement of concurrency.
There is an intriguing point in this case study suite that is worthy of emphasis.
Although through each phase of the case study an increasing number of enterprise effects
are identified for a given design decision, the set of design variables itself remains
relatively static. This is a natural consequence of pursuing integration through promoting
empowerment. Each decision-maker retains his or her authority and decision-making
capability instead of abdicating it to a larger group or higher authority. In this fashion each
local decision is made separately while considering its enterprise-wide effects. Thus the
benefits of integration are realized while ideally still fostering an open environment and
maintaining an engaging and fulfilling workplace.
Through this decision-based approach to enterprise design, integration can still be
achieved in the more established manner of forming cross-functional teams and creating
coupled decision formulations for group approval. However, through the approach of
promoting empowerment, integration can be achieved in a much more subtle, and perhaps
more powerful, manner. Beginning with a typical product design decision in Phase A, in
Phase B its manufacturing process design effects are integrated into the formulation, and in
Phase C some of its organization design effects are integrated as well. In effect, the
boundaries between the three design domains start to lose their meaning by applying this
approach to enterprise design. Any decision from a given design domain, when expanded
to include its enterprise-wide effects, transcends its original domain to encompass broader
design issues. Such a shift may hold potential for fostering true communication and
coordination between designers across a given enterprise, but only time will tell.
6.6 CRITICAL EVALUATION OF CASE STUDY SUITE
In this case study suite a product design decision is sequentially expanded to
include issues from the manufacturing process design domain and the organization design
domain. In the final phase of the suite, Phase C, the integration of these three design
domains is illustrated in terms of quantifying trade-offs between product performance,
manufacturing cost, and design process schedule. One important motive for developing
this case study is to provide a detailed description of how the enterprise design approach
can be implemented, and to this end the case study suite serves adequately. However,
another important role of this case study suite is the testing of hypotheses as introduced in
Section 1.5.3. The extent to which the suite succeeds in testing these hypotheses has not
yet been addressed and is thus the focus of Section 6.6.2. Because the area covered by the
research questions of Section 1.4 is extremely broad, it is inevitable that not all of the range
will be covered by this hypothesis testing, so in the next section some potential limitations
of the enterprise design approach are discussed, and issues are raised for generalizing the
approach to encompass the entire realm of enterprise design applications.
6.6.1 Limitations and Generalization
Although the FLIR case study does serve to illustrate how a decision-based
approach to enterprise design can be implemented, it still exists primarily in the product
design domain. Thus it does not establish conclusively the domain-independence of the
enterprise design approach, and a significant number of additional and larger case studies
will be needed to yield fully tested answers to the fundamental research questions of this
dissertation. A perspective on the limitations of the FLIR case study is given in Figure
6.26. Assumptions on which the FLIR case study is predicated include:
• The decision alternatives can be codified mathematically as the feasible regions
of design variables,
• the enterprise behaviors of interest can be quantified as functions of these
design variables, and
• the resulting decision formulations can be kept to a tractable size.
Given these three conditions a decision-based approach to enterprise design can be applied
to any enterprise design decision, but as is shown in Figure 6.26 there are many
applications which may call these conditions into question.
[Figure: nested scopes of application -- the FLIR system sits within the broader set of other products, which sits within organization design and manufacturing process design issues spanning multiple products, which in turn sit within strategic issues spanning multiple enterprises.]
Figure 6.26 Context and Generalization of Case Study Suite
First, as discussed in Section 6.1, the actual implementation of the FLIR system
design example is fairly narrow. There is a wealth of other enterprise-wide effects that
potentially should be included in the formulation; they range from assembly costs and
maintenance costs to the potential effects of manufacturing outsourcing, sales forecasts,
and competitive strategy. Second, even within this narrow implementation the models used
for quantifying the effects on production costs and design process duration are only
exercised in a limited fashion. For production cost, only the costs of two subsystems are
estimated, within a fairly narrow band of system architectures; for design process
duration only one task duration and one probability of iteration are varied. The production
costs of other subsystems could be estimated as well, and ideally a wider range of system
architectures should be encompassed. Similarly, factors deserving attention in the design
process simulations are the number of employees allocated, their wage levels, additional
task durations, and even the restructuring of the task sequence itself. Incorporating these
extensions to the formulation may begin to call into question the three conditions given
above.
Further, FLIR system design is just one example of product design, and there are
characteristics of this example that likely make it more tractable than others. The
technologies are well-defined, the system architecture is well-established, and quantitative
analysis tools are readily available. Applying the enterprise design approach to other
products that are larger in scope (like an F-22 aircraft) or more flexible and ill-defined (like
a washing machine) may easily highlight problems or limitations of the approach. Some of
the behaviors may resist quantification, the design space itself may be very hard to define in
terms of design variables, constraints and bounds, and the sheer size of the problem may
make the gathering of models for a decision formulation a logistical nightmare.
And yet, this expanded region of product design applications is still only a subset of
larger issues in organization design and manufacturing process design that impact multiple
products or product families. Manufacturing outsourcing is a typical example of this --
choosing to outsource a particular manufacturing capability may hold significant impact for
a wide range of products. How can this impact be assessed in a tractable manner? Again,
the realm of design alternatives may be very hard to identify, the system’s behavior may be
very hard to quantify, and any attempt at building realistic models may quickly grow to
intractable proportions.
Extending the scope of application even further, these types of organization design
and manufacturing process design issues can still be analyzed within the boundaries of one
enterprise, but there is an even larger and more encompassing set of design issues that
commonly affect multiple enterprises. Competitive strategy falls into this category, as does
long-term strategic planning. At the heart of each of these issues decisions are still made,
and these decisions may or may not be quantifiable, so there is a possibility that a decision-
based approach to enterprise design can be applied. But as yet they remain only
possibilities, because they have not been demonstrated or tested.
The potential applications in Figure 6.26 are all contained within the concept of
enterprise design as set forth in Chapter 1, so there is a significant gap between the area
covered by the research questions and hypotheses in this dissertation, and the actual area
that is demonstrated and tested by the case study. The true extent of this gap is addressed
in the next section in terms of the hypothesis testing procedures introduced in Section
1.5.3.
6.6.2 Testing of Hypotheses
In Section 1.5.3 the specific procedures for testing each hypothesis in this
dissertation are laid out. In the chapters that have followed a decision-based approach to
enterprise design has been developed and applied to the FLIR case study suite. The testing
of these hypotheses is a primary role of the case study, so in this section we return to these
testing procedures and evaluate the extent to which they have been satisfied. We begin
with the four sub-hypotheses, and then draw larger conclusions for the two primary
hypotheses.
Testing Hypothesis 1.1:
(A method for implementing mathematically rigorous decision support in
any enterprise domain can be created using the hybrid paradigm for decision
support and the compromise DSP.)
This method for decision support is developed in detail throughout Section 3.4, and its
structure pervades the case study suite. Using the hybrid paradigm for decision support as
illustrated in Figure 3.10, the method is used to formulate and solve compromise DSPs as
documented in Sections 6.3.5, 6.4.5, and 6.5.5. Is this decision support mathematically
rigorous? As stated in Section 1.4 mathematically rigorous is taken to mean quantitative
and repeatable. The mathematical structures of Figure 6.7, Figure 6.11 and Figure 6.14
stand as testament to the quantitative nature of each decision formulation, and the plots of
Figures 6.9, 6.10, 6.11, 6.17, 6.18, 6.23, and 6.24 illustrate the repeatability of the
solutions for each scenario, as different starting points converge to the same solution
region.
Can this decision support be implemented in any enterprise domain? The FLIR
example of the case study suite is drawn only from the product design domain. Although
issues are integrated from manufacturing process design and organization design, the
decision formulation retains its product design flavor. There appear to be no intrinsic
barriers to implementing this approach across organization design and manufacturing
process design decisions given the conditions of the previous section; at the very least,
discrete-event simulation models such as the SIMAN model developed in Section 6.5.2 can
certainly be tailored to specific decisions in manufacturing process design and organization
design. However, these applications have not been demonstrated conclusively and account
for much of the gap between hypothesis and demonstration. As such they will warrant
continued investigation.
Testing Hypothesis 1.2:
(Existing system modeling techniques can be used to quantify the behavior
of several enterprise domains, and statistical metamodeling techniques meet
the necessary conditions for transforming this enterprise behavior into a
format amenable to decision support and design.)
The system modeling techniques employed in the FLIR case study suite are FLIR92 for
product performance (Section 6.3.2), historical data for focal plane array production cost
(Section 6.4.2.1), PRICE-H for afocal lens assembly production cost (Section 6.4.2.2),
and the SIMAN discrete-event simulation language for focal plane array design process
duration (Section 6.5.2). It is fair to say that these modeling techniques span applications
across product design, manufacturing process design, and organization design. In each
case, statistical metamodeling techniques are applied to yield approximate models that are
efficient, portable, and of acceptable accuracy.
In terms of the spanning set of five scenarios for model integration offered in
Section 4.2, these system modeling techniques do not cover every option. Scenario 2, that
of models as computer analysis routines, and Scenario 3, that of models existing explicitly
in historical data, are both addressed by the case study. Scenario 1, that of models existing
in the form of analytical equations, has been well documented in existing applications of
compromise DSPs such as in (Mistree, et al., 1990b; Smith, 1988; and Karandikar and
Mistree, 1992) and therefore should not require further demonstration. However, the
fourth and fifth scenarios of models existing implicitly, embodied in the behavior of test
equipment, and of models existing in the form of expert opinion, have not yet been tested.
Although these two scenarios should theoretically be equivalent to the others, this similarity
has not been demonstrated conclusively and deserves further attention.
Testing Hypothesis 2.1:
(The combination of the compromise DSP and statistical metamodeling
techniques meets the necessary conditions for handling decision
interdependence across enterprise domains.)
At a high level of abstraction an argument can be made that these conditions have been met.
As discussed in Section 3.4.5 decision interdependence across enterprise domains arises
when additional input data required for the decision formulation are found to be design
variables controlled by other decision-makers in different enterprise domains. In the
simplest of terms there are only two options for these external design variables -- they can
be controlled by the local decision maker, or they can be recognized as uncontrollable. If
they can be controlled then they are incorporated into the decision formulation, and if they
can’t be controlled then they are classified as noise factors and the decision is formulated to
be robust to changes in their values. (These options correspond to the avenues of
enforcing coordination or promoting empowerment as discussed in Section 5.2.3.)
Do the compromise DSP and statistical metamodeling techniques meet the necessary
conditions for resolving such interdependencies? In terms of incorporating design
variables, because the compromise DSP is a multivariate construct, there is no inherent
limit on the number of design variables that can be incorporated. (Computer infrastructure
constraints, however, currently limit the number of design variables to around 200.)
Similarly, the robust design aspect of statistical metamodeling allows the effects of
uncontrolled variables to be assessed. Decision interdependence across enterprise domains
may also lead to the identification of large numbers of potential design variables, but again
the screening aspect of statistical metamodeling allows an arbitrarily smaller set of variables
to be identified. (This, of course, may degrade the accuracy of such approximations.)
Therefore, these necessary conditions have potentially been met, but again the limited
nature of the FLIR case makes additional examples necessary for more solid support.
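As an illustration of the second option, a minimal sketch follows of treating an uncontrollable external variable as a noise factor and evaluating the local response for robustness. The sketch is in Python for illustration, and the response function is a hypothetical metamodel stand-in, not one of the metamodels of this chapter.

    import statistics

    def response(x_local, z_external):
        # Hypothetical metamodel in one local variable and one noise factor.
        return 100.0 + 8.0 * x_local + 5.0 * z_external + 2.0 * x_local * z_external

    z_samples = [-1.0, -0.5, 0.0, 0.5, 1.0]  # coded range of the external variable

    def robust_measures(x_local):
        ys = [response(x_local, z) for z in z_samples]
        return statistics.mean(ys), statistics.stdev(ys)

    # Both the mean and the variability of the response can then enter the
    # compromise DSP as separate goals, making the local decision robust to
    # whatever value the external decision-maker ultimately chooses.
    mean_y, std_y = robust_measures(0.25)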
Testing Hypothesis 2.2:
(The combination of the compromise DSP and statistical metamodeling
techniques meets the necessary conditions for handling decision
interdependence along a design timeline.)
In the FLIR case study suite three instances of decision interdependence through time arise:
that of focal plane array production cost estimation, afocal optics production cost
estimation, and the estimation of focal plane array design process duration. Decision
interdependencies through time are resolved in each instance by applying the timeline
procedure of Section 5.4, which is based on the foundation of statistical metamodeling and
the multivariate capabilities of the compromise DSP. Has this hypothesis been tested
thoroughly? Again, the answer is no. While the FLIR case study does support the veracity
of this hypothesis, much additional support, through the testing of further case studies,
is needed to create a more substantial argument.
Implications for Testing Hypotheses 1 and 2:
Hypothesis 1: (A decision-based perspective is a key to achieving a
domain-independent and mathematically rigorous approach to enterprise
design.)
Hypothesis 2: (Integration is achieved through 1) a unified method for
decision support, 2) mathematical tools for assessing the impact of
individual decisions on the enterprise, and 3) methods for identifying,
modeling and resolving interdependencies between the decision at hand and
other decisions across enterprise domains and through time.)
In this dissertation a decision-based approach to enterprise design is developed that
certainly has elements of mathematical rigor, and it appears to have the potential for
domain-independence and for achieving integration. From a design methods standpoint,
much of the approach is based on heuristics, so its mathematical rigor is confined to the
capabilities of system modeling, statistical metamodeling, and the compromise DSP.
Mathematical rigor does not pervade the approach itself. In terms of achieving true
domain-independence and integration across any enterprise domain, in the case study
enterprise behaviors are modeled from the product design, manufacturing process design,
and organization design domains, and these models are integrated into a unified decision
formulation. By applying the categorization scheme of Section 2.4.1, an even larger set of
system modeling techniques should be amenable to similar incorporation. The actual
demonstration of the case study thus only hints at the potential for domain-independence,
and hence the gap between demonstration and hypothesis will need to be filled by a
substantial number of case studies across additional enterprise domains. Developing case
studies for each of the examples of Section 5.3 may be a good start.
With these assessments of the testing of each hypothesis in this dissertation, the
next step is to examine the extent to which the fundamental research questions have been
answered. Such a discussion is offered in the next chapter.
CHAPTER 7
CLOSURE, ACHIEVEMENTS, AND RECOMMENDATIONS
In this chapter the dissertation argument initiated in Section 1.4 is brought to its
conclusion. In Section 7.1 closure is sought by returning to the two central research
questions driving this work and reviewing the answers that have been offered. In Section
7.2 the achievements generated in this research are revisited by drawing together and
discussing the set of contributions that are embedded in this dissertation. Building upon
these contributions, additional avenues for further research are explored in Section 7.3, and
this chapter closes with some concluding remarks in Section 7.4.
7.1 CLOSURE: ANSWERING THE RESEARCH QUESTIONS
The vision statement for enterprise design offered in Section 1.1 is framed in terms
of integrating the processes of product design, manufacturing process design, and
organization design. The fundamental motivation driving this research is then embodied by
two central research questions:
Question 1 How can a domain-independent and mathematically
supported approach be implemented for designing products,
manufacturing processes, and the organization itself?
Question 2 How can the design of these interdependent entities be
integrated at any point along a common design timeline?
As stated in Section 1.5.2, this dissertation will stand as a complete and self-
contained entity if answers are found for both of these questions, and if these answers can
be verified and validated. In essence, this section is a continuation of the hypothesis testing
discussions of Section 6.6.2. We have already established that there is a significant gap
between the encompassing ground covered by the hypothesis and the specific area tested by
the FLIR case study. However, the approach to enterprise design developed in this
dissertation holds legitimate promise for answering the research questions, and in this
section such promise is explored.
Answering Question 1: Throughout this dissertation a decision-based
approach to enterprise design is developed, consisting of both a task-based method for
formulating enterprise design decisions as compromise DSPs and an overall philosophy for
implementing this method within an enterprise. This approach has been developed in
collaboration with an industry partner and as such holds real promise to be implemented as
part of their standard design practices.
Is this approach domain-independent? Perhaps the strongest arguments for the
domain-independence of this approach are offered in Section 1.2, in which the
pervasiveness of decision-making is established across the domains of designing,
engineering, and management. (Through this reasoning any decision-based design
approach holds the capabilities for true domain-independence.) This argument is
strengthened as the development of this enterprise design approach proceeds from Chapter
2 through Chapter 5. Each element of the approach is developed without the constraints of
any specific design context, and no element is dependent on knowledge from any specific
design domain.
Is this approach mathematically supported? This decision-based approach to
enterprise design utilizes at its heart the compromise DSP (Section 3.3.3). With the
compromise DSP decisions can be formulated mathematically and quantitative solutions can
be identified in a rigorous and repeatable manner, as is shown in detail in Section 6.3.5,
Section 6.4.5, and Section 6.5.5. In addition, the compromise DSP has been subject to
years of successful applications in the engineering design community which stand as
testament to its mathematical rigor. Further, this enterprise design approach builds upon
the wealth of system modeling techniques (Section 2.4) and statistical metamodeling
techniques (Section 4.4) for the quantitative formulation of decisions. While the
mathematical rigor of these techniques is not proven in this dissertation, the actual rigor of
these techniques as a whole is not truly subject to question.
Can this approach be implemented across product design, manufacturing process
design, and organization design? It is in this aspect that the answer to this question falls
short. As discussed in Section 6.6.2, the potential for applying this approach in any
enterprise domain may exist, but such applications have not been demonstrated
conclusively. Although applications in each domain should proceed similarly due to the
domain-independent nature of a decision-based approach, significant issues lurk in the
shadows that may raise difficulties. These limitations, raised in Section 6.6.1, include the
specters of mathematical codification of design alternatives, identifying modeling
techniques for quantification of enterprise behaviors, and maintaining decision formulations
of tractable size. However, in this dissertation this second limitation has received some
attention. Through the literature reviews of Sections 2.4.2, 2.4.3, and 2.4.4, it is clear that
there are a significant number of quantitative modeling schemes that exist across all three of
these design domains. We can also look to the case study suite of Chapter 6 for support.
Although the design variables of this case study remain in the product design domain
throughout all phases of the case study, the solutions reached in Phase B and Phase C are
actually manufacturing process design and organization design decisions as well. This
point is developed more fully in the discussion of Question 2 to follow. In addition, more
traditional decisions in the manufacturing process design domain and organization design
domain are also compatible with the approach of this dissertation; for example a machining
center is designed to meet given levels of throughput and inventory in (Peplinski, et al.,
1996a). Similarly, although this avenue was not pursued in the case study suite, additional
parameters of the FPA development process simulation model of Section 6.5.2 could easily
have been classified as design variables to yield a more typical organization design
decision. (Examples would be quantifying the effect of increasing the number of
designers, incorporating their skill levels, and so on.) There appear to be no intrinsic
boundaries to applying this approach to any design domain within the enterprise.
Therefore a potential answer to Question 1 has indeed been reached. A domain-
independent and mathematically rigorous approach has been developed that holds promise
for designing products, manufacturing processes, and the organization itself.
Answering Question 2: The case study suite of Chapter 6 stands as a
demonstration of how product design, manufacturing process design, and organization
design can be integrated along a common design timeline; they are in fact all integrated into
the same decision formulation as shown in Figure 6.23, the C-DSP math formulation for
Phase C. By integrating system modeling techniques as discussed in Section 2.3.3, by
applying the statistical metamodeling techniques of Section 4.4 and by following the
timeline procedure of Section 5.4, a unified decision formulation is created that spans all
three design domains. However, this point may take time to digest because this integration
is achieved in a non-traditional manner. It would be obvious that integration had been
achieved if decision variables had been drawn from the various enterprise domains into one
joint formulation, but this path of enforcing coordination is not pursued in the case study.
In working with an industry partner it did not appear reasonable to pursue such a joint
formulation, even though such a formulation is certainly a feasible option within this
enterprise design approach.
Instead, in the case study suite of Chapter 6, integration is achieved in a much more
subtle, and perhaps more powerful, manner. Beginning with a typical product design
decision in Phase A, in Phase B its manufacturing process design effects are integrated into
the formulation, and in Phase C some of its organization design effects are integrated as
well. In effect, the boundaries between the three design domains dissolve by applying this
approach to enterprise design. Any decision from a given design domain, when expanded
to include its enterprise-wide effects, transcends its original domain to encompass broader
design issues. Such a shift may hold potential for fostering true communication and
coordination between designers across a given enterprise, but only time will tell.
In addition, the potential for integrating design decisions in varied applications
across organization design and manufacturing process design is proposed in some detail
throughout Section 5.3. In these examples legitimate interdependencies are identified
between decisions from different design domains, and in each case system modeling
schemes do exist for quantifying the requisite enterprise behavior. Relying on the
categorization of system modeling schemes developed in Section 2.4.1, it is legitimate to
see how any of these modeling schemes of Section 5.3 could be plugged into the approach
to enterprise design and used to create decision formulations.
Therefore an answer to Question 2 has also been reached, again with substantial
limitations. In Phase C of the case study in Chapter 6, issues from product design,
manufacturing process design, and organization design are all brought together from
different points in separate design timelines and integrated into a unified decision
formulation. Additional applications throughout the enterprise are also identified
throughout Section 5.3 and their development will ideally proceed similarly in a
straightforward manner.
Have these answers been verified and validated? The actual “answer” to
both of these research questions is in effect embodied by the approach to enterprise design
developed throughout this dissertation. This approach is developed by the testing of
hypotheses, and these testing procedures are offered in Section 1.5.3 and reviewed in
Section 6.6.2. The approach as a whole is verified against recent industry practice in Phase
A of the case study, specifically in Section 6.3.5 and 6.3.6. The solutions generated do
indeed match solutions generated by alternate design methods, so the verification of the
approach is supported. The approach as a whole is validated throughout Phase B and
Phase C of the case study, particularly in Section 6.4.6 and 6.5.6. Useful results, clear
improvements over Phase A, are indeed generated with a reasonable amount of extra effort.
The solutions from Phase B were presented to engineers and managers in industry and
were met with clear interest and enthusiasm. Thus the validation of this approach to
enterprise design is also supported.
With this section closure for the dissertation has been reached. Answers have been
reached for both of the fundamental research questions, and the verification and validation
of these answers has been supported. Therefore it is indeed reasonable for this dissertation
to stand as a complete and self-contained entity. The next question in this chapter is
whether all this work has been worthwhile. Such justification is provided in the next
section as the contributions embedded in this dissertation are reviewed.
7.2 ACHIEVEMENTS: REVIEW OF CONTRIBUTIONS
This dissertation is intended as a stepping-stone along a branching and widening
path of continued research, or as an element in the foundation of a nascent structure of
continuing research. (Either metaphor will suffice.) As such an entity this dissertation
must contain sufficient substance to support the weight of the research efforts to come.
This substance is embodied by the set of contributions developed in this research; these
contributions must be of sufficient worth to be either an addition to the fundamental
knowledge of the field or a new and better interpretation of facts already known. In this
section the body of contributions embedded in this dissertation are revisited in order to
confirm the value of this research effort. (The potential avenues for further exploration are
perused in the next section.)
As given in Section 1.4, the set of contributions offered in this dissertation are (1) a
hybrid paradigm for decision support in enterprise design developed throughout Section
1.3, (2) a rigorous and industry-tested method for implementing this decision support
developed throughout Section 3.4, (3) a categorization of system modeling techniques in a
format amenable to decision support developed in Section 2.4.1, (4) a revised formulation
of the Task Support Problem to aid in the rigorous representation and support of the
enterprise design method itself given in Section 3.4.2, (5) guidelines and recommendations
for selecting and applying statistical metamodeling techniques for model approximation and
integration given throughout Section 4.5, (6) a procedure, based on the compromise DSP
and statistical metamodeling techniques, for handling interdependent decisions along a
design timeline developed in Section 5.4, (7) a definition for integrating models into
decision formulations in Section 2.3.3, definitions for interdependence between decisions
in Section 3.4.5, and definitions for the integration of decisions and design processes in
Section 5.2.3, and finally (8) an overall philosophy for implementing the method of
enterprise design which is developed in Section 3.2. These contributions combine to form
a decision-based approach to enterprise design that is offered as the fundamental
contribution of this research. This structure is revisited in Figure 7.1.
[Figure: the decision-based approach to enterprise design rests on the hybrid paradigm for decision support, which draws on engineering design, DBD, OR, and systems theory, on empowerment and bounded rationality, and on metamodeling, system modeling, and the Decision Support Problem Technique; built upon this foundation are the method for enterprise design, the timeline procedure, the revised Task SP, the guidelines for metamodeling, the categorization of system modeling, the definition of interdependence between decisions, the integration of models into decisions, the integration of decisions and design processes, and the overall philosophy for implementing the method.]
Figure 7.1 Overall Structure of Contributions, Revisited
What is the value in these contributions? The hybrid paradigm for decision support
has proven to be an effective vehicle for communicating the decision-based nature of this
research both in industry and academia. In industry, this hybrid paradigm was used at the
core of several presentations to upper-level engineering management and designers and was
a valuable aid in enlisting their support and agreement. In academia this paradigm is
already seeing use in continuing research in the engineering design field.
The method for enterprise design, framed in terms of formulating and solving
multi-objective decisions across multiple enterprise domains, has proven to have real value
in industry, thus holding legitimate promise to receive endorsement and implementation. It
is at the heart of an internally-funded research program in a specific organization, one
scheduled for continued funding and development.
The categorization of system modeling schemes will be crucial for continued
applications of this enterprise design approach across the wide span of enterprise domains.
The categorization will aid in the identification of enterprise behavior that can be quantified
and will serve as a guide for integrating these system models into decision formulations.
The revised formulation of the Task Support Problem is a contribution to the
construct that is the DSPT Palette, and as such it will help strengthen the continuing work
in developing rigorous descriptions and representations of design processes.
The guidelines and recommendations offered for selecting and applying statistical
metamodeling techniques address a current need in the engineering design community that
is receiving significant attention. An initial publication, that of (Simpson, et al., 1997) has
sparked considerable interest from colleagues and faculty in the fields of engineering design
and statistics.
The procedure for handling interdependent decisions along a design timeline has
received less exposure than the previous contributions and therefore has not yet been
incorporated into any continuing research or industry efforts. Its promise, however, lies
perhaps in its high-level structure that brings together the disparate techniques of robust
design, modeling uncertainty, collaborative decision-making and game theory. The
remaining contributions of definitions and philosophy exist synergistically with the above
contributions and thus are an implicit component of all of the above discussions.
In the preceding discussions the industrial value of these contributions may be more
clear than their academic value. In terms of scholarship, it is hoped that this dissertation
contains a core set of contributions to the fields of enterprise integration, decision-based
design, and statistics. The alternative approach to enterprise integration offered in Section
2.3, that of achieving integration through mathematical decision formulations, may hold
true value for the enterprise integration community, and the categorization of system
modeling schemes offered in Section 2.4.1 may hold comparable value as well. Similarly,
the hybrid paradigm for decision support of Section 1.3.3 and the method for enterprise
design of Section 3.4 will ideally aid future research efforts in the rigorous description of
decision-based design methods. Finally, the guidelines and recommendations for statistical
metamodeling have already sparked interest from the statistics community as previously
noted.
Perhaps, however, the most significant academic value of this work will remain
more ephemeral, as the tentative first steps of defining a new direction for continued
research. In this dissertation a large area of potential research has been surveyed and
marked off, and far from all of these areas have been developed fully. It is the hope that
this development will be the focus of research efforts to come, and that this dissertation will
serve as a foundation and as a guide for their implementation. These potential avenues for
further research are explored in the next section.
7.3 RECOMMENDATIONS: AVENUES FOR FURTHER RESEARCH
There are a substantial number of issues that have arisen through the development
of this dissertation, and each of these issues may easily flower into a research area in its
own right. The two primary issues that drive the applicability of this decision-based
approach to enterprise design are 1) to what extent decisions in engineering and
management are quantifiable, and 2) how large a decision formulation can practically
become, framed in terms of the number of variables, the number of goals and constraints,
and the costs of computing time. Both of these issues are seen as boundaries to be pushed
outward through continuing research efforts. Returning to the words of Herbert Simon
(1977) as he discusses the application of mathematical techniques to management decision
making, “the area of application is large. It continues to grow. But there is no indication
that it will cover the whole of management decision making. Difficulties in quantifying set
one boundary; limits on computing power set another. Although the boundaries are
movable, they have a long way to go before they will encompass all of management."
The need for pushing these boundaries is well voiced by Heim and Compton
(1992), who state that “performance evaluation is a process applied throughout the
manufacturing organization to measure the effectiveness in achieving its goals. Because of
the variety, complexity, and interdependencies found in the collection of unit processes and
subsystems that define the manufacturing system, appropriate means are needed to describe
and quantify rigorously the performance of each activity.” Similarly, Simon (1977)
recognizes this trend in the context of implementing decision-making activities with
computers:
We are now well into a technological revolution of the decision-making process. That revolution has two aspects, one considerably further advanced than the other. The first aspect, concerned largely with decisions close to the programmed end of the continuum, is the province of the field called 'operations research' or 'management science'. The second aspect, concerned with unprogrammed as well as programmed decisions, is the province of a set of techniques that have come to be known as 'heuristic programming', or sometimes 'artificial intelligence'. We are gradually acquiring the technological means, through these techniques, to automate all management decisions, nonprogrammed as well as programmed.
The boundary of problem size is currently being pushed by ongoing research, most notably that of
Patrick Koch (1997), framed in terms of decomposing a formulation into a hierarchy of
smaller units and then solving this structure as an integrated entity. Research opportunities
will arise in applying this formalism in an enterprise design context.
Similarly, the boundary of quantification will be pushed by implementing the
decision-based approach to enterprise design, in the vein of Chapter 6, to examples in
varied enterprise domains as surveyed throughout Section 5.3. In addition there are further
domains for application that hold significant promise, such as modeling the re-
manufacturing or recycling activities of an organization, modeling and integrating the
activity of a supply chain of manufacturers, and achieving enterprise goals within the
context of an industrial ecosystem, where manufacturers work symbiotically and
environmental issues are brought to the fore.
Both of these extensions of enterprise design into supply chain management and
industrial ecosystems highlight an issue at a more fundamental and mathematical level, that
of incorporating game-theoretic protocols into these decision formulations in which
multiple interdependent players may have competing interests. A game theoretic approach
to Decision-Based Design is being established by Lewis (1996), framed in terms of
achieving coordination and cooperation between different subsystems of a product being
designed and also between different disciplines in a multi-disciplinary design team. There
are significant implications for applying these formulations to the more competitive
behavior that exists as the boundary of an enterprise shifts from the walls of a given
organization to the more ephemeral concept of a “virtual enterprise”, framed in terms of a
conglomeration of manufacturers, distributors, and suppliers held loosely together by
contracts, licensing agreements and limited partnerships.
These potential avenues for further research are just a sample of the realm of topics
waiting to be addressed. It is the hope of the author that at least some of these topics are
indeed taken on by researchers to come, so that this work may live on.
7.4 CONCLUDING REMARKS
The enormous scope of the research topic addressed in this dissertation has made the
journey a wild and delicate ride, flashing alternately from brazen certainty to crushing
doubt, much like the two sides of a coin tossed in the sunlight. It is fitting that we turn
again to Herbert Simon, who captures both images so well:
On the one hand,
It is easy for the operations research enthusiast to underestimate the stringency of the conditions for the applicability of his methods. This leads to an ailment that might be called mathematician's aphasia. The victim abstracts the original problem until the mathematical or computational intractabilities have been removed (and all semblance of reality lost), solves the new simplified problem, and then pretends that this was the problem he wanted to solve all along. He hopes the manager will be so dazzled by the beauty of the mathematical formulation that he will not remember that his practical operating problem has not been handled. (Simon, 1977, p. 59)
And on the other hand,
For the operations research approach to work, nothing needs to be exact -- it just has to be close enough to give better results than could be obtained by common sense without the mathematics, and that is often not a difficult criterion to beat. (Simon, 1977, p. 59)
May these sentiments guide the work of researchers to come.
APPENDIX A
DATA AND RESULTS FROM CASE STUDY
PHASE A
In this appendix the supporting data behind the computations in Phase A of the
FLIR case study are given.
• Appendix A.1 contains a sample FLIR92 input file, giving the precise
description of the FLIR systems used for analysis.
• Appendix A.2 contains the design matrix and FLIR92 results used to screen
16 factors down to seven for building a NETD metamodel.
• Appendix A.3 contains the design matrix (a central composite design in five
control factors, with an outer array of two noise factors) and FLIR92 results
used for the creation of metamodels for mean NETD and NETD standard
deviation.
• Appendix A.4 contains the linear transformations used to map the “coded”
values of the design matrix in Appendix A.3 to the “uncoded” or real values
used in the actual FLIR92 input files.
• Appendix A.5 contains the design matrix and FLIR92 results used for
verifying the NETD metamodels.
• Appendix A.6 contains a sample DSIDES data file used for this phase.
• Appendix A.7 contains a sample DSIDES Fortran file used for this phase.
• Appendix A.8 contains the results of the twelve DSIDES runs for Phase A.
A.1 SAMPLE FLIR92 INPUT FILE
This text input file is used to define and analyze the performance of a FLIR system.
FLIR92 is used for computing system Noise Equivalent Temperature Difference (NETD)
as discussed in Section 6.3.2.
**FLIR Example with 240 element detector, 2:1 interlace.
>spectral
  spectral_cut_on           7.65 microns
  spectral_cut_off          10.450 microns
  diffraction_wavelength    0.000 microns
>optics_1
  eff_f_number              1.5 --
  eff_focal_length          10 cm
  eff_aperture_diameter     0.0 cm
  optics_blur_spot          0.027 mr
  average_optical_trans     0.3 --
>optics_2
  HFOV:VFOV_aspect_ratio    1.333 --
  magnification             0.000 --
  frame_rate                30.000 Hz
  fields_per_frame          2.000 --
>optics_3
  horizontal_FOV            0.000 degrees
  vertical_FOV              0.000 degrees
>detector
  horz_size                 20 microns
  vert_size                 20 microns
  peak_D_star               70000000000 cm-sqrt(Hz)/W
  integration_time          00.000 microsec
  1/f_knee_frequency        10.000 Hz
>fpa_parallel
  #_detectors_in_TDI        1 --
  #_vert_detectors          240.000 --
  #_samples_per_HIFOV       1.744 --
  #_samples_per_VIFOV       2.000 --
  3dB_response_frequency    0.000 Hz
  scan_efficiency           0.6 --
>electronics
  high_pass_3db_cuton       10.000 Hz
  high_pass_filter_order    1 --
  low_pass_3db_cutoff       10000 Hz
  low_pass_filter_order     1 --
  boost_amplitude           1.000 --
  boost_frequency           10000.000 Hz
  sample_and_hold           HORZ NO_HORZ__VERT
>display
  display_brightness        8 milli-Lamberts
  display_height            22.880 cm
  display_viewing_distance  40.080 cm
>crt_display
  #_active_lines_on_CRT     0.000 --
  horz_crt_spot_size        0.01 mr
  vert_crt_spot_size        0.01 mr
>3d_noise_default
  noise_level               NO NO_LO_MOD_HI
>eye
  threshold_SNR             2.5 --
  eye_integration_time      0.1 sec
  MTF                       EXP EXP_or_NL
>noise_power_spectrum
  #_points: 7
  Hz            NPS
  1.00          1.00
  10.00         1.00
  100.00        1.00
  1000.00       1.00
  10000.00      1.00
  100000.00     1.00
  1000000.00    1.00
>spectral_detectivity
  #_points: 10
  microns       detectivity
  7.70          0.01
  7.90          0.50
  8.00          0.89
  9.00          1.00
  9.50          0.77
  9.70          0.86
  10.00         0.90
  10.20         0.75
  10.30         0.50
  11.00         0.10
>end
A.2 DESIGN MATRIX AND RESULTS FOR NETD SCREENING
This matrix defines the experimental design, in terms of factors, runs, and factor
settings, used for determining the significant factors for system NETD in Section 6.3.4.
Each row represents a FLIR92 run, and the FLIR92 output is shown in the NETD
column. (Note: Matrix is in CODED values. “+1” -> upper bound; “-1” -> lower
bound.)
RUN FNUM FL T0 DX DY DSTAR TDI SCEFF ORDHPF CUTLPF ORDLPF CRTBRT CRTX CRTY SPCTN EYEINT NETD
1   -1 -1 -1 -1 -1   -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1   0.282
2   -1 -1 -1 -1  1    1 -1 -1  1 -1  1  1 -1  1  1  1   0.199
3   -1 -1 -1  1 -1    1 -1  1 -1  1 -1  1  1 -1  1  1   0.115
4   -1 -1 -1  1  1   -1 -1  1  1  1  1 -1  1  1 -1 -1   0.163
5   -1 -1  1 -1 -1    1  1 -1 -1  1  1 -1  1  1 -1  1   0.028
6   -1 -1  1 -1  1   -1  1 -1  1  1 -1  1  1 -1  1 -1   0.091
7   -1 -1  1  1 -1   -1  1  1 -1 -1  1  1 -1  1  1 -1   0.053
8   -1 -1  1  1  1    1  1  1  1 -1 -1 -1 -1 -1 -1  1   0.016
9   -1  1 -1 -1 -1    1  1  1  1 -1 -1 -1  1  1  1 -1   0.086
10  -1  1 -1 -1  1   -1  1  1 -1 -1  1  1  1 -1 -1  1   0.122
11  -1  1 -1  1 -1   -1  1 -1  1  1 -1  1 -1  1 -1  1   0.094
12  -1  1 -1  1  1    1  1 -1 -1  1  1 -1 -1 -1  1 -1   0.066
13  -1  1  1 -1 -1   -1 -1  1  1  1  1 -1 -1 -1  1  1   0.158
14  -1  1  1 -1  1    1 -1  1 -1  1 -1  1 -1  1 -1 -1   0.049
15  -1  1  1  1 -1    1 -1 -1  1 -1  1  1  1 -1 -1 -1   0.038
16  -1  1  1  1  1   -1 -1 -1 -1 -1 -1 -1  1  1  1  1   0.122
17   1 -1 -1 -1 -1    1  1  1  1  1  1  1 -1 -1 -1 -1   0.228
18   1 -1 -1 -1  1   -1  1  1 -1  1 -1 -1 -1  1  1  1   0.738
19   1 -1 -1  1 -1   -1  1 -1  1 -1  1 -1  1 -1  1  1   0.568
20   1 -1 -1  1  1    1  1 -1 -1 -1 -1  1  1  1 -1 -1   0.175
21   1 -1  1 -1 -1   -1 -1  1  1 -1 -1  1  1  1 -1  1   0.418
22   1 -1  1 -1  1    1 -1  1 -1 -1  1 -1  1 -1  1 -1   0.295
23   1 -1  1  1 -1    1 -1 -1  1  1 -1 -1 -1  1  1 -1   0.227
24   1 -1  1  1  1   -1 -1 -1 -1  1  1  1 -1 -1 -1  1   0.322
25   1  1 -1 -1 -1   -1 -1 -1 -1  1  1  1  1  1  1 -1   1.706
26   1  1 -1 -1  1    1 -1 -1  1  1 -1 -1  1 -1 -1  1   0.526
27   1  1 -1  1 -1    1 -1  1 -1 -1  1 -1 -1  1 -1  1   0.304
28   1  1 -1  1  1   -1 -1  1  1 -1 -1  1 -1 -1  1 -1   0.985
29   1  1  1 -1 -1    1  1 -1 -1 -1 -1  1 -1 -1  1  1   0.171
30   1  1  1 -1  1   -1  1 -1  1 -1  1 -1 -1  1 -1 -1   0.241
31   1  1  1  1 -1   -1  1  1 -1  1 -1 -1  1 -1 -1 -1   0.139
32   1  1  1  1  1    1  1  1  1  1  1  1  1  1  1  1   0.098
33   0  0  0  0  0  -1/3  0 -1/3  0 -1/3  0  0  0  0  0   0.144
d.r. A B C D E =ABCDE =ABC =ABD =ABE =ACD =ACE =ADE =BCD =BCE =BDE =CDE
(“d.r.” are the defining relations used to create the fractional factorial design.)
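Though not part of the thesis toolchain, the construction implied by these defining relations can be illustrated with a short Python/NumPy sketch (an assumption for illustration only): the eleven generated factor columns are elementwise products of the five base columns, as named in the "d.r." row above.

    import itertools
    import numpy as np

    # Full 2^5 factorial in the base factors (coded -1/+1); run 1 is all -1.
    base = np.array(list(itertools.product([-1, 1], repeat=5)))
    A, B, C, D, E = base.T  # FNUM, FL, T0, DX, DY

    # Remaining factors are aliased to products of base columns ("d.r." row).
    generated = [A*B*C*D*E,                    # DSTAR
                 A*B*C, A*B*D, A*B*E,          # TDI, SCEFF, ORDHPF
                 A*C*D, A*C*E, A*D*E,          # CUTLPF, ORDLPF, CRTBRT
                 B*C*D, B*C*E, B*D*E, C*D*E]   # CRTX, CRTY, SPCTN, EYEINT
    design = np.column_stack([base] + generated)
    print(design.shape)  # (32, 16): the 32 two-level runs above

Run 33 is the added near-center run and is not produced by this two-level construction.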
A.3 DESIGN MATRIX AND RESULTS FOR NETD METAMODEL
Outer Array:  SPCTN   -1   -1    1    1
              SCEFF   -1    1   -1    1

Run  DXY     D*      F#      T0      TDI  NETD   NETD   NETD   NETD   Mean   Stdev
1    -1      -1      -1      -1      -1   0.145  0.136  0.22   0.206  0.177  0.042
2    -1      -1      -1      -1      1    0.103  0.096  0.156  0.146  0.125  0.03
3    -1      -1      -1      1       -1   0.103  0.096  0.156  0.146  0.125  0.03
4    -1      -1      -1      1       1    0.073  0.068  0.11   0.103  0.089  0.021
5    -1      -1      1       -1      -1   0.257  0.241  0.389  0.364  0.313  0.075
6    -1      -1      1       -1      1    0.182  0.17   0.275  0.258  0.221  0.053
7    -1      -1      1       1       -1   0.182  0.171  0.276  0.258  0.222  0.053
8    -1      -1      1       1       1    0.129  0.121  0.195  0.183  0.157  0.037
9    -1      1       -1      -1      -1   0.106  0.1    0.161  0.151  0.13   0.031
10   -1      1       -1      -1      1    0.075  0.07   0.114  0.107  0.092  0.022
11   -1      1       -1      1       -1   0.076  0.071  0.114  0.107  0.092  0.022
12   -1      1       -1      1       1    0.053  0.05   0.081  0.076  0.065  0.016
13   -1      1       1       -1      -1   0.188  0.176  0.285  0.267  0.229  0.055
14   -1      1       1       -1      1    0.133  0.125  0.202  0.189  0.162  0.039
15   -1      1       1       1       -1   0.134  0.125  0.202  0.189  0.163  0.039
16   -1      1       1       1       1    0.094  0.088  0.143  0.134  0.115  0.028
17   1       -1      -1      -1      -1   0.123  0.115  0.186  0.174  0.15   0.036
18   1       -1      -1      -1      1    0.087  0.081  0.131  0.123  0.106  0.025
19   1       -1      -1      1       -1   0.087  0.081  0.132  0.123  0.106  0.026
20   1       -1      -1      1       1    0.062  0.058  0.093  0.087  0.075  0.018
21   1       -1      1       -1      -1   0.217  0.203  0.328  0.307  0.264  0.063
22   1       -1      1       -1      1    0.153  0.143  0.232  0.217  0.186  0.045
23   1       -1      1       1       -1   0.154  0.144  0.233  0.218  0.187  0.045
24   1       -1      1       1       1    0.109  0.102  0.165  0.154  0.133  0.032
25   1       1       -1      -1      -1   0.09   0.084  0.136  0.127  0.109  0.026
26   1       1       -1      -1      1    0.063  0.059  0.096  0.09   0.077  0.019
27   1       1       -1      1       -1   0.064  0.06   0.096  0.09   0.078  0.018
28   1       1       -1      1       1    0.045  0.042  0.068  0.064  0.055  0.013
29   1       1       1       -1      -1   0.159  0.149  0.24   0.225  0.193  0.046
30   1       1       1       -1      1    0.112  0.105  0.17   0.159  0.137  0.033
31   1       1       1       1       -1   0.113  0.105  0.171  0.16   0.137  0.033
32   1       1       1       1       1    0.08   0.075  0.121  0.113  0.097  0.023
33   -2.354  0       0       0       0    0.129  0.121  0.196  0.183  0.157  0.038
34   2.354   0       0       0       0    0.086  0.081  0.131  0.122  0.105  0.025
35   0       -2.354  0       0       0    0.163  0.152  0.246  0.23   0.198  0.047
36   0       2.354   0       0       0    0.076  0.071  0.115  0.107  0.092  0.022
37   0       0       -2.354  0       0    0.046  0.043  0.07   0.065  0.056  0.013
38   0       0       2.354   0       0    0.184  0.172  0.278  0.26   0.224  0.053
39   0       0       0       -2.354  0    0.172  0.161  0.261  0.244  0.21   0.05
40   0       0       0       2.354   0    0.074  0.069  0.112  0.105  0.09   0.022
41   0       0       0       0       -2   0.179  0.168  0.271  0.254  0.218  0.052
42   0       0       0       0       2    0.08   0.075  0.121  0.113  0.097  0.023
43   0       0       0       0       0    0.103  0.097  0.157  0.147  0.126  0.03
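The Mean and Stdev columns follow directly from the four outer-array NETD values of each inner-array run. As a hedged illustration (Python/NumPy assumed; netd_fn stands in for the FLIR92 evaluation, which the thesis performs externally):

    import numpy as np

    outer = [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # (SPCTN, SCEFF) settings

    def noise_summary(control_point, netd_fn):
        # Evaluate one inner-array point at all four noise settings and
        # reduce to the Mean and Stdev columns of the table above.
        vals = np.array([netd_fn(control_point, n) for n in outer])
        return vals.mean(), vals.std(ddof=1)  # sample standard deviation

As a check against run 1 above, the values 0.145, 0.136, 0.22 and 0.206 give a mean of 0.177 and a sample standard deviation of 0.042, matching the tabulated columns.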
A.4 NETD FACTOR TRANSFORMATIONS
Factor   Coded      Uncoded   Coded     Uncoded
FNUM     -2.35386   1.5       2.35386   3
T0       -2.35386   0.3       2.35386   0.7
DSTAR    -2.35386   7.0E+10   2.35386   1.5E+11
TDI      -2         1         2         5
DXY      -2.35386   20        2.35386   30
SPCTN    -1         7.65      1         8.65
SCEFF    -1         0.7       1         0.8
These transformations are used in the context of the design matrix in Appendix A.3.
Each coded value in the design matrix is mapped linearly onto the uncoded scale, and the
resulting uncoded value is used in the FLIR92 input files. This procedure is
discussed in Section 6.3.4.
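As a small illustrative sketch (Python assumed; not part of the DSIDES or FLIR92 tooling), the mapping is an ordinary linear interpolation between each factor's coded and uncoded bounds:

    def decode(coded, coded_lo, coded_hi, real_lo, real_hi):
        # Map a coded setting linearly onto the real (uncoded) scale.
        frac = (coded - coded_lo) / (coded_hi - coded_lo)
        return real_lo + frac * (real_hi - real_lo)

    # Example: FNUM at coded 0 lies midway between its bounds of 1.5 and 3.
    print(decode(0.0, -2.35386, 2.35386, 1.5, 3.0))  # 2.25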
A.5 NETD VERIFICATION RUNS
Outer Array:  SPCTN   -1   -1    1    1
              SCEFF   -1    1   -1    1

Run  DXY       D*        F#        T0        TDI  NETD   NETD   NETD   NETD
1    -2.35386  -2.35386  -2.35386  -2.35386  2    0.117  0.109  0.177  0.165
2    -2.35386  -2.35386  2.35386   2.35386   -2   0.447  0.418  0.677  0.633
3    -2.35386  2.35386   -2.35386  2.35386   -2   0.052  0.049  0.079  0.074
4    -2.35386  2.35386   2.35386   -2.35386  2    0.218  0.204  0.33   0.308
5    2.35386   -2.35386  -2.35386  2.35386   2    0.033  0.031  0.05   0.047
6    2.35386   -2.35386  2.35386   -2.35386  -2   0.695  0.65   1.053  0.985
7    2.35386   2.35386   -2.35386  -2.35386  -2   0.081  0.076  0.123  0.115
8    2.35386   2.35386   2.35386   2.35386   2    0.062  0.058  0.094  0.088
9    -1.1      -1.1      -1.1      -1.1      1    0.104  0.098  0.158  0.148
10   -1.1      -1.1      1.1       1.1       -1   0.19   0.177  0.287  0.268
11   -1.1      1.1       -1.1      1.1       -1   0.072  0.067  0.109  0.102
12   -1.1      1.1       1.1       -1.1      1    0.139  0.13   0.21   0.197
13   1.1       -1.1      -1.1      1.1       1    0.059  0.055  0.09   0.084
14   1.1       -1.1      1.1       -1.1      -1   0.229  0.215  0.347  0.325
15   1.1       1.1       -1.1      -1.1      -1   0.087  0.081  0.131  0.123
16   1.1       1.1       1.1       1.1       1    0.079  0.074  0.119  0.112
NOTE: Design points are in CODED values.
     Mean      Mean             Stdev     Stdev
Run  (actual)  (calc)   ERROR   (actual)  (calc)  ERROR
1    0.142     0.146    0.004   0.034     0.035   0.001
2    0.544     0.409    -0.135  0.130     0.097   -0.033
3    0.064     0.082    0.019   0.015     0.018   0.003
4    0.265     0.244    -0.021  0.063     0.058   -0.005
5    0.040     0.083    0.043   0.010     0.018   0.008
6    0.846     0.547    -0.299  0.203     0.130   -0.073
7    0.099     0.104    0.005   0.024     0.023   0.000
8    0.076     0.083    0.007   0.018     0.019   0.001
9    0.127     0.128    0.001   0.030     0.031   0.000
10   0.231     0.231    0.001   0.055     0.055   0.000
11   0.088     0.087    0.000   0.021     0.021   0.000
12   0.169     0.168    -0.001  0.040     0.040   0.000
13   0.072     0.072    0.000   0.018     0.017   0.000
14   0.279     0.277    -0.002  0.067     0.066   -0.001
15   0.106     0.107    0.002   0.025     0.026   0.000
16   0.096     0.091    -0.005  0.023     0.022   -0.001
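The "Mean (calc)" column is the second-order NETD metamodel evaluated at each coded verification point; the coefficients appear in the DSIDES Fortran file of Appendix A.7. A sketch of that evaluation in Python (illustrative only; coefficients transcribed from Appendix A.7):

    def netd_mean(dxyc, dstarc, fnumc, t0c, tdic):
        # Second-order response surface for mean NETD in coded factors.
        return (0.1254697 - 0.011832*dxyc - 0.022123*dstarc
                + 0.0018516*dxyc*dstarc + 0.0385671*fnumc
                - 0.003367*dxyc*fnumc - 0.006086*dstarc*fnumc
                - 0.024524*t0c + 0.0021172*dxyc*t0c + 0.0037734*dstarc*t0c
                - 0.006695*fnumc*t0c - 0.025606*tdic + 0.0020391*dxyc*tdic
                + 0.0037578*dstarc*tdic - 0.006773*fnumc*tdic
                + 0.0041797*t0c*tdic + 0.0007129*dxyc**2
                + 0.0032171*dstarc**2 + 0.0022696*fnumc**2
                + 0.0040744*t0c**2 + 0.0074483*tdic**2)

    # Verification run 9 (control factors at -1.1, TDI coded +1):
    print(netd_mean(-1.1, -1.1, -1.1, -1.1, 1))  # ~0.128, as tabulated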
A.6 REPRESENTATIVE PHASE A DSIDES DATA FILE (FLIRA11H.DAT)
PTITLE : Problem Title, User Name and Date
 FLIR system-level design; phase a, scenario 1, 1 TDI, high start
 Jesse Peplinski, May 17 1997
NUMSYS : # system variables - real, discrete, boolean
 4 0 0
SYSVAR: name, number, min,max,guess (real vars first)
 FNUM  1  1.5  3    3    : optics f number
 T0    2  0.3  0.7  0.7  : optics transmission, nmu
 DXY   3  20   30   30   : size of detector element, micro m
 DSTAR 4  0.7  1.5  1.5  : peak detectivity, 10^11 cm Hz^(1/2)/W
NUMCAG:lincon,nlin<con,nlin=con,lingol,nlingol
 0 1 0 0 2
DEVFUN: 2 : levels
 1 1 : level 1, 1 terms; (+1,1.0)
 2 1 : level 2, 1 terms; (+2,1.0)
STOPCR: Stopping criteria - 1,0,MaxSynthCycles,DevFcnTol,sysvar tol
 1 0 100 0.01 0.05:
NLINGO : names of nonlinear goals;start numbering AFTER lingols
 netg   1: noise equivalent temperature
 netsdg 2: NET standard deviation
NLINCO : names of nonlinear constraints
 netc   1: required NET performance
ALPOUT : Input/output Control
 1 1 1 0 1 1 0 0 1 0
OPTIMP:
 -0.001 0.1 0.005:
ENDPRB:
A.7 REPRESENTATIVE PHASE A DSIDES FORTRAN FILE (FLIRA11H.F)
C***********************************************************************
C
C     Subroutine USRSET
C
C     Purpose: Evaluate non-linear constraints and goals.
C              NOTE - Do not specify the deviation variables
C
C-----------------------------------------------------------------------
C     Arguments     Name     Type   Description
C     ---------     ----     ----   -----------
C     Input:        IPATH    int    = 1 evaluate constraints and goals
C                                   = 2 evaluate constraints only
C                                   = 3 evaluate goals only
C                   NDESV    int    number of design variables
C                   MNLNCG   int    maximum number of nonlinear
C                                   constraints and goals
C                   NOUT     int    unit number of output data file
C                   DESVAR   real   vector of design variables
C
C     Output:       CONSTR   real   vector of constraint values
C                   GOALS    real   vector of goal values
C
C     Input/Output: none
C-----------------------------------------------------------------------
C     Common Blocks: none
C
C     Include Files: none
C
C     Called from: GCALC
C
C     Calls to: none
C-----------------------------------------------------------------------
C     Development History
C
C     Author: JESSE PEPLINSKI
C     Date: May 21, 1996
C
C     Modifications:
C
C***********************************************************************
C
      SUBROUTINE USRSET (IPATH, NDESV, MNLNCG, NOUT, DESVAR,
     &                   CONSTR, GOALS)
C
C---------------------------------------
C     Arguments:
C---------------------------------------
C
      INTEGER IPATH, NDESV, MNLNCG, NOUT
      REAL DESVAR(NDESV)
      REAL CONSTR(MNLNCG), GOALS(MNLNCG)
C
C---------------------------------------
C
C     ///////// DESIGN VARIABLES ///////////
C     FNUM   = optics f number
C     T0     = optics transmission, nmu
C     DXY    = size of detector element, micro m
C     DSTAR  = peak detectivity, 10^11 cm Hz^(1/2)/W
C     TDI    = # detectors in TDI
C
C     ///////// INTERMEDIATE VAR'S ///////////
C     FNUMC  = optics f number, coded
C     T0C    = optics transmission, coded
C     DXYC   = size of detector element, coded
C     DSTARC = peak detectivity, coded
C     TDIC   = # detectors in TDI, coded
C
C     ///////// RESPONSES ///////////
C     NETD   = calculated NETD mean, K
C     NETSDV = calculated NETD standard deviation
C
C     ///////// GOAL AND CONSTRAINT TARGETS ///////////
C     NETNOM = nominal (threshold) NETD
C     NETGOL = target (ideal) NETD
C
C---------------------------------------
C
      REAL FNUM, T0, DXY, DSTAR, TDI
      REAL FNUMC, T0C, DXYC, DSTARC, TDIC
      REAL NETD, NETSDV
      REAL NETNOM, NETGOL
C
C     1.0 Set the values of the local design variables (optional)
C
      FNUM  = DESVAR(1)
      T0    = DESVAR(2)
      DXY   = DESVAR(3) * 1E-6
      DSTAR = DESVAR(4) * 1E11
      TDI   = 1
C
C     +++++ Coded values for NET RSM calculations +++++
C
      FNUMC  = (FNUM - 2.25)*2.35386/0.75
      T0C    = (T0 - 0.5)*2.35386/0.2
      DXYC   = (DXY - 25E-6)*2.35386/5E-6
      DSTARC = (DSTAR - 1.1E11)*2.35386/4E10
      TDIC   = (TDI - 3)
C
C     2.0 Perform analysis relevant to non-linear constraints and goals
C
      NETNOM = 0.2
      NETGOL = 0.05
C
      NETD = 0.1254697 + (-0.011832*DXYC) + (-0.022123*DSTARC)
     &     + (0.0018516*DXYC*DSTARC) + (0.0385671*FNUMC)
     &     + (-0.003367*DXYC*FNUMC) + (-0.006086*DSTARC*FNUMC)
     &     + (-0.024524*T0C) + (0.0021172*DXYC*T0C)
     &     + (0.0037734*DSTARC*T0C) + (-0.006695*FNUMC*T0C)
     &     + (-0.025606*TDIC) + (0.0020391*DXYC*TDIC)
     &     + (0.0037578*DSTARC*TDIC) + (-0.006773*FNUMC*TDIC)
     &     + (0.0041797*T0C*TDIC) + (0.0007129*DXYC*DXYC)
     &     + (0.0032171*DSTARC*DSTARC) + (0.0022696*FNUMC*FNUMC)
     &     + (0.0040744*T0C*T0C) + (0.0074483*TDIC*TDIC)
C
      NETSDV = 0.0302713 + (-0.00286*DXYC) + (-0.005271*DSTARC)
     &       + (0.0004286*DXYC*DSTARC) + (0.009203*FNUMC)
     &       + (-0.000776*DXYC*FNUMC) + (-0.00143*DSTARC*FNUMC)
     &       + (-0.005902*T0C) + (0.0005213*DXYC*T0C)
     &       + (0.0008875*DSTARC*T0C) + (-0.001562*FNUMC*T0C)
     &       + (-0.006079*TDIC) + (0.0004655*DXYC*TDIC)
     &       + (0.0009766*DSTARC*TDIC) + (-0.001618*FNUMC*TDIC)
     &       + (0.0009553*T0C*TDIC) + (0.0001428*DXYC*DXYC)
     &       + (0.0007025*DSTARC*DSTARC) + (0.0004885*FNUMC*FNUMC)
     &       + (0.0009593*T0C*T0C) + (0.0016884*TDIC*TDIC)
C
C**************************************************************
C
      PRINT *, ''
      PRINT *, '   NETD mean = ', NETD
      PRINT *, 'NETD std dev = ', NETSDV
C
C     3.0 Evaluate non-linear constraints
C
      IF (IPATH .EQ. 1 .OR. IPATH .EQ. 2) THEN
C
C        <<< Constraint is VIOLATED when RHS is NEGATIVE. >>>
C
         CONSTR(1) = NETNOM - NETD
C
      END IF
C
C     4.0 Evaluate non-linear goals
C
      IF (IPATH .EQ. 1 .OR. IPATH .EQ. 3) THEN
C
C        Minimum resolvable temperature
         GOALS(1) = NETD/NETGOL - 1
         GOALS(2) = 100*NETSDV
C
      END IF
C
C     5.0 Return to calling routine
C
      RETURN
      END
A.8 DSIDES OUTPUT FROM PHASE A
“Run a11h” -> “phase A, scenario 1, one TDI stage, high starting point”
Scenario 1 (NETD mean at higher priority):
Run                      a11l   a11h   a12l   a12h   a14l   a14h
GOAL VALUES:
  NETD Mean (°C)         0.070  0.070  0.050  0.054  0.043  0.048
  NETD Std Dev (°C)      0.016  0.016  0.012  0.012  0.010  0.011
DESIGN VARIABLE VALUES:
  FNUM                   1.50   1.51   1.50   1.58   1.50   1.58
  T0                     0.584  0.584  0.579  0.578  0.531  0.534
  DXY                    28.95  30.00  25.75  30.00  26.76  28.29
  DSTAR                  1.321  1.294  1.319  1.288  1.156  1.186
  TDI                    1      1      2      2      4      4
DEVIATION FUNCTION:
  Level 1                0.396  0.408  0.007  0.077  0.000  0.000
  Level 2                1.561  1.574  1.152  1.218  1.018  1.109
Scenario 2 (NETD standard deviation at higher priority):
Run                      a21l   a21h   a22l   a22h   a24l   a24h
GOAL VALUES:
  NETD Mean (°C)         0.071  0.072  0.051  0.053  0.043  0.047
  NETD Std Dev (°C)      0.016  0.016  0.011  0.012  0.010  0.011
DESIGN VARIABLE VALUES:
  FNUM                   1.50   1.53   1.50   1.57   1.50   1.57
  T0                     0.619  0.590  0.591  0.563  0.531  0.534
  DXY                    26.66  30.00  26.17  30.00  26.76  28.29
  DSTAR                  1.356  1.363  1.337  1.313  1.156  1.181
  TDI                    1      1      2      2      4      4
DEVIATION FUNCTION:
  Level 1                1.598  1.598  1.150  1.205  1.018  1.101
  Level 2                0.426  0.440  0.012  0.069  0.000  0.000
APPENDIX B
DATA AND RESULTS FROM CASE STUDY
PHASE B
In this appendix the supporting data behind the computations in Phase B of the
FLIR case study are given.
• Appendix B.1 contains a sample DSIDES data file used for this phase.
• Appendix B.2 contains a sample DSIDES Fortran file used for this phase.
• Appendix B.3 contains the results of the twelve DSIDES runs performed for
Phase B.
B.1 REPRESENTATIVE PHASE B DSIDES DATA FILE
PTITLE : Problem Title, User Name and Date
 FLIR system-level design; phase b, scenario 1, 1 TDI, high start
 Jesse Peplinski, May 17 1997
NUMSYS : # system variables - real, discrete, boolean
 5 0 0
SYSVAR: name, number, min,max,guess (real vars first)
 FNUM  1  1.5  3    3    : optics f number
 FL    2  10   25   25   : focal length, cm
 T0    3  0.3  0.7  0.7  : optics transmission, nmu
 DXY   4  20   30   30   : size of detector element, micro m
 DSTAR 5  0.7  1.5  1.5  : peak detectivity, 10^11 cm Hz^(1/2)/W
NUMCAG:lincon,nlin<con,nlin=con,lingol,nlingol
 0 5 0 0 5
DEVFUN: 3 : levels
 1 1 : level 1, 1 terms; (+1,1.0)
 2 1 : level 2, 1 terms; (+2,1.0)
 3 1 : level 3, 1 terms; (+5,1.0)
STOPCR: Stopping criteria - 1,0,MaxSynthCycles,DevFcnTol,sysvar tol
 1 0 100 0.01 0.05:
NLINGO : names of nonlinear goals;start numbering AFTER lingols
 netg   1: noise equivalent temperature
 netsdg 2: NET standard deviation
 dpmg   3: development process mean
 dpsdg  4: development process standard deviation
 lensg  5: afocal lens assembly cost
NLINCO : names of nonlinear constraints
 netc   1: required NET performance
 dpmc   2: required development process time
 lensc  3: max lens assy cost
 netsdc 4: net std dev
 dpsdc  5: development process std dev
ALPOUT : Input/output Control
 1 1 1 0 1 1 0 0 1 0
OPTIMP:
 -0.001 0.1 0.005:
ENDPRB:
B.2 REPRESENTATIVE PHASE B DSIDES FORTRAN FILE
C***********************************************************************
C
C     Subroutine USRSET
C
C     Purpose: Evaluate non-linear constraints and goals.
C              NOTE - Do not specify the deviation variables
C
C-----------------------------------------------------------------------
C     Arguments     Name     Type   Description
C     ---------     ----     ----   -----------
C     Input:        IPATH    int    = 1 evaluate constraints and goals
C                                   = 2 evaluate constraints only
C                                   = 3 evaluate goals only
C                   NDESV    int    number of design variables
C                   MNLNCG   int    maximum number of nonlinear
C                                   constraints and goals
C                   NOUT     int    unit number of output data file
C                   DESVAR   real   vector of design variables
C
C     Output:       CONSTR   real   vector of constraint values
C                   GOALS    real   vector of goal values
C
C     Input/Output: none
C-----------------------------------------------------------------------
C     Common Blocks: none
C
C     Include Files: none
C
C     Called from: GCALC
C
C     Calls to: none
C-----------------------------------------------------------------------
C     Development History
C
C     Author: JESSE PEPLINSKI
C     Date: May 21, 1996
C
C     Modifications:
C
C***********************************************************************
C
      SUBROUTINE USRSET (IPATH, NDESV, MNLNCG, NOUT, DESVAR,
     &                   CONSTR, GOALS)
C
C---------------------------------------
C     Arguments:
C---------------------------------------
C
      INTEGER IPATH, NDESV, MNLNCG, NOUT
      REAL DESVAR(NDESV)
      REAL CONSTR(MNLNCG), GOALS(MNLNCG)
C
C---------------------------------------
C
C     ///////// DESIGN VARIABLES ///////////
C     FNUM   = optics f number
C     FL     = focal length, cm
C     T0     = optics transmission, nmu
C     DXY    = size of detector element, micro m
C     DSTAR  = peak detectivity, 10^11 cm Hz^(1/2)/W
C     TDI    = # detectors in TDI
C
C     ///////// INTERMEDIATE VAR'S ///////////
C     FNUMC  = optics f number, coded
C     T0C    = optics transmission, coded
C     DXYC   = size of detector element, coded
C     DSTARC = peak detectivity, coded
C     TDIC   = # detectors in TDI, coded
C     REDES  = probability of iteration in design process
C     DESCL  = mean time for design calculations, hours
C     REDESC = probability of iteration, coded
C     DESCLC = design calculation time, coded
C
C     ///////// RESPONSES ///////////
C     NETD   = calculated NETD mean, K
C     NETSDV = calculated NETD standard deviation
C     DPMEAN = calculated mean development process time, hours
C     DPSDEV = calculated development process time std. deviation
C     LNSCST = calculated afocal lens assy cost, NMU
C
C     ///////// GOAL AND CONSTRAINT TARGETS ///////////
C     NETNOM = nominal (threshold) NETD
C     NETGOL = target (ideal) NETD
C     DPNOM  = nominal (threshold) development process time
C     DPGOL  = target (ideal) development process time
C     LNSNOM = nominal (threshold) lens cost
C     LNSGOL = target (ideal) lens cost
C
C---------------------------------------
C
      REAL FNUM, FL, T0, DXY, DSTAR, TDI
      REAL FNUMC, T0C, DXYC, DSTARC, TDIC
      REAL REDES, DESCL, REDESC, DESCLC
      REAL NETD, NETSDV, DPMEAN, DPSDEV, LNSCST
      REAL NETNOM, NETGOL, DPNOM, DPGOL, LNSNOM, LNSGOL
C
C     1.0 Set the values of the local design variables (optional)
C
      FNUM  = DESVAR(1)
      FL    = DESVAR(2)
      T0    = DESVAR(3)
      DXY   = DESVAR(4) * 1E-6
      DSTAR = DESVAR(5) * 1E11
      TDI   = 1
C
C     +++++ Coded values for NET RSM calculations +++++
C
      FNUMC  = (FNUM - 2.25)*2.35386/0.75
      T0C    = (T0 - 0.5)*2.35386/0.2
      DXYC   = (DXY - 25E-6)*2.35386/5E-6
      DSTARC = (DSTAR - 1.1E11)*2.35386/4E10
      TDIC   = (TDI - 3)
C
C     2.0 Perform analysis relevant to non-linear constraints and goals
C
      DESCL  = 24 + (DSTAR - 0.7E11)*24/8E10
      DESCLC = (DESCL - 36)/6
      REDES  = 0.02 + (30E-6 - DXY)*0.12/10E-6
      REDESC = (REDES - 0.08)/0.03
C
      NETNOM = 0.2
      NETGOL = 0.05
      DPNOM  = 150
      DPGOL  = 120
      LNSNOM = 80
      LNSGOL = 1
C
      NETD = 0.1254697 + (-0.011832*DXYC) + (-0.022123*DSTARC)
     &     + (0.0018516*DXYC*DSTARC) + (0.0385671*FNUMC)
     &     + (-0.003367*DXYC*FNUMC) + (-0.006086*DSTARC*FNUMC)
     &     + (-0.024524*T0C) + (0.0021172*DXYC*T0C)
     &     + (0.0037734*DSTARC*T0C) + (-0.006695*FNUMC*T0C)
     &     + (-0.025606*TDIC) + (0.0020391*DXYC*TDIC)
     &     + (0.0037578*DSTARC*TDIC) + (-0.006773*FNUMC*TDIC)
     &     + (0.0041797*T0C*TDIC) + (0.0007129*DXYC*DXYC)
     &     + (0.0032171*DSTARC*DSTARC) + (0.0022696*FNUMC*FNUMC)
     &     + (0.0040744*T0C*T0C) + (0.0074483*TDIC*TDIC)
C
      NETSDV = 0.0302713 + (-0.00286*DXYC) + (-0.005271*DSTARC)
     &       + (0.0004286*DXYC*DSTARC) + (0.009203*FNUMC)
     &       + (-0.000776*DXYC*FNUMC) + (-0.00143*DSTARC*FNUMC)
     &       + (-0.005902*T0C) + (0.0005213*DXYC*T0C)
     &       + (0.0008875*DSTARC*T0C) + (-0.001562*FNUMC*T0C)
     &       + (-0.006079*TDIC) + (0.0004655*DXYC*TDIC)
     &       + (0.0009766*DSTARC*TDIC) + (-0.001618*FNUMC*TDIC)
     &       + (0.0009553*T0C*TDIC) + (0.0001428*DXYC*DXYC)
     &       + (0.0007025*DSTARC*DSTARC) + (0.0004885*FNUMC*FNUMC)
     &       + (0.0009593*T0C*T0C) + (0.0016884*TDIC*TDIC)
C
      DPMEAN = 130.74422 + (7.132*DESCLC) + (4.14*REDESC)
     &       + (0.201*DESCLC*REDESC) + (0.3078333*DESCLC*DESCLC)
     &       + (0.5948333*REDESC*REDESC)
C
      DPSDEV = 3.1867535 + (0.2123987*DESCLC) + (0.4876791*REDESC)
     &       + (0.0029388*DESCLC*REDESC) + (-0.032226*DESCLC*DESCLC)
     &       + (0.2811089*REDESC*REDESC)
C
      LNSCST = 33.85384 + (-83.68089*FNUM) + (9.1531936*FL)
     &       + (-47.15439*T0) + (24.736707*FNUM*FNUM)
     &       + (-3.768831*FL*FNUM) + (0.1031586*FL*FL)
     &       + (217.00534*T0*T0)
C
C**************************************************************
C
C     3.0 Evaluate non-linear constraints
C
      IF (IPATH .EQ. 1 .OR. IPATH .EQ. 2) THEN
C
C        <<< Constraint is VIOLATED when RHS is NEGATIVE. >>>
C
         CONSTR(1) = NETNOM - NETD
         CONSTR(2) = DPNOM - DPMEAN
         CONSTR(3) = LNSNOM - LNSCST
         CONSTR(4) = 100 - NETSDV
         CONSTR(5) = 100 - DPSDEV
C
      END IF
C
C     4.0 Evaluate non-linear goals
C
      IF (IPATH .EQ. 1 .OR. IPATH .EQ. 3) THEN
C
C        Minimum resolvable temperature
         GOALS(1) = NETD/NETGOL - 1
         GOALS(2) = 100*NETSDV
C        Development process time
         GOALS(3) = DPMEAN/DPGOL - 1
         GOALS(4) = DPSDEV
C        Afocal lens assembly cost
         GOALS(5) = LNSCST/LNSGOL - 1
C
      END IF
C
C     5.0 Return to calling routine
C
      RETURN
      END
B.3 DSIDES OUTPUT FROM PHASE B
“Run b11h” -> “phase B, scenario 1, one TDI stage, high starting point”
Scenario Priority Level 1 Priority Level 2 Priority Level 3
Scenario 1 NETD mean = 0.05 NET std dev LENSCOST
Scenario 2 NETD mean = 0.1 NET std dev LENSCOST
Scenario 3 LENSCOST NETD mean = 0.1 NET std dev
Run       b11l   b11h   b12l   b12h   b21l   b21h
FNUM      1.50   1.50   1.50   1.50   2.05   2.05
FL        10.00  10.00  10.00  10.00  10.32  10.20
T0        0.589  0.589  0.542  0.542  0.634  0.633
DXY       30.00  30.00  29.92  29.92  29.97  30.00
DSTAR     1.275  1.275  1.274  1.275  1.498  1.500
TDI       1      1      2      2      1      1
NET       0.069  0.069  0.049  0.049  0.100  0.100
NETSD     0.016  0.016  0.011  0.011  0.023  0.023
LENSCOST  56.90  56.90  47.51  47.44  49.24  48.53
FPACOST   9.81   9.81   32.38  32.38  9.81   9.81

Run       b22l   b22h   b31l   b31h   b32l   b32h
FNUM      2.70   2.72   1.86   1.86   1.86   1.86
FL        10.00  12.20  10.00  10.02  10.00  10.02
T0        0.697  0.687  0.301  0.301  0.301  0.301
DXY       29.94  30.00  29.92  30.00  29.91  30.00
DSTAR     1.495  1.500  1.494  1.500  1.493  1.500
TDI       2      2      1      1      2      2
NET       0.100  0.101  0.146  0.146  0.110  0.110
NETSD     0.023  0.024  0.034  0.034  0.026  0.025
LENSCOST  60.87  61.29  1.08   1.09   1.09   1.09
FPACOST   32.38  32.38  9.81   9.81   32.38  32.38
APPENDIX C
DATA AND RESULTS FROM CASE STUDY
PHASE C
In this appendix the supporting data behind the computations in Phase C of the
FLIR case study are given.
• Appendix C.1 contains a sample SIMAN model file (.MOD) used for
computing the duration of the focal plane array design process.
• Appendix C.2 contains the corresponding SIMAN experiment file (.EXP)
used for computing the duration of the focal plane array design process.
• Appendix C.3 contains the complete set of transformations used for mapping
between coded and uncoded values for DSTAR, DXY, the design calculations
task duration, and the probability of design iteration.
• Appendix C.4 contains the design matrix and SIMAN results used for
building the metamodels for FPA design process duration.
• Appendix C.5 contains the design matrix and SIMAN results used for
verifying the FPA design process duration metamodels.
• Appendix C.6 contains a sample DSIDES data file used for this phase.
• Appendix C.7 contains a sample DSIDES Fortran file used for this phase.
• Appendix C.8 contains the results of the sixteen DSIDES runs performed for
Phase C.
C.1 REPRESENTATIVE SIMAN MODEL FILE
BEGIN;
           CREATE:,1;
GoStart    QUEUE, StartQ:
           MARK(ArrTime);

TechMtg    QUEUE, TechMtgQ;
           SEIZE: Designer,4;
           DELAY: TechMtgTime;
           RELEASE: Designer,4;
BreadBoard QUEUE, BreadBoardQ;
           SEIZE: Designer,4;
           DELAY: BreadBoardTime;
           RELEASE: Designer,4;
ConfigDef  QUEUE, ConfigDefQ;
           SEIZE: Designer,4;
           DELAY: ConfigDefTime;
           RELEASE: Designer,4;
           BRANCH,4,10:
             WITH,1,WorstCase:
             ALWAYS,ElecStress:
             ALWAYS,DesignCalc:
             ALWAYS,ThermAnal;

WorstCase  QUEUE, WorstCaseQ;
           SEIZE: Designer;
           DELAY: WorstCaseTime;
           RELEASE: Designer:
           NEXT(Checklist);

ElecStress QUEUE, ElecStressQ;
           SEIZE: Designer;
           DELAY: ElecStressTime;
           RELEASE: Designer:
           NEXT(Checklist);

DesignCalc QUEUE, DesignCalcQ;
           SEIZE: Designer;
           DELAY: DesignCalcTime;
           RELEASE: Designer:
           NEXT(Checklist);

ThermAnal  QUEUE, ThermalAnalysisQ;
           SEIZE: Designer;
           DELAY: ThermalAnalysisTime;
           RELEASE: Designer;

Checklist  QUEUE, ChecklistQ;
           COMBINE:4;
           SEIZE: Designer,4;
           DELAY: ChecklistTime;
           RELEASE: Designer,4;
           BRANCH,1,1:
             WITH,P_PassAnalysis,DesReview:
             WITH,P_RedoAnalysis,ConfigDef;

DesReview  QUEUE, DesignReviewQ;
           SEIZE: Designer,4;
           DELAY: DesignReviewTime;
           RELEASE: Designer,4;
           BRANCH,1,3:
             WITH,P_PassDesign,Finish:
             WITH,P_RedoDesign,BreadBoard;

Finish     TALLY: Flowtime, INTERVAL(ArrTime);
           COUNT: JobsDone:
           NEXT(GoStart);
C.2 REPRESENTATIVE SIMAN EXPERIMENT FILE
BEGIN;
  PROJECT, Development Process;

  VARIABLES: TechMtgMean, 8 :
             BreadBoardMean, 40 :
             ConfigDefMean, 8 :
             WorstCaseMean, 24 :
             ElecStressMean, 16 :
             DesignCalcMean, 24 :
             ThermalAnalysisMean, 30 :
             ChecklistMean, 8 :
             DesignReviewMean, 16 :
             SDev, 0.5 :
             P_RedoAnalysis, 0.1 :
             P_RedoDesign, .05 :
             P_PassAnalysis, 0.9 :
             P_PassDesign, .949999988079071;

  RESOURCES: Designer, CAPACITY(4);        ! Number of designers.

  EXPRESSIONS: TechMtgTime, NORM(TechMtgMean,SDev,1):
               BreadBoardTime, NORM(BreadBoardMean,SDev,2):
               ConfigDefTime, NORM(ConfigDefMean,SDev,3):
               WorstCaseTime, NORM(WorstCaseMean,SDev,4):
               ElecStressTime, NORM(ElecStressMean,SDev,5):
               DesignCalcTime, NORM(DesignCalcMean,SDev,6):
               ThermalAnalysisTime, NORM(ThermalAnalysisMean,SDev,7):
               ChecklistTime, NORM(ChecklistMean,SDev,8):
               DesignReviewTime, NORM(DesignReviewMean,SDev,9);

  ATTRIBUTES: ArrTime;                     ! Arrival time of part

  QUEUES: StartQ:
          TechMtgQ:
          BreadBoardQ:
          ConfigDefQ:
          WorstCaseQ:
          ElecStressQ:
          DesignCalcQ:
          ThermalAnalysisQ:
          ChecklistQ:
          DesignReviewQ;

  COUNTERS: JobsDone,100;

  TALLIES: Flowtime;

  DSTATS: NR(Designer)/4,Designer Utilization;

  REPLICATE, 10;                           ! Ten replications
C.3 DESIGN VARIABLES AND TRANSFORMATIONS FOR DEVELOPMENT PROCESS SIMULATION
DESIGN MATRIX:
RUN  DXYC  DSTARC  DXY (real)  DSTAR (real)  design calc's  redo design  des calc coded  redo des coded
1    -1    -1      2.8E-05     1.3E+11       42             0.05         1               -1
2    -1    1       2.8E-05     9.0E+10       30             0.05         -1              -1
3    1     -1      2.3E-05     1.3E+11       42             0.11         1               1
4    1     1       2.3E-05     9.0E+10       30             0.11         -1              1
5    -2    0       3.0E-05     1.1E+11       36             0.02         0               -2
6    2     0       2.0E-05     1.1E+11       36             0.14         0               2
7    0     -2      2.5E-05     1.5E+11       48             0.08         2               0.00
8    0     2       2.5E-05     7.0E+10       24             0.08         -2              0.00
9    0     0       2.5E-05     1.1E+11       36             0.08         0               0.00
TRANSFORMATIONS:
                       CODED:   -2    -1     0     1     2
UNCODED:  design calculations   24    30    36    42    48
          redo design           0     0.05  0.08  0.11  0.14
DXY (real or “uncoded” values): DXY = DXYC*(5E-6)/2+25E-6
DSTAR (real or “uncoded” values): DSTAR = DSTARC*4E10/2 + 1.1E11
Design calculations (uncoded): DESCL = 24+(DSTAR - 7E10)*24/8E10
Probability of redo design (uncoded): REDES = 0.02+(30E-6 - DXY)*0.12/10E-6
Design calculations (coded): DESCLC = (DESCL - 36)/6
Probability of redo design (coded): REDESC = (REDES - 0.08)/0.03
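Read together, these relations chain the product variables to the process-simulation inputs. A hedged Python sketch of the same arithmetic (illustrative only; the thesis implements it inside the DSIDES Fortran file of Appendix C.7):

    def descl(dstar):
        # Mean design-calculation task duration (hours) from DSTAR.
        return 24.0 + (dstar - 7e10) * 24.0 / 8e10

    def redes(dxy):
        # Probability of design iteration from DXY (meters).
        return 0.02 + (30e-6 - dxy) * 0.12 / 10e-6

    def descl_coded(d): return (d - 36.0) / 6.0
    def redes_coded(r): return (r - 0.08) / 0.03

    # Center-point check (run 9 above): DSTAR = 1.1E+11 and DXY = 25 microns
    # give DESCL = 36 h and REDES = 0.08, i.e. coded values (0, 0).
    print(descl(1.1e11), redes(25e-6))  # 36.0 0.08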
C.4 DESIGN MATRIX AND RESULTS FOR DEVELOPMENT PROCESS SIMULATION MODEL
(Note: design matrix is in CODED values. Ten replications are generated in
SIMAN because of the stochastic nature of the simulation.)
     Design  Redo            REPLICATIONS
RUN  Calc's  Design  1        2        3        4        5
1    -1      -1      119.21   117.85   117.81   124.58   120.36
2    -1      1       126.68   126.65   123.76   126.98   134.53
3    1       -1      133.01   131.29   131.25   139.34   134.16
4    1       1       140.96   141.05   138.04   141.86   150.13
5    -2      0       117.41   114.2    118.96   118.78   121.45
6    2       0       145.99   141.04   147.48   148.06   150.68
7    0       -2      126.11   122.39   120.77   130.29   125.08
8    0       2       143.97   150.33   137.39   147.81   144.49
9    0       0       131.47   127.36   132.96   133.18   135.8

                REPLICATIONS
RUN  6        7        8        9        10       MEAN     STDEV
1    125.43   117.22   118.17   121.25   121.92   120.38   2.90
2    131.41   124.13   125.85   130.04   122.83   127.29   3.69
3    140.07   130.66   131.85   135.29   135.96   134.29   3.36
4    146.77   138.53   140.49   145.16   136.99   142.00   4.16
5    120.68   116.51   113.8    120.44   118.41   118.06   2.61
6    149.74   144.34   141.2    149.69   147.24   146.55   3.41
7    127.89   121.19   123.9    121.9    127.35   124.69   3.20
8    142.23   139      142.57   142.96   131.44   142.22   5.33
9    134.98   130.18   127.28   134.81   132.6    132.06   3.00
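The MEAN and STDEV columns reduce the ten replications of each run, and the development-process metamodels used in Appendices B and C are quadratic response surfaces over these nine design points. As a hedged sketch of that reduction and fit (Python/NumPy assumed; the exact fitting procedure used in the thesis is described in the body of the dissertation, so this is illustrative only):

    import numpy as np

    def fit_quadratic(x1, x2, y):
        # Least-squares fit of y = b0 + b1*x1 + b2*x2 + b12*x1*x2
        #                        + b11*x1^2 + b22*x2^2.
        X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    # With reps as a (9, 10) array of replication flowtimes (one row per
    # run), the mean metamodel fits reps.mean(axis=1) and the standard-
    # deviation metamodel fits reps.std(axis=1, ddof=1) against the two
    # coded factors above.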
C.5 VERIFICATION RUNS FOR DEVELOPMENT PROCESS SIMULATION MODEL
     Design  Redo            REPLICATIONS
Run  Calc's  Design  1        2        3        4        5        6        7
1    24      0.02    112.76   109.7    108.22   116.04   112      114.18   108.58
2    48      0.02    139.91   135.59   133.73   145.05   138.64   142.05   134.27
3    48      0.14    159.57   166.77   152.39   164.25   160.45   157.83   154.36
4    24      0.14    128.89   134.48   122.92   131.92   129.06   127.18   124.11
5    27      0.07    116.91   117.08   118.34   123.3    125.19   119.45   117.26
6    38      0.04    128.41   126.81   124.53   133.91   128.52   132.44   124.47
7    44      0.09    141.15   137.65   142.66   147.68   149.29   143.5    137.37
8    35      0.11    132.63   132.65   129.71   133.18   141.03   137.81   130.13

     REPLICATIONS                 Mean      Mean             StDev     StDev
Run  8        9        10         (actual)  (pred)   ERROR   (actual)  (pred)  ERROR
1    110.74   109.28   114.01     111.55    112.61   -1.06   2.67      2.79    -0.13
2    137.46   134.98   141.15     138.28    139.53   -1.25   3.75      3.62    0.13
3    157.93   158.56   146.08     157.82    157.70   0.12    5.88      5.59    0.29
4    127.72   127.9    117.29     127.15    127.57   -0.42   4.81      4.72    0.09
5    120.2    121.22   117.68     119.66    119.53   0.14    2.83      2.67    0.17
6    126.69   125.79   128.68     128.03    128.60   -0.58   3.13      3.10    0.03
7    140.54   144.74   142.78     142.74    142.34   0.40    3.86      3.61    0.25
8    131.95   136.34   128.73     133.42    134.27   -0.85   3.88      3.92    -0.04
C.6 REPRESENTATIVE PHASE C DSIDES DATA FILE
PTITLE : Problem Title, User Name and Date
 FLIR system-level design; phase c, scenario 1, 1 TDI, high startpoint
 Jesse Peplinski, May 17 1997
NUMSYS : # system variables - real, discrete, boolean
 5 0 0
SYSVAR: name, number, min,max,guess (real vars first)
 FNUM  1  1.5  3    3    : optics f number
 FL    2  10   25   25   : focal length, cm
 T0    3  0.3  0.7  0.7  : optics transmission, nmu
 DXY   4  20   30   30   : size of detector element, micro m
 DSTAR 5  0.7  1.5  1.5  : peak detectivity, 10^11 cm Hz^(1/2)/W
NUMCAG:lincon,nlin<con,nlin=con,lingol,nlingol
 0 5 0 0 5
DEVFUN: 5 : levels
 1 2 : level 1, 2 terms; (+1,1.0) (-1,1.0)
 2 1 : level 2, 1 terms; (+2,1.0)
 3 1 : level 3, 1 terms; (+5,1.0)
 4 1 : level 4, 1 terms; (+3,1.0)
 5 1 : level 5, 1 terms; (+4,1.0)
STOPCR: Stopping criteria - 1,0,MaxSynthCycles,DevFcnTol,sysvar tol
 1 0 100 0.01 0.05:
NLINGO : names of nonlinear goals;start numbering AFTER lingols
 netg   1: noise equivalent temperature
 netsdg 2: NET standard deviation
 dpmg   3: development process mean
 dpsdg  4: development process standard deviation
 lensg  5: afocal lens assembly cost
NLINCO : names of nonlinear constraints
 netc   1: required NET performance
 dpmc   2: required development process time
 lensc  3: max lens assy cost
 netsdc 4: net std dev
 dpsdc  5: development process std dev
ALPOUT : Input/output Control
 1 1 1 0 1 1 0 0 1 0
OPTIMP:
 -0.001 0.1 0.005:
ENDPRB:
ADREMO:10 0.1:
C.7 REPRESENTATIVE PHASE C DSIDES FORTRAN FILE
C***********************************************************************
C
C     Subroutine USRSET
C
C     Purpose: Evaluate non-linear constraints and goals.
C              NOTE - Do not specify the deviation variables
C
C-----------------------------------------------------------------------
C     Arguments     Name     Type   Description
C     ---------     ----     ----   -----------
C     Input:        IPATH    int    = 1 evaluate constraints and goals
C                                   = 2 evaluate constraints only
C                                   = 3 evaluate goals only
C                   NDESV    int    number of design variables
C                   MNLNCG   int    maximum number of nonlinear
C                                   constraints and goals
C                   NOUT     int    unit number of output data file
C                   DESVAR   real   vector of design variables
C
C     Output:       CONSTR   real   vector of constraint values
C                   GOALS    real   vector of goal values
C
C     Input/Output: none
C-----------------------------------------------------------------------
C     Common Blocks: none
C
C     Include Files: none
C
C     Called from: GCALC
C
C     Calls to: none
C-----------------------------------------------------------------------
C     Development History
C
C     Author: JESSE PEPLINSKI
C     Date: May 21, 1996
C
C     Modifications:
C
C***********************************************************************
C
      SUBROUTINE USRSET (IPATH, NDESV, MNLNCG, NOUT, DESVAR,
     &                   CONSTR, GOALS)
C
C---------------------------------------
C     Arguments:
C---------------------------------------
C
      INTEGER IPATH, NDESV, MNLNCG, NOUT
      REAL DESVAR(NDESV)
      REAL CONSTR(MNLNCG), GOALS(MNLNCG)
C
C---------------------------------------
C
C     ///////// DESIGN VARIABLES ///////////
C     FNUM   = optics f number
C     FL     = focal length, cm
C     T0     = optics transmission, nmu
C     DXY    = size of detector element, micro m
C     DSTAR  = peak detectivity, 10^11 cm Hz^(1/2)/W
C     TDI    = # detectors in TDI
C
C     ///////// INTERMEDIATE VAR'S ///////////
C     FNUMC  = optics f number, coded
C     T0C    = optics transmission, coded
C     DXYC   = size of detector element, coded
C     DSTARC = peak detectivity, coded
C     TDIC   = # detectors in TDI, coded
C     REDES  = probability of iteration in design process
C     DESCL  = mean time for design calculations, hours
C     REDESC = probability of iteration, coded
C     DESCLC = design calculation time, coded
C
C     ///////// RESPONSES ///////////
C     NETD   = calculated NETD mean, K
C     NETSDV = calculated NETD standard deviation
C     DPMEAN = calculated mean development process time, hours
C     DPSDEV = calculated development process time std. deviation
C     LNSCST = calculated afocal lens assy cost, NMU
C
C     ///////// GOAL AND CONSTRAINT TARGETS ///////////
C     NETNOM = nominal (threshold) NETD
C     NETGOL = target (ideal) NETD
C     DPNOM  = nominal (threshold) development process time
C     DPGOL  = target (ideal) development process time
C     LNSNOM = nominal (threshold) lens cost
C     LNSGOL = target (ideal) lens cost
C
C---------------------------------------
C
      REAL FNUM, FL, T0, DXY, DSTAR, TDI
      REAL FNUMC, T0C, DXYC, DSTARC, TDIC
      REAL REDES, DESCL, REDESC, DESCLC
      REAL NETD, NETSDV, DPMEAN, DPSDEV, LNSCST
      REAL NETNOM, NETGOL, DPNOM, DPGOL, LNSNOM, LNSGOL
C
C     1.0 Set the values of the local design variables (optional)
C
      FNUM  = DESVAR(1)
      FL    = DESVAR(2)
      T0    = DESVAR(3)
      DXY   = DESVAR(4) * 1E-6
      DSTAR = DESVAR(5) * 1E11
      TDI   = 1
C
C     +++++ Coded values for NET RSM calculations +++++
C
      FNUMC  = (FNUM - 2.25)*2.35386/0.75
      T0C    = (T0 - 0.5)*2.35386/0.2
      DXYC   = (DXY - 25E-6)*2.35386/5E-6
      DSTARC = (DSTAR - 1.1E11)*2.35386/4E10
      TDIC   = (TDI - 3)
C
C     2.0 Perform analysis relevant to non-linear constraints and goals
C
      DESCL  = 24 + (DSTAR - 0.7E11)*24/8E10
      DESCLC = (DESCL - 36)/6
      REDES  = 0.02 + (30E-6 - DXY)*0.12/10E-6
      REDESC = (REDES - 0.08)/0.03
C
      NETNOM = 0.2
      NETGOL = 0.05
      DPNOM  = 150
      DPGOL  = 110
      LNSNOM = 80
      LNSGOL = 1
C
      NETD = 0.1254697 + (-0.011832*DXYC) + (-0.022123*DSTARC)
     &     + (0.0018516*DXYC*DSTARC) + (0.0385671*FNUMC)
     &     + (-0.003367*DXYC*FNUMC) + (-0.006086*DSTARC*FNUMC)
     &     + (-0.024524*T0C) + (0.0021172*DXYC*T0C)
     &     + (0.0037734*DSTARC*T0C) + (-0.006695*FNUMC*T0C)
     &     + (-0.025606*TDIC) + (0.0020391*DXYC*TDIC)
     &     + (0.0037578*DSTARC*TDIC) + (-0.006773*FNUMC*TDIC)
     &     + (0.0041797*T0C*TDIC) + (0.0007129*DXYC*DXYC)
     &     + (0.0032171*DSTARC*DSTARC) + (0.0022696*FNUMC*FNUMC)
     &     + (0.0040744*T0C*T0C) + (0.0074483*TDIC*TDIC)
C
      NETSDV = 0.0302713 + (-0.00286*DXYC) + (-0.005271*DSTARC)
     &       + (0.0004286*DXYC*DSTARC) + (0.009203*FNUMC)
     &       + (-0.000776*DXYC*FNUMC) + (-0.00143*DSTARC*FNUMC)
     &       + (-0.005902*T0C) + (0.0005213*DXYC*T0C)
     &       + (0.0008875*DSTARC*T0C) + (-0.001562*FNUMC*T0C)
     &       + (-0.006079*TDIC) + (0.0004655*DXYC*TDIC)
     &       + (0.0009766*DSTARC*TDIC) + (-0.001618*FNUMC*TDIC)
     &       + (0.0009553*T0C*TDIC) + (0.0001428*DXYC*DXYC)
     &       + (0.0007025*DSTARC*DSTARC) + (0.0004885*FNUMC*FNUMC)
     &       + (0.0009593*T0C*T0C) + (0.0016884*TDIC*TDIC)
C
      DPMEAN = 130.74422 + (7.132*DESCLC) + (4.14*REDESC)
     &       + (0.201*DESCLC*REDESC) + (0.3078333*DESCLC*DESCLC)
     &       + (0.5948333*REDESC*REDESC)
C
      DPSDEV = 3.1867535 + (0.2123987*DESCLC) + (0.4876791*REDESC)
     &       + (0.0029388*DESCLC*REDESC) + (-0.032226*DESCLC*DESCLC)
     &       + (0.2811089*REDESC*REDESC)
C
      LNSCST = 33.85384 + (-83.68089*FNUM) + (9.1531936*FL)
     &       + (-47.15439*T0) + (24.736707*FNUM*FNUM)
     &       + (-3.768831*FL*FNUM) + (0.1031586*FL*FL)
     &       + (217.00534*T0*T0)
C
C**************************************************************
C
C     3.0 Evaluate non-linear constraints
C
      IF (IPATH .EQ. 1 .OR. IPATH .EQ. 2) THEN
C
C        <<< Constraint is VIOLATED when RHS is NEGATIVE. >>>
C
         CONSTR(1) = NETNOM - NETD
         CONSTR(2) = DPNOM - DPMEAN
         CONSTR(3) = LNSNOM - LNSCST
         CONSTR(4) = 100 - NETSDV
         CONSTR(5) = 100 - DPSDEV
C
      END IF
C
C     4.0 Evaluate non-linear goals
C
      IF (IPATH .EQ. 1 .OR. IPATH .EQ. 3) THEN
C
C        Minimum resolvable temperature
         GOALS(1) = NETD/NETGOL - 1
         GOALS(2) = 100*NETSDV
C        Development process time
         GOALS(3) = DPMEAN/DPGOL - 1
         GOALS(4) = DPSDEV
C        Afocal lens assembly cost
         GOALS(5) = LNSCST/LNSGOL - 1
C
      END IF
C
C     5.0 Return to calling routine
C
      RETURN
      END
C.8 DSIDES OUTPUT FROM PHASE C
“Run c11h” -> “phase C, scenario 1, one TDI stage, high starting point”
Scenario  Priority Level 1  Priority Level 2  Priority Level 3  Priority Level 4  Priority Level 5
1         NETD = 0.05       NETD std dev      LNSCST            DPMEAN            DPSDEV
2         NETD = 0.1        NETD std dev      LNSCST            DPMEAN            DPSDEV
3         DPMEAN            LNSCST            NETD = 0.1        DPSDEV            NETD std dev
4         DPSDEV            DPMEAN            LNSCST            NETD = 0.1        NETD std dev
Run       c11l   c11h   c12l   c12h   c21l   c21h   c22l   c22h
FNUM      1.50   1.50   1.51   1.51   1.50   1.51   1.50   1.50
FL        10.00  10.00  10.02  10.02  10.00  10.07  10.00  10.03
T0        0.589  0.589  0.586  0.587  0.660  0.660  0.451  0.452
DXY       30.00  30.00  29.99  30.00  30.00  30.00  29.97  30.00
DSTAR     1.275  1.275  1.267  1.259  0.700  0.705  0.700  0.702
TDI       1      1      2      2      1      1      2      2
NET       0.069  0.069  0.051  0.051  0.100  0.100  0.100  0.100
NETSD     0.016  0.016  0.011  0.011  0.023  0.023  0.023  0.023
LENSCOST  56.88  56.90  55.94  56.11  72.66  72.65  32.19  32.36
FPACOST   9.81   9.81   32.38  32.38  9.81   9.81   32.38  32.38
DPM       131.0  131.0  130.7  130.4  112.6  112.8  112.6  112.7
DPSD      3.49   3.49   3.48   3.48   2.79   2.80   2.79   2.80

Run       c31l   c31h   c32l   c32h   c41l   c41h   c42l   c42h
FNUM      1.69   1.55   1.79   1.79   1.51   1.51   1.75   1.77
FL        10.00  10.07  10.00  10.01  10.00  10.00  10.00  10.00
T0        0.374  0.314  0.312  0.311  0.341  0.341  0.339  0.353
DXY       30.00  30.00  29.99  30.00  27.12  27.12  27.12  27.12
DSTAR     0.700  0.704  0.700  0.700  0.700  0.700  0.700  0.700
TDI       1      1      2      2      1      1      2      2
NET       0.200  0.199  0.200  0.200  0.200  0.200  0.200  0.196
NETSD     0.047  0.047  0.047  0.047  0.047  0.047  0.048  0.047
LENSCOST  14.00  14.05  3.97   3.98   17.87  17.86  7.81   8.82
FPACOST   9.81   9.81   32.38  32.38  9.81   9.81   32.38  32.38
DPM       112.6  112.7  112.6  112.6  115.0  115.0  115.0  115.0
DPSD      2.79   2.80   2.79   2.79   2.43   2.43   2.43   2.43
REFERENCES
1982, The Concise Oxford Dictionary, Oxford University Press, Oxford.
1986, Dictionary of Computing, Oxford University Press, New York.
1993, The American Heritage College Dictionary, Houghton Mifflin, Boston.
Allen, J. K., Hong, J., Adyanthaya, S. and Mistree, F., 1989, "The Decision Support Problem Technique Workbook for Life Cycle Engineering," Systems Design Laboratory Report, University of Houston, Houston, Texas.

Allen, J. K., Krishmachari, R. S., Masetta, J., Pearce, D., Rigby, D. and Mistree, F., 1990, "Fuzzy Compromise: An Effective Way to Solve Hierarchical Design Problems," Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, San Francisco, NASA, pp. 141-147.

Anderson, R., Merrell, S. T., Reyes, H. M., Smittle, J. and Young, D., 1996, “Combat Vehicle Thermal Targeting System (CVTTS),” TI Technical Journal, 13(1): 18-23.

Arora, J. S., 1989, Introduction to Optimum Design, McGraw-Hill.

Asimow, M., 1962, Introduction to Design, Prentice Hall, Englewood Cliffs, NJ.

Askin, R. G. and Standridge, C. R., 1993, Modeling and Analysis of Manufacturing Systems, John Wiley & Sons, New York.

Azarm, S., Dierolf, D. A., Naft, J., Pecht, M., Richter, K. J. and Sawyer, B. T., 1988, "Decision Support Requirements in a Unified Life Cycle Engineering (ULCE) Environment," Unclassified, Institute for Defense Analyses, Alexandria, Virginia, IDA Paper P-2064, May 1988.

Badore, N. L., 1992, "Involvement and Empowerment: The Modern Paradigm for Management Success," Manufacturing Systems: Foundations of World-Class Practice, J. A. Heim and W. D. Compton, eds, Washington, D.C., National Academy Press, pp. 85-92.

Banks, J., Carson, J. S. and Nelson, B. L., 1996, Discrete-event System Simulation, Prentice Hall, Upper Saddle River, N. J.

Bernus, P. and Nemes, L., 1996, “Editorial: CERA Special Issue on Enterprise Modeling,” Concurrent Engineering: Research and Applications, 4(3): 203-206.

Bessant, J., 1991, Managing Advanced Manufacturing Technology, NCC Blackwell, Manchester.

Biles, W. E., 1984, "Design of Simulation Experiments," Proceedings of the 1984 Winter Simulation Conference (WSC), Dallas, TX, IEEE, pp. 99-104.

Blanchard, B. S. and Fabrycky, W. J., 1990, Systems Engineering and Analysis, Prentice-Hall, Englewood Cliffs, New Jersey.

Blyth, A., 1996, ACM SIGOIS Bulletin Special Issue on Enterprise Modeling, September 14.

Box, G., Hunter, W. and Hunter, J., 1978, Statistics for Experimenters, John Wiley & Sons, New York.

Box, G. E. P. and Draper, N. R., 1987, Empirical Model-Building and Response Surfaces, John Wiley & Sons, New York.

Bras, B. A., 1992, "Foundations for Designing Decision-Based Design Processes," Ph.D. Dissertation, University of Houston, Houston, Texas.

Bras, B. A. and Mistree, F., 1991, "Designing Design Processes in Decision-Based Concurrent Engineering," Proceedings SAE Aerotech '91, SAE Publication SP-886, Paper No. 912209, Long Beach, California, SAE International, pp. 15-36.

Byrne, D. M. and Taguchi, S., 1987, "The Taguchi Approach to Parameter Design," 40th Annual Quality Congress Transactions, Milwaukee, Wisconsin, American Society for Quality Control.

Chase, R. B. and Aquilano, N. J., 1995, Production and Operations Management, Irwin, Chicago, Illinois.

Chen, W., Allen, J. K., Mavris, D. and Mistree, F., 1996, “A Concept Exploration Method for Determining Robust Top-Level Specifications,” Engineering Optimization, 26: 137-158.

Chen, W., Allen, J. K., Mistree, F. and Tsui, K.-L., 1995, "Integration of Response Surface Method with the Compromise DSP in Developing a General Robust Design Procedure," Advances in Design Automation, Boston, MA, ASME, pp. 485-492.

Cheng, B. and Titterington, D. M., 1994, “Neural Networks: A Review from a Statistical Perspective,” Statistical Science, 9(1): 2-54.

Christensen, L. C., Christiansen, T. R., Jin, Y., Kunz, J. and Levitt, R. E., 1996, “Modeling and Simulation in Enterprise Integration -- A Framework and an Application in the Offshore Oil Industry,” Concurrent Engineering: Research and Applications, 4(3): 247-260.

Christian, A. D., Grasso, K. J. and Seering, W. P., 1996, "Validation Studies of an Information-Flow Model of Design," Design Theory and Methodology, Irvine, CA, ASME, 96-DETC/DTM-1306.

Clark, D. N., 1992, “A Literature Analysis of the Use of Management Science Tools in Strategic Planning,” Journal of the Operational Research Society, 43(9): 859-870.
Cressie, N., 1988, “Spatial Prediction and Ordinary Kriging,” Mathematical Geology, 20(4): 405-421.

Cressie, N., 1989, “Geostatistics,” The American Statistician, 43(4): 197-202.

De Boer, S. J., 1989, Decision Methods and Techniques in Methodical Engineering Design, Academisch Boeken Centrum, De Lier, The Netherlands.

Delen, D., Pratt, D. B. and Kamath, M., 1996, "A New Paradigm for Manufacturing Enterprise Modeling: Reusable, Multi-Tool Modeling," Winter Simulation Conference Proceedings, Coronado, CA, IEEE, pp. 985-992.

Dixon, J. R., Duffey, M. R., Irani, R. K., Meunier, K. L. and Orelup, M. F., 1988, "A Proposed Taxonomy of Mechanical Design Problems," Proceedings ASME Computers in Engineering Conference, San Francisco, California.

Draper, N. R. and Lin, D. K. J., 1990a, “Connections Between Two-Level Designs of Resolutions III and V,” Technometrics, 32(3): 283-288.

Draper, N. R. and Lin, D. K. J., 1990b, “Small Response-Surface Designs,” Technometrics, 32(2): 187-194.

Dressler, G., 1976, Organization and Management, Prentice-Hall, Englewood Cliffs, NJ.

Duffey, M. R. and Dixon, J. R., 1993, “Managing the Product Realization Process: a Model for Aggregate Cost and Time-to-market Evaluation,” Concurrent Engineering: Research and Applications, 1(1): 51-59.

Duse, M. N., Gharpure, J. T., Bhuskute, H., Kamath, M., Pratt, D. B. and Mize, J. H., 1993, "Tool-Independent Model Representation," Proceedings of the 2nd Industrial Engineering Research Conference, Norcross, GA, Institute of Industrial Engineers, pp. 700-704.

Eppinger, S. D., 1991, “Model-Based Approaches to Managing Concurrent Engineering,” Journal of Engineering Design, 2(4): 283-290.

Eppinger, S. D., Whitney, D. E., Smith, R. P. and Gebala, D. A., 1994, “A Model-Based Method for Organizing Tasks in Product Development,” Research in Engineering Design, 6(1): 1-13.

Evans, B. and Fisher, D., 1994, “Overcoming Process Delays with Decision Tree Induction,” IEEE Expert, 9(February): 60-66.

Forrester, J. W., 1962, Industrial Dynamics, MIT Press, Cambridge, MA.

Gaither, N., 1994, Production and Operations Management, Dryden Press, Fort Worth.

Galbraith, J. R., 1977, Organization Design, Addison-Wesley, Reading, MA.

Gale, T. and Eldred, J., 1996, Getting Results with the Object-Oriented Enterprise Model, SIG Books, New York, NY.
Gero, J. S., 1995, "Role of Function-Behavior-Structure Models in Design," Proceedings of the 2nd Congress on Computing in Civil Engineering, Atlanta, GA, ASCE, pp. 294-301.

Giovannitti-Jensen, A. and Myers, R. H., 1989, “Graphical Assessment of the Prediction Capability of Response Surface Designs,” Technometrics, 31(2): 159-171.

Giunta, A. A., Balabanov, V., Haim, D., Grossman, B., Mason, W. H. and Watson, L. T., 1996, "Wing Design for a High-Speed Civil Transport Using a Design of Experiments Methodology," 6th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Bellevue, WA, AIAA, Inc., pp. 168-183.

Giunta, A. A., Dudley, J. M., Narducci, R., Grossman, B., Haftka, R. T., Mason, W. H. and Watson, L. T., 1994, "Noisy Aerodynamic Response and Smooth Approximations in HSCT Design," 5th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Panama City, FL, pp. 1117-1128.

Gordon, G., 1978, System Simulation, Prentice-Hall, Englewood Cliffs, N.J.

Hameri, A. and Paatela, A., 1995, “Multidimensional Simulation as a Tool for Strategic Logistics Planning,” Computers in Industry, 27: 273-285.

Hanna, D. P., 1988, Designing Organizations for High Performance, Addison-Wesley, Reading, MA.

Hanson, W. C., 1992, "The Integrated Enterprise," Manufacturing Systems: Foundations of World-Class Practice, J. A. Heim and W. D. Compton, eds, Washington, D.C., National Academy Press, pp. 158-165.

Healy, M. J., Kowalik, J. S. and Ramsay, J. W., 1975, “Airplane Engine Selection by Optimization of Surface Fit Approximations,” Journal of Aircraft, 12(7, July): 593-599.

Heim, J. A. and Compton, W. D., 1992, "Manufacturing Systems: Foundations of World-Class Practice," Washington, D. C., National Academy Press.

Hopper, G. S., 1993, "Forward-Looking Infrared Systems," Passive Electro-Optical Systems, S. B. Campana, Ann Arbor, Michigan, ERIM and SPIE, pp. 103-155.

Hubka, V. and Eder, W. E., 1987, “A Scientific Approach to Engineering Design,” Design Studies, 8(3): 123-137.

Hunter, J. S., 1985, “Statistical Design Applied to Product Design,” Journal of Quality Technology, 17(4): 210-221.

Hunter, J. S. and Naylor, T. H., 1970, “Experimental Designs for Computer Simulation Experiments,” Management Science, 16(7): 422-434.

Ignizio, J. P., 1985, “Multiobjective Mathematical Programming via the MULTIPLEX Model and Algorithm,” European Journal of Operational Research, 22: 338-346.
Ignizio, J. P., 1990, An Introduction to Expert Systems: The Methodology and its Implementation, McGraw-Hill, New York.

Ignizio, J. P., Wyskida, R. M. and Wilhelm, M. R., 1972, “A Rationale for Heuristic Program Selection and Evaluation,” Industrial Engineering, 4(1): 16-19.

Joryz, R. H. and Vernadat, F. B., 1990, “CIM-OSA Part 1: Total Enterprise Modeling and Function View,” International Journal of Computer Integrated Manufacturing, 3(3): 144-156.

Journel, A. G., 1986, “Geostatistics: Models and Tools for the Earth Sciences,” Mathematical Geology, 18(1): 119-140.

Kamal, S. Z., 1990, "The Development of Heuristic Decision Support Problems for Adaptive Design," Ph.D. Dissertation, Department of Mechanical Engineering, University of Houston, Houston, Texas.

Karandikar, H. and Mistree, F., 1992, “Designing Composite Material Pressure Vessel for Manufacture: A Case Study for Concurrent Engineering,” Engineering Optimization, 18(4): 235-262.

Karandikar, H. M. and Mistree, F., 1991, "Modeling Concurrency in the Design of Composite Structures," Structural Optimization: Status and Promise, M. P. Kamat, Washington, D.C., AIAA.

Kateel, G., Kamath, M. and Pratt, D., 1996, "An Overview of CIM Enterprise Modeling Methodologies," Winter Simulation Conference Proceedings, Coronado, CA, IEEE, pp. 1000-1007.

Keeney, R. L. and Raiffa, H., 1976, Decisions With Multiple Objectives: Preferences and Value Tradeoffs, John Wiley & Sons, New York.

Kleijnen, J. P. C., 1987, Statistical Tools for Simulation Practitioners, Marcel Dekker, New York.

Koch, P. N., 1997, "Hierarchical Modeling and Robust Synthesis of Complex Systems in Preliminary Design," Doctoral Dissertation, G. W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology.

Koch, P. N., Allen, J. K. and Mistree, F., 1996, "Robust Concept Exploration for Configuring Turbine Propulsion Systems," 1996 ASME Design and Automation Conference, Irvine, CA, Paper No. 96-DETC/DAC-1285.

Kuhn, T. S., 1960, The Structure of Scientific Revolutions, University of Chicago Press, Chicago.

Langley, P. and Simon, H. A., 1995, “Applications of Machine Learning and Rule Induction,” Communications of the ACM, 38(11, November): 55-64.
Laslett, G. M., 1994, “Kriging and Splines: An Empirical Comparison of Their Predictive Performance in Some Applications,” Journal of the American Statistical Association, 89(June): 391-400.

Law, A. M. and Kelton, W. D., 1991, Simulation Modeling and Analysis, McGraw-Hill, New York.

Leech, W. J., 1986, “A Rule Based Process Control Method With Feedback,” Advances In Instrumentation, 41: 169-175.

Lewis, K., 1996, "An Algorithm for Integrated Subsystem Embodiment and System Synthesis," Doctoral Dissertation, G. W. Woodruff School of Mechanical Engineering, Georgia Tech.

Lewis, K. and Mistree, F., 1996, "Foraging-Directed Adaptive Linear Programming: An Algorithm for Solving Nonlinear Mixed Discrete/Continuous Design Problems," ASME Design Automation Conference, Irvine, CA, 96-DETC/DAC-1601.

Liles, D. H. and Presley, A. R., 1996, "Enterprise Modeling within an Enterprise Engineering Framework," Winter Simulation Conference Proceedings, Coronado, CA, IEEE, pp. 993-999.

Lin, S., 1975, “Heuristic Programming as an Aid to Network Design,” Networks, 5(1): 33-43.

Little, J. D. C., 1992, "Are There 'Laws' of Manufacturing?," Manufacturing Systems: Foundations of World-Class Practice, J. A. Heim and W. D. Compton, Washington, D.C., National Academy Press, pp. 180-188.

Lucas, J. M., 1974, “Optimum Composite Designs,” Technometrics, 16(4): 561-567.

Lucas, J. M., 1976, “Which Response Surface Design is Best,” Technometrics, 18(4): 411-417.

Lucas, J. M., 1991, "Using Response Surface Methodology to Achieve a Robust Process," 45th Annual Quality Congress Transactions, Milwaukee, WI, ASQC, pp. 383-392.

Lucas, J. M., 1994, “Using Response Surface Methodology to Achieve a Robust Process,” Journal of Quality Technology, 26(4): 248-260.

Martin, J., 1983, Managing the Data Base Environment, Prentice-Hall, Englewood Cliffs, NJ.

Matheron, G., 1963, “Principles of Geostatistics,” Economic Geology, 58: 1246-1266.

Miller, J. G., 1978, Living Systems, McGraw-Hill Book Company, New York.

Mistree, F., Hughes, O. F. and Bras, B. A., 1993a, "The Compromise Decision Support Problem and the Adaptive Linear Programming Algorithm," Structural Optimization: Status and Promise, M. P. Kamat, Washington, D.C., AIAA, pp. 247-286.
Mistree, F., Hughes, O. F. and Phuoc, H. B., 1981, “An Optimization Method for theDesign of Large, Highly Constrained, Complex Systems,” EngineeringOptimization, 5(3): 141-144.
Mistree, F., Kamal, S. Z. and Bras, B. A., 1989a, "DSIDES: Decision Support in theDesign of Engineering Systems," Systems Design Laboratory Report, Universityof Houston, Houston, Texas, August 1989.
Mistree, F., Karandikar, S., Bras, B. A. and Bollam, T., 1991, "Formulation, Storageand Solution of Decision Support Problems in a Concurrent EngineeringEnvironment," Proceedings Third National Symposium on ConcurrentEngineering, Washington, D.C., pp. 625-645.
Mistree, F. and Muster, D., 1990, "Conceptual Models for Decision-Based ConcurrentEngineering Design for the Life Cycle," Proceedings of the Second NationalSymposium on Concurrent Engineering, Morgantown, West Virginia, pp. 443-467.
Mistree, F., Muster, D., Shupe, J. A. and Allen, J. K., 1989b, "A Decision-BasedPerspective for the Design of Methods for Systems Design," Recent Experiences inMultidisciplinary Analysis and Optimization, Hampton, Virginia, NASA.
Mistree, F., Muster, D., Srinivasan, S. and Mudali, S., 1990a, “Design of Linkages: AConceptual Exercise in Designing for Concept,” Mech. Mach. Theory, 25(3): 273-286.
Mistree, F., Smith, W. F., Bras, B., Allen, J. K. and Muster, D.,1990b, "Decision-BasedDesign: A Contemporary Paradigm for Ship Design," Transactions, Society ofNaval Architects and Marine Engineers, Jersey City, New Jersey, pp. 565-597.
Mistree, F., Smith, W. F. and Bras, B. A.,1993b, "A Decision-Based Approach toConcurrent Engineering," Handbook of Concurrent Engineering, H. R. Paresai andW. Sullivan, New York, Chapman & Hall, New York, pp. 127-158.
Mitchell, T., Morris, M. and Ylvisaker, D., 1988, “Existence of Smoothed Stationary Processes on an Interval,” Working Paper.
Montgomery, D. C., 1997, Design and Analysis of Experiments, John Wiley & Sons, New York.
Montgomery, D. C. and Evans, D. M., Jr., 1975, “Second-Order Response Surface Designs in Computer Simulation,” Simulation, 29(6): 169-178.
Mujtaba, M. S., 1994, “Enterprise Modeling and Simulation: Complex Dynamic Behavior of a Simple Model of Manufacturing,” Hewlett-Packard Journal, 45(December): 80-112.
Myers, R. H., Khuri, A. I. and Carter, W. H., 1989, “Response Surface Methodology: 1966-1988,” Technometrics, 31(2): 137-157.
Myers, R. H. and Montgomery, D. C., 1995, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, John Wiley & Sons, New York.
Nagata, T., Nagata, Y. and Koshimitsu, H., 1993, “Building a CIM System for Compound Plant: Utilization of a Distributed Processing System,” Computers & Industrial Engineering, 24(4): 561-569.
Nakahara, T. and Isono, Y., 1992, “Strategic Planning for Canon: the Crisis and the New Vision,” Long Range Planning, 25(1): 63-72.
Nance, R. E., 1995, "Simulation Programming Languages: An Abridged History," Proceedings of the 1995 Winter Simulation Conference, Arlington, VA, pp. 1307-1313.
Ngwenyama, O. and Grant, D. A., 1994, “Enterprise Modeling for CIM Information Systems Architectures: An Object-Oriented Approach,” Computers & Industrial Engineering, 26(2): 279-293.
Noori, H., 1990, Managing the Dynamics of New Technology, Prentice Hall, Englewood Cliffs, NJ.
Nukala, M. V., Eppinger, S. D. and Whitney, D. E., 1995, "Generalized Models of Design Iteration Using Signal Flow Graphs," Design Theory and Methodology, Boston, MA, ASME, pp. 413-422.
O'Brien, C. and Smith, S. J. E., 1993, “Design of the Decision Process for Strategic Investment in Advanced Manufacturing Systems,” International Journal of Production Economics, 30: 309-322.
O'Sullivan, D., 1994, Manufacturing Systems Redesign: Creating the Integrated Manufacturing Environment, Prentice Hall, Englewood Cliffs, NJ.
Pahl, G. and Beitz, W., 1988, Engineering Design - A Systematic Approach, The Design Council/Springer-Verlag, London/Berlin.
Peplinski, J. D., 1994, "Design for Manufacture at the Function Level of Abstraction," M.S. Thesis, School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia.
Peplinski, J. D., Allen, J. K. and Mistree, F., 1996a, "Integrating Product Design with Manufacturing Process Design Using the Robust Concept Exploration Method," Design Theory and Methodology, Irvine, CA, ASME, Paper No. 96-DETC/DTM-1502.
Peplinski, J. D., Allen, J. K. and Mistree, F., 1996b, "Manufacturing Enterprise Design: Modeling the Organization, its Products, and its Processes," ISSS Conference, Louisville, KY, International Society for the Systems Sciences, pp. 483-494.
Petrie, C. J., 1992, "Introduction," Proceedings of the First International Conference on Enterprise Modeling, Cambridge, Massachusetts, MIT Press, pp. 1-17.
Phadke, M. S., 1989, Quality Engineering using Robust Design, Prentice Hall, Englewood Cliffs, New Jersey.
Ramberg, J. S., Sanchez, S. M., Sanchez, P. J. and Hollick, L. J., 1991, "Designing Simulation Experiments: Taguchi Methods and Response Surface Metamodels," Proceedings of the 1991 Winter Simulation Conference (WSC), Phoenix, AZ, IEEE, pp. 167-176.
Ratches, J. A., Lawson, W. R., Obert, L. P., Bergemann, R. J., Cassidy, T. W. and Swenson, J. M., 1975, "Night Vision Laboratory Static Performance Model for Thermal Viewing Systems," ECOM, ECOM-7043.
Reddy, R. P. and Mistree, F., 1992, "Modeling Uncertainty in Selection Using Exact Interval Arithmetic," Design Theory and Methodology, Scottsdale, AZ, American Society of Mechanical Engineers, pp. 193-201.
Richmond, B., 1994, ithink Business Applications Guide, High Performance Systems, Hanover, NH.
Ross, P. J., 1988, Taguchi Techniques for Quality Engineering, McGraw-Hill, New York, NY.
Rumelhart, D. E., Widrow, B. and Lehr, M. A., 1994, “The Basic Ideas in Neural Networks,” Communications of the ACM, 37(3): 87-92.
Sacks, J., Schiller, S. B. and Welch, W. J., 1989a, “Designs for Computer Experiments,” Technometrics, 31(1): 41-47.
Sacks, J., Welch, W. J., Mitchell, T. J. and Wynn, H. P., 1989b, “Design and Analysis of Computer Experiments,” Statistical Science, 4(4): 409-435.
Sargent, R. G., 1994, "Verification and Validation of Simulation Models," Proceedings of the 1994 Winter Simulation Conference, pp. 77-87.
Senge, P. M., 1990, The Fifth Discipline, Doubleday, New York.
Shoemaker, A. C., Tsui, K. L. and Wu, J., 1991, “Economical Experimentation Methods for Robust Design,” Technometrics, 33(4): 415-427.
Shupe, J. A., 1988, "Decision-Based Design: Taxonomy and Implementation," Ph.D. Dissertation, Department of Mechanical Engineering, University of Houston, Houston, Texas.
Simon, H. A., 1976, Administrative Behavior, The Free Press, New York, NY.
Simon, H. A., 1977, The New Science of Management Decision, Prentice-Hall, Englewood Cliffs, NJ.
Simon, H. A., 1981, The Sciences of the Artificial, The MIT Press, Cambridge, MA.
Simon, H. A., 1987, “Two Heads Are Better Than One: The Collaboration Between AI and OR,” Interfaces, 17(4): 8-15.
Simon, H. A., 1990, “Prediction and Prescription in Systems Modeling,” Operations Research, 38(1): 7-14.
Simpson, T. W., Chen, W., Allen, J. K. and Mistree, F., 1996, "Conceptual Design of a Family of Products Through the use of the Robust Concept Exploration Method," 6th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Bellevue, WA, AIAA, Inc., pp. 1535-1545.
Simpson, T. W., Peplinski, J. D., Koch, P. N. and Allen, J. K., 1997, "On the Use of Statistics in Design and the Implications for Computer Experiments," Design Theory and Methodology, Sacramento, CA, ASME, 97-DETC/DTM-3881.
Sippl, C. J. and Sippl, R. J., 1980, Computer Dictionary and Handbook, H. W. Sams & Co.
Smith, W. F., 1988, "AUSEVAL - The Application of the Decision Support Problem Technique to Ship Design," Proceedings International Workshop on Engineering Design and Manufacturing Management, Melbourne, Australia, pp. 127-147.
Steward, D. V., 1981, Systems Analysis and Management: Structure, Strategy and Design, Petrocelli Books, Inc., New York, NY.
Swann, K. and O'Keefe, W. D., 1990, “Advanced Manufacturing Technology: Investment Decision Process. Part I,” Management Decision, 28(1): 20-31.
Taylor, E. S., 1959, M.I.T. Report.
Tumay, K. and Harrell, C. R., 1995, Simulation Made Easy: A Manager's Guide, Industrial Engineering and Management Press, Norcross, GA.
Unal, R., Stanley, D. O. and Joyner, C. R., 1994, “Parametric Model Building and Design Optimization Using Response Surface Methods,” Journal of Parametrics, 14(1): 81-96.
Vadde, S., Krishnamachari, R. S., Allen, J. K. and Mistree, F., 1991, "The Bayesian Compromise Decision Support Problem for Hierarchical Design Involving Uncertainty," Advances in Design Automation, G. A. Gabriele, ed., pp. 209-218.
Vanderplaats, G. N., 1984, Numerical Optimization Techniques for Engineering Design: With Applications, McGraw-Hill, New York.
Venter, G., Haftka, R. T. and Starnes, J. H., Jr., 1996, "Construction of Response Surfaces for Design Optimization Applications," 6th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Bellevue, WA, AIAA, Inc., pp. 548-564.
Vernadat, F. B., 1996, “Enterprise Integration: On Business Process and Enterprise Activity Modeling,” Concurrent Engineering: Research and Applications, 4(3): 219-228.
Waddock, S. A. and Isabella, L. A., 1989, “Strategy, Beliefs about the Environment, and Performance in a Banking Simulation,” Journal of Management, 15(4): 617-632.
Welch, R. V. and Dixon, J. R., 1992, "Representing Function, Behavior and Structure During Conceptual Design," Design Theory and Methodology, Scottsdale, AZ, ASME, pp. 11-18.
Welch, W. J., Buck, R. J., Sacks, J., Wynn, H. P., Mitchell, T. J. and Morris, M. D., 1992, “Screening, Predicting, and Computer Experiments,” Technometrics, 34(1): 15-25.
Welch, W. J., Yu, W. K., Kang, S. M. and Sacks, J., 1990, “Computer Experiments for Quality Control by Parameter Design,” Journal of Quality Technology, 22(1): 15-22.
Wood, K. L., Antonsson, E. K. and Beck, J. L., 1989, "Comparing Fuzzy and Probability Calculus for Representing Imprecision in Engineering Design," Design Theory and Methodology, ASME, pp. 187-203.
VITA
Jesse Peplinski was born in Stevens Point, Wisconsin on February 27, 1970. He grew up
in central Wisconsin and attended Stevens Point Area Senior High School. He earned his
Bachelor of Science degree in Engineering Science from Harvard College in 1992,
graduating magna cum laude. He earned his Master of Science degree in Mechanical
Engineering from the Georgia Institute of Technology in 1994. His graduate work was
funded in part by a National Science Foundation graduate research fellowship and in part
by NSF Grant DMI-96-12327. Upon graduation, he will join the Texas Instruments
Defense Systems Group as a Systems Engineer.