DEPARTMENT OF MATHEMATICAL SCIENCES
CLEMSON UNIVERSITY
Clemson, South Carolina

Complex Systems Engineering Design under Uncertainty and Risk

by
J.A. Reneke and M.W. Wiecek

Technical Report 2005/09/RW
Complex Systems Engineering Design under Uncertainty and Risk
James A. Reneke1 and Margaret M. Wiecek2*
1 Department of Mathematical Sciences, Clemson University, Clemson, SC, USA, [email protected]
2 Department of Mechanical Engineering, Clemson University, Clemson, SC, USA, [email protected]
Abstract
A novel modeling and decision-making framework for engineering design of complex systems operating
under uncertainty and risk is proposed. The framework goes beyond the conventional optimization-based
scheme and builds upon emerging ideas calling for the inclusion of comprehensive uncertainty management in engineering systems design, thus providing options for design flexibility and robustness.
The complex system development process is decomposed into an assessment process and a design
process, which operate in the same space of uncertainties but from two different knowledge bases, interact
with each other, and are further decomposable into simpler components. Decomposition of each of the two
processes into components is based on managers’ or designers’ experience and knowledge while
computational models of the components use normal random fields defined over the common space of
uncertainties. The assessment process is decomposed into interacting criteria that evaluate overall designs
produced by the design process. The design process is decomposed into interacting tasks to be performed by
the system and the associated design team is charged with finding methods (partial designs) meeting the
requirements specified in the assessment process.
A methodology is presented for rationalizing the development process including consideration of
communication between the two processes. System performance in the entire space of uncertainties is
measured by risk, modeled as the variance of system performance expressed as a function of the uncertainties treated as independent variables. Using stochastic and multi-criteria analyses, the methodology creates and evaluates
design options, thus calculating the value of engineering flexibility. System robustness, understood as the system's ability to perform satisfactorily over the space of uncertainties, is also addressed.
The methodology is demonstrated on the automotive vehicle design process selected as an exemplary
complex undertaking due to the large number of disciplines and associated interdisciplinary couplings, the
size and type of manpower concurrently involved in the process, and accompanying computational tasks.
Traditionally, the vehicle design process has been handled within mathematical optimization, which has been
strengthened by considerations of design robustness and reliability in the presence of uncertainty. The
proposed approach unifies technical vehicle design, manufacturing, and management concerns into broadly
understood engineering design in a space of common uncertainties and offers the capability of active
uncertainty management in the sense that designers and managers can exercise decisions as circumstances
warrant. The process of vehicle development is viewed as an interactive decision process between a
management team charged with the exploitation of an economic opportunity, which includes management of
the development of a vehicle design, supply, manufacturing, distribution, and marketing chains, and
investments in those chains, and a design team charged with producing a vehicle design meeting
requirements of the management team and regulatory agencies while minimizing performance risk.
Keywords: design, complex systems, vehicle design, decomposition, coordination, uncertainty, risk
* On leave from the Department of Mathematical Sciences, Clemson University, Clemson, SC 29634-0975
1. Introduction
The science of design assumes a decision-making paradigm for the complex systems development process,
which has been substantiated by the application of mathematical optimization as a tool for modeling and
solving the underlying decision problems. In addition to identifying optimal structural and control
configurations of physical artifacts, optimization methods have been applied to decomposition of systems
into subsystems and integration of optimal subsystem designs into the final overall design. An automotive
vehicle is a system of subsystems and components whose design is a complex undertaking due to a large
number of disciplines and associated interdisciplinary couplings, the size and type of manpower concurrently
involved in the process, and accompanying computational tasks. Traditionally, the vehicle design process
has been handled within a mathematical optimization framework. Due to the complexity and multiplicity of
entities engaged in this process, optimization-based vehicle design naturally lends itself to decomposition of
the entire problem into smaller sub-problems, each represented by a smaller optimization problem.
Classes of conventional decomposition methodologies for complex design problems include (1) object
decomposition into physical components; (2) aspect decomposition into knowledge domains which has been
the original motivation for multidisciplinary optimization (MDO) [Ref. 1-5]; (3) sequential decomposition by
directed flow of elements or information [Ref. 6-13]; (4) model-based decomposition by mathematical
functional representations [Ref. 14]. Object and aspect decomposition assume a “natural” decomposition of
the problem, while the other two types result from the modeling and optimization methodology used in
design. Michelena and Papalambros [Ref. 14] offer a critique of decomposition methods: 1) a drawback of
object decomposition is that in large and highly integrated systems, drawing ‘boundaries’ around physical
components is very subjective; 2) aspect decomposition, often defined by management considerations, may
fail to account for disciplinary coupling; 3) sequential decomposition presumes unidirectionality of design
information flows, which contradicts the interactive and cooperative behavior of components.
In particular, multidisciplinary optimization (MDO) has emerged as a scientific discipline offering a
comprehensive approach to engineering system design. Sobieski and Kodiyalam [Ref. 15] provide a state-of-the-art study of MDO in the context of methods, requirements and applications to vehicle design. Belie [Ref. 16] recognizes that non-technical issues such as data collection and manipulation, collaboration within and among design teams, multiplicity of disciplines and specializations, and company structure slow down the
widespread implementation of MDO in industry. Blouin et al. [Ref. 17] contrast MDO methods with
analytical target cascading (ATC), a coordination method based on hierarchical model decomposition which
has been of recent interest to the engineering design community, and call for more comparative studies to
address issues of complexity. Michelena et al. [Ref. 18] apply ATC to a heavy tactical truck while Kim et al. [Ref. 19] apply it to a chassis design problem. They show that ATC is useful in reducing costly design iterations late in the vehicle development process.
The presence of uncertainty additionally complicates the complex systems development problem. While
in general, uncertainty is understood as the inability to determine the true state of affairs of a system, there
are several schools of thought defining and modeling uncertainty across engineering and sciences. One of the
pioneers studying uncertainty is Knight [Ref. 20], who makes a distinction between uncertainty and risk in economics and whose approach we follow. Randomness of the system due to incomplete knowledge, arising when a description of the system cannot be presented with complete confidence because of a lack of understanding or a limitation of knowledge, cannot be quantified and is referred to as uncertainty. On the other
hand, randomness of the system caused by stochastic variability resulting from inherent fluctuations that the
system experiences with respect to time, space, or the system’s individual characteristics is quantified as risk.
The engineering community distinguishes between aleatory and epistemic uncertainty [Ref. 21].
Aleatory uncertainty results from inherent variations associated with the system or its environment and it is
most commonly modeled with probability theory. Epistemic uncertainty results from lack of knowledge
about the system or the environment due to lack of data, limited understanding, or occurrence of fault events.
In this case, the use of probability theory is limited and other mathematical theories such as evidence theory,
possibility theory, interval analysis, etc., have been applied. In any case, the mathematical representation of
epistemic uncertainty remains a challenging research issue.
In the operations research community, uncertainty has been modeled with stochastic programming [Ref.
22-23], which uses probabilistic information about the problem data, and robust optimization [Ref. 24-25],
which integrates goal programming formulations with a scenario-based description of the data. Zimmermann advocates that uncertainty modeling should not be done context-free and that there is no single uncertainty theory that could model all types of uncertainty [Ref. 26]. He suggests an approach to determine a suitable method to model uncertainty in a context-dependent manner.
The conventional optimization-based framework for engineering design has also been extended to
account for uncertainty. Type I and Type II uncertainty have been proposed to model variations in design
evaluation due to variations in random design parameters and random design variables, respectively. This
approach resulted in a robust design formulation in which the mean and the variance of the design objective
function are optimized [Ref. 27]. Chen et al. [Ref. 28], Shimoyama et al. [Ref. 29], and Dai et al. [Ref. 30]
optimize the vehicle performance in the robust design framework. Variations in design feasibility have been
captured by reliability-based design in which the design feasibility is modeled by the probability of
constraint satisfaction (reliability) assuming that complete probabilistic distributions describing the
stochastic nature of inputs are available [Ref. 31-35]. Several methods for modeling the design feasibility in
the presence of input variations are reviewed and compared by Du and Chen [Ref. 36]; some of them assume that such complete probabilistic distributions are available while others do not. More recent efforts have focused on integrating robust design with reliability-based design [Ref. 37],
or using fuzzy set theory in design. Mourelatos and Zhou [Ref. 38] introduce possibility-based design in
which uncertainty is represented by fuzzy inputs. This approach yields a conservative design compared with
reliability-based designs obtained with different probability distributions.
Other researchers have undertaken efforts to develop a comprehensive approach to design. Zhao and Shah
[Ref. 39] propose a normative framework accounting for tradeoffs between design benefit and manufacturing
cost under design, manufacturing, and economic uncertainties. They recognize that the conversion of design
utility into a market value remains a challenging issue. Kim et al. [Ref. 40] and Michalek et al. [Ref. 41]
examine engineering design in the broader context of the company’s utility, profit, marketing decisions and
customer satisfaction. Wassenaar et al. [Ref. 42] include uncertain customer preferences in vehicle design by using discrete choice analysis to model uncertain customer demand. A vehicle engine case study
demonstrates the approach. Hossoy et al. [Ref. 43] model multiple customer preferences for vehicle interior
design. Cooper et al. [Ref. 44] propose an enterprise decision model for design of a dual-use vehicle under
uncertain demand in commercial and military markets.
A recent study by de Neufville et al. [Ref. 45] calls for even more comprehensive approaches to
uncertainty management for engineering systems planning and design. They claim that “the unexpected
developments that surround our environment can be actively managed through flexible designs that can
adjust to changed conditions”. De Neufville et al. admit that designing for uncertainty changes designers’
perspectives, namely, the conventional design meeting fixed specifications typically set outside the
engineering process should evolve into design for flexibility and robustness with special consideration given
to various scenarios and events that expand the domain of effective engineering design. Flexible designs
should not only account for potential system failures but also for opportunities resulting from the
consideration of a whole spectrum of outcomes and changing conditions. The term management designates
an active and proactive response to uncertainty. The active response enables the system to exploit upside opportunities resulting from uncertainty rather than suffer downside consequences. For instance, a proactive response may change the load on the system and thus improve its performance.
The authors believe that in the presence of uncertainty, conventional types of decomposition used in
design optimization are neither sufficient for making rational design decisions nor capable of yielding
flexible designs for comprehensively managing uncertainty.
First, those decompositions are driven by physics-based considerations which let designers use their
knowledge and experience about the system only within the category imposed by the decomposition used for
design (knowledge of physical components, disciplines, or mathematical models). However, when designing
a complex system that performs under uncertainty, the designer may not have detailed knowledge of every
component or its mathematical model. Instead, the designer may have the personal insight and wisdom to
make design decisions predicated on system performance and the capability of using them to decompose the
system based on performance before physics-based decomposition is employed. Batill et al. [Ref. 46]
recognize that many decisions in the design process are based upon individual or corporate experience that
had not formally been archived until very recently. They also believe that even though this kind of
experience is currently recorded, quantifying the uncertainty associated with it and the related risk is usually
not possible.
Second, those decompositions typically employ mathematical optimization tools which result in modeling
and computational limitations. If decomposed sub-models have a special structure (e.g., linear, quadratic,
polynomial models, etc.), properties (e.g., unimodality, convexity, etc.), and simple interactions between
each other (linking variables), then mathematical optimization can successfully provide their individual
optimal designs and the overall optimal design. However, if the sub-models are nonlinear and the interactions between them are more complex, mathematical optimization does not offer effective solution techniques for complex engineering systems despite very significant advances during the last decades.
Additionally, mathematical optimization limits uncertainty modeling because the deterministic version of the
optimization problem has to remain solvable.
As mentioned above, the authors follow the distinction between un-quantified randomness referred to as
uncertainty and quantified randomness referred to as risk. In the literature, risk of system performance has
been modeled in various ways. In economics, the (mathematical) variance of the system performance
function is used to represent risk [Ref. 47] while in operations research risk is defined as the triplet
addressing the questions of what happens, with what likelihood and what consequences [Ref. 48]. In
engineering, risk or ‘expected loss’ is defined as the product of the consequences of an unfavorable event
and the probability of its occurrence [Ref. 49] and is typically related to system faults leading to reduced
safety. Haimes examines a variety of optimization-based approaches to risk modeling, assessment, and
management in systems engineering [Ref. 50].
The authors have recently developed a performance-based methodology for complex systems operating
under uncertainty, which remains within the decision-making paradigm but goes beyond traditional
optimization and hence offers a novel modeling and decision-making capability of choosing alternatives so
that the system performs best [Ref. 51-52]. This new approach is driven by the designer's insight and wisdom to
conceptually decompose the system into a multi-level hierarchy of interacting sub-systems (components)
from the top down and to make decisions predicated on system performance from the bottom up.
Computational models of the components use normal random fields defined over a space of uncertainties.
Normal random fields are determined by their means and covariances and are the sum of a deterministic
mean field and a zero mean random field that can be modeled as a linear transformation of a Wiener field.
The linear transformation for discrete fields has a matrix representation determined by the discrete
covariance kernel of the field. System performance in the entire space of uncertainties is measured by risk, modeled as the variance of system performance expressed as a function of the uncertainties treated as independent variables. The approach employs stochastic and decision analyses to screen feasible alternatives at each level
of decomposition and effectively account for conditions of uncertainty and risk as well as for the interaction
between the sub-systems. Preliminary versions of the approach have been applied to decision making for
service sector systems [Ref. 53] and portfolio selection [Ref. 54].
In this paper, we revise the performance-based methodology and use it to propose a modeling and
decision-making framework for the complex systems development process, which accounts for management
decisions and engineering design decisions for complex systems operating under uncertainty. This
methodology has the capability of risk quantification and hence the identification of a minimum risk design
option while providing risk information for other options. Using the words of Batill et al. [Ref. 46], the
approach makes use of individual or corporate experience and does “quantify the associated uncertainty” and
risk. In agreement with de Neufville et al. [Ref. 45], the approach creates and evaluates design options, thus
calculating the value of engineering flexibility. System robustness, understood as the system's ability to perform satisfactorily over the space of uncertainties, is also addressed.
The proposed framework unifies technical design, manufacturing, and management concerns into broadly
understood engineering design in a space of common uncertainties. The development process is decomposed
into an assessment process carried out by the management team and a design process conducted by the
design team. The framework encompasses two major stages, modeling and decision making. The modeling is
based on prior designers' and managers' knowledge and experience and therefore yields decisions that are (longer-term) tactical or strategic rather than (short-term) operational. Concerns, whether on the management side or the design side, are modeled as either uncertainties, risk, or system inputs. The art of modeling is sorting out the concerns to produce a satisfactory decision model, while the decision problem is choosing
from among available alternatives. Additionally, the framework offers the capability of active uncertainty
management in the sense that designers and managers can exercise certain decisions as circumstances warrant.
The selected mathematical setting is identical to that used for the performance-based methodology.
System inputs and performances are modeled as normal random fields. Models constructed of linear
transformations obtained from the covariance kernels of those fields provide a framework for decisions both
for the management team and the design team. The models of each team, assessment models for the
management team and design models for the design team, can be assumed unknown by the other team. The
key to the development methodology is passing sufficient information between the teams to enable the
modeling/decision process for each team.
The mathematical tools used in the performance-based methodology are introduced in Section 2, and the
revised methodology as a generic procedure applicable to a complex system is presented in Section 3. A
description of the development process in the context of vehicle design is given in Section 4 while Section 5
presents a computational example. Section 6 concludes the paper.
2. Mathematical foundations
In this section we give a brief review of mathematical concepts and present new mathematical tools we
employ in the modeling and computational part of the performance-based methodology. We define general
random functions and then focus on specific random functions such as random processes and random fields.
The new mathematical tools include numerical representation of these functions in the form of their
discretization and simulation, an algorithm for constructing stochastic linearizations of separable random
fields, and functional representation of variance.
2.1 Random functions
Let
• Rm denote the Euclidean space of dimension m, m ≥ 1
• Ω denote a sample space (universal set of outcomes)
• A be a σ-algebra of sets in Ω. (A collection of subsets Aj ⊂ Ω such that (1) if Aj ∈ A then Ajc ∈ A, and (2) if {Aj, j = 1, 2, …} ⊂ A then ⋃ j Aj ∈ A, is called a σ-algebra of sets in Ω.)
• Rm be the Borel algebra in Rm (the minimum σ-algebra over the collection of open sets in Rm)
• X be a (real-valued) random vector defined as a mapping X : (Ω, A) → (Rm, Rm). Note that if m = 1, X is a random variable.
Introduce a probability measure P on (Ω, A) defined as a set function on A satisfying (1) P(A) ≥ 0 ∀ A ∈ A; (2) P(∅) = 0, P(Ω) = 1; (3) P(⋃ j Aj) = Σ j P(Aj) for all Aj ∈ A with Ai and Aj pairwise disjoint, i ≠ j. The triplet (Ω, A, P) is referred to as the probability space and is commonly used to describe any random experiment.
Let T ⊆ Rn be an arbitrary indexed parameter set. Real random processes defined as functions mapping
elements (t, ω) ∈ T × Ω into real random variables F(t, ω) are often used in engineering modeling and
analysis. More generally, a real random function is a function whose values are real random vectors. A
random function maps elements (t, ω) ∈ T × Ω into a family of random vectors {F(t) = F(t, ω) | t ∈ T, ω ∈
Ω} defined on the probability space (Ω, A, P) and taking values in the measurable space (Rm, Rm).
If n = 1, F(t) is called a random process; if n > 1 and m = 1, F(t) is called a scalar random field; if m > 1,
F(t) is called a vector random field. For every t ∈ T, the A–measurable random vector F(t) is called the state
of the function at t. For every ω ∈ Ω, the mapping t → F(t, ω) defined on T and taking values in Rm is called
a sample function (trajectory, field).
Let E[ . ], var[ . ], and cov[ . , . ] denote the expectation, variance, and covariance operators, respectively.
The role played for a single random variable by its mean and variance is played for random functions by its
mean value function and its covariance kernel. For every t ∈ T, define the mean value function of the random
function as
m(t) = E[ F(t) ]
(1)
and the variance function of the random function as
var[ F(t) ] = E[ (F(t) – m(t)) (F(t) – m(t))T ]
(2)
For every t1, t 2 ∈ T, define the covariance kernel of the random function as
K(t1, t2) = cov[ F(t1), F(t2) ] = E[ (F(t1) – m(t1)) (F(t2) – m(t2))T ]
(3)
2.2 Wiener process
The standard Wiener process (or field) constitutes a class of random functions with special properties. Let n
= m = 1, and t = s ∈ S = T ⊆ R1. The standard Wiener process (also known as the Brownian motion process)
is the random process {W(s) = W(s, ω) | s ∈ S, ω ∈ Ω} defined on the probability space (Ω, A, P) such that
(1) for every ω ∈ Ω, sample functions s → W(s, ω) are continuous in S,
(2) for every ω ∈ Ω, W(0, ω) = 0,
(3) for every s1, s2 ∈ S, s1 < s2, the increments dW(s1, s2) = W(s2) – W(s1)
(i) are normally distributed (that is, W(s) is normally distributed ∀ s ∈ S),
(ii) have E[ W(s2) – W(s1) ] = 0 (that is, E [ W(s) ] = 0 ∀ s ∈ S),
(iii) have var[ W(s2) – W(s1) ] = s2 – s1,
(iv) are a sequence of independent random variables, i.e., if [s1, s2] ∩ [s3, s4] is empty or has no
interior then E[ dW(s1, s2) dW(s3, s4) ] = 0.
Note that the increments W(s2) – W(s1) over disjoint intervals are independent normal random variables with mean zero and variance equal to the interval's length
regardless of the value W(s1) at the beginning of the interval. More generally, the Wiener process produces
normal random variables with
m(s) = 0 for every s ∈ S
(4)
var[ W(s) ] = σ2 s for every s ∈ S
(5)
K(s1, s2) = σ2 min (s1, s2) for every s1, s2 ∈ S
(6)
where σ2 is the Wiener process parameter (σ2 = 1 for the standard Wiener process).
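Property (iii) suggests a direct way to simulate discretizations of the standard Wiener process: accumulate independent normal increments whose variance equals the step length. The following sketch is our own illustration (not part of the original report; grid sizes are arbitrary) and checks Eq. (5) empirically.

    import numpy as np

    # Simulate sample paths of the standard Wiener process on [0, S] by cumulating
    # independent normal increments with variance equal to the step length (property (iii)).
    rng = np.random.default_rng(0)
    p, S = 500, 1.0
    ds = S / p
    increments = rng.standard_normal((2000, p)) * np.sqrt(ds)  # dW over each subinterval
    paths = np.cumsum(increments, axis=1)                      # W(s) along each path; W(0) = 0
    print(paths[:, -1].var())  # sample variance of W(S); by Eq. (5) it should be close to S = 1.0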
2.3 Wiener field
Similar properties are fulfilled by the standard Wiener field. Let n = 2 and m = 1, and t = (s, t) ∈ S × T =
T ⊆ R2, where S = [0, S] and T = [0, T]. The standard Wiener field is the random function {W(s, t) =
W((s, t), ω) | (s, t) ∈ S × T, ω ∈ Ω} defined on the probability space (Ω, A, P) such that
(1) for every ω ∈ Ω, sample functions (s, t)→ W((s, t), ω) are continuous in S × T,
(2) for every (s, t) ∈ S × T, ω ∈ Ω, W((s, 0), ω) = W((0, t), ω) = 0,
(3) for every s1, s2 ∈ S, t1, t2∈ T, s1 < s2, t1 < t2, the increments dW(s1, s2, t1, t2) = W(s2, t2) – W(s1, t2) –
(W(s2,t1) – W(s1, t1))
(i) are normally distributed (that is, W(s, t) is normally distributed ∀ (s, t) ∈ S × T),
(ii) have E[ dW(s1, s2, t1, t2) ] = 0 ( that is, E[ W(s, t) ] = 0 ∀ (s, t) ∈ S × T),
(iii) have var[ dW(s1, s2, t1, t2) ] = E[ (dW(s1, s2, t1, t2))2 ] = (s2 – s1) ( t2 – t1),
(iv) are a sequence of independent random vectors, i.e., if [s1, s2]×[t1, t2] ∩ [s3, s4]×[t3, t4] is empty or
has no interior then E[ dW(s1, s2, t1, t2) dW(s3, s4, t3 ,t4) ] = 0.
The (standard) Wiener field produces normal random vectors with
m(s, t) = 0 for every (s, t) ∈ S × T
(7)
var[ W(s, t) ] = σs2 σt2 s t for every (s, t) ∈ S × T
(8)
K((s1, t1), (s2, t2)) = σs2 σt2 min (s1, s2) min (t1, t2)
(9)
for every (s1, t1) and (s2, t2) ∈ S × T, where σs2 and σt2 are parameters of the (standard) Wiener processes
W( . , T) and W(S, . ), respectively.
2.4 Separable random fields
More generally, we define separable random fields. Let S = [0, S] and T = [0, T]. A random field {F(s,t) |
(s,t) ∈ S × T} is said to be separable if for every (s1, t1) and (s2, t2) ∈ [0, S] × [0, T],
cov[ F(s1, t1), F(s2, t2) ] = cov[ F(s1, T), F(s2, T) ] cov[ F(S, t1), F(S, t2) ] / var[ F(S, T)]
(10)
For example, the random field F(s, t) = F1(s)F2(t), where F1(s) and F2(t) are random processes, is
separable. Clearly, the (standard) Wiener field is separable. For general fields, the separability property may
not hold. However, such a condition is implicit in the common engineering practice of exploring a physical
system by allowing only one variable to vary at a time.
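To make the separability condition concrete, Eq. (10) can be verified numerically for the standard Wiener field, whose covariance kernel is cov[ W(s1, t1), W(s2, t2) ] = min(s1, s2) min(t1, t2) by Eq. (9). A minimal sketch, with arbitrary test points of our own choosing:

    import numpy as np

    # Check Eq. (10) for the standard Wiener field with kernel min(s1, s2) * min(t1, t2).
    S, T = 2.0, 3.0
    rng = np.random.default_rng(1)
    for _ in range(5):
        s1, s2 = rng.uniform(0.1, S, 2)
        t1, t2 = rng.uniform(0.1, T, 2)
        lhs = min(s1, s2) * min(t1, t2)                        # cov[ W(s1, t1), W(s2, t2) ]
        rhs = (T * min(s1, s2)) * (S * min(t1, t2)) / (S * T)  # Eq. (10); var[ W(S, T) ] = S T
        assert np.isclose(lhs, rhs)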
For a general random function, at every point t ∈ T, the random vector F(t) may have a different
probability distribution, not a mathematically tractable case. Working with the Wiener field allows the
assumption that at every point t ∈ T, F(t) is normally distributed with the mean equal to zero and a variance
given by a number. In this way, the Wiener field is completely described by the variance of the random
vector F(t) at t ∈ T. If the mean of the field is not equal to zero, the field can be viewed as the sum of a zero
mean random field and a deterministic mean field. For more details about random functions the reader is
referred to [Ref. 55].
2.5 Simulation of Wiener fields
Suppose F(s, t) is a random field defined on S × T where S = [0, S] and T = [0, T]. Assume that the intervals
[0, S] and [0, T] are discretized so that 0 = s1 < s2 < . . . < sp = S and 0 = t1 < t2 < . . . < tq = T, then Fst is a
discretization of F defined by
Fst(i, j) = F(si, tj)
(11)
for i = 1, . . . , p and j = 1, . . . , q. Note that Fst can also be thought of as a surface or as a p × q matrix.
Figure 1 depicts surfaces being discretizations of two random fields.
Let Wst be a discretization of the standard Wiener field W(s, t) on S × T where S = [0, S] and T = [0, T].
Notice that W( . , T) is a Wiener process on S and W(S, . ) is a Wiener process on T. Let Ws(i) = W(si), i = 1, . . . , p, and Wt(j) = W(tj), j = 1, . . . , q, be discretizations of the standard Wiener process W( . ) on S and on T, respectively. Then T1/2Ws and S1/2Wt are discretizations of the Wiener processes W( . , T) and W(S, . ),
respectively. For 0 = s1 < s2 < . . . < sp = S we calculate the covariance kernel for T 1/2Ws, namely,
cov[ T1/2Ws(i), T1/2Ws(m) ] = T min(si, sm) for 1 ≤ i, m ≤ p
(12)
and let Kss(i, m) = cov[ W(si), W(sm) ] = min(si, sm) for 1 ≤ i, m ≤ p. Similarly, for 0 = t1 < t2 < . . . < tq = T we
calculate the covariance kernel for S 1/2Wt, namely,
cov[ S1/2Wt(j), S1/2Wt(n) ] = S min(tj, tn) for 1 ≤ j, n ≤ q
(13)
and let Ktt(j, n) = cov[ W(tj), W(tn) ] = min(tj, tn) for 1 ≤ j, n ≤ q. We calculate the covariance kernel of Wst
and, since W(s, t) is separable, we obtain
cov[ Wst (i, j), Wst (m, n) ] = Kss(i, m) Ktt(j, n)
(14)
for 1 ≤ i, m ≤ p and for 1 ≤ j, n ≤ q, i.e., the covariance kernel of discretization of the standard Wiener field
can be factored into two matrices, each of them being the covariance kernel of the discretization of a Wiener
process implied by the Wiener field. We then obtain simulations of Wst as
Wst = (KssC)T Zst KttC
(15)
where Zst is a (0, 1)-normal p × q matrix of independent random variables, and KssC and KttC are Choleski
factors of the covariance kernels Kss and Ktt, respectively [Ref. 56]. The equality in the expression above
holds in the sense that both sides have the same mean value function (equal to zero) and covariance kernel.
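The recipe of Eq. (15) translates into a few lines of linear algebra. In the sketch below (our illustration, with arbitrary grid sizes), the grids start just above zero, where the field vanishes identically, so that the min-kernels are positive definite; numpy returns the lower-triangular Choleski factor L with K = L LT, so the upper factor of the text is KC = LT and Eq. (15) becomes Wst = Ls Zst LtT.

    import numpy as np

    # Simulate a discretized standard Wiener field via Eq. (15): Wst = (KssC)^T Zst KttC.
    rng = np.random.default_rng(2)
    p, q = 50, 60
    s = np.linspace(1 / p, 1.0, p)     # drop s = 0, where W is identically zero
    t = np.linspace(1 / q, 1.0, q)
    Kss = np.minimum.outer(s, s)       # Kss(i, m) = min(s_i, s_m)
    Ktt = np.minimum.outer(t, t)       # Ktt(j, n) = min(t_j, t_n)
    Ls = np.linalg.cholesky(Kss)       # Kss = Ls Ls^T, so KssC = Ls^T
    Lt = np.linalg.cholesky(Ktt)
    Zst = rng.standard_normal((p, q))  # (0, 1)-normal matrix of independent entries
    Wst = Ls @ Zst @ Lt.T              # one simulated sample surface of the field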
2.6 Simulation of separable random fields
Wiener fields play a fundamental role in the representation of more general random fields. For the separable
random field F(s, t) and its discretization Fst, there exist matrices FL = (KssC)-1 AL and FR = (KttC)-1 AR such
that
Fst = FLTWst FR
(16)
or
Fst = ((KssC)-1 AL)T Wst (KttC)-1 AR
(17)
In general, this representation is not unique. However, if AL is an upper triangular p × p matrix with AL(1, 1)
≥ 0 and AL(i, i) > 0 for 2 ≤ i ≤ p, and AR is an upper triangular q × q matrix with AR(1, 1) ≥ 0 and AR(j, j) > 0
for 2 ≤ j ≤ q, then this representation is unique. Eq. (17) defines a linear transformation Ast of the space of
discrete surfaces such that
Fst = Ast[ Wst ]
(18)
Again, the equality holds in the sense that both sides have the same mean value function and covariance
kernel. Note that combining Eqs. (15) and (17) we get
Fst = ALT Zst AR
(19)
so that the representation problem for the linear transformation A as given by Eq. (17) reduces to a search for
appropriate matrices AL and AR.
Additionally, we calculate the covariance kernel K((si, tj), (sm, tn)) of Fst and, using Eq. (19), we obtain
K((si, tj), (sm, tn)) = cov[ [(AW)st ](i, j), [(AW)st ](m, n) ] = [(AL)TAL](i, m)[(AR)TAR](j, n)
(20)
for 1 ≤ i, m ≤ p and 1 ≤ j, n ≤ q. Now, setting
KL(si, sm) = [(AL)TAL](i, m) and KR(tj, tn) = [(AR)TAR](j, n)
(21)
we have
K((si, tj), (sm, tn)) = KL(si, sm) KR(tj, tn)
(22)
Invoking Eq. (10), we get
KL(si, sm) = E[ F(si, T)F(sm, T) ] / (E[ F(S, T)2 ])1/2
(23)
and
KR(tj, tn) = E[ F(S, tj)F(S, tn) ] / (E[ F(S, T)2 ])1/2
(24)
Eq. (20) gives a stochastic linearization of the covariance kernel of a separable random field.
More generally, Reneke [Ref. 56] developed a method for finding representations of linear systems
generating random fields satisfying the property of separability. Let G be a separable random field generated
by a linear system, i.e., G = BF, where F is a separable random field and B is a linear operator representing
the linear system. Assuming that G and F are known and using (17), the task is to construct the matrices BL
and BR so that the following holds
Gst = (BF)st = ((KssC)-1 BL)T Fst (KttC)-1 BR
(25)
where both sides have the same mean value function (equal to zero) and covariance kernel (see Figure 1).
Figure 1. Discretizations of two random fields over S × T: F(s, t) on the left and G(s, t) = BF(s, t) on the right
We now show how the matrices AL and AR (or BL and BR) can be constructed to obtain the unique
representation given by Eq. (17). Let kL(.) be an increasing nonnegative function defined on S = [0, S] with
kL(0) = 0. Define the matrix Uss as
Uss(i, j) = kL(min(si, sj))
(26)
for i, j = 1, . . . , p. Note that in view of Eq. (6), kL( . ) becomes a linear function for a Wiener process. Uss is a nonnegative-definite p × p matrix for which there exists a factorization with its Choleski factor UssC
Uss = (UssC)T UssC
(27)
where UssC is upper triangular with nonnegative entries on the main diagonal. Similarly, let kR(.) be an
increasing nonnegative function defined on T = [0, T] with kR(0) = 0. Define the matrix Vtt as
Vtt(m, n) = kR(min(tm, tn))
(28)
for m, n = 1, . . . , q. Vtt is a nonnegative-definite q × q matrix for which there exists a factorization with its Choleski factor VttC
Vtt = (VttC)T VttC
(29)
where VttC is upper triangular with nonnegative entries on the main diagonal. Choose
AL = UssC and AR = VttC
(30)
then Eq. (19) (or Eq. (25)) holds and Eqs. (26-30) provide an algorithm for constructing the stochastic
linearization A (or B) of the system generating the random field Fst (or Gst). Note that the starting point for
the algorithm is the choice of increasing nonnegative (deterministic, one-variable) functions kL and kR on S
and T, respectively. Figure 2 depicts four (normalized) functions that could be used in this capacity.
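The algorithm of Eqs. (26-30) is equally compact in code. The sketch below is our own illustration under assumed grids and curve choices: pick kL and kR from the Figure 2 candidates, form the kernels of Eqs. (26) and (28), take their Choleski factors as AL and AR per Eq. (30), and simulate the field through Eq. (19).

    import numpy as np

    def representation(k, grid):
        # Eqs. (26-30): U(i, j) = k(min(grid_i, grid_j)); its upper Choleski factor is A.
        U = k(np.minimum.outer(grid, grid))
        return np.linalg.cholesky(U).T          # U = A^T A with A upper triangular

    kL = lambda u: (1.0 - np.exp(-4.0 * u)) / (1.0 - np.exp(-4.0))  # curve k2 of Figure 2
    kR = lambda u: u                                                # curve k1 of Figure 2

    p = q = 40
    s = np.linspace(1 / p, 1.0, p)  # positive grids keep the kernels positive definite
    t = np.linspace(1 / q, 1.0, q)
    AL, AR = representation(kL, s), representation(kR, t)

    Zst = np.random.default_rng(3).standard_normal((p, q))
    Fst = AL.T @ Zst @ AR           # Eq. (19): a simulated surface of the separable field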
It is possible to derive results on the algebra of representations. Given the representation of A, the
representation of the inverse is given by
(A-1)L = KssC (AL)-1KssC and (A-1)R = KttC (AR)-1KttC
(31)
Additionally, given two representations, say A and B, the representation of their product is given by
(AB)L = BL (KssC)-1 AL and (AB)R = BR (KttC)-1 AR
(32)
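Both formulas can be checked numerically; the following sketch (our own consistency check, with arbitrary admissible factors) confirms that the left factors compose as in Eq. (32). An operator with left factor XL acts on the left through ((KssC)-1 XL)T, so composing two operators must reproduce the left factor of the product.

    import numpy as np

    # Numerical consistency check of Eq. (32) for the left factors.
    rng = np.random.default_rng(4)
    p = 6
    s = np.linspace(1 / p, 1.0, p)
    KssC = np.linalg.cholesky(np.minimum.outer(s, s)).T  # upper Choleski factor of Kss
    Kinv = np.linalg.inv(KssC)

    AL = np.triu(rng.uniform(0.5, 1.5, (p, p)))          # arbitrary admissible factors
    BL = np.triu(rng.uniform(0.5, 1.5, (p, p)))
    ABL = BL @ Kinv @ AL                                 # Eq. (32): (AB)L = BL (KssC)^-1 AL
    # the left action of A composed with B equals the left action of the product AB
    assert np.allclose((Kinv @ BL) @ (Kinv @ AL), Kinv @ ABL)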
In general, the sum of two separable fields is not separable and one does not get formulas for (A + B)L and (A + B)R. However, given the representation of the product, we are in a position to derive the covariance kernel of Gst.
Figure 2. Candidate functions for kL(.) or kR(.):
k1(t) = t, k2(t) = (1 - e-4t)/(1 - e-4), k3(t) = (e4t - 1)/(e4 - 1), and k4(t) = (k2(t) + k3(t))/2
Further, given the representation of a separable random field, it is easy to obtain the variance of this field
at a point (si, tj) ∈ S × T. We have
var[ F(si, tj) ] = KL(si, si) KR(tj, tj) = [(AL)TAL](i, i) [(AR)TAR](j, j)
(33)
Using Eqs. (26-30), this becomes
var[ F(si, tj) ] = Uss(i, i) Vtt(j, j) = kL(si) kR(tj)
(34)
Note that the variance is given by the product of the two increasing functions, which numerically is
represented by a matrix and geometrically by a surface on S × T. Figure 3 depicts the variance surface
produced with the functions kL = k2 and kR = k4 shown in Figure 2.
Figure 3. Variance or risk surface R of a separable random field F
Recalling that the variance quantifies the risk implied by the variability of a random field [Ref. 47], we obtain the
point-wise definition of the risk surface R of the field as
R(si, tj) = var[ F(si, tj) ] = kL(si) kR(tj)
(35)
at (si, tj) ∈ S × T for i = 1, . . . , p and j = 1, . . . , q. Eq. (35) gives a functional representation of risk, which is
a novel mathematical tool and offers a new perspective on risk analysis of complex systems. The
development presented in the next sections is based on this representation of risk.
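For instance, the risk surface of Figure 3 can be tabulated directly from Eq. (35) with the Figure 2 curves kL = k2 and kR = k4. A minimal sketch, with an arbitrary grid of our own choosing:

    import numpy as np

    # Tabulate the risk surface R(s_i, t_j) = kL(s_i) kR(t_j) of Eq. (35).
    k2 = lambda u: (1.0 - np.exp(-4.0 * u)) / (1.0 - np.exp(-4.0))
    k3 = lambda u: (np.exp(4.0 * u) - 1.0) / (np.exp(4.0) - 1.0)
    k4 = lambda u: (k2(u) + k3(u)) / 2.0
    s = np.linspace(0.0, 1.0, 50)
    t = np.linspace(0.0, 1.0, 50)
    R = np.outer(k2(s), k4(t))  # 50 x 50 risk surface with kL = k2 and kR = k4 (cf. Figure 3)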
3. Performance-based methodology for complex systems
We now present the performance-based methodology for general complex systems. In Section 4, this
approach will be applied to systems or processes into which the complex systems development process will
be decomposed. The approach consists of three stages: (1) the development of a conceptual model, (2) the development of computational models, and (3) decision making.
3.1 Conceptual model
The development of a conceptual model of the system consists of the following steps: (1) the choice of
uncertainties; (2) the choice of the input and the output; (3) the choice of the decision goal; (4) a
decomposition of the system into interacting components and subcomponents; and (5) the choice of alternatives.
Uncertainties. We assume that the system operates in a random environment and model the related
uncertainty with independent exogenous variables, say s and t. Since the uncertainties are truly unknown
model elements and cannot be quantified, they are only assumed to be within an uncertainty space. Let s ∈ S
= [0, S] and t ∈ T = [0, T], then S × T determines the uncertainty space in R2. The uncertainty space can also
be modeled as [0, 1] × [0, 1], where [0, 1] is a range of normalized S or T. Depending on the application,
examples of uncertainties include temperature, humidity, market conditions, customer base, etc.
Input and output. Randomness in system performance enters through the random input. We assume that
at every point (s, t) ∈ S × T, system performance varies according to a normal distribution with the mean of
zero so that the variance uniquely determines the distribution. Since the approach is performance-based,
system performance is considered the system response or output. The system response to an input, which is understood as a stress imposed on the system from outside, is viewed as the residual stress. It is expected that the system absorbs the stress and performs best over the entire space of uncertainties. Consequently, the
input and output are both modeled as vector random fields, say F(s, t) and G(s, t), respectively, on the space
of uncertainties S × T. Examples of the input include customer demand or influence of external operating
environment.
Decision goal. The overall objective is to find decision alternatives (designs) so that the random system
performance as the response to random inputs is optimized. The performance is optimal if the effect of the
random input on the process is minimal when feasible alternatives are employed. An alternative is feasible if
its expected performance meets a desired threshold level. Such a feasibility condition accounts for the
consideration of only those alternatives whose expected performance is satisfactory. In this spirit, the
optimization of system performance is converted to the minimization of the performance risk that is available
at every point of the uncertainty space as the variance of the underlying normal distribution according to
which the system performs. In effect, the choice of feasible decision alternatives for system risk management
while optimizing process performance becomes the overall goal.
Decomposition. System performance can be measured in many different ways. Since the output is
generally vector-valued, each vector component creates an implied system component and the system is
expected to perform well according to all its scalar outputs. These components make up the elements of the
conceptual model and the original problem of optimizing the system performance is decomposed into these
components. Depending on the type of system under consideration, the components may represent different
scalar outputs, for example, the tasks of the design process or the criteria of the assessment process.
The components interact with each other, that is, the system performance with respect to one scalar output
influences and is influenced by the system performance with respect to another scalar output. This
interaction is modeled by decision makers based on their knowledge and experience without reference to
physics-based models. Each component can be decomposed in turn into interacting subcomponents leading
to a more complex model. The decomposition can continue until subcomponents are simple enough so that
decision makers, if needed, can relate to physics-based models and provide required estimates for the later
computational models. Following the vector output and its decomposition, the input is decomposed into
scalar inputs. The inputs and outputs are random fields that are assumed to have zero means and be
determined by their covariances.
Figure 4 depicts a decomposition of the process into two interacting components Ai, two scalar inputs
Fi(s,t) and two scalar outputs Gi(s, t), i = 1, 2. For each component we have the following relationships
between its output and inputs
G1(s, t) = A1 ( F1 (s, t) + G2(s, t) )
(36)
G2(s, t) = A2 ( F2 (s, t) + G1(s, t) )
(37)
from which we obtain the following equations for the outputs
G1(s, t) = ( I – A1 A2 )-1 A1 ( F1(s, t) + A2 F2(s, t) )
(38)
G2(s, t) = ( I – A2 A1 )-1 A2 ( F2(s, t) + A1 F1(s, t) )
(39)
where I is the identity transformation on the space of discretized fields.
As noted in Section 2, a formula for the representation of the operator I – A1 A2 (and similarly I – A2 A1) is not available and we use a separable approximation to obtain the approximate inverse ( I – A1 A2 )-1 (and similarly ( I – A2 A1 )-1) (see Reneke and Samson [Ref. 58]).
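To see Eqs. (38-39) in computational form: if each component operator is expressed as a matrix acting on column-stacked (vectorized) discrete fields, the feedback loop resolves with one linear solve. The sketch below is our own illustration with hypothetical, suitably scaled random operators (so that I – A1 A2 is invertible); it is not the separable-approximation procedure of [Ref. 58].

    import numpy as np

    def as_operator(ML, MR):
        # the map X -> ML^T X MR acts on column-stacked vec(X) as (MR^T kron ML^T)
        return np.kron(MR.T, ML.T)

    rng = np.random.default_rng(5)
    p = q = 10
    # hypothetical component operators, scaled to keep the loop operator invertible
    A1 = 0.3 * as_operator(rng.standard_normal((p, p)) / p, rng.standard_normal((q, q)) / q)
    A2 = 0.3 * as_operator(rng.standard_normal((p, p)) / p, rng.standard_normal((q, q)) / q)
    F1 = rng.standard_normal(p * q)  # vectorized sample input fields
    F2 = rng.standard_normal(p * q)
    I = np.eye(p * q)
    G1 = np.linalg.solve(I - A1 @ A2, A1 @ (F1 + A2 @ F2))  # Eq. (38)
    G2 = np.linalg.solve(I - A2 @ A1, A2 @ (F2 + A1 @ F1))  # Eq. (39)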
Alternatives. System performance varies not only due to environment variations but primarily because
feasible alternatives can be ‘plugged in’ to the components to observe the performance. At the top level of
the decomposition, the abstract alternative of the entire system (consisting of alternatives assigned to all
components) can be viewed as a highly complex, nonlinear transformation on the space of the system inputs
and outputs, i.e., a transformation that takes a random vector input F into a random vector output G. Going
down through the intermediate levels of the decomposition, the alternative of a component remains unknown
but at the lowest level of the decomposition, the alternative of the simplest component is assumed to be
known to decision makers and is modeled with a linear transformation that takes a random input of this
component into a random output of this component. A decision maker will construct that linear
transformation in the computational stage of the methodology.
Figure 4. Decomposition of a complex system
3.2 Computational models
The development of computational models includes the construction of random models of inputs and random
models of alternatives for the simplest components and their integration providing a representation of risk for
the overall system. For brevity and clarity of presentation, hereafter we refer to the decomposition
depicted in Figure 4.
Construction of inputs and alternatives. Inputs and random performance of alternatives are modeled as
random fields on the uncertainty space S × T, which are fully determined by their covariance. The inputs can
be modeled as Wiener fields or more general separable random fields. The condition of separability allows
for modeling with respect to one uncertainty variable at a time, i.e., as random processes on S (or on T),
which are fully determined by their variance. When modeling the alternatives, we assume that decision
makers are not knowledgeable about the risk of the overall process or its components at the higher or
intermediate levels of decomposition, but they are comfortable with assessing the risk of the alternative
performance at the lowest level.
For Wiener fields, the modeling uses Eq. (15). For separable fields, the modeling follows the algorithm
given by Eqs. (26-30): it starts with the choice of nonnegative increasing functions kL(s) and kR(t) and leads
to the construction of a linear operator A. These functions model the variability of an input or alternative
performance and, more precisely, represent the variance or risk of the input or alternative performance as a
function of one uncertainty variable. The fact that the functions are increasing models the property that as
uncertainty increases, the input or alternative variability increases. Holding one uncertainty variable fixed,
this variability as a function of the other uncertainty variable can be (1) linear (U), i.e., the variability
increases uniformly; (2) nonlinear concave (C), i.e., the variability increases more rapidly at lower levels of
uncertainty or is inelastic at higher levels; (3) nonlinear convex (V), i.e., the variability increases more
rapidly at higher levels of uncertainty or is inelastic at lower levels. The decision maker chooses the
functions kL(s) and kR(t) from among the available curves U, C, or V (see Figure 2). This simple act defines
the risk in employing the alternative since the product of these functions, kL(s) kR(t) for all s ∈ S and t ∈ T,
produces a normalized risk surface for the alternative and leads to a linear operator representation of the
alternative. In the example, the decision maker constructs a linear transformation for each input, F1 and F2,
and for each feasible alternative for the components A1 and A2.
Risk. The earlier modeling assumptions lead to a novel representation of risk of the entire system.
Modeling the inputs and outputs as separable normal random fields with mean zero and modeling the
alternatives as linear operators on the space of these fields allows for extracting the output covariance value
at each point of the uncertainty space and creating a risk surface in this space (Eq. (35)). Figure 5 depicts risk
surfaces derived from random fields. The decision maker constructs risk surfaces for each output of the
system for each feasible collection of alternatives. In the example, a collection reduces to a pair (A1, A2) of
alternatives.
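In the vectorized-operator view sketched in Section 3.1, the risk surface of an output can be read off the diagonal of its covariance. A hedged sketch (our illustration, assuming independent inputs; A1 and A2 are operator matrices and Sigma1 and Sigma2 input covariance kernels built, for example, as in Section 2):

    import numpy as np

    def risk_surface_G1(A1, A2, Sigma1, Sigma2, p, q):
        # From Eq. (38): G1 = M (F1 + A2 F2) with M = (I - A1 A2)^-1 A1, so
        # cov(G1) = M (Sigma1 + A2 Sigma2 A2^T) M^T; its diagonal is the risk surface.
        I = np.eye(p * q)
        M = np.linalg.solve(I - A1 @ A2, A1)
        Sigma = M @ (Sigma1 + A2 @ Sigma2 @ A2.T) @ M.T
        return np.diag(Sigma).reshape((p, q), order="F")  # column-stacked vec convention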
3.3 Decision-making
The decision problem is choosing an ordered set of alternatives that produces the best tuple of risk surfaces, the best risk “profile.” A tuple of risk surfaces is nondominated if every other tuple of risk surfaces associated with a feasible set of alternatives yields a larger value at one or more grid points; nondominated tuples are considered to have the best risk profiles. The ordered set of alternatives associated with a nondominated tuple of surfaces is called an efficient set of alternatives. Note that in the example a tuple becomes a pair.
Optimization step. The optimization step involves the identification of a collection of nondominated
tuples of surfaces and a corresponding collection of efficient sets of alternatives.
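The screening itself is a brute-force pairwise comparison of tuples of risk surfaces, grid point by grid point. A minimal sketch (our illustration; the dictionary data structure and names are hypothetical):

    import numpy as np

    def dominates(Ra, Rb):
        # tuple Ra dominates Rb if it is nowhere riskier and strictly less risky somewhere
        a = np.concatenate([r.ravel() for r in Ra])
        b = np.concatenate([r.ravel() for r in Rb])
        return bool(np.all(a <= b) and np.any(a < b))

    def nondominated(candidates):
        # candidates: dict mapping an alternative set's name to its tuple of risk surfaces
        return {name: R for name, R in candidates.items()
                if not any(dominates(R2, R)
                           for n2, R2 in candidates.items() if n2 != name)}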
Decision step. The decision step, which involves the selection of a preferred nondominated tuple of surfaces from among the nondominated tuples, is not automatic but requires an expression of preference from the decision maker. One preference rule can come from choosing the preferred efficient set of alternatives that
has the tuple of risk surfaces closest to the “utopia tuple.” The utopia tuple might not be associated with any
particular set of alternatives and is the lower envelope of all nondominated tuples of surfaces of all efficient
sets of alternatives. Other preference rules might be the following: choose the preferred efficient set of
alternatives that has the tuple of risk surfaces whose risk exceeds no other over the largest region in the
uncertainty space or whose risk exceeds no other on a region of larger uncertainty or whose risk exceeds no
other for a range of inputs. For other preference rules the reader is referred to [Ref. 57].
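The utopia-based rule also reduces to a few lines: take the pointwise lower envelope of all nondominated tuples and select the tuple nearest to it. A sketch under the same hypothetical data structure as above, with Euclidean distance over all grid points:

    import numpy as np

    def utopia_choice(nondominated_candidates):
        # stack each tuple of risk surfaces into one vector of grid values
        stacks = {name: np.concatenate([r.ravel() for r in R])
                  for name, R in nondominated_candidates.items()}
        utopia = np.min(np.array(list(stacks.values())), axis=0)  # lower envelope
        return min(stacks, key=lambda name: np.linalg.norm(stacks[name] - utopia))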
3.4 Summary of the performance-based methodology
We claim that the presented performance-based methodology (PM) offers a new paradigm for complex
systems decision-making under uncertainty and justify this claim with a comparison between the PM and
optimization-based methodologies (OMs) reviewed in the introduction. The comparison follows some
specific topics that are highlighted below.
Figure 5. Two pairs of risk surfaces:
the left (right) column is for G1(s, t) (G2(s, t)) and the upper (lower) row is for feasible alternative #1 (#2)
Type of model. OMs are traditional in the sense that they employ analytic physics-based models that are
testable using a known state of the system and allow for an input-output analysis. The PM uses a decision
model that is not testable because the real state of the system is unknown: one does not know how the system
will perform under uncertainty.
Mathematical tools. OMs typically use analytic functions of real and/or random variables with various
assumptions imposed on the functions to guarantee the solvability of the resulting optimization problems.
The PM uses random functions of two (or more) variables known as random fields. The random fields are
assumed to be completely described by their mean function and covariance kernel.
Inputs. For OMs, one set of data typically represents one scenario and the optimization problem has to be
resolved for each new data set if multiple scenarios are to be examined. The PM uses random fields as inputs
which model a very large or even infinite number of scenarios.
Outputs or criteria. Both methodologies allow for multiple criteria for decision evaluation but only the
PM allows for their interaction.
Approach. OMs use mathematical optimization, i.e., an algorithmic search for an optimal decision
among alternatives only implicitly available. The PM uses a brute force approach: all alternatives are
explicitly available and an optimal one is identified through a screening process.
System complexity. The PM can treat multiple-level problems with multiple components at every level
while OMs may treat problems of limited complexity due to maintaining the solvability of the optimization
problems.
Optimality. OMs define optimality of the final decision based on the type of the optimization problem
used (e.g., Pareto, robust, reliability-based, possibility-based decision, etc.) while the PM yields a decision of
minimal risk and hence a naturally robust decision.
Robustness and risk. In some OMs (e.g., robust design), robustness is modeled with the expected
performance and the variance of performance. While the latter has the meaning of risk, it has not formally been recognized as such. The PM models the expected performance with the feasibility condition and manages risk
by providing its representation as a function of uncertainties.
4. Complex systems development process
Consider the following scenario. An automaker identifies an economic opportunity. A management
team and a design team are formed. Ultimately each team will be composed of a large number of individuals
with a variety of expertise. Initially the teams are small but composed of individuals with a broad base of
knowledge. The management team is charged with the exploitation of the opportunity which includes
management of the development of a vehicle design, supply, manufacturing, distribution, and marketing
chains, and investments in those chains. The design team is charged with producing a vehicle design meeting
requirements of the management team and regulatory agencies while maximizing expected performance and
minimizing risk. The problem is coordinating the efforts of the two teams.
We present a methodology for formalizing and rationalizing the development process behind the presented scenario and for structuring communication between the two teams with disparate knowledge bases.
4.1 Overview
The complex systems development process is first decomposed into an assessment process and a design
process that employ different teams, the team of managers and the team of designers, respectively (see
Figure 6). The two teams communicate and cooperate with each other in order to design a complex system
performing satisfactorily in a variety of operating conditions.
The teams are composed of people with different backgrounds and capabilities, have different concerns, and view the world from different perspectives. Additionally, the goals of the teams are not necessarily the
same. The design team may propose several innovative designs for which the management team may not
have an appreciation. On the other hand, the management team may be guided by market conditions that
limit the design team’s possibilities.
Figure 6. One stage of the complex systems development process
The overall development process coordinates the two processes and makes them compatible with each
other through modeling requirements. While the teams have different knowledge domains, they operate in
the same company environment and join forces to develop the same complex system. Our modeling
technique captures this commonality in a very special way. Both the company environment and the performance of the system being developed are affected by conditions of uncertainty, which are modeled by a common
and unique uncertainty space whose variables remain independent.
The vehicle development process proceeds in stages, regulating the timing of decisions and imposition of
requirements. Within the design process and, in a similar way, within the assessment process, decision
makers at different stages or even at different levels of a stage will have different knowledge bases and
perspectives. In general, knowledge becomes more particular as we move down or more general as we move
up. An important problem for the design development methodology is that decisions must be made at the
right knowledge level. Inappropriate decisions made too early in the development process can limit the final
design and the full exploitation of the opportunity.
The management team's decisions narrow the space of uncertainties. The design team would prefer the
design space to be as open as possible. The goal of management/design coordination is the reduction of
uncertainties in moving through the stages of the development process while limiting the restrictions placed
on feasible designs. In our vision of the process, at each stage the management team does not appreciate the
range of possible designs and yet the management team is the final arbiter of acceptable designs. How can
we avoid imposing restrictive requirements early that limit design possibilities later? Our hope is that
systematic progress can be made in the development process while delaying some management choices and
advancing others.
The communication between the management and design team requires two interfaces, the metric
converter M2D and the performance converter D2M, that convert information from one team to the other
(see Figure 6). Some members of the two teams could constitute the converter committees. Within a stage
the two teams negotiate the dominant uncertainties (using the M2D interface) and the management team sets
the dominant metrics, translated by the M2D interface for the design team. Neither team is in a position to
understand everything the other team is doing. The concept for the M2D and D2M interfaces is to provide
bridges between the teams.
Given some initial design requirements Γ0 (see Figure 6), the design team proposes several alternative
(preliminary) designs. The design team will embrace an open design philosophy allowing for the largest
number of possibilities in later stages.
In each development stage vehicle performance at each point in the uncertainty space (the independent
variables) is measured using prescribed metrics applied to the whole vehicle. Informally, we usually think of
performance as some action unfolding in time. For our purposes performance refers to the measured
effectiveness of a method (design) for accomplishing a task and the independent variables do not include
time, in general. Vehicle performance is a random function of the uncertainties.
The vehicle is assessed using criteria. The assessment according to a given criterion is based on
measurements (using the specified metrics) of vehicle performance. Each criterion could make use of several
metrics. For instance, two metrics might be the empty vehicle weight and the load carrying capacity. One
criterion, economic viability (EV), in an early assessment might be based on the empty vehicle weight.
Another criterion, vehicle performance (VP), again in an early assessment could be based on the ratio of the
load carrying capacity and the empty vehicle weight.
4.2 The design process
At each development stage the design process is modeled as a multi-level complex system in which the
overall design is decomposed into tasks (subsystems) derived from design performances (outputs). Inputs are
modeled as Wiener fields to represent variable conditions in which the overall designs will operate. Figure 7
depicts decomposition of the design problem into four interacting tasks, in which the solid-line and dotted-line arrows denote stronger and weaker interactions, respectively, and Wiener fields W1 and W2 are used as the
inputs. Proposed partial designs (methods), being actual or virtual components, are modeled as linear
transformations on the space of design performances that are random functions of the uncertainties. The
proposed designs are ``plugged-in'' as the alternatives in the performance-based methodology, and their
performance is estimated. For early stages the ``measurements'' are estimates of performance. In later stages
the ``measurements'' might be estimates based on measurements of actual or physical components. In the
testing phase using assembled vehicles the metrics are applied to the physical artifact.
Randomness in performance is a result of the situational uncertainties rather than errors in the estimates or
measurements. For instance, repetitions of a designed maneuver of the assembled vehicle may result in
performances which vary with the incompletely described operating environment.
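For concreteness, a Wiener field input such as W1 or W2 can be sampled numerically. The following minimal Python sketch, ours rather than the paper's, uses the standard double cumulative sum construction of a Brownian sheet on the unit square; the grid size and seeds are illustrative choices.

```python
# Minimal sketch (illustrative, not from the paper): sample a Wiener field
# (Brownian sheet) W on an n x n grid over [0,1]^2, so that
# Cov(W(s1,t1), W(s2,t2)) = min(s1,s2) * min(t1,t2).
import numpy as np

def wiener_field(n=64, seed=0):
    rng = np.random.default_rng(seed)
    ds = dt = 1.0 / n
    # Independent N(0, ds*dt) increments, accumulated in both directions.
    increments = rng.normal(0.0, np.sqrt(ds * dt), size=(n, n))
    return increments.cumsum(axis=0).cumsum(axis=1)

W1 = wiener_field(seed=1)  # e.g., an operating-conditions input
W2 = wiener_field(seed=2)  # e.g., a performance-expectations input
```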
Figure 7. Preliminary design model
4.3 The assessment process
At each development stage the assessment problem is decomposed into several interacting criteria. The
criteria may interact in that assessment using one criterion could influence the assessment using another.
Figure 3 depicts possible decomposition of the assessment problem into two interacting criteria A1 and A2,
now referred to as vehicle performance (VP) and economic viability (EV), respectively. In the spirit of
performance-based methodology, the criteria are modeled as linear transformations on the space of
assessments that are random functions of the uncertainties. Note that the management team might not know
or understand the components or relationships of components of the design model. Similarly, the design team
may not know the relationship between VP and EV in the assessment model.
When efficient overall designs are identified by the design team at the completion of the optimization step
of the performance-based methodology, the D2M interface converts the covariance kernel of these designs’
performance, available in the units of design metrics, into the units of assessment criteria. The performance-based methodology then continues for the assessment process. During the decision-making step, a minimum
risk design is selected as a preferred design. This decision completes that stage of the development process.
The decision also reduces the uncertainties for the next stage and imposes additional design requirements Γ1.
4.4 Inputs
Some concerns for both teams are better modeled as random model inputs, i.e., as inputs to a criterion in the
assessment model or inputs to a method in the design model (see Figures 3 and 7). The inputs can be used in
the decision problem to ensure system robustness and to value system flexibility. In our vision, the inputs on
the management side are models of external influences: customer preferences, general economic conditions,
etc. The inputs on the design side also represent random external influences: operating conditions,
performance expectations, etc.; however, these inputs, modeled as Wiener fields rather than more general
fields, are “uncolored” leaving the design/environment balance neutral.
4.5 Multiple stages
In general, the development process may consist of multiple stages with multiple levels within each stage. At
each stage some design constraints will already be established: empty vehicle weight at the earth's surface of
no more than 40 lb for a Martian rover or an operating range of 300 miles for a passenger vehicle. In general, in
passing between stages uncertainties will be narrowed and assessments for the preferred upper stage design
will determine constraints for the next lower-stage design. For instance, if the empty vehicle weight is a
metric at a given design stage then the empty vehicle weight of the preferred design becomes a constraint for
the next stage.
At each stage of the development process the design process may consist of multiple levels according to
the performance-based methodology. As the design process passes down through the stages or down through
levels within a stage, the design problem is further decomposed using tasks and methods (see Figure 8). At
each stage, except the last stage, an alternative (preliminary) design will consist of a collection of actual (off
the shelf) components and virtual components (realized as a set of requirements). Even a Stage I preliminary
design could include actual components. This would certainly be the case for an upgrade exercise. At the last
design stage, all of the components would be actual and a test vehicle could be assembled from the specified
components. On the other hand, the decomposition of the assessment process into criteria is likely to be the
same at every stage because the assessment team may prefer to apply a uniform design evaluation approach
to all stages of design.
5. Computational example
We present an example illustrating some of the steps of the development process presented in the previous
section. We concentrate on the assessment model and decision making by the management team. Figures 7 and 8, illustrating the design models, are included for completeness but not discussed in detail. Modeling and
decision steps are prefaced with remarks indicating mathematical considerations necessary for the pieces to
fit together.
5.1 Uncertainties
Some decisions, including design, manufacturing, and marketing decisions, will already have been made. We can
assume the existence of a company culture or philosophy that restricts alternatives. Further, company strategy
might require that preliminary designs incorporate some elements: engines, chassis, etc. In Figure 6 these requirements are labeled Γ0.
Figure 8. Two-level decomposition of the design model
For our illustrative example we assume two principal uncertainties: (1) vehicle operating environment
modeled as an independent variable 0 ≤ s ≤ 1; this variable models the “who, where, why?” uncertainty
related to the type of driver, vehicle operating conditions, and the driving mission, and (2) vehicle
manufacturing environment modeled as an independent variable 0 ≤ t ≤ 1; this variable models the “how
much, how long, with what?” uncertainty related to the number of vehicles manufactured, the production
time, and the amount of resources.
By identifying the principal uncertainties and naming them, we recognize, first, that good decisions can produce poor results and poor decisions good results because of conditions beyond our control. Second, we realize
that early decisions must produce reasonable results over a range of conditions. Neither the management nor
design team can get too far out front with decisions that later prove to be unduly limiting. The agreed upon
uncertainties act as a constraint on premature decisions.
The two teams agree on the words but there can be differences in interpretation and emphasis. We leave
the uncertainties unmodeled and so the two teams with different perspectives will likely differ in their
interpretation and emphasis. However, as the design development process proceeds the uncertainties will
narrow and the two teams will be brought into closer agreement. Since the success of the development process depends on cooperation between the teams, both will put a premium on a common understanding of the
unknowns comprising the uncertainties.
Deep in the design development process the uncertainties could represent concrete quantities: interest
rates, transportation costs, operating temperatures, or manufacturing tolerances. However, in early stages of
the development process the variables will be ordinals relating degrees of ignorance of unknown conditions
affecting the outcomes of the vehicle design.
5.2 Metrics
The assessment metrics, VP and EV, are translated by the M2D interface into design metrics as the ratio of
load carrying capacity to empty vehicle weight and the empty vehicle weight, respectively. For the
assessment process and design process, these measures of performance are the model outputs or
``observables''.
The performance measures of the assessment process are available through Eqs. (38-39), while analogous equations may be derived for the performance measures of the design process depicted in Figure 7. These equations hold only if the performance measures have appropriate units and are appropriately scaled. In the example, since one metric is a ratio, vehicle weight might be measured as a
percent of some ideal weight.
The metrics apply to the ``whole'' vehicle. At the preliminary design stage there is no physical artifact.
Early in the development process measures of performance are estimates but as the process proceeds the
measures will become increasingly data-based.
Note that the translation of management metrics into design metrics is not unique and alternate
translations could make a difference.
5.3 Construction of preliminary designs incorporating Γ0 and estimation of their performance
One can envision a small group of experts with knowledge in the basic vehicle tasks: propulsion power,
vehicle controls, cabin environment, and load support, meeting face to face and negotiating several
preliminary designs satisfying the Γ0 requirements. Using the empty vehicle weight metric, the total weight
is the sum of the weights of the components. These weights, for some preliminary design, would likely be a
mixture of measured values for actual components and estimated values for virtual components. Both kinds of values might be arrived at after a further decomposition of the design model.
In this example, since we do not decompose the design process, we numerically construct metric values
for overall designs. For each preliminary design, a separable normal random field of each performance
metric is modeled. Using the fact that a normal random field is the sum of a deterministic mean field and a
zero-mean random field, we assume the expected performance is constant over the range of uncertainties and model the variable part of the performance, i.e., the difference between the performance and the expected performance. The estimated expected performance is defined operationally as the performance level that actual performance is equally likely to exceed or fall below. The random variability in the vehicle weights results from the uncertain operating and manufacturing environments, i.e., at the preliminary design
stage, before the relative importance of weight, strength, cost, etc. is established, the design team can only
estimate the mean and covariance of the ultimate vehicle weight.
The design team may set the following feasibility conditions on the preliminary designs: A design is
feasible if the expected ratio of load carrying capacity to empty vehicle weight is at least 0.5 and the
expected ratio of vehicle weight to a weight goal is no more than one. Clearly, due to the performance
variability, not every preliminary design may be feasible.
The constant expected performance of a design can be thought of either as a number or a constant surface over the space of uncertainties S × T = [0, 1] × [0, 1]. For this example it is selected randomly from {0.6, 0.7, 0.8} for the ratio of load carrying capacity to empty vehicle weight and from {0.9, 0.8, 0.7} for the ratio of vehicle weight to a weight goal.
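As a minimal sketch of this construction in Python (the level sets and thresholds are taken from the text above; everything else is an illustrative choice):

```python
# Illustrative sketch: draw constant expected performance levels for a
# preliminary design and apply the feasibility conditions stated above.
import random

random.seed(0)  # illustrative seed

def draw_expected_performance():
    m_ratio = random.choice([0.6, 0.7, 0.8])   # load capacity / empty weight
    m_weight = random.choice([0.9, 0.8, 0.7])  # vehicle weight / weight goal
    return m_ratio, m_weight

def is_feasible(m_ratio, m_weight):
    # Expected load/weight ratio at least 0.5; expected weight/goal at most 1.
    return m_ratio >= 0.5 and m_weight <= 1.0

designs = [draw_expected_performance() for _ in range(10)]
feasible = [d for d in designs if is_feasible(*d)]
```

Note that with these particular level sets every draw satisfies the expected-value conditions; infeasibility enters through the performance variability introduced next.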
The variable performance is modeled by covariance functions for which Eq. (22) holds. We define low,
medium, and high risk designs, depending on the confidence level at which designs satisfy the feasibility
conditions, and follow the computational stage of the performance-based methodology. For each feasible
design and each performance metric, we choose two increasing functions kL(s) and kR(t) from the four
possibilities illustrated in Figure 2 to estimate the variance of each metric with respect to s and t,
respectively, or in other words, we select the risk profiles that are meaningful for each metric. The
``estimates'' are completed by choosing multipliers (scaling factors), for instance, kL = a·ki and kR = b·kj for some a, b > 0 and i, j = 1, . . . , 4. The multipliers of kL and kR are selected randomly to correspond to low,
medium, and high risk.
Assume now that at a point (s, t) in the uncertainty space the normal random field of ratio x of load
carrying capacity to empty vehicle weight has the expected value m and standard deviation σ. If m > 0.5 and
σ = (m – 0.5)/1.3 then the probability of x < 0.5 is equal to 0.1 and under the most uncertain conditions, s = t
= 1, the design will meet the corresponding feasibility condition at the 90% confidence level. This condition
will be met at a higher confidence level under less uncertain conditions s, t < 1. We classify this design as
low risk and choose kL(s) = ((m – 0.5)/1.3)ki(s) and kR(t) = ((m – 0.5)/1.3)kj(t). For a = b = (m – 0.5)/0.85 the design will meet the feasibility threshold at the 80% confidence level, which designates a medium risk design. For a = b = (m – 0.5)/0.25 the design will meet the threshold at the 60% confidence level, producing
a high risk design. Similarly, if the expected ratio of the vehicle weight to the weight goal is smaller than 1,
we have kL(s) = ((1 – m)/1.3)ki(s), and kR(t) = ((1 – m)/1.3)kj(t), and the design is low risk. If a = b = (1 –
m)/0.85 then the design is medium risk. If a = b = (1 – m)/0.25 then the design is high risk.
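The divisors 1.3, 0.85, and 0.25 are approximate standard normal quantiles for the 90%, 80%, and 60% levels; a quick check in Python (ours, for verification only):

```python
# Verify that the divisors above are approximate standard normal quantiles:
# P(Z <= 1.3) ~ 0.90, P(Z <= 0.85) ~ 0.80, P(Z <= 0.25) ~ 0.60.
from statistics import NormalDist

Z = NormalDist()
for z, target in [(1.3, 0.90), (0.85, 0.80), (0.25, 0.60)]:
    print(f"P(Z <= {z}) = {Z.cdf(z):.3f}  (target {target})")

# With sigma = (m - 0.5)/z at the most uncertain point (s, t) = (1, 1),
# P(x >= 0.5) = P(Z <= z), which matches the stated confidence level.
```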
For each design, the factored covariance kernel (Eq. (20)) now assumes the following form
K((s1, t1), (s2, t2)) = kL(min(s1, s2)) kR(min(t1, t2)) = a·b·ki(min(s1, s2)) kj(min(t1, t2))    (40)
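If, as elsewhere in the paper, risk is read as performance variance over the space of uncertainties, the risk surface implied by this kernel is the pointwise variance r(s, t) = kL(s) kR(t). A sketch of its evaluation on a grid, with hypothetical power-law profiles ki(u) = u^i standing in for the four profiles of Figure 2, whose exact form we do not reproduce:

```python
# Sketch: evaluate the risk surface r(s, t) = Var X(s, t) = kL(s) * kR(t)
# implied by the factored kernel of Eq. (40). The power-law profiles
# k_i(u) = u**i are hypothetical stand-ins for the profiles of Figure 2.
import numpy as np

def profile(i):
    return lambda u: u ** i  # increasing on [0, 1], as required

def risk_surface(a, b, i, j, n=50):
    s = np.linspace(0.0, 1.0, n)
    t = np.linspace(0.0, 1.0, n)
    # Outer product gives r(s, t) = a*k_i(s) * b*k_j(t) on the grid.
    return np.outer(a * profile(i)(s), b * profile(j)(t))

# E.g., a low-risk design with m = 0.8: a = b = (0.8 - 0.5)/1.3 ~ 0.23
r = risk_surface(a=0.23, b=0.23, i=2, j=3)
```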
Since there are four risk profiles for functions kL and kR for each metric, there are 4⁴ = 256 choices of risk profiles, 3² = 9 choices of expected performance levels, and 3² = 9 choices of risk levels (low, medium, and high). Consequently, this mathematical approach produces 20,736 feasible designs. Recall that the design
team only passes to the management team the feasible designs that are efficient. In order to simplify our
example, we construct a set of sixteen efficient designs by adding one feasible design at a time, checking that
all the designs are efficient. If not, we discard the last choice and introduce a new design chosen at random
from the feasible set and check again.
The identification of efficient designs follows the optimization stage of the performance-based
methodology. Using Eq. (40) for each feasible design and each performance metric, a risk surface is
constructed according to Eq. (35). The pairs of risk surfaces are compared in order to produce 16 non-dominated pairs and, accordingly, 16 efficient designs.
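A sketch of both steps as we read them: since Eq. (35) is not reproduced here, we assume the natural pointwise rule, in which one design dominates another when each of its risk surfaces lies pointwise at or below the corresponding surface of the other, with strict improvement somewhere; draw_feasible_design is a placeholder for the random construction described above.

```python
# Sketch of the non-dominance comparison and the incremental construction of
# an efficient set. Each design is a pair (EV surface, VP surface) of 2-D
# arrays; pointwise comparison of both surfaces is our assumed dominance rule.
import numpy as np

def dominates(a, b):
    le = all(np.all(x <= y) for x, y in zip(a, b))
    lt = any(np.any(x < y) for x, y in zip(a, b))
    return le and lt

def all_efficient(designs):
    return all(not dominates(e, d)
               for i, d in enumerate(designs)
               for j, e in enumerate(designs) if i != j)

def build_efficient_set(draw_feasible_design, size=16):
    """Add one feasible design at a time; discard the last draw whenever the
    collection ceases to be mutually non-dominated. A sketch only: there is
    no safeguard against non-termination for pathological inputs."""
    chosen = []
    while len(chosen) < size:
        chosen.append(draw_feasible_design())
        if not all_efficient(chosen):
            chosen.pop()
    return chosen
```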
5.4 Representation of criteria
For each feasible design, the D2M interface translates the random field of the ratio of load carrying capacity to empty vehicle weight into a random field of vehicle performance, and the random field of the ratio of vehicle weight to weight goal into a random field of economic viability. The translation amounts to converting the
covariance kernel available in the design units with Eq. (40) into the units of assessment criteria. Since this
translation acts on the matrices AL and AR in Eq. (20), we automatically have these matrices also in the
assessment units and can use them as alternatives for EV and VP in the assessment model.
In the example, the units of performance were chosen so the numerical values of the design metrics were
unchanged when translated into assessment units.
5.5 Assessment of designs
To assess the designs identified as efficient in the design process we use the conceptual assessment model
pictured in Figure 3, which shares all of the mathematical complications arising in a more complex model.
Eqs. (38-39) give the responses of the assessment model in terms of the representations of EV and VP and
the inputs F1 and F2.
Effective demand, perhaps for specific vehicle features at attainable price levels, and general market
conditions affecting the firm's ability to raise capital are modeled as inputs F1 and F2. Again, the inputs are
separable random fields of the uncertainties and described in terms of estimated means and covariances.
These inputs are constructed from estimates of the variances of Fi(s, 1) and Fi(1, t), i = 1, 2, in the same way
that the performance surfaces for the designs have been constructed. The variances might be based on
observations or constructed using risk profiles and multipliers. The inputs permit the management team to
explore ``what if'' questions, i.e., how different designs fare under different conditions represented by
effective demand and general market conditions.
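Under the separability assumption, the boundary estimates determine the whole variance surface: Var Fi(s, t) = vL(s) vR(t)/Var Fi(1, 1), where vL(s) = Var Fi(s, 1) and vR(t) = Var Fi(1, t). A sketch of this reading (the boundary-variance estimates below are hypothetical):

```python
# Sketch: recover the variance surface of a separable input field from the
# estimated boundary variances vL(s) = Var F(s, 1) and vR(t) = Var F(1, t).
# Separability gives Var F(s, t) = vL(s) * vR(t) / Var F(1, 1).
import numpy as np

def input_variance_surface(vL, vR, n=50):
    s = np.linspace(0.0, 1.0, n)
    t = np.linspace(0.0, 1.0, n)
    v11 = vL(1.0)  # equals vR(1.0) when the boundary estimates are consistent
    return np.outer(vL(s), vR(t)) / v11

# Hypothetical boundary estimates for an effective-demand input F1.
F1_variance = input_variance_surface(vL=lambda s: 0.04 * s,
                                     vR=lambda t: 0.04 * t)
```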
For each design and each assessment criterion, the covariance kernel of the output (G1 and G2, see Figure
4) available at each point of the uncertainty space produces a risk surface in this space. We generate 16 pairs
of risk surfaces for the criteria VP and EV.
5.6 Decision stage for the assessment process
We follow the decision-making stage of the performance-based methodology. We first identify the efficient
designs by finding the non-dominated pairs of risk surfaces for the criteria VP and EV. Among the 16 designs,
only designs #4 and #7 are efficient (see Figure 9).
In step two we use a preference rule to pick out a preferred design from among the efficient designs. We
now discuss the implementation of two preference rules. The preference might be for the design whose risk
surfaces are closest to the utopia pair (see Figure 10). For design #4, the distance to the utopia pair is 0.8365
while for design #7 this distance is 0.2052. In this case, design #7 would be preferred.
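A sketch of this rule as we read it (the choice of norm is ours for illustration; the paper does not specify it here): the utopia pair is the pointwise minimum of each criterion's risk surfaces over the efficient designs (see Figure 10), and designs are ranked by their distance to that pair.

```python
# Sketch of the utopia-pair preference rule. The Euclidean norm over the
# grid is an illustrative choice of distance.
import numpy as np

def utopia_pair(designs):
    # Pointwise lowest risk per criterion over all efficient designs;
    # designs is a list of (EV surface, VP surface) pairs of 2-D arrays.
    ev = np.minimum.reduce([d[0] for d in designs])
    vp = np.minimum.reduce([d[1] for d in designs])
    return ev, vp

def distance_to_utopia(design, utopia):
    return np.sqrt(sum(np.sum((d - u) ** 2) for d, u in zip(design, utopia)))

# The efficient design minimizing this distance (design #7 in the example)
# would be preferred under this rule.
```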
Another possible preference rule involves consideration of where the efficient designs do better (see Figure
11) or, in other words, where the regions of their non-dominance are located in the space of uncertainties.
Design #4 is less risky for EV almost everywhere in the uncertainty space, and for VP almost nowhere in the
uncertainty space. Design #7 is less risky for EV everywhere except for larger values of s and low to middle
values of t, and for VP almost everywhere in the uncertainty space. In general, design #7 might be preferred.
However, if EV is relatively more important than VP, design #4 might be preferred.
The preference rules need not be communicated to the design team, only the results of applying the rules.
The management team may explore their decision over a range of inputs. These inputs, for instance inputs
reflecting general business conditions, might in fact be meaningless to the design team.
Figure 9. Two pairs of non-dominated risk surfaces;
The left (right) column is for economic viability (vehicle performance) and the upper (lower) row is for
design #4 (#7)
Figure 10. Utopia pair: At each point in the uncertainty space, the utopia surface is the best
that any preliminary model can achieve
5.7 Reduced uncertainties/additional requirements
Performance of the preferred alternative becomes a requirement for the next assessment/design stage. These
requirements are added to Γ0 to become Γ1. The management team may make additional decisions, perhaps
in manufacturing and marketing, to reduce uncertainties based on model inputs to the assessment model for
the preferred design. This completes our example for Stage I. Stage II and succeeding stages would follow
the same pattern. Decomposition can be used to control the complexity of more detailed models.
New uncertainties can be added and old uncertainties modified. At any stage of the development process
the highlighted uncertainties should be the most significant. As the development process proceeds old
uncertainties are resolved and new uncertainties emerge. We would expect that the uncertainties become
more specific, suggesting refinements in the designs offered as alternatives.
Figure 11. Regions of dominance in the space of uncertainties (s, t) for efficient designs:
The darker regions indicate where the design is least risky;
The left (right) column is for economic viability (vehicle performance) and the upper (lower) row is for
design #4 (#7)
In our vision of the design development process the management team provides the interface with
manufacturing, marketing, the customer base, and the external business community composed of both
competitive and supportive elements. As the development process moves forward and decisions are made for
the emerging design, the management team will be able to reduce the uncertainties. For instance, the
customer base and the competitive position of the new design in the market will be more clearly identified.
The narrowing of the uncertainties in the next step will enable more specific metrics and hence more refined
designs.
Some concerns relevant to the decisions might seem appropriately modeled as uncertainties, but those
concerns might be meaningful for only one team or the other. Addressing the question from the management
viewpoint, the team has available models of inputs for assessing alternative designs under different
circumstances. We assume that the impact of the external world on the design assessment (modeled by the
inputs) is not completely known and each input can represent a different set of circumstances under which
the vehicle must perform. The inputs may not be relevant for the design team.
Inputs for the design model would play the same role as inputs for the management model. On the other
hand, a design is to perform well under all foreseeable circumstances, for instance weather conditions.
Therefore, the inputs are simply Wiener fields.
6. Conclusion
This paper presents the theory behind a novel modeling and methodological approach to the complex
systems development process. The approach is comprehensive since it integrates the engineering design
process with the management assessment process within a common decision-making framework and
accounts for conditions of uncertainty and risk. The methodology addresses system robustness in the sense
that only those designs that guarantee the desired expected system performance are feasible, and a feasible
design that guarantees minimum risk system performance over the entire space of uncertainties is preferred.
The methodology also produces and evaluates the feasible designs that could become preferred if another
preference rule were used. The evaluation of preference-dependent design options helps quantify engineering
flexibility.
A different view of uncertainty and risk is presented. Uncertainties are modeled as independent variables
and risk is a function of the uncertainties. Risk models are constructed using limited information and
computational resources, and demonstrate the feasibility of decision methods for guiding the design
development process.
We conclude with a few comments. The design development methodology outlined in this paper
represents a complete rationalization of the development process under conditions of uncertainty and risk,
i.e., modeling and decision processes are complete for both the management and design teams and
communication between the teams is included in the methodology. Numerical methods, the basis for the
decision methodology, are complete and realizable with minimum effort in MatLab. The methodology has
not been implemented in a business/engineering environment, and so the illustrative example presented in the paper must be recognized as an illustration and nothing more.
Acknowledgements
Margaret M. Wiecek’s research has been supported in part by the Automotive Research Center, a U.S. Army
TACOM Center of Excellence for Modeling and Simulation of Ground Vehicles at the University of
Michigan, and by the National Science Foundation, Grant number DMS-0425768. James A. Reneke’s
research has been supported in part by the Center of Advanced Engineering Fibers and Films at Clemson
University. The views presented here do not necessarily reflect those of our sponsors whose support is
gratefully acknowledged.