System identification via quasilinearization and random search
Authors Pillmeier, Rudolf Jacob, 1943-
Publisher The University of Arizona.
Rights Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Link to Item http://hdl.handle.net/10150/318700
SYSTEM IDENTIFICATION VIA QUASILINEARIZATION
AND RANDOM SEARCH
by
Rudolf Jacob Pillmeier
A Thesis Submitted to the Faculty of the
DEPARTMENT OF ELECTRICAL ENGINEERING
In Partial Fulfillment of the Requirements For the Degree of
MASTER OF SCIENCE
In the Graduate College
THE UNIVERSITY OF ARIZONA
1968
STATEMENT BY AUTHOR
This thesis has been submitted in partial fulfillment of requirements for an advanced degree at The University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library.
Brief quotations from this thesis are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Dean of the Graduate College when in his judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author.
SIGNED:
APPROVAL BY THESIS DIRECTOR
This thesis has been approved on the date shown below:
ACKNOWLEDGMENTS
The author wishes to express his appreciation to
Dr. Donald G. Schultz and Dr. James L. Melsa for their
continued interest and helpful guidance during this study,
and to the National Aeronautics and Space Administration
for the support of this research under NASA Grant 490.
TABLE OF CONTENTS

                                                              Page

LIST OF ILLUSTRATIONS ....................................... vi

LIST OF TABLES .............................................. viii

ABSTRACT .................................................... ix

CHAPTER

1. INTRODUCTION ............................................. 1

   1.1 Introduction ......................................... 1
   1.2 Problem Formulation .................................. 2
   1.3 Organization ......................................... 3

2. QUASILINEARIZATION ....................................... 6

   2.1 Introduction ......................................... 6
   2.2 Two-Point-Boundary-Value-Problem ..................... 7
   2.3 Multi-Point-Boundary-Value-Problem ................... 10
   2.4 Computational Procedure .............................. 15
   2.5 Problem Formulation .................................. 20
   2.6 Conclusion ........................................... 23

3. RANDOM SEARCH ............................................ 24

   3.1 Introduction ......................................... 24
   3.2 Paradoxes and Philosophy of Search Techniques ........ 24
   3.3 Problem Formulation and Error Criteria ............... 28
   3.4 Computational Procedure .............................. 31
       3.4.1 Random Search Basic Phase ...................... 39
       3.4.2 Success Phase .................................. 40
       3.4.3 Local Failure Phase ............................ 42
       3.4.4 Global Failure Phase ........................... 43
   3.5 Conclusion ........................................... 43

4. THE LINEAR SYSTEM ........................................ 45

   4.1 Introduction ......................................... 45
   4.2 Example One--Third Order System ...................... 46
   4.3 Example Two--Third Order System with a Zero .......... 54
   4.4 Identification in the Face of Measurement Noise ...... 60
   4.5 Conclusion ........................................... 71

5. THE NONLINEAR PROBLEM .................................... 73

   5.1 Introduction ......................................... 73
   5.2 Product Nonlinearity for a Second Order System ....... 73
   5.3 Van der Pol Equation--The Problem of Integration ..... 75
   5.4 Summary .............................................. 79

6. SUMMARY AND CONCLUSION ................................... 81

   6.1 Summary .............................................. 81
   6.2 Conclusion ........................................... 82

APPENDIX 1. QUASILINEARIZATION PROGRAM ...................... 85

APPENDIX 2. RANDOM SEARCH PROGRAM ........................... 93

APPENDIX 3. QUASILINEARIZATION INPUT DECK ................... 102

APPENDIX 4. RANDOM SEARCH INPUT DECK ........................ 104

LIST OF REFERENCES .......................................... 106
LIST OF ILLUSTRATIONS

Figure                                                        Page

2.1  Flow Diagram for Quasilinearization, QL Program ........ 16
3.1  The Curse of Dimensionality Presented in
     Two Dimensional Space .................................. 26
3.2  Flow Diagram for Random Search, RS Program ............. 32
4.1  Pole Configuration for Example One--Third Order
     Linear Problem ......................................... 47
4.2  Time Response for Example One .......................... 47
4.3  Block Diagram for Example One--Phase Variables ......... 48
4.4  Block Diagram for Example One--Real Variables .......... 48
4.5  Performance Index vs. Number of Iterations for
     Q.L. and R.S. Schemes for Example One .................. 55
4.6  Block Diagram for Example Two--Third Order Linear
     Problem with Zero ...................................... 56
4.7  Pole-Zero Configuration for Example Two ................ 56
4.8  Time Response for Example Two .......................... 58
4.9  Block Diagram for Example Three--Second Order
     Linear System .......................................... 62
4.10 Pole Configuration for Example Three ................... 62
4.11 Time Response for Example Three ........................ 64
4.12 Block Diagram for Example Four--Fourth Order
     Linear System .......................................... 67
4.13 Pole Configuration for Example Four .................... 67
4.14 Time Response for Example Four ......................... 68
5.1  Block Diagram for Example Five--Second Order
     Product Nonlinear System ............................... 74
5.2  Time Response for Example Five ......................... 74
5.3  Time Response for Example Six--Van der Pol Equation .... 77
5.4  Parameter Solution vs. Integration Step Size
     for Q.L. Method ........................................ 77
6.1  Probability of Parameter Identification ................ 83
LIST OF TABLES

Table                                                         Page

3.1  Probability of Accuracy ................................ 29
3.2  Variables for Random Search Program .................... 36
4.1  Comparison of Initial Parameter Variance for
     Random Search Scheme ................................... 50
4.2  Comparison of Q.L. Solution for Example One--
     Phase Variable Representation .......................... 52
4.3  Comparison of Q.L. Solution for Example One--
     Real Variable Representation ........................... 53
4.4  Comparison of Q.L. and R.S. Solutions for
     Example Two ............................................ 59
4.5  Comparison of Random Search Solution for Example
     Two for Various Initial Zero Locations ................. 61
4.6  Comparison of Q.L. and R.S. Solutions for Example
     Three--Noise Free Measurements ......................... 65
4.7  Comparison of Q.L. and R.S. Solutions for Example
     Three--Noisy Measurements .............................. 66
4.8  Comparison of Q.L. and R.S. Solutions for Example
     Four--Noiseless and Noisy Measurement Cases ............ 70
5.1  Comparison of Q.L. and R.S. Solutions for
     Example Five ........................................... 76
5.2  Comparison of Q.L. and R.S. Solutions for
     Example Six ............................................ 78
ABSTRACT
In this thesis the problem of system identification
is considered. Identification means specifically the
problem of determining the coefficients of the differential
equations that govern the dynamic behavior of a system.
The methods of identification investigated are (1) Quasilinearization
and (2) Random Search. These methods are
seen to complement each other. The Random Search method
gives a solution that has a wide range of convergence but
lacks accuracy. The Quasilinearization method gives a
highly accurate solution but has a limited range of
convergence. With both methods only the input and output
records of the system are necessary for identification.
CHAPTER 1
INTRODUCTION
1.1 Introduction
The problems of adaptive and nonlinear control
systems have received much attention in recent years. One
of the important problems in connection with these types of
systems is that of "system identification." In this study identification means specifically the problem of determining
the differential equations that govern the dynamic behavior
of a system. This could further imply that (1) the
coefficients of the equations are unknown but the general
order and form are known or (2) the order of the describing
differential equation, as well as its coefficients, is
unknown. This thesis treats only the first case where the
general order and form of the differential equations are
known and only the coefficients need identification. If
the form is unknown, the complexity of the problem is
increased considerably.
The methods of identification proposed here are
(1) Quasilinearization, Q.L., and (2) Creeping Random
Search, R.S. These methods are chosen because of their
(1) ease of implementation; (2) applicability to nonlinear,
as well as linear, systems with no change in programming;
and (3) absence of special test signals. Both of the
methods proposed in this research are applicable to the
situation where only the input and output records are
available. A knowledge of the internal state variables is
not a necessity. This study indicates that the methods
proposed are very successful in the identification of linear
systems. The identification of nonlinear systems is also
successful for the cases studied. This success, however,
must be viewed in the light of the vast numbers and types
of nonlinearities to be found.
1.2 Problem Formulation
In this study the problems under consideration are
those that can be put into state variable form as
    ẋ(t) = F(x, u, a, t)                                        (1.1)

    y(t) = Hᵀ x(t)                                              (1.2)

    PI = Σ (i = 1 to m) G(tᵢ)                                   (1.3)
where

    x  is an n-dimensional vector representing the state of
       the system

    u  is an m-dimensional vector representing the system
       inputs

    a  is a k-dimensional vector representing the unknown
       parameter constants of the system

    F  is an n-dimensional vector representing the dynamics
       of the system

    H  is an n x 1-dimensional matrix representing the manner
       in which the state variables are combined to form the
       output

    y  is a 1-dimensional vector representing the output of
       the system

    G  is an n-dimensional vector representing the performance
       function

    PI is a scalar function representing the performance
       index of the system

    t  is the independent variable time
The problem is then to determine the coefficients of
equation (1.1) such that, when used with equation (1.2),
the output expression, a scalar-valued performance index,
PI, is minimized.
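The minimization just stated can be made concrete with a small numerical sketch. The following is a hypothetical illustration, not the thesis program: a first-order model, a rectangular (Euler) integrator, and a sum-of-absolute-errors performance index in the spirit of equation (1.3); all names are ours.

```python
import numpy as np

def simulate(a, u, x0, dt, n_steps):
    """Euler-integrate the assumed model xdot = a[0]*x + a[1]*u(t);
    return the output record y = x at each step."""
    x = x0
    ys = []
    for i in range(n_steps):
        ys.append(x)
        x = x + dt * (a[0] * x + a[1] * u(i * dt))
    return np.array(ys)

def performance_index(a, y_obs, u, x0, dt):
    """PI = summation of absolute output errors over the record."""
    y_est = simulate(a, u, x0, dt, len(y_obs))
    return np.sum(np.abs(y_obs - y_est))
```

A candidate parameter vector that reproduces the observed record scores PI = 0; any mismatch raises the PI, which is what the identification schemes of Chapters 2 and 3 minimize.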
1.3 Organization
Chapter 2 develops the theory of quasilinearization.
This is done by first presenting the case of the two-point-
boundary-value-problem. This is then modified and extended
to cover the case of the multi-point-boundary-value-problem
with only one state variable observed. The method is
general and can also be used when all of the state
variables are observed. Use of a variable identification
time length extends the region of convergence of the
algorithm presented in this chapter to solve the multi-
point-boundary-value-problem. Finally the computational
scheme actually employed is presented with the aid of a
flow chart of the FORTRAN program.
The technique of random search is presented in
Chapter 3. The basic philosophy of search schemes is first
introduced using some low order examples to indicate the
complexity of the situation. The actual search algorithm
is discussed by means of a discussion of the flow chart for
the FORTRAN program. The program described attempts to use
statistics of the past search trials’ failures and
successes in order to bias the search in the direction in
which a successful search is most probable.
Several linear examples are presented in Chapter 4.
These are used to demonstrate the determination of the
empirical variables needed for the search program, and
validate the technique used to extend the range of con
vergence for the quasilinearization program. Several
examples are used to compare the accuracy and applicability
of the two methods for noise-free systems and also those
corrupted by measurement noise. Chapter 5 then treats the
case of the nonlinear system by presenting several varied
examples.
Finally Chapter 6 discusses the feasibility of
quasilinearization and random search in the light of the
results of Chapters 4 and 5. Areas for further investigation are presented, as well as the applicability to complex
systems of high order.
CHAPTER 2
QUASILINEARIZATION
2.1 Introduction
The solution of the system identification problem
is considered as a nonlinear-boundary-value-problem. In
this chapter the method of quasilinearization is used as
the basis for the solution to the nonlinear-multi-point-
boundary-value-problem. This method was first suggested by
Kalaba (1959) and later investigated by Kumar and Shridhar
(1964) , Ohap and Stubberud (1965), and Sage and Eisenberg
(1966). This paper duplicates some of their work but also
extends it.
The chapter is divided into four main parts. In
the first part, Section 2.2, the theory of quasilinearization follows Paine's (1967) development for the two-point-
boundary-value-problem. This is then modified and extended
to cover the case of the multi-point-boundary-value-problem
in Section 2.3. The numerical techniques for solving the
computational problem are then considered. The procedure
is presented with the aid of a flow chart in Section 2.4.
Finally Section 2.5 illustrates the method of problem
formulation for the computational procedure.
2.2 Two-Point-Boundary-Value-Problem
The basic concept of quasilinearization is small
signal linearization of the system response about a nominal
path through state space. It is assumed that the system
can be described by the following state equation:
    ẋ = F(x, u, a, t)                                           (2.1)

where the elements of equation (2.1) are as defined in
Chapter 1. The equation ȧ = 0 is now adjoined to the state
equation (2.1) to form:

    ż = F(z, u, t)                                              (2.2)
where z is the adjoined state vector resulting from combining
the state vector x and the parameter vector a. The first n
state variables in this new system represented by equation
(2 .2) are the actual state variables of the original system
described by equation (2 .1 ), while the last k state
variables are the parameters of the original system. This
forms a (n+k)-dimensional system. The dynamics of the
system, however, remain the same.
It must be made clear that the method of quasi-
linearization, as pointed out by Kumar and Shridhar (1964),
is not limited to linear systems. The dynamical equations
need not be linear. In fact the augmented state equations
(2.2) appear nonlinear even for a linear system, as
illustrated in the example problem of Section 2.5. This is
true because in the quasilinearization procedure the constant coefficients, a, are treated as time-dependent variables.
The method of quasilinearization is a successive approximation scheme. An initial condition vector is first selected, called z_K(t_o), where the subscript K designates the approximation number. Equation (2.2) is then used to obtain the (K+1)th approximation from the Kth approximation. In order to do this, the Taylor series expansion about z_K is formed, yielding

    ż_(K+1)(t) = F(z_K, u, t) + [∂F(z_K, u, t)/∂z_K][z_(K+1)(t) − z_K(t)]      (2.3)

Equation (2.3) is a linear approximation to the nonlinear equations (2.2). The quantity ∂F(z_K, u, t)/∂z_K in equation (2.3) is the Jacobian of F(z, u, t). The elements of the Jacobian can be represented by an (N x N) matrix, where N = n + k, the order of the adjoined system. There are no difference terms in (u_(K+1) − u_K) in equation (2.3) because u is assumed known for all time.
Kalaba (1959) shows that the successive approximations determined by equation (2.3) converge quadratically to the solution of equation (2.2), if convergence takes place at all. Quadratic convergence implies that the number of correct digits approximately doubles each iteration. Convergence, however, is not guaranteed, but depends heavily on the initially guessed approximation to the solution.
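The digit-doubling behavior is easy to observe on a scalar analogue. The sketch below is an illustration by analogy (a Newton iteration for x² = c, not the thesis algorithm): like quasilinearization, it linearizes about the current approximation and solves, so its error is roughly squared at each step.

```python
import math

def newton_sqrt(c, x0, iters):
    """Newton iteration for x^2 = c; returns the successive approximations.
    Each step linearizes about the current guess and solves, so the error
    is approximately squared per iteration (quadratic convergence)."""
    xs = [x0]
    for _ in range(iters):
        x = xs[-1]
        xs.append(x - (x * x - c) / (2.0 * x))
    return xs
```

Starting at x0 = 1.5 for c = 2, the errors fall roughly as e, e², e⁴, ..., i.e., the number of correct digits about doubles per iteration, and a handful of iterations suffices -- mirroring the small iteration counts typical of quasilinearization when it converges.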
The general solution to equation (2.3) is

    z_(K+1)(t_f) = Φ_(K+1)(t_o, t_f) z_(K+1)(t_o) + p_(K+1)(t_f)               (2.4)

where Φ_(K+1)(t_o, t_f) is the solution to the homogeneous equation

    Φ̇_(K+1)(t) = [∂F(z_K, u, t)/∂z_K] Φ_(K+1)(t)                               (2.5)

and p_(K+1)(t) is the solution to the particular equation

    ṗ_(K+1)(t) = F(z_K, u, t) + [∂F(z_K, u, t)/∂z_K][p_(K+1)(t) − z_K(t)]      (2.6)

with Φ_(K+1)(t_o, t_o) = I and p_(K+1)(t_o) = 0.
The state transition matrix, Φ(t_o, t_f), relates the final state to the initial state. Equations (2.2), (2.5), and (2.6) are then used to integrate z_K, Φ, and p respectively forward to the terminal time, t_f. The initial time boundary conditions, b_o, and the terminal time boundary conditions, b_f, are then applied to equation (2.4), yielding

    | z_1(t_f)   |                 | z_1(t_o)   |   | p_1(t_f)   |
    |    ...     | = Φ(t_o, t_f)   |    ...     | + |    ...     |             (2.7)
    | z_n+k(t_f) |                 | z_n+k(t_o) |   | p_n+k(t_f) |

Once these simultaneous equations have been solved, the z_n+i(t_o), i = 1, 2, ..., k, are used along with the initial conditions at the initial time, b_o, to form a new initial condition vector. This process is continued until some error criterion is satisfied, i.e., |z_(K+1)(t_o) − z_K(t_o)| ≤ ε, where ε is a predetermined error vector.
2.3 Multi-Point-Boundary-Value-Problem

With the multi-point-boundary-value-problem (MPBVP), as with the two-point-boundary-value-problem (TPBVP), a specific form of the equation is assumed to describe the system under study. The system is observed over some time interval, T, with all inputs and available state variables recorded for future use. The important difference to be noted between the MPBVP and the TPBVP is that with the MPBVP the entire trajectory of one or more of the state variables is known over some time period T. Therefore, instead of being confronted with the problem of just matching some initial and terminal time constraints, the problem is to match the entire trajectory of one or more of the state variables. The observed inputs are applied to the assumed model and the model response compared to that of the actual system using the summation of the absolute value of the errors as the index of performance (PI). If the exact form of the differential equation is unknown, it may be necessary to try several mathematical models and select the one giving the minimum PI as the one best representing the actual system. For the development work to follow, it is assumed that the model form is known and only the coefficients need identification.
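The minimum-PI model selection just described can be sketched in a few lines. This is a schematic illustration only; the candidate "models" here are stand-in callables returning output records, and all names are ours.

```python
def performance_index(y_obs, y_est):
    """Summation of absolute value of errors over the observation record."""
    return sum(abs(a - b) for a, b in zip(y_obs, y_est))

def select_model(candidates, y_obs):
    """Score each candidate model's response against the observed record
    and return the index of the one giving the minimum PI, plus all scores."""
    scores = [performance_index(y_obs, response()) for response in candidates]
    return scores.index(min(scores)), scores
```

In practice each candidate would be a full simulation of an assumed differential-equation form driven by the recorded inputs; the selection rule is unchanged.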
As with the TPBVP it is assumed that the system can be represented by the state equation of the form

    ẋ = F(x, u, a, t)                                           (2.8)

The representation of x, u, a, t and F are as in the previous section or Chapter 1. The vector equation ȧ = 0 is adjoined to equation (2.8) to form the new state equation,

    ż = F(z, u, t)                                              (2.9)

The development to the point of arrival at the general solution is the same as in the previous section and is, therefore, only briefly repeated here for continuity of presentation.
An initial value for the state vector is assumed and called z_(K=0). An updated approximation, z_(K+1), is then found by forming the Taylor series expansion about z_K, thus yielding

    ż_(K+1) = F(z_K, u, t) + [∂F(z_K, u, t)/∂z_K](z_(K+1) − z_K)               (2.10)

The general solution to equation (2.10) is

    z_(K+1)(t) = Φ_(K+1)(t_o, t) z_(K+1)(t_o) + p_(K+1)(t)                     (2.11)

where the homogeneous solution is

    Φ̇_(K+1)(t) = [∂F(z_K, u, t)/∂z_K] Φ_(K+1)(t),    Φ_(K+1)(t_o) = I          (2.12)

and the particular solution is

    ṗ_(K+1)(t) = F(z_K, u, t) + [∂F(z_K, u, t)/∂z_K][p_(K+1)(t) − z_K(t)],
    p_(K+1)(t_o) = 0                                                           (2.13)
It is at this point that the derivations for the MPBVP and the TPBVP diverge.
Instead of solving the set of simultaneous equations represented by equation (2.11) for the initial condition vector z_(K+1)(t_o), as was done in the previous section, form the error function represented by equation (2.14):

    Σ (j = 1 to NDATA) [Y(j) − Hᵀ(Φ(j) z_(K+1)(t_o) + p_(K+1)(j))] = 0         (2.14)

In the above equation Y(j) represents the observed values of the state variables at time t_j, j = 1, 2, 3, ..., NDATA, where NDATA is the number of observation times, H is the output matrix from the equation y = Hᵀz, and the bracketed expression Φ(j) z_(K+1)(t_o) + p_(K+1)(j) is the general solution to equation (2.10), represented by equation (2.11). Equation (2.14) is set equal to zero because it is desired that the error between the observed and estimated system outputs be equal to zero. The effect is the same as taking the partial derivative of the left hand side of equation (2.14) with respect to z and setting it equal to zero. All of the state variables need not be observed (in fact only one is needed). This is a prime difference between this method and the method as presented for the two-point-boundary-value-problem. Equations (2.9), (2.12), and (2.13) are used to integrate z, Φ, and p respectively forward in time from the initial time, t_o, to the time of the first observation of the state variables, t_1. At time t_1 the summation represented by equation (2.14) is formed. The equations are then integrated forward in time from t_1 until the second observation time is reached, at which time the summation represented by equation (2.14) is again formed and added to the original sum formed at the first
observation time. The process is continued until the final observation time is reached. At this time equation (2.14) is solved for a new initial condition vector z_(K+1)(t_o) and the procedure repeated until some convergence criterion is met. The criterion used here is

    CONV = Σ (j = 1 to NDATA) |Y(j) − YEST(j)| ≤ 10⁻⁶                          (2.15)

where YEST(j) is a matrix representing the estimated state variables at the observation times.
In order to make the above scheme computationally tractable, it is necessary to modify equation (2.14) slightly. Equation (2.14) is first pre-multiplied by Φᵀ(j)H. The equation is then separated into its component parts, yielding the representation

    Σ (j = 1 to NDATA) Φᵀ(j)H Y(j) − Σ (j = 1 to NDATA) Φᵀ(j)H Hᵀ p_(K+1)(j)

        − Σ (j = 1 to NDATA) Φᵀ(j)H Hᵀ Φ(j) z_(K+1)(t_o) = 0                   (2.16)

It is now clear that pre-multiplying by Φᵀ(j)H yields an expression in which z_(K+1)(t_o) is pre-multiplied by Φᵀ(j)H Hᵀ Φ(j), a square symmetric matrix. This is required if the equation is to be solved for z_(K+1)(t_o) by pre-multiplying by the inverse of Φᵀ(j)H Hᵀ Φ(j). Regrouping terms and carrying out the required multiplication results in equation (2.17):

    z_(K+1)(t_o) = [Σ (j = 1 to NDATA) Φᵀ(j)H Hᵀ Φ(j)]⁻¹
                   Σ (j = 1 to NDATA) Φᵀ(j)H [Y(j) − Hᵀ p_(K+1)(j)]            (2.17)

The process is continued until the estimated state variable trajectories are within some predetermined limit of the actual observed trajectories.
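In matrix terms, equation (2.17) is a normal-equations solve. A compact numerical sketch is given below; it treats H as an N-vector (so that y = Hᵀz is scalar), and the function and variable names are ours, not those of the thesis program.

```python
import numpy as np

def new_initial_condition(Phis, ps, Ys, H):
    """Equation (2.17): accumulate S1 = sum Phi^T H H^T Phi and
    S2 = sum Phi^T H (Y - H^T p) over the observation times, then
    solve S1 z0 = S2 for the new initial condition vector."""
    N = H.shape[0]
    S1 = np.zeros((N, N))
    S2 = np.zeros(N)
    for Phi, p, Y in zip(Phis, ps, Ys):
        h = Phi.T @ H                 # Phi^T H, an N-vector
        S1 += np.outer(h, h)          # Phi^T H H^T Phi (symmetric rank-1 term)
        S2 += h * (Y - H @ p)         # Phi^T H (Y - H^T p)
    return np.linalg.solve(S1, S2)
```

Each observation contributes a rank-1 term to S1, so at least N observation times are needed before S1 can become nonsingular -- one reason a singular S1 (the KFLAG case of Section 2.4) must be handled.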
2.4 Computational Procedure

This section discusses the actual computational procedure used to carry out the algorithm described in the previous section. The procedure is most clearly presented by describing the program flow chart in Fig. 2.1. The symbology used in this section is the same as that used in the FORTRAN program. The actual FORTRAN program is listed in Appendix 1.

After problem initialization, the computational procedure starts at statement number five. Equations (2.9), (2.12), and (2.13) are used to integrate Z, PHI (corresponding to Φ), and P respectively forward in time. A rectangular integration scheme is used, implementing the update Z(TIME + DELTA) = Z(TIME) + DELTA · Ż(TIME),
16
READDATA
INTEGRATE Z,P,& PHI CALL FORM
INITIALIZEYESPROBLEM MARK
NO100
DIFFT - T (IDATA)-T ime
CALCULATE SI & S2
NOCONV < RM*CONSV 211
TIME -YESTIME + DELTA
IDATAIDATA + 1
TIME < X N O T(IDATA) y r ' N O /^ IDATA > X Y E S
\ NDATA / ^103 209YES
MARK - 1 TIME-T (IDATA)
(a)Fig. 2.1. Flow Diagram for Quasilinearization, Q L , Program
17
NO211 CONVclO
YES
iter>i5
YES PRINTIOLUTION:OG
PRINT \ CONV & IDATA/
CALCULATESI - SI
CALCULATENEU
PARAMETERSET
r2 0 i > N O / k f l a g - > > V E ^ o ^
NO 204KFLAG - 1YES300
NORE-INITIALIZEPARAMETERSILL COND
V PROBLEM
bFig. 2.1.— Continued
where DELTA is a small element of time. If DELTA would carry the running value of time, TIME, past the time of an observation point, T(IDATA), where IDATA is an index indicating the observation point to which the integration is to proceed, a partial time step, DIFFT, is used in place of DELTA. The index MARK is used to indicate whether a full (MARK = 0) or partial (MARK = 1) time step is to be taken. After the equations have been integrated forward to the observation point of interest, the summations S1 and S2 are formed. These are defined as in Section 2.3 as
    S1 = Σ (j = 1 to NDATA) Φᵀ(j)H Hᵀ Φ(j)                                     (2.18)

    S2 = Σ (j = 1 to NDATA) Φᵀ(j)H [Y(j) − Hᵀ P(j)]                            (2.19)
The convergence factor, CONV, is now updated. If it exceeds a scaled value of the previous iteration's convergence factor, RM*CONV, the program branches to that portion of the program where a decision is made whether or not to calculate a new initial condition vector. The scale factor RM is equal to the number of state variables
observed and is an attempt to relax the convergence
constraints slightly for the case when more than one state
variable is observed. If the convergence factor is less
than that of the previous iteration, the integration is
continued to the next observation point. This, in effect,
gives a variable trajectory length to be matched on the
basis of the accuracy of the current initial condition
vector. If the initial condition vector is less accurate
than that used in the previous iteration, the integration
of the state variable trajectory is terminated and a new
initial condition vector calculated.
If the equations have been integrated forward to
the terminal time represented by T(NDATA), and the convergence factor CONV is less than 10⁻⁶, the problem is
considered to have converged and the solution is printed.
If the problem has not converged, ITER is tested to see if
the number of trial solutions has exceeded the predetermined limit. If the number of iterations has exceeded the
limit, the problem is terminated; if not, the convergence
factor, CONV, and trajectory length, IDATA, are printed
out. Since the convergence property of quasilinearization
is quadratic, it typically converges in a small number of
iterations, if at all. The above logic prevents the
program from performing needless searching when the problem
is outside the range of convergence.
The new initial condition vector, z_(K+1)(t_o), is calculated by premultiplying S2 by S1⁻¹, the inverse of S1.
The inverse of SI is calculated by the Gauss-Jordan
elimination method using the largest element in a column
as the pivot element. This is carried out in double
precision arithmetic to increase accuracy as suggested by
Sage and Eisenberg (1966). If the matrix becomes singular
at any point in the inversion process, the column in which
the singularity occurs is recorded by the index KFLAG. A
non-singular matrix is indicated by KFLAG equalling zero.
If the matrix is singular in the first column, indicated
by KFLAG = 1, the problem is terminated as an ill-conditioned problem. Should the matrix be singular in any
other column than the first, the initial condition vector
is calculated for the variables corresponding to the nonsingular portion of the matrix. The variables corresponding
to the singular portion of the matrix are returned to their
initial value. If the change in the initial condition
vector from the Kth iteration to the (K+1)st iteration is
less than a predetermined limit, the problem is terminated;
if not, the procedure is continued using the new initial
condition vector.
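The inversion step can be sketched in a few lines. This is a simplified illustration of Gauss-Jordan elimination with largest-element column pivoting and a KFLAG-style singularity report; it is not the double-precision FORTRAN routine of Appendix 1, and the names are ours.

```python
def gauss_jordan_inverse(A, tol=1e-12):
    """Invert A by Gauss-Jordan elimination on an augmented [A | I] matrix,
    pivoting on the largest element in each column. Returns (inverse, 0) on
    success, or (None, column_number) if a singular column is encountered."""
    n = len(A)
    M = [[float(A[i][j]) for j in range(n)] +
         [1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        # pick the largest element in the column at or below the diagonal
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < tol:
            return None, col + 1              # singular: report the column
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [v / d for v in M[col]]      # normalize the pivot row
        for r in range(n):                    # eliminate the column elsewhere
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
    return [row[n:] for row in M], 0
```

Unlike this sketch, the thesis program continues with the nonsingular portion of the matrix when KFLAG points past the first column, resetting only the offending variables.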
2.5 Problem Formulation
The method of problem formulation for the method of
quasilinearization is illustrated here by means of a second
order linear example. Assume that the state equations
describing the system are represented in phase variables
as
    ẋ₁(t) = x₂(t)
    ẋ₂(t) = a₁ x₁(t) + a₂ x₂(t) + u(t)                          (2.20)

    y(t) = x₁(t)                                                (2.21)
As described in Section 2.3, a new state equation is formed
by adjoining the vector ȧ = 0 to the original state equation. The variables are also relabeled to ease the
algebraic manipulation. Accordingly let

    z₁ = x₁,    z₃ = a₁
    z₂ = x₂,    z₄ = a₂
The adjoined state equations then become

    ż₁(t) = z₂(t)
    ż₂(t) = z₃(t) z₁(t) + z₄(t) z₂(t) + u(t)
    ż₃(t) = 0
    ż₄(t) = 0                                                   (2.22)

    y(t) = z₁(t)                                                (2.23)
The state equations (2.22) are the nonlinear state equations referred to in Section 2.2. These are nonlinear even
though the original system equations (2.20) are linear.
The second set of equations needed for the quasilinearization method are those designated as the Jacobian
in Sections 2.2 and 2.3. The Jacobian is determined for
the adjoined system as represented by equations (2.22).
Its elements are

    ∂ż₁/∂z₁ = 0     ∂ż₁/∂z₂ = 1     ∂ż₁/∂z₃ = 0     ∂ż₁/∂z₄ = 0
    ∂ż₂/∂z₁ = z₃    ∂ż₂/∂z₂ = z₄    ∂ż₂/∂z₃ = z₁    ∂ż₂/∂z₄ = z₂       (2.24)
    ∂ż₃/∂z₁ = 0     ∂ż₃/∂z₂ = 0     ∂ż₃/∂z₃ = 0     ∂ż₃/∂z₄ = 0
    ∂ż₄/∂z₁ = 0     ∂ż₄/∂z₂ = 0     ∂ż₄/∂z₃ = 0     ∂ż₄/∂z₄ = 0
The equations represented by equations (2.22) and
(2.24) indicate that 2N + N² equations are needed to
totally formulate an Nth order problem, where N is the order
of the adjoined system.
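For this example the adjoined dynamics (2.22) and Jacobian (2.24) transcribe directly into code; the sketch below does exactly that (the function names are ours).

```python
def f_adjoined(z, u):
    """Adjoined state equations (2.22), with z = (x1, x2, a1, a2).
    The parameter states z3, z4 have zero derivatives."""
    z1, z2, z3, z4 = z
    return [z2,
            z3 * z1 + z4 * z2 + u,
            0.0,
            0.0]

def jacobian(z):
    """Jacobian (2.24) of the adjoined system, evaluated along z.
    Row i, column j holds the partial of zdot_i with respect to z_j."""
    z1, z2, z3, z4 = z
    return [[0.0, 1.0, 0.0, 0.0],
            [z3,  z4,  z1,  z2],
            [0.0, 0.0, 0.0, 0.0],
            [0.0, 0.0, 0.0, 0.0]]
```

Note how the second row of the Jacobian depends on the current trajectory z: this is the state-dependence that makes the adjoined system, and hence the linearization, nontrivial even though (2.20) is linear.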
The problem is now to determine values for the
state vector z such that the boundary conditions are
satisfied. In the example above this means that it is
necessary to identify the system parameters a₁ and a₂,
represented by z₃ and z₄ respectively, and the initial
conditions for the unobserved state variables, i.e.,
z₂(t_o) for the case where only z₁ is observed. The boundary
conditions to be matched are the values of the observed
state variables at the observation times, i.e., the state
variables of the original system state equations. These,
in effect, represent the trajectory of the observed state
variables through state space.
The last item that is needed for the total definition of the problem is the record of the system input as a
function of time. This is readily available and presents
no complications.
2.6 Conclusion
This chapter has presented the development of the
method of quasilinearization for the case of the MPBVP.
It has been pointed out that the method converges
quadratically if convergence takes place at all. The
problem formulation has indicated that 2N + N² equations
are needed to totally formulate the problem. This is a
disadvantage, especially for high order systems. In the
next chapter the method of random search is developed and
shown to complement the method of quasilinearization.
CHAPTER 3
RANDOM SEARCH
3.1 Introduction
Search techniques are basically trial and error
schemes. Methods such as golden section, Fibonacci,
gradient, and random search have been presented and discussed by Wilde and Beightler (1967), Balakrishnan and
Neustadt (1964), and others. Random searches are often
superior to gradient and other methods when little information is known about the performance function surface. It
is also necessary to compute the performance index only
once per iteration with the random search method. The
procedure outlined in this chapter was discussed by Sabroff
et al. (1965) and later in some detail by Gelopulos (1967).
This chapter discusses in Section 3.2 some of the
paradoxes and philosophies connected with search methods.
Section 3.3 then presents a discussion of the problem
formulation and error criterion, while Section 3.4 presents
the computational algorithm for the creeping random search
by way of a discussion of the flow chart for the program.
3.2 Paradoxes and Philosophy of Search Techniques
The search technique selects a set of parameter
values from the defined parameter space. The performance
index associated with this set of parameter values is then
evaluated and compared with that of the previous successful
search trial. If the performance index has improved, the
new parameter set is kept and the old one discarded. In
this manner it is hoped to reduce the parameter space in
which the true parameter set may reside. This, however,
presents the paradox of size.
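The accept/reject logic just described can be sketched in a few lines. This is a minimal creeping random search for illustration only -- the Gaussian step, the fixed step size, and the names are our assumptions, not the phased program of Section 3.4, which additionally biases the step direction using past successes and failures.

```python
import random

def creeping_random_search(pi, a0, sigma, n_trials, seed=0):
    """Perturb the current best parameter set with Gaussian steps and
    keep a trial set only if the performance index PI improves."""
    rng = random.Random(seed)
    best_a, best_pi = list(a0), pi(a0)
    for _ in range(n_trials):
        trial = [a + rng.gauss(0.0, sigma) for a in best_a]
        trial_pi = pi(trial)
        if trial_pi < best_pi:        # success: keep the new parameter set
            best_a, best_pi = trial, trial_pi
    return best_a, best_pi
```

Only one PI evaluation is needed per trial, which is the economy noted in Section 3.1; the price is the slow, probabilistic shrinkage of the uncertainty region discussed next.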
Consider that a unit segment of line is searched
into a final interval of uncertainty of 10%, i.e., a
length of 0.1 units. This interval looks relatively small
compared to the original. Consider then that a unit square
is searched into an area which is 6.25% of the original
area. This could be thought of as a smaller square
(0.0625)^(1/2) = 0.25 units on a side, as represented by the
shaded area in Fig. 3.1(a). This indicates that each of
the parameter values could be anywhere within a 25% interval
of its total range even though only 6.25% of the original
space is being considered. The problem is compounded if
the situation depicted in Fig. 3.1(b) is considered. The
space considered is still only 6.25% of the original area
but one of the parameters is now known only within a range
of 50% of its total interval.
The effect is compounded still further if a
hypercube of eight variables is investigated. Let the
space of uncertainty be 10% of the original space. This
could then be represented as a hypercube that would, measure
Fig. 3.1. The Curse of Dimensionality Presented in Two-Dimensional Space
(0.1)^(1/8) = 0.75 units on a side. This paradox arises because the first-degree measure of percentage is being compared to a multidimensional measure, namely volume. Bellman refers to this difficulty with the vastness of hyperspace as the "curse of dimensionality" in Bellman and Kalaba (1965).
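The arithmetic behind this paradox is easily verified. The following sketch (in Python, for illustration only; the function name is ours) computes the side length of a sub-hypercube occupying a given fraction of a unit hypercube:

```python
def side_length(fraction, dims):
    """Side of the sub-hypercube that occupies `fraction` of a unit hypercube."""
    return fraction ** (1.0 / dims)

# A 10% interval on a unit line segment is short ...
print(round(side_length(0.10, 1), 3))    # 0.1
# ... a 6.25% area of the unit square is 0.25 units on a side ...
print(round(side_length(0.0625, 2), 3))  # 0.25
# ... and 10% of an 8-dimensional unit hypercube is about 0.75 on a side.
print(round(side_length(0.10, 8), 2))    # 0.75
```

Each parameter can thus still wander over most of its individual range even though the joint volume of uncertainty is small.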
Even though the curse mentioned above is indeed
awesome, it is still useful to get an approximation of the
requirements needed in reducing the original parameter
space to some portion of its original size. For this
purpose consider a three dimensional unit space. Each
parameter is then divided into 10 intervals of 0.1 units in
length, or a total of 1000 cubic cells. In general, if m represents the number of divisions into which each parameter is divided and n represents the number of parameters, there will be m^n cells in the parameter space. Let each cell take on its average value. If 100 of the cells are good, in that they represent the actual parameter set reasonably well, and it is desired to find one good one, then the probability on each choice is 100/1000 = 0.1. The probability of not being in the best 100 is 1 - 0.1 = 0.9. For two choices the probability of two failures would be (0.9)^2 = 0.81. In general the probability P(0.1), i.e., the probability of finding a cell in the best 10%, is 1 - (0.9)^n, where n is the number of trials necessary to achieve the desired probability. The general formulation
is then

    P(f) = 1 - (1 - f)^n

where f represents the fraction of the total possibility.
Table 3.1 presents a brief indication of the number of
trials needed for various accuracies and fractions of total.
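The number of trials required for a given confidence follows from solving P(f) = 1 - (1 - f)^n for n. A brief sketch (Python, for illustration; the function name is ours):

```python
import math

def trials_needed(P, f):
    """Smallest n with 1 - (1 - f)**n >= P, i.e. n >= ln(1 - P) / ln(1 - f)."""
    return math.ceil(math.log(1.0 - P) / math.log(1.0 - f))

print(trials_needed(0.99, 0.1))    # 44
print(trials_needed(0.90, 0.05))   # 45
print(trials_needed(0.80, 0.025))  # 64
```

These values agree with the corresponding entries of Table 3.1.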
3.3 Problem Formulation and Error Criteria
The general problem is assumed formulated in the following form:

    ẋ = F(x, u, a, t)                                             (3.1)
    y = h(x, a, t)                                                (3.2)
    PI = Σ_{i=1}^{m} [Y(t_i) - YEST(t_i)]'[Y(t_i) - YEST(t_i)]    (3.3)

where the elements of equations (3.1), (3.2), and (3.3) are as defined in Section 1.2. It is assumed that the general form of the system, state equation (3.1), is known but the coefficients, a, are unknown. The problem is, therefore, to determine the coefficients of equation (3.1) and also determine the initial condition vector for the unobserved state variables. No supplemental equations or formulation are necessary. The given system is observed and its inputs and all available state variables recorded as a function of time for future use. An initial set of parameters is selected. This set includes the unobserved state
Table 3.1. Probability of Accuracy

                        P(f)
    f        0.8     0.9     0.95    0.99
    0.1       16      22       29      44
    0.05      32      45       59      90
    0.025     64      91      119     182
    0.01     161     230      299     459
    0.001    700    2326     3026    4651
variables' initial conditions and the coefficients for the state equation. The state equation (3.1) is then integrated forward in time and the performance index, PI, calculated. The performance index used here is

    PI = Σ_{i=1}^{NDATA} [Y(t_i) - YEST(t_i)]'[Y(t_i) - YEST(t_i)]    (3.4)

where NDATA is the number of observation times
      Y is the observed output vector
      YEST is the estimated output vector generated using the assumed parameter set and the output expression (3.2)

In order to facilitate a comparison of the methods of quasilinearization and random search, the constants of the state equation are relabeled to conform with those used in the chapter on quasilinearization. A second order linear system whose original state equations were

    ẋ1(t) = x2(t)
    ẋ2(t) = a1·x1(t) + a2·x2(t) + u(t)

would, therefore, appear as

    ẋ1(t) = x2(t)
    ẋ2(t) = x3·x1(t) + x4·x2(t) + u(t)
However, there are still only two state equations to be integrated and no equations to represent the Jacobian. The total number of equations needed to formulate the problem is n, the order of the original system.
The search algorithm then selects a new parameter
set and repeats the computation of the performance index.
On the basis of past failures and successes in reducing PI,
the parameter space is searched until the estimated output
vector, YEST, sufficiently approximates the observed output
vector.
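The accept/reject skeleton just described can be sketched as follows. This is an illustrative modern rendering in Python, not the FORTRAN program of Appendix 2, and the quadratic test surface is ours:

```python
import random

def random_search(pi_of, p0, sigma, n_trials, seed=1):
    """Accept/reject creeping random search: a trial parameter set is
    kept only if it reduces the performance index."""
    rng = random.Random(seed)
    p = list(p0)
    best = pi_of(p)
    for _ in range(n_trials):
        trial = [v + sigma * rng.gauss(0.0, 1.0) for v in p]
        pi = pi_of(trial)
        if pi < best:          # success: keep the new set, discard the old
            p, best = trial, pi
    return p, best

# a toy quadratic performance surface with its minimum at (2, -1)
pi_of = lambda q: (q[0] - 2.0) ** 2 + (q[1] + 1.0) ** 2
p, best = random_search(pi_of, [0.0, 0.0], 0.3, 2000)
```

Note that only one performance index evaluation is made per iteration, and no gradient of the surface is ever required.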
3.4 Computational Procedure
A generalized creeping random search technique is
described which can be used in any type of identification
or optimization problem that depends on the minimization of
some type of performance index. The basic search routine
can be divided into the following four phases:
1. basic phase (calculation of performance index)
2. success phase
3. local failure phase
4. global failure phase
The operation or strategy of each phase is described with the aid of the flow chart given in Fig. 3.2. A detailed tabulation of all variables used in the program and flow chart is given in Table 3.2. The FORTRAN program is listed in Appendix 2.
Fig. 3.2. Flow Diagram for Random Search, RS, Program: (a) Basic Phase
Fig. 3.2--Continued. Flow Diagram for Random Search, RS, Program: (b) Success Phase
Fig. 3.2--Continued. Flow Diagram for Random Search, RS, Program: (c) Local Failure Phase
35
NSS - 0
60 H EV nt > NTMx\^P *.( 37
PRINT EXIT
CONDITION
> o 17NSFC
SGS - 0 INCREASE SG
CALC. SGSNSFC
NO ^ sg s >SGMX
/ PRINT EXIT
\ CONDITIONNSFC - 0
(d) Global Failure PhasFig. 3*2.— Continued♦ Flow Diagram for Random Search,
RS, Program
36
Table 3.2. Variables for Random Search Program

B(N).....bias value used to cause the parameter selection to favor the direction of previous successful trials

DP(N)....actual amount of deviation of a given parameter from its last successful value; used to calculate a new trial parameter value

FNP......floating point value of NP

GROW.....% value by which the size of the parameter space searched on a given iteration is increased after a Global failure or Local success

MD.......index used to indicate Local (MD=0) or Global (MD=1) search mode
NSF......running index for number of Local successive search failures performed before returning to Global search mode

NFSC.....index to indicate whether this is the first (NFSC=0) or second (NFSC=1) Local or Global search at a given step size. With the Global search mode the step size is increased if we get two successive failures. In the Local search mode, the step size is decreased if a failure is obtained in both search directions.

NP.......number of parameters to be identified

NRN......running index used to indicate which random number is used in the current calculation

NRNMX....maximum number of random numbers in the data bank

NRNX.....index to indicate where random sequence is started

NSFMX....maximum number of Local searches performed

NSS......index to indicate whether this is the first (NSS=0) or second (NSS=1) successful iteration at a given step size. In the Local search mode the step size is increased if we get two successful iterations at a given step size.
N T .......running index of the number of Random searchiterations performed
NTMX.... maximum number of random search iterations to beperformed
P(N).....current successful parameter value; point from which we start our search
P I .......current value of the performance index
PIMIN....minimum acceptable value of the performance index
PIP......value of the performance index at the current successful parameter value

PISV.....value of the performance index at the previous successful parameter value

PP(N)....trial value of the parameter

PPISV....value of the performance index at the second previous successful parameter value
PRT...... percentage of historical bias retained
RN(NRN)..a random number from the set of Gaussian random numbers

SG(N)....variance of parameter; the amount of deviation from the last successful parameter value for the current iteration's parameter selection

SGMX.....maximum value of SGS allowed

SGS......summation of the squares of the variances for each parameter
SHRNK....percentage value by which the size of the parameter space searched on a given iteration is decreased if a Local failure occurs
SSG(N)...value of SG(N) on the first successful search iteration when starting in the Local search mode. When exiting from the Local search mode to the Global search mode, this value of SG(N) will be used on the first Global search iteration. This prevents retracing a portion of the parameter space already searched.

SSTEP....value of STEP on the first successful Local search

STEP.....current value of the perturbation step size used in calculating a new parameter value
XI.......initial value of parameter

XL.......lower limit of parameter space to be searched

XU.......upper limit of parameter space to be searched

3.4.1 Random Search Basic Phase
This phase is started by calculating the performance index, PI, for the initially guessed parameter set. A different random variation, DP(N), is then made for each parameter in the parameter set. The PI associated with this trial set is computed and compared to that of the last successful trial. If PI has not been improved, another random variation is made. If PI has been reduced, the trial parameter set replaces the original set and the process is continued.

The random variations are obtained from a group of stored random numbers, RN(NRN), that are Gaussian distributed with a zero mean and a variance of one. The initially guessed set of parameter values is used as a starting point for the local search mode. All searches are initiated from the local search mode. It is assumed that the probability of finding the true set of parameter values is also Gaussian distributed with a mean that is equal to the initial set of parameter values. The initial variance of the Nth parameter, SG(N) (deviation from the initial parameter setting), was determined experimentally and is explained in Chapter 4. It is changed according to whether a success or failure is achieved in the attempt to find the true parameter value.
The actual deviation factor, DP(N), determines both the magnitude and direction of the trial parameter relative to its initial value. The factor is individually calculated for each parameter according to the equation:

    DP(N) = SG(N) * RN(NRN) + B(N)                    (3.5)

where DP(N) is the deviation factor for the Nth parameter
      SG(N) is the variance of the trial parameter from its mean
      RN is the random number which determines the magnitude and direction of DP(N)
      B(N) is a bias factor determined from past successful trials
      NRN is an index that indicates the position in the random number sequence

If the trial value of the parameter, PP(N), is greater than its predefined upper limit, XU(N), or less than its predefined lower limit, XL(N), the respective boundary value is used as the trial parameter value. Decisions are now made on the basis of whether a detriment or improvement was achieved in the value of the performance index.
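The trial-parameter calculation may be sketched as follows (Python, illustrative only; we assume, as Table 3.2 describes, that the trial value is the last successful value plus DP(N)):

```python
import random

rng = random.Random(0)  # stands in for the stored bank of Gaussian numbers

def trial_value(p, sg, b, xl, xu):
    """One trial parameter: last successful value plus a Gaussian
    deviation and bias (equation 3.5), clipped to the search limits."""
    dp = sg * rng.gauss(0.0, 1.0) + b   # DP(N) = SG(N)*RN(NRN) + B(N)
    pp = p + dp
    return min(max(pp, xl), xu)         # enforce XL(N) <= PP(N) <= XU(N)

pp = trial_value(p=-2.0, sg=0.5, b=0.0, xl=-5.0, xu=0.0)
```

The clipping step reproduces the boundary test of the basic phase: a trial that leaves the defined parameter space is replaced by the nearer boundary value.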
3.4.2 Success Phase

There are two modes of operation by which a success can be achieved. The first mode of operation is that of the Global mode (MD = 1). If a success is found, the parameter set is updated. If in the global search mode, a local search (MD = 0) is initiated. Along with the updating, a bias factor, B(N), is calculated which influences the direction and magnitude of the deviation factor for the next set of trial parameters. The bias factor is calculated by the formula:

    B(N) = DP(N) + PRT*[B(N) - DP(N)]                 (3.6)

where B(N) is the bias factor for the Nth parameter
      DP(N) is the deviation factor
      PRT is the percentage of historical bias retained

The bias causes the search to favor the direction of past successful local searches. If there have been two successive successes obtained while in the local search mode (NSS = 1) the step size, STEP, is increased for the next calculation of the deviation factor. STEP is an index that indicates the relative magnitude of the deviation factor, DP(N). This is included in an attempt to speed the problem solution.

A significance test is also included. The importance of the test is to determine whether the value of PI is significantly better than the previous value of PI. If it is, the local search procedure is reinitialized. Oftentimes changes in parameter value cause such a small improvement in PI as not to warrant the time required to continue the search in that particular area of the parameter space.
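The success-phase bookkeeping may be sketched as follows (Python, illustrative; our reading of equation (3.6) is a blend of the newest deviation with the retained historical bias, and the step growth factor of 1.5 is an assumption, not a value from the thesis):

```python
def success_update(dp, b, prt, step, nss):
    """Bias and step-size update after a successful local trial.
    The new bias mixes the latest deviation DP with the old bias B,
    retaining a fraction PRT of the history; STEP grows only after
    two successes in a row (NSS = 1)."""
    b_new = dp + prt * (b - dp)        # B(N) = DP(N) + PRT*[B(N) - DP(N)]
    if nss == 1:
        step *= 1.5                    # illustrative growth factor
        nss = 0
    else:
        nss = 1
    return b_new, step, nss

b, step, nss = success_update(dp=0.2, b=0.0, prt=0.5, step=1.0, nss=0)
```

With PRT = 0 the bias simply tracks the last successful deviation; with PRT near 1 the historical direction dominates.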
3.4.3 Local Failure Phase

If a failure is encountered while in the local search mode, NSF, the number of successive failures index, is incremented. If the failure encountered was the first one with a given DP(N), indicated by NFSC equalling zero, the direction of DP(N) is reversed and the performance index is again evaluated. If on this trial a failure again occurs, NFSC is set equal to zero and the variance of the parameter space is decreased since it is likely that the search has gone too far from the area where a success is most likely to be found. This does not mean that the portion of the parameter space lying far from the mean value of the space will not be searched. It is searched less thoroughly since the likelihood of finding a successful set of parameter values in this area is reduced. The rate at which the variance is decreased is determined by the formula:
    SHRNK = 1 + (1/2)^FNP                             (3.7)

where SHRNK is the rate at which the variance is reduced
      FNP is the number of parameters in the parameter space

This factor attempts to take into account the dependence of the search scheme on the dimensionality of the problem.
This was pointed out in Section 3.2 of this chapter. If at this point the variance of the parameter space has become so small as to cause the value of the deviation factor to be essentially zero, the local search is discontinued and the global search initiated.
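This local-failure strategy may be sketched as follows (Python, illustrative; the reverse-then-shrink logic follows the flow chart of Fig. 3.2(c), while the exponent reading of equation (3.7) is our assumption):

```python
def local_failure(dp, sg, step, nfsc, fnp, stmin):
    """Local-failure strategy: the first failure at a step size reverses
    the deviation; a second failure shrinks the searched variance by
    SHRNK = 1 + (1/2)**FNP and tests for exit to the global mode."""
    if nfsc == 0:
        return -dp, sg, step, 1, False       # reverse DP and try again
    shrnk = 1.0 + 0.5 ** fnp                 # shrink rate, eq. (3.7)
    sg, step = sg / shrnk, step / shrnk      # search a smaller neighborhood
    go_global = step < stmin                 # step essentially zero: give up
    return dp, sg, step, 0, go_global

dp, sg, step, nfsc, go_global = local_failure(0.3, 1.0, 1.0, 0, fnp=3, stmin=1e-3)
```

Because SHRNK approaches 1 as FNP grows, the variance is contracted more gently in high-dimensional problems, in keeping with the curse of dimensionality discussed in Section 3.2.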
3.4.4 Global Failure Phase

The motivation of the global phase of the search technique is that either a local minimum value for PI has been found or does not exist in the area being searched, and it is necessary to search the remaining portion of the parameter space in the most efficient manner possible to discover if there are minima in other parts of the parameter space. The basic strategy, therefore, is that whenever a failure occurs, the variance of the parameter space searched is increased. The rate, GROW, at which the parameter space variance is increased is equal to 1/SHRNK. The strategy is continued until either a success is found, in which case the program returns to the local search mode, or until the variance of the parameter space becomes so large as to render it hopeless of finding a success, i.e., until SG exceeds SGMX, the maximum allowable variance. In this case the problem is terminated.
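One global-failure update may be sketched as follows (Python, illustrative; since GROW = 1/SHRNK, enlarging the variance at rate GROW is written here as multiplication by SHRNK):

```python
def global_step(sg, fnp, sgmx):
    """One global-failure update: enlarge each parameter's variance at
    rate GROW = 1/SHRNK, then compare the summed squared variances,
    SGS, against SGMX to decide whether to terminate the search."""
    shrnk = 1.0 + 0.5 ** fnp
    sg = [s * shrnk for s in sg]       # dividing by GROW = 1/SHRNK
    sgs = sum(s * s for s in sg)       # SGS, per Table 3.2
    return sg, sgs, sgs > sgmx         # True means print exit condition

sg, sgs, stop = global_step([0.5, 0.5], fnp=2, sgmx=100.0)
```

Repeated failures therefore widen the searched region geometrically until either a success restarts the local mode or SGS crosses SGMX.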
3•5 Conclusion
The random search program presented in this chapter
has been shown to require no additional equations other
than the original system state equations. This is to be
contrasted with the method of quasilinearization, which requires the Jacobian of the state equations, as well as the adjoined system state equations.
The accuracy of the random search solution depends
heavily on the number of trial searches performed. This
can become quite high if high accuracy is desired for a
high order system. Remember the "curse of dimensionality."
The ability of the search method to identify the
system parameters is insensitive to the shape or contour of
the performance function surface. This is because the
search technique is a non-analytic type of procedure and
does not use the performance function for anything but an
index.
Chapter 4 demonstrates the usefulness of the random
search method by considering actual examples.
CHAPTER 4
THE LINEAR SYSTEM
4.1 Introduction
Linear example problems are used in this chapter as
a vehicle to demonstrate the validity and usefulness of the
quasilinearization and random search methods in the problem
of system identification. As mentioned in Chapters 2 and 3, the state vector under discussion is the one formed by adjoining the parameter vector, a, to the normal state vector.
In order to judge the two methods fairly, the best
program available for each method is used in the comparison.
This is done by first demonstrating, in Section 4.2, the
effect of the initial parameter variance, SG(N), on the success of the random search method in system identification. The variance showing the most promise is then used in the subsequent problems. The same example problem is then used to demonstrate the usefulness of a variable length observation record for the quasilinearization method. The methods of quasilinearization and random search are then compared on the basis of number of iterations required to reduce the performance index, PI, to a certain value.
Section 4.3 adds a zero to the problem discussed in Section 4.2 and discusses the results for the two methods under consideration.
A second order system with a step input is used in
Section 4.4 to study the effect of measurement noise on the
capability of the quasilinearization and random search
methods to identify the system under study. A fourth order
system is considered to confirm the results obtained from
the second order case.
4.2 Example One--Third Order System

In order to determine whether or not the initial parameter variance used in the random search method has a noticeable effect on the problem solution, the third order system with transfer function

    G(s) = 2 / (s^3 + 2s^2 + 2s + 2)
is considered. The system has one real pole and one set of complex conjugate poles as represented in Fig. 4.1. The total identification record for the output variable, x(1) = y, is presented in Fig. 4.2. For this particular experiment the phase variable representation in Fig. 4.3 is chosen with the result that the state equations are
Fig. 4.1. Pole Configuration for Example One--Third Order Linear Problem (poles at -1.54 and -0.22 ± j1.11)
Fig. 4.2. Time Response for Example One
Fig. 4.3. Block Diagram for Example One--Phase Variables

Fig. 4.4. Block Diagram for Example One--Real Variables
    ẋ(1) = x(2)
    ẋ(2) = x(3)
    ẋ(3) = x(4)x(1) + x(5)x(2) + x(6)x(3)
    ẋ(4) = ẋ(5) = ẋ(6) = 0

The first three state variables are the normal state variables usually considered with the state variable representation. The last three state variables, x(4), x(5), and x(6), are the adjoined constant parameters which are to be identified. Because only the x(1) record is available, it is also necessary to identify the initial conditions of the state variables x(2) and x(3).
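For a concrete view of what one performance-index evaluation involves here, the following sketch integrates these state equations by fourth-order Runge-Kutta and forms the index of equation (3.4) for a trial parameter set (Python, illustrative; u = 0 and x(1)(0) = 2.0 are taken from the figures, while the step size and record length are our assumptions):

```python
def simulate(a, x0, dt=0.01, t_end=5.0):
    """Integrate x1' = x2, x2' = x3, x3' = a1*x1 + a2*x2 + a3*x3
    (the adjoined parameters held constant) and return the output
    record x1(t) by fourth-order Runge-Kutta."""
    def f(x):
        return [x[1], x[2], a[0]*x[0] + a[1]*x[1] + a[2]*x[2]]
    x, ys = list(x0), []
    for _ in range(int(t_end / dt)):
        k1 = f(x)
        k2 = f([x[i] + 0.5*dt*k1[i] for i in range(3)])
        k3 = f([x[i] + 0.5*dt*k2[i] for i in range(3)])
        k4 = f([x[i] + dt*k3[i] for i in range(3)])
        x = [x[i] + dt*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6 for i in range(3)]
        ys.append(x[0])
    return ys

# observed record with the true parameters, PI for a trial guess
y_obs = simulate([-2.0, -2.0, -2.0], [2.0, 0.0, 0.0])
y_est = simulate([-3.0, -3.0, -3.0], [2.0, 0.0, 0.0])
pi = sum((yo - ye) ** 2 for yo, ye in zip(y_obs, y_est))
```

The random search of Chapter 3 simply repeats this integrate-and-compare step, perturbing the guessed coefficients and initial conditions until PI is sufficiently small.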
The experiment was carried out for three sets of initial parameter guesses as listed in Table 4.1. Each set of guesses was then considered for the five different parameter variances, 0.25|P(N)|, 0.50|P(N)|, 0.75|P(N)|, 1.0|P(N)|, and 1.0, where |P(N)| represents the absolute value of the Nth parameter. Here, N is the order of the adjoined system. The results are presented in Table 4.1. Indicated are the values of performance index, PI, achieved, as well as the number of iterations, ITER, and the running time required for each problem. All problems were run on a CDC 6400 computer with running time indicated in seconds. Although it is difficult to make vast generalizations, it appears that choosing SG(N) equal to 0.25|P(N)| gives the best consistent results for the random
Table 4.1. Comparison of Initial Parameter Variance for Random Search Scheme

SG(N)        x(2)      x(3)     x(4)     x(5)     x(6)     PI      ITER   Run Time (Sec.)
GUESS         0         0       -3.0     -3.0     -3.0
0.25 P(N)   -0.0124    0.0109   -2.026   -2.032   -2.049   0.039    420   19.0
0.50 P(N)    0.0052    0.0031   -2.178   -2.117   -2.265   0.124    991   45.7
0.75 P(N)    0.0018    0.0042   -2.198   -2.121   -2.301   0.165   1148   52.3
1.00 P(N)    0.0419    0.027    -3.639   -2.903   -4.522   1.069    399   18.0
1.00         0.089    -0.021    -2.76    -2.47    -2.974   0.370    393   17.8

GUESS         0         0       -1.0     -3.0     -2.0
0.25 P(N)   -0.037     0.037    -2.068   -2.04    -2.123   0.250    343   15.5
0.50 P(N)   -0.020     0.028    -1.587   -1.814   -1.366   0.730    275   12.5
0.75 P(N)    0.012     0.009    -2.277   -2.17    -2.404   0.184    261   11.8
1.00 P(N)    0.006     0.010    -2.408   -2.264   -2.509   0.660    225   10.1
1.00        -0.43      2.3      -3.23    -2.62    -3.015   0.350    375   17.0

GUESS         0         0       -1.0     -1.0     -1.0
0.25 P(N)   -0.028    -0.024    -1.852   -1.930   -1.831   0.104   1019   47.1
0.50 P(N)   -0.017    -0.044    -1.850   -1.933   -1.826   0.090    605   27.8
0.75 P(N)   -0.026    -0.010    -1.862   -1.940   -1.827   0.107    577   26.4
1.00 P(N)   -0.012    -0.013    -1.937   -1.987   -1.925   0.056    474   21.5
1.00        -0.070     0.092    -1.469   -1.712   -1.316   0.390    863   29.4

ANSWER        0         0       -2.0     -2.0     -2.0
search program while in the local search mode. This value is used in all subsequent examples.
The second experiment carried out with the same third order system is that of identification via the method of quasilinearization. The purpose is to determine whether or not it is beneficial to vary the observation record length in hopes of increasing the region of problem convergence. In effect, this implies that the constraints on the identification scheme are being relaxed if the program is having little success in matching the observed output record with the current set of parameter values. The second implication is that computer time is not wasted in integrating the estimated state equations to the final time of the observed output record once it is determined that the error will be larger than that obtained on the previous iteration.
The problem is formulated in both phase and real variable configurations, as depicted in Fig. 4.3 and Fig. 4.4 respectively, to show that the method is not partial to one type of problem representation. Several initial sets of parameters are guessed for each configuration. The results are presented in Tables 4.2 and 4.3 respectively. In both cases the problem indicated by an asterisk, *, converged when using the variable length observation record but not for the case of a fixed length observation record. The problem indicated by NC did not
Table 4.2. Comparison of Q.L. Solution for Example One--Phase Variable Representation

         x(2)     x(3)     x(4)     x(5)     x(6)     ITER   PI          Run Time (Sec.)
GUESS     0.0      0.0     -1.000   -1.000   -1.000
SOL.              -0.040   -1.979   -2.009   -1.999    6     1.1x10^-8   33.4

GUESS     0.0      0.0     -3.00    -3.00    -3.00
SOL.      0.0002  -0.04    -1.977   -2.008   -1.996    5     1.9x10^-8   27.8

GUESS     0.0      0.0     -1.0     -3.0     -2.0
SOL.              -0.039   -1.98    -2.009   -2.000   18     6.9x10^-12  83.8 *

GUESS    -0.027   -0.010   -1.862   -1.941   -1.828
SOL.     -0.001   -0.049   -1.96    -1.999   -1.981    2     6.7x10^-7   11.1

GUESS     0.002    0.004   -2.198   -2.12    -2.30
SOL.      0.003   -0.007   -1.929   -1.98    -1.944    2     3.9x10^-7   11.1

ANSWER    0.0      0.0     -2.0     -2.0     -2.0
Table 4.3. Comparison of Q.L. Solution for Example One--Real Variable Representation

         x(2)     x(3)    x(4)     x(5)     x(6)     ITER   PI         Run Time (Sec.)
GUESS     0.00     0.00   -1.0     -1.0     -1.0
SOL.      0.0002   2.54   -1.293   -0.469   -1.529    6     6.5x10^-7   32.6

GUESS     0.00     0.00    0.00     0.00     0.00
SOL.      0.00     2.55   -1.295   -0.465   -1.531    7     8.3x10^-7   *

GUESS     0.00     0.00   -1.5     -1.5     -1.5
SOL.      N.C.

ANSWER    0.0      0.0    -1.30    -0.468   -1.54
converge for either case. In all cases where convergence took place the estimated output followed the observed output within 10^-4.

A comparison of the performance index for the two identification methods was made, with the result plotted in Fig. 4.5. This indicates the quadratic nature of the method of quasilinearization, and also indicates that the random search method has difficulty in converging to the exact parameter values once the locale of the parameters has been fixed to within some region. This was experienced in all of the example problems tested. The effect is also indicated by the fact that the random search method generates an estimated output within ±10^-2 while the quasilinearization method generates an output that follows within ±10^-4.
Having shown that the modifications made in the
two programs are beneficial, various linear problems are
now presented to compare the methods of quasilinearization
and random search.
4.3 Example Two--Third Order System with a Zero

As a second example used to evaluate the ability of the two methods to identify systems, the third order system with a zero depicted in Fig. 4.6 is used. The pole-zero configuration of Fig. 4.7 indicates that one set of complex poles, as well as one real pole and one real zero
Fig. 4.5. Performance Index vs. Number of Iterations for Q.L. and R.S. Schemes for Example One
Fig. 4.6. Block Diagram for Example Two--Third Order Linear Problem with Zero
Fig. 4.7. Pole-Zero Configuration for Example Two
are involved, yielding a transfer function of

    G(s) = (s + 2) / (s^3 + 2s^2 + 2s + 2)

The effect of the zero on the output record can be seen by comparing Fig. 4.8, the case with the zero, with that of Fig. 4.2, the case without the zero.
In the formulation of the problem it is assumed
that the position of the zero is known. This is not a
particularly unrealistic assumption since servomechanisms
rarely have inborn zeros in the plant model. However, zeros
often occur as the result of compensation networks and are,
therefore, known. The problem is, therefore, to identify
the three feedback coefficients pictured in Fig. 4.6.
These are the same as for the example discussed in Section
4.2. The state equations are the same; the output expression is different.
The results for various initial guesses are tabulated in Table 4.4. The table indicates that the range of convergence for the random search program is greater than that for the quasilinearization program. It is also seen that the accuracy of the quasilinearization method is greater than that for the random search method. The average error at each observation point is less than ±10^-4 for the quasilinearization method and ±10^-2 for the random search method.
Fig. 4.8. Time Response for Example Two
Table 4.4. Comparison of Q.L. and R.S. Solutions for Example Two

            x(2)     x(3)     x(4)     x(5)     x(6)     ITER   PI         Run Time (Sec.)
GUESS        0.00     0.00    -1.00    -1.00    -1.00
Q.L. SOL.    0.001   -0.017   -1.993   -2.011   -2.011      6   4.2x10^-5   54.0
R.S. SOL.   -0.036   -0.005   -1.909   -1.948   -1.915    474   0.0062      22.1

GUESS        0.00     0.00    -3.00    -3.00    -3.00
Q.L. SOL.   -0.003   -0.018   -1.980   -2.009   -2.000      6   7x10^-5     36.1
R.S. SOL.   -0.016   -0.003   -1.996   -2.002   -2.021    646   0.0037      30.6

GUESS        0.00     0.00    -1.00    -3.00    -2.00
Q.L. SOL.    N.C.
R.S. SOL.    0.022    0.036   -2.049   -2.068   -2.055   1477   0.0072      70.3

ANSWER       0.0      0.0     -2.0     -2.0     -2.0
Because of the general success encountered with the random search method, a second experiment was carried out with this example. This time the search program was to determine the value of the zero location, as well as the feedback coefficients. The results indicated in Table 4.5 indicate that the program had little success in identifying the pole and zero locations when only the output record was available for observation. This is due to the fact that there are many pole-zero combinations that give approximately the same overall system response. In all cases the estimated and observed output records are within of each other.
4.4 Identification in the Face of Measurement Noise

In order to determine if either quasilinearization or random search is applicable when the observed data are corrupted with measurement noise, two examples are considered. The first example is that of a second order system with the transfer function

    G(s) = 1 / (s^2 + s + 1)

The block diagram of Fig. 4.9 shows the problem to be represented in phase variables. It is, therefore, necessary to identify the two feedback coefficients. The system has one set of complex conjugate poles as shown in Fig. 4.10.
Table 4.5. Comparison of Random Search Solution for Example Two for Various Initial Zero Locations

            x(2)     x(3)     x(4)     x(5)     x(6)    x(7)    PI        ITER   Run Time (Sec.)
GUESS        0.00     0.00    -1.00    -1.00    -1.00   +2.00
R.S. SOL.   -0.012   -0.072   -1.44    -1.94    -1.54    2.43   6x10^-3    380   17.5

GUESS        0.00     0.00    -3.00    -3.00    -3.00    1.00
R.S. SOL.    0.032    0.040   -3.76    -2.09    -3.47    1.39   4x10^-2   1563   72.6

GUESS        0.00     0.00    -1.00    -3.00    -2.00    3.00
R.S. SOL.    0.006    0.011   -1.07    -2.09    -1.34    3.39   2x10^-2    387   17.8

ANSWER       0.00     0.00    -2.00    -2.00    -2.00   +2.00
Fig. 4.9. Block Diagram for Example Three--Second Order Linear System

Fig. 4.10. Pole Configuration for Example Three (poles at -0.5 ± j0.867)
The output record observed is that shown in Fig. 4.11 and indicates that it is desired to identify the system as it is responding to a step input of value two. Once again only the output record is available.
Table 4.6 first indicates the results for the case when there is no measurement noise, while Table 4.7 considers the results for the situation when measurement noise is present. The noise introduced is a random amplitude variation between the upper limits of ±10% of the original signal level. Table 4.7 indicates that identification is achieved in all cases where identification was successful in the noise-free case. The number of iterations required for identification by the quasilinearization program in the noisy case is generally greater than that in the noise-free case. This is to be expected. The added noise has little effect on the random search method since this is not an analytic type procedure. The problems indicated by NC did not converge to a solution.
In order to demonstrate that the above result is in general valid, the fourth order system depicted in Fig. 4.12 is considered. This system has two sets of complex conjugate poles as shown in Fig. 4.13. The output record to be matched is indicated in Fig. 4.14. It is readily seen that this system is of higher order. The feedback
Fig. 4.11. Time Response for Example Three
Table 4.6. Comparison of Q.L. and R.S. Solutions for Example Three--Noise Free Measurements

            x(3)     x(4)     ITER   PI          Run Time (Sec.)
GUESS        0.00     0.00
Q.L. SOL.   -0.999   -1.000     8    1.2x10^-4   18.5
R.S. SOL.   -0.972   -0.968   650    6x10^-3     19.0

GUESS       -3.00    -2.00
Q.L. SOL.   -0.999   -1.009    10    1.2x10^-4   21.5
R.S. SOL.   -0.996   -1.025   271    3x10^-3      7.8

GUESS       -2.000   -3.000
Q.L. SOL.    N.C.
R.S. SOL.   -1.080   -0.946    93    1.4x10^-2    2.7

GUESS       -3.000   -3.000
Q.L. SOL.    N.C.
R.S. SOL.   -0.950   -1.140   127    8x10^-3      3.7

GUESS       -4.000   -1.000
Q.L. SOL.   -0.999   -1.009    11    1.2x10^-4   23.0
R.S. SOL.   -1.005   -1.006   216    1.4x10^-3    6.2

GUESS       -1.000   -4.000
Q.L. SOL.    N.C.
R.S. SOL.   -1.006   -0.919   100    6x10^-3      2.9

ANSWER      -1.0     -1.0
Table 4.7. Comparison of Q.L. and R.S. Solutions for Example Three--Noisy Measurements

            x(3)     x(4)     ITER   PI          Run Time (Sec.)
GUESS        0.000    0.000
Q.L. SOL.   -0.989   -1.095     8    2x10^-2     18.5
R.S. SOL.   -0.986   -1.077   314    2x10^-2      9.3

GUESS       -3.000   -2.000
Q.L. SOL.   -0.989   -1.095     9    2x10^-2     19.6
R.S. SOL.   -0.996   -1.043   223    2.2x10^-2    6.5

GUESS       -2.000   -3.000
Q.L. SOL.    N.C.
R.S. SOL.   -1.088   -0.946    93    3x10^-2      2.7

GUESS       -3.000   -3.000
Q.L. SOL.    N.C.
R.S. SOL.   -0.945   -1.14    127    2.5x10^-2    3.7

GUESS       -4.000   -1.000
Q.L. SOL.   -0.989   -1.096    17    2x10^-2     34.3
R.S. SOL.   -1.013   -1.004   180    2x10^-2      5.2

GUESS       -1.000   -4.000
Q.L. SOL.    N.C.
R.S. SOL.   -1.006   -0.919   100    3x10^-2      2.9

ANSWER      -1.00    -1.00
Fig. 4.12. Block Diagram for Example Four--Fourth Order Linear System

Fig. 4.13. Pole Configuration for Example Four (poles at -0.48 ± j5.72 and -0.52 ± j2.4)
Fig. 4.14. Time Response for Example Four
coefficients are again the parameters to be identified.
Table 4.8 shows that reasonable success was achieved.
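As a check, the pole pattern of Fig. 4.13 is consistent with the ANSWER coefficients of Table 4.8: assuming the phase-variable form used throughout, the coefficients -200, -40, -40, -2 give the characteristic polynomial s^4 + 2s^3 + 40s^2 + 40s + 200. A Python sketch (a simultaneous root iteration, not one of the thesis programs) recovers the plotted poles:

```python
def horner(coeffs, s):
    # Evaluate the polynomial (descending coefficients) at complex s.
    acc = 0j
    for c in coeffs:
        acc = acc * s + c
    return acc

def durand_kerner(coeffs, iters=500):
    # Simultaneous root iteration: r_i <- r_i - p(r_i)/prod_{j!=i}(r_i - r_j).
    n = len(coeffs) - 1
    roots = [complex(0.4, 0.9) ** (k + 1) for k in range(n)]
    for _ in range(iters):
        nxt = []
        for i, r in enumerate(roots):
            denom = complex(1.0, 0.0)
            for j, q in enumerate(roots):
                if j != i:
                    denom *= r - q
            nxt.append(r - horner(coeffs, r) / denom)
        roots = nxt
    return roots

# Characteristic polynomial implied by the ANSWER row of Table 4.8.
poles = durand_kerner([1.0, 2.0, 40.0, 40.0, 200.0])
```

The computed roots fall near -0.48 ± j5.72 and -0.52 ± j2.4, the values marked in Fig. 4.13.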
It should be pointed out at this time, lest the
reader be led astray, that successful identification is not
always achieved. The system considered in example one was
also considered when measurement noise was present.
Success with the problem was very poor.
Along with identification using only the output
record, the fourth order example was tested when all of the
state variables were observed. This, however, did not
yield the success anticipated. It is believed that the
reason for this lack of success is due in part or whole to
the fact that when all of the state variables are observed,
there are too many observation points to be matched at
once. In other words, the constraints are too strict.
This is borne out by the fact that a variable length
observation record extended the region of convergence as
indicated in Section 4.2.
Wherf more than just the output state variable is
available, the extra state variable should be treated as
an input to that part of the system that follows it and a
lower order system considered. If x (2) is the input to the
last block of a system and x (1 ) the output, then the block
between x (2) and x (1 ) is treated as the system to be
identified instead of the entire Nth order system. This
Table 4.8. Comparison of Q.L. and R.S. Solutions for Example Four— Noiseless and Noisy Measurement Cases

                     x(2)      x(3)      x(4)     x(5)    x(6)    x(7)    x(8)    ITER
GUESS                0.00      0.00      0.00    -150.0  -30.00  -30.00  -1.000
Q.L. SOL. w/o Noise  7*10^-5  -2*10^-3  -1.49    -199.5  -40.89  -40.05  -2.095     8
Q.L. SOL. w Noise    0.54     -2.052   -12.55    -199.3  -39.76  -39.60  -1.952    10
R.S. SOL. w Noise   -0.036    +0.051    5*10^-4  -204.2  -42.9   -40.7   -2.20    772
ANSWER               0.00      0.00      0.00    -200.0  -40.00  -40.00  -2.000
requires that some representation other than phase
variables be used.
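The suggestion above can be sketched numerically. Suppose x(2) is measured and the sub-block between x(2) and x(1) is, say, a single integrator with one unknown gain a (a hypothetical sub-block; the names and the gain value below are illustrative, not from the thesis). The measured x(2) record then plays the role of a known input, and the lower order problem can even be solved in one least-squares step:

```python
import math

# Hypothetical sub-block between the measured x(2) and the output x(1):
#   x1' = a * x2, with a the only parameter left to identify.
A_TRUE = -1.0                   # illustrative "answer" gain
DT, N = 0.01, 600

# The measured x(2) record is treated as a known input.
x2 = [math.sin(0.5 * k * DT) + 1.0 for k in range(N + 1)]

# Generate the x(1) record by rectangular integration, as in Chapter 4.
x1 = [0.0]
for k in range(N):
    x1.append(x1[-1] + DT * A_TRUE * x2[k])

# Least-squares fit of the single gain from finite differences:
#   minimize sum_k ((x1[k+1] - x1[k])/DT - a*x2[k])^2
num = sum((x1[k + 1] - x1[k]) / DT * x2[k] for k in range(N))
den = sum(x2[k] ** 2 for k in range(N))
a_hat = num / den
```

Because the sub-block is first order, no search at all is needed once the extra state record is available; this is the payoff of treating it as an input.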
4.5 Conclusion
The examples considered in this chapter illustrate
that both quasilinearization and random search methods are
successful in system identification. The two methods tend
to complement each other. The range of problem convergence
is considerably greater for the random search scheme, while
the accuracy of the quasilinearization method is far
greater. The strategy implied is, therefore, to localize
the parameters with the random search method and then
finalize the identification with the quasilinearization
method.
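The localize-then-finalize strategy can be sketched on a scalar problem: random search brings the parameter near the minimum of the performance index, after which a Newton iteration (the quadratic convergence that makes quasilinearization accurate) supplies the final digits. The index and constants below are illustrative only:

```python
import random

def pi_index(p):
    # Illustrative scalar performance index with minimum at p = 2.0.
    e = p - 2.0
    return e * e * (1.0 + 0.5 * e * e)

def dpi(p):                      # first derivative of pi_index
    e = p - 2.0
    return 2.0 * e + 2.0 * e ** 3

def d2pi(p):                     # second derivative (positive everywhere)
    e = p - 2.0
    return 2.0 + 6.0 * e * e

random.seed(1)

# Phase 1: keep-best random search localizes the minimum from a poor guess.
best = 10.0
for _ in range(300):
    trial = best + random.gauss(0.0, 1.0)
    if pi_index(trial) < pi_index(best):
        best = trial

# Phase 2: Newton iteration finalizes the identification.
p = best
for _ in range(10):
    p -= dpi(p) / d2pi(p)
```

The random search needs only function values; the Newton phase needs derivatives (the counterpart of the Jacobian information quasilinearization requires) but repays that cost with high accuracy.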
The fact that identification is possible in the
face of measurement noise, and also with only the output
record available, indicates that the methods of quasi-
linearization and random search are of definite practical
use.
Although the computation time can sometimes be
considerable, the availability of high speed digital
computers makes it possible to consider these methods for
systems of higher order. Computation time can, however,
be cut considerably by using hybrid computers. The inte-
gration of the state equations to generate the required
estimated output record lends itself quite naturally to
analog computation, while the logical and statistical
portions of the programs are readily implemented on a
digital machine.
CHAPTER 5
THE NONLINEAR PROBLEM
5.1 Introduction
Chapter 4 has shown that it is both possible and
practical to identify linear systems via quasilinearization
and random search methods. This chapter attempts to show
that the methods are also useful in the identification of
nonlinear systems. In order to do this, a product non-
linearity is introduced into a second order system in
Section 5.2, while Section 5.3 discusses the case of the
Van der Pol equation. This is not to say that these are
the only types of nonlinearities that are tractable under
these methods. These are used only as examples to show the
feasibility of the methods.
5.2 Product Nonlinearity for a Second Order System
A block diagram of the system under study is given
in Fig. 5.1. The state equations for the system are

ẋ(1) = αx(2) + βx(2)³
ẋ(2) = -6x(1) - 6x(2) + 6u(t)

where u(t) is a five-unit step applied to the system. The
system response or identification record is given in
[Figure: block diagram with step input u(t) = 5.0]
Fig. 5.1. Block Diagram for Example Five-- Second Order Product Nonlinear System
[Figure: response y(t) for Example Five]
Fig. 5.2. Time Response for Example Five
Fig. 5.2. The adjoined parameters are α and β and are
equal to 1.0 and 0.01, respectively. The results for
various initial guesses are tabulated in Table 5.1. As
with the linear examples of Chapter 4, convergence is
reasonably good for both methods.
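Assuming the state equations read as reconstructed above (ẋ1 = αx2 + βx2³, ẋ2 = -6x1 - 6x2 + 6u, with α, β set to the ANSWER values of Table 5.1), the identification record of Fig. 5.2 can be regenerated by the same rectangular integration used throughout the thesis; a minimal sketch:

```python
# Example-five state equations; ALPHA, BETA are the ANSWER values of
# Table 5.1 and U is the five-unit step input.
ALPHA, BETA = 1.0, 0.01
U = 5.0
DT = 0.01

x1, x2 = 0.0, 0.0
record = []                        # (t, y) pairs with output y = x(1)
for k in range(150):               # integrate over 0 <= t <= 1.5
    dx1 = ALPHA * x2 + BETA * x2 ** 3
    dx2 = -6.0 * x1 - 6.0 * x2 + 6.0 * U
    x1 += DT * dx1                 # rectangular (Euler) step
    x2 += DT * dx2
    record.append(((k + 1) * DT, x1))
```

The response rises smoothly toward the steady-state value of 5, consistent with the shape of Fig. 5.2.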
5.3 Van der Pol Equation— The Problem of Integration
As a sixth and final example the Van der Pol equa-
tion is considered. This equation is represented as

ẋ(1) = x(2)
ẋ(2) = ε(1 - x(1)²)x(2) + μx(1)

where the output response appears as in Fig. 5.3 for the
case where x(1)₀ = 2.0 and x(2)₀ = 0.0. This particular
equation is quite useful and is known to describe such non-
linear functions as the human heart beat, as indicated by
Bellman and Kalaba (1965). The form of the equation is
very well known, but the defining constants are unknown;
a very practical problem.
Although convergence is achieved, as indicated by
the results in Table 5.2, the problem did not necessarily
converge to the correct solution. Convergence to the
correct parameter values is highly dependent on the
integration step size, Δ, as indicated in Fig. 5.4. This
phenomenon is not experienced with any of the linear
examples. In the linear case, good convergence is achieved
Table 5.1. Comparison of Q.L. and R.S. Solutions for Example Five

             x(1)        x(2)    α      β       ITER   Run Time (Sec.)
GUESS        0.00        0.00    0.50   0.00
Q.L. SOL.   -1.4*10^-5   0.094   0.973  0.0102     7   13.8
R.S. SOL.    0.00        0.020   0.971  0.0119   249   12.8
GUESS        0.00        0.00    1.50   0.00
Q.L. SOL.   -1.4*10^-5   0.095   0.973  0.0102     5    9.9
R.S. SOL.    0.00        0.00    0.958  0.0155   177    9.1
GUESS        0.00        0.00    0.00   0.00
Q.L. SOL.    N.C.
R.S. SOL.    0.00        0.12    0.976  0.0091   637
GUESS        0.00        0.00    1.00   0.01
Q.L. SOL.   -1.4*10^-5   0.095   0.973  0.0102     3    5.9
R.S. SOL.    0.00        0.005   0.959  0.0138   152    7.8
ANSWER       0.00        0.00    1.00   0.01
[Figure: response y(t), starting at y(0) = 2.0]
Fig. 5.3. Time Response for Example Six— Van der Pol Equation
[Figure: identified parameter magnitudes vs. integration step size Δ
(Δ = 0.01, 0.005, 0.0025, 0.00125, 0.000625), approaching the answer
values 1.17 and 3.65 as Δ decreases]
Fig. 5.4. Parameter Solution vs. Integration Step Size for Q.L. Method
Table 5.2. Comparison of Q.L. and R.S. Solutions for Example Six

             ε        μ        ITER   PI         Run Time (Sec.)
GUESS       -1.00    -4.00
Q.L. SOL.   -0.988   -3.676       9   6*10^-4    21.7
R.S. SOL.   -1.130   -3.608     124   4*10^-4     6.5
GUESS       -0.50
Q.L. SOL.    N.C.
R.S. SOL.   -1.155   -3.613    1200              66.0
GUESS       -2.00    -2.00
Q.L. SOL.   -0.990   -3.674      20   6*10^-4    39.7
R.S. SOL.   -1.49    -3.395     183   9*10^-3     9.6
GUESS        0.00     0.00
Q.L. SOL.    N.C.
R.S. SOL.   -1.196   -3.547    1200   4*10^-3    66.9
GUESS       -1.17    -3.65
Q.L. SOL.   -0.988   -3.676       8   6*10^-4    19.3
R.S. SOL.   -1.17    -3.65       88   5.5*10^-5   4.6
ANSWER      -1.17    -3.65
whenever Δ is three or more times smaller than the time
between observation times. Little improvement is effected
by decreasing the step size further. This, however, is not
the case with nonlinear problems, as evidenced by the
present example. Here the proper convergence of the
problem is quite dependent on the integration step size.
Although the output record was originally generated
using an integration scheme with a step size of 0.01, it is
still necessary for the step size in the quasilinearization
program to be reduced to a value of 0.0005 before proper
convergence is approached. This demonstrates the fact that
quasilinearization is an approximation scheme and should be
used with this thought in mind. As indicated in Fig. 5.4,
this is an asymptotic type function, where an integration
step size should be used that is consistent with the
accuracy needed in the problem. The results in Table 5.2
are those obtained for an integration step size of 0.002,
with the result that the average error between the observed
output and the estimated output is less than 10⁻³.
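The step-size effect of Fig. 5.4 can be reproduced in miniature. With rectangular (Euler) integration of the Van der Pol equations at the answer parameters of Table 5.2, the state reached at a fixed time drifts as Δ grows, and the drift shrinks roughly in proportion to Δ, the first-order behavior that forces the small steps discussed above. A sketch, not the thesis program:

```python
# Van der Pol state equations with the ANSWER parameters of Table 5.2.
EPS, MU = -1.17, -3.65

def integrate(delta, t_end=1.0):
    # Rectangular (Euler) integration from the Fig. 5.3 initial state.
    x1, x2 = 2.0, 0.0
    for _ in range(round(t_end / delta)):
        dx1 = x2
        dx2 = EPS * (1.0 - x1 ** 2) * x2 + MU * x1
        x1 += delta * dx1
        x2 += delta * dx2
    return x1

ref = integrate(0.0001)                  # fine-step reference trajectory
err_coarse = abs(integrate(0.01) - ref)  # step used to generate the record
err_fine = abs(integrate(0.0025) - ref)  # one of the Fig. 5.4 step sizes
```

A quasilinearization run that integrates with the coarse step is therefore matching a subtly different trajectory than the one that produced the record, which is exactly why the identified parameters wander with Δ.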
5.4 Summary

The examples presented in this chapter have indi-
cated the feasibility of the quasilinearization and random
search methods for the identification of low order nonlinear
systems. The pitfall of integration accuracy is also
brought to the surface, indicating that for nonlinear problems
several integration step sizes should be tried and
evaluated to ascertain whether or not this is an important
factor in the problem. It should be noted that because
this study was carried out on a CDC 6400 computer with a
60 bit word, round-off and truncation errors are unimportant
when determining the integration step size for the rectan-
gular integration scheme. This is not true if the computer
used has only a 36 bit word, such as the IBM 7094. In this
case a more sophisticated integration scheme, such as a
four point Runge-Kutta, is not only beneficial, but indeed a
necessity. It is also beneficial to use a four point
Runge-Kutta integration scheme to reduce computer run time
when the step size for the rectangular integration scheme
becomes very small.
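The trade just mentioned is easy to quantify on a test equation (here ẋ = -x with exact solution e^{-t}; an illustration, not one of the thesis systems). At the same step size, the four-point Runge-Kutta result is orders of magnitude more accurate than the rectangular result, so a much larger Δ, and hence far fewer derivative evaluations, reaches a given accuracy:

```python
import math

def euler(f, x0, h, n):
    # Rectangular integration: one derivative evaluation per step.
    x = x0
    for _ in range(n):
        x += h * f(x)
    return x

def rk4(f, x0, h, n):
    # Classical four-point Runge-Kutta: four evaluations per step.
    x = x0
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x

decay = lambda x: -x                     # test equation x' = -x
exact = math.exp(-1.0)                   # exact solution at t = 1
err_euler = abs(euler(decay, 1.0, 0.05, 20) - exact)
err_rk4 = abs(rk4(decay, 1.0, 0.05, 20) - exact)
```

The rectangular error here is near one percent, while the Runge-Kutta error is negligible at the same twenty steps.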
CHAPTER 6
SUMMARY AND CONCLUSION
6.1 Summary
It has been shown that the methods of quasi-
linearization and random search provide complementary
solutions to the problem of "system identification."
Quasilinearization is a method that converges quadratically
to the solution if the initial starting point is within
the region of convergence. Therefore, the method provides
a solution that is highly accurate. Random search, on the
other hand, provides a solution that is, in general, less
accurate than that obtained by quasilinearization. How-
ever, there is no restriction or problem with regions of
convergence with the method of random search.
Both the methods of quasilinearization and random
search require that the analyst supply the form and formu-
lation of the state equations which are assumed to describe
the system under study. The method of quasilinearization
has the disadvantage that the additional equations for the
Jacobian must be supplied. This is not a serious limita-
tion, but does require an additional N² equations, where N
is the order of the adjoined state vector. Once again,
both methods require that the defining state equations be
integrated forward in time from the initial time to the
final observation time. This task consumes a substantial
portion of the computation time for both methods. As
mentioned in Chapters 2 and 3, the integration of the state
equations could be handled very easily by an analog
computer, while the logical portion of each program could
be handled by a digital computer. In effect, the problem
is ideally suited for implementation on a hybrid computer.
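The extra N² equations can be seen explicitly in a small sketch: alongside the N state equations one integrates the sensitivity matrix Φ obeying Φ̇ = J(x)Φ with Φ(0) = I, where J is the Jacobian of the state equations, the quantity that subroutine FORM supplies in Appendix 1. The second order linear system below is illustrative, not one of the thesis examples:

```python
# Illustrative second order linear system:
#   x1' = x2,  x2' = -2 x1 - 3 x2
def f(x):
    return [x[1], -2.0 * x[0] - 3.0 * x[1]]

def jac(x):
    # Jacobian of f; constant here, state-dependent for nonlinear systems.
    return [[0.0, 1.0], [-2.0, -3.0]]

def propagate(x0, dt=0.001, steps=1000):
    # Integrate the N state equations together with the N*N sensitivity
    # equations PHI' = J(x) PHI, PHI(0) = I.
    n = 2
    x = list(x0)
    phi = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(steps):
        J, dx = jac(x), f(x)
        dphi = [[sum(J[i][k] * phi[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
        phi = [[phi[i][j] + dt * dphi[i][j] for j in range(n)]
               for i in range(n)]
    return x, phi
```

Φ maps a perturbation of the initial condition into the resulting perturbation of the state at the final time, which is exactly the linear correction that each quasilinearization iteration solves for.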
6.2 Conclusion
Although the random search program, as implemented
by the author, works very well in the local search mode,
the global search strategy leaves something to be desired.
It is suggested that while in the global search mode, it
be assumed that the problem solution is uniformly dis-
tributed in parameter space. In Chapter 3 it was pointed
out that the strategy is to assume that the solution is
Gaussian distributed about the initially selected parameter
set. This is depicted in Fig. 6.1 for the one parameter
problem. The probability, Pr(P), of selecting the correct
parameter is plotted as a function of the current
parameter value, P. If a failure occurs while in the
global search mode, the variance of the distribution is
increased, as indicated by the dashed curve. This approaches
a uniform distribution in the limit. The suggestion here
is that the distribution of the solution in parameter
space be immediately assumed uniform while in the global
search mode.

[Figure: Pr(P) vs. P, showing a Gaussian density about the current
parameter set and, dashed, the same density with increased variance]
Fig. 6.1. Probability of Parameter Identification
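The suggested change is easy to prototype. In the sketch below (all names and constants are illustrative), the local mode draws Gaussian perturbations about the best point so far, but on entering the global mode the search draws uniformly over the whole interval [XL, XU] at once, instead of letting the Gaussian variance grow toward that limit:

```python
import random

def search(pi, xl, xu, p0, iters=2000, seed=0):
    # Keep-best random search.  Local mode: Gaussian perturbations about
    # the best point so far.  Global mode (entered after repeated
    # failures): uniform draws over [xl, xu], the immediate-uniform
    # strategy suggested in the text.
    rng = random.Random(seed)
    best, best_pi = p0, pi(p0)
    fails = 0
    sigma = 0.05 * (xu - xl)
    for _ in range(iters):
        if fails < 20:                                   # local mode
            trial = min(xu, max(xl, best + rng.gauss(0.0, sigma)))
        else:                                            # global mode
            trial = rng.uniform(xl, xu)
        cost = pi(trial)
        if cost < best_pi:
            best, best_pi = trial, cost
            fails = 0                                    # back to local mode
        else:
            fails += 1
    return best

# Illustrative one-parameter problem with the "correct" value at 3.0.
p_hat = search(lambda p: (p - 3.0) ** 2, -10.0, 10.0, -8.0)
```

The uniform draws give every region of parameter space equal weight immediately, so a solution far from the initial guess is found without waiting for the variance-growing schedule to flatten the Gaussian.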
It has been shown that the two methods described in
this study can be very successful in system identification
of both linear and nonlinear systems. The question is now:
for what various types of nonlinearities do the methods
give confident results ? Two nonlinear examples were
presented. This is by no means a limit to the types
possible. This area could produce very fruitful results
if pursued further. The area of time varying systems is
also open for exploration and could prove to be very
beneficial.
Perhaps the one thought that is most important when
reviewing this work is that system identification is by no
means an easy problem. With this in mind it can be recog-
nized that there is no final format for the identification
problem. Each problem is different and must be looked at
individually. It is also for this reason that the author
feels that the success indicated in this study warrants
additional investigation.
APPENDIX 1
QUASILINEARIZATION PROGRAM
C     QUASILINEARIZATION PROGRAM
      SUBROUTINE FORM (Z,T,DELTA,G,PG)
C     SUBROUTINE FORM SUPPLIES THE ADJOINED STATE EQUATIONS
C     AND THE JACOBIAN
C     Z     = VALUE OF STATE VARIABLES AT TIME T
C     T     = TIME
C     DELTA = INTEGRATION STEP SIZE
C     G     = STATE EQUATIONS EVALUATED AT TIME T
C     PG    = ELEMENTS OF THE JACOBIAN EVALUATED AT TIME T
      DIMENSION Z(10),G(10),PG(10,10)
      IF(T-DELTA) 10,3,3
   10 N = 4
      DO 1 I=1,N
      DO 1 J=1,N
    1 PG(I,J) = 0.
      PG(1,2) = 1.0
    3 PG(2,1) = Z(3)
      PG(2,2) = Z(4)
      PG(2,3) = Z(1)
      PG(2,4) = Z(2)
      G(1) = Z(2)
      G(2) = Z(3)*Z(1) + Z(4)*Z(2) + 2.0
      G(3) = 0.0
      G(4) = 0.0
      RETURN
      END
      SUBROUTINE SIMEQ (A,XDOT,KC,AINV,KFLAG)
C     SUBROUTINE SIMEQ INVERTS MATRIX A BY USING A
C     GAUSS-JORDAN ELIMINATION SCHEME
C     A     = MATRIX TO BE INVERTED
C     XDOT  = A X
C     KC    = ORDER OF THE MATRIX TO BE INVERTED
C     AINV  = INVERSE OF MATRIX A
C     KFLAG = COLUMN IN WHICH MATRIX WENT SINGULAR
      DIMENSION A(10,10),B(10,10),XDOT(10),X(10),AINV(10,10)
      DOUBLE PRECISION A,B,AINV,COMP,TEMP
      KFLAG = 0
      DO 1 I=1,KC
      DO 1 J=1,KC
      AINV(I,J) = 0.
    1 B(I,J) = A(I,J)
      DO 2 I=1,KC
      AINV(I,I) = 1.
    2 X(I) = XDOT(I)
      DO 3 I=1,KC
      COMP = 0.
      K = I
    6 IF(ABSF(B(K,I))-ABSF(COMP)) 5,5,4
    4 COMP = B(K,I)
      N = K
    5 K = K+1
      IF(K-KC) 6,6,7
    7 IF(B(N,I)) 8,51,8
    8 IF(N-I) 51,12,9
    9 DO 10 M=1,KC
      TEMP = B(I,M)
      B(I,M) = B(N,M)
      B(N,M) = TEMP
      TEMP = AINV(I,M)
      AINV(I,M) = AINV(N,M)
   10 AINV(N,M) = TEMP
      TEMP = X(I)
      X(I) = X(N)
      X(N) = TEMP
   12 X(I) = X(I)/B(I,I)
      TEMP = B(I,I)
      DO 13 M=1,KC
      AINV(I,M) = AINV(I,M)/TEMP
   13 B(I,M) = B(I,M)/TEMP
      DO 16 J=1,KC
      IF(J-I) 14,16,14
   14 IF(B(J,I)) 15,16,15
   15 X(J) = X(J)-B(J,I)*X(I)
      TEMP = B(J,I)
      DO 17 N=1,KC
      AINV(J,N) = AINV(J,N)-TEMP*AINV(I,N)
   17 B(J,N) = B(J,N)-TEMP*B(I,N)
   16 CONTINUE
    3 CONTINUE
      RETURN
   51 PRINT 52,I,KC
      KFLAG = I
   52 FORMAT(16HO ERROR IN COLUMN,I2,2X,9HOF MATRIX,5X,3HKC=,I2//)
      RETURN
      END
PROGRAM OUASILN(INPUT•OUTPUT#TAPE INPUT)MAIN PROGRAM FOR QUASILINEARIZATION METHOD DIMENSION VEST(10,100),PHI(10,10)»P(10)9H{10»10),Z(
*** 10)»YAST(10),1Y(10*100),NAME(6),T{1Q0),PDOT(10)#ZDOTt10),PHIDO t1 *** 0,101 ,151(10,10)952(10),G(10),PG(10,10),SI<lOe10),ZSAVE(10
**«■ )
DOUBLE PRECISION 51*511001 FORMATI6A5»4I5 0F1C6O ) '1002 FORMAT{SElOoO)1003 FORMAT UP&E20»7>1004 FORMAT!lHOe5X9l6HITERAT!ON NOe » *12)1005 FORMAT(1P8E14o4)1006 FORMAT<1H1*5Xol4HPROBLEM 1DENT&*5Xo6A5)100? FORMAT(6X* 13HCONVERGENCE »lPE15o?e10X»
$36H INTEGRATION INTERVAL TO DATA POINT 15)1008 FORMAT{1H0*5X*14HPROBLEM IDENTo65X*6A5)1010 FORMAT(1H1sSXelAHFINAL SOLUTION)1011 FORMAT { 1H0* 5X s 168 ITERATION N06 « 13o05X$>14HC0NVERGE
*** MCE = E16c7,527H INTEGRATION 30 DATA POINT I5$5Xol2H RUN TIME *
*** E10b3)1012 FORMATfIHOe5Xe16HPARAMETER VALUES//)1013 FORMAT(1P6E20«7)1014 FORMAT(1HO »1OX » 4HTIME * 14X e 2HY(IlflH)»14X»5HYEST(II*
1H)*14X615HERROR//)
1018 FORMAT!23HILL CONDITIONED PROBLEM*5X.o 10F8a3)1020 FORMAT (1H0 o 14HPR0BLEM IDENTo * 5Xo6A5'»/
135H ORDER OF IDENTIFICATION PROBLEM = 15*/238H NUMBER OF ST1TE VARIABLES OBSERVED * I5o/336H NUMBER OF PARAMETERS TO BE FOUND = 15*/524H NUMBER OF DATA POINTS * 13*/625H INTEGRATION STEP SIZE = F10o5)READ DATANAME « PROBLEM IDENTITY N = ORDER OF IDENTIFICATION PROBLEM M = NUMBER OF STATE VARIABLES OBSERVED IR * NUMBER OF PARAMETRS TO BE FOUND ,NDATA * NUMBER OF OBSERVATION POINTS DELTA = INTEGRATION STEP SIZEREAD 1Q01*(NAME!I)*I®lo6)*N*MsIR*NDATA*DELTA PRINT 1020*(NAME9I)»I » U 6)oN*M*IR*NDATApDELTA RM = M
C     READ IN H MATRIX
      DO 11 I=1,M
   11 READ 1002,(H(I,J),J=1,N)
C     READ OBSERVATION DATA
      DO 1 I=1,NDATA
    1 READ 1002,T(I),(Y(J,I),J=1,M)
C     READ INITIAL GUESSES
   13 READ 1002,(Z(I),I=1,N)
IF(EOF*1)501*500 3 00 CONV » 10«0
ITERaO MARK.* 0PRINT 1006 *<NAME(I)*1=1*6)YMAG=06
89DO 2 0 I«loM DO 20 J»1sNDATA 20 YMAG»YMAG4-Yt ICALL SECOND(RTIME)
C INITIALIZE PROBLEM2 I DATA®1
NCHCK = 0 ITER = ITER-H ERROR « 0oO CONSV » CONV PRINT IGOAoITER PRINT 2003o <Z(I DO 3 I*1»N P(I)®0oS2(I)»0o 2SAVE(I) ® Z(I?DO 3 J»XoN SI {I oJ ? »0o IFU~J) ISaAolS
IB PHI<I»J)®06 GO TO 3
4 PHI<nJ>«ld3 CONTINUE
TIME^OoC CHECK TIME
5 DIFFT * TiIDATA)-TIME INCREMENT TIME AND CALCULATE NEW VALUES FOR Z*P, AN
»»» D PHI16 TIME = TIME4‘0ELTA
IF(TIME-T(IDATA))18,17,1717 MARK = 1
-TIME = T( IDATA)18 CALL FORM{2,TIMEg0ELTA9G,PG)
DO 7 I*1»N ZD0T<I>*6(I)PDOT(I>*G(I)DO 7 J«1»N PHIDO (IPDOT(I)=PDOT{I)*PG(DO 7 K*leN
7 PHIDO (I,J)=pHIDO (I*J)+PG(I,K)*PMI(K,J )IF(MARK)30d30,3 2
30 DO 8 I»1»N Z(U=Z( n*ZDOT( I )*DELTA P ( I ) = P < I )4-PDOT < $ )-8-DELTA DO 8 J®1 ,N
8 PHI(I,J)=PHI(I,J>+PHI0O (I*J)*DELTA GO TO 5
31 DO 9 1*1»N Z< !} .« Z ( f I4-20OT ( ! )*DIFFT Pin * P ( I 5 ❖ PDOT( I)*DIFFT ■
DO 9 JsloN9 PHI < I t>J) = PHI ( loJ) + PHIDOi I »JHDIFFT
MARK * 0 C ADD TERM TO SUMS
100 DO 101 1=1oM YAST< U=0oY£ST <I 9 tOATA)*0 o 00 101YAST ( I i sYAST( ! > 9-H ( ! » J ) *P (J )
101 YESTtI>IDATA)=YE2T(I*IDATA)+H(DO 102 I=l4NDO 102 J*1«N DO 102 L=1»MS2< n » S 2 U i*PHI (Jol )*HttyJ)*(YtL♦IDATAHYASTIL) )DO 102 sN DO 102 LL«1*N
102 Sl(I»ji=Sl(I 9J>+PHI(KolHH i L 9 K)>H(L 9 LL > «PM H LL o J) C CHECK FOR CONVERGENCE ,
DO 210 1=1kM210 ERROR == ERROR .+ 1BS( Y< 1 * IDATA>-YEST < I DATA) )
CONV = ERROR/YMAGIF(CONVoGT*1o0E'4’20)G0 TO 13 IF(RM#CONSV-CONV)212 e300»200
2 00 1F(IDATA-NDATA)1 39209*209103 IOAT A*IDATA+1
GO TO16209 IFKONV-loOE-S) 30092119211211 IF(ITER~2C)206»3 0,300 206 PRINT 10079CONV 9IDATA
C CALCULATE INVERSE2 20 CALL SIM5Q(S19S29N9SIeKFLAG)
C CALCULATE NEW INITIAL CONDITIONSNUP = NIF(KFLAG)201*202,201
201 IF(KFLAG-11205 9 205*204 204 NUP = KFLAG"!
DO 203 I=KFLACoM 203 Z U ) = ZSAVE< I 12 02 CONTINUE
DO 208 1=1eNUP •Z (n * 060 DO 208 J=19NUP
208 Z(I)=2(11+51(19J)»52(Jl212 DO 213 1=1,N
CRIT = A85(Z5AVE(I>~Z(I))IF(CR!T-5*0E-51213*213,214
214 NCHCK * 1213 CONTINUE
IF(NCHCK129300*2 C PRINT SOLUTION
3 00 PRINT 1010
p
301205
501
CALL SECOND(FT I ME)RTIME = FTIME-RTIMEPRINT 1008*(NAME( IJPRINT 101Ip ITER?CONVeI DATA9R'PRINT 1012NP=N-IR+1PRINT 10139(Z(I)»I“NPpN)DO 301 1=1PM PRINT 1014,1oI DO 302/0=1oNDATA ERROR = Y(!oJ>~YESL<I»J 5
'I ME
PRINT GO TO PRINT GO" TO STOP END
1013oT (J) m i $J> gYESTU oj? <> ERROR 131018*(Z m 91 = 1oN>13
ND ORDER 1 6 0 *000000o160000 o320000 *480000 *640000 *800000 *960000
1 *12 0 0 00 1 *28 0 0 00 1*440000 1*600000 1*760000 1*9200002 o 0800002*240000 2*400000 2*560000 2*720000 2 *880000 3*040000 3*200000 3*360000 3*520000 3*680000 3*640000 4*000000 4*180000 -4 *320000 4*480000 •q640000 *800000
1
LINEAR DRIVEN0*000000 *024236 *091530 *193927 *323819 o4740?4 e638135 *810088 *984706
1*157473 324585
1*482933 lo630074 1*764192 1*884047 1*968917 2 *078543 2*153064 2o212953 2*258959 2*292051 2*313355 2*324112 2*325627 2*319232 2*306246 2*287950 2 ©265562 2*240216 2o212946 2*184681
04 01 02
40960000 50220000 5 e 280000 5»440000 5o600000 5o760000 5 o920DOO
20256236 2 o128309 2 6101461 2#076225 2o052904 2oC31?82 20023032
APPENDIX 2
RANDOM SEARCH PROGRAM
C     RANDOM SEARCH PROGRAM
      FUNCTION PERFR(PP)
C     FUNCTION PERFR CALCULATES THE PERFORMANCE INDEX FOR
C     THE CURRENT PARAMETER SET
      DIMENSION PP(20),T(100),Y(100,5),XP(20),XDOT(20),RN(2000),
     *XU(20),XL(20),XI(20),P(20),YEST(100,5)
      COMMON T,Y,NDATA,MDATA,RN,NRNMX,XU,XL,XI,YEST,NP,P,PISV,STEPS
      MARK = 0
      DELTA = 0.01
      TIME = 0.0
      PI = 0.0
C     DEFINE THE STATE VARIABLES
      XP(1) = PP(1)
      XP(2) = PP(2)
      XP(3) = PP(3)
      IDATA = 1
    5 DIFFT = T(IDATA)-TIME
   16 TIME = TIME+DELTA
      IF(TIME-T(IDATA)) 18,17,17
   17 MARK = 1
      TIME = T(IDATA)
C     DEFINE THE STATE EQUATIONS
   18 XDOT(1) = XP(2)
      XDOT(2) = XP(3)
      XDOT(3) = PP(4)*XP(1) + PP(5)*XP(2) + PP(6)*XP(3)
      IF(MARK) 30,30,31
C     INTEGRATE THE STATE EQUATIONS
   30 XP(1) = XDOT(1)*DELTA+XP(1)
      XP(2) = XDOT(2)*DELTA+XP(2)
      XP(3) = XDOT(3)*DELTA+XP(3)
      GO TO 5
   31 MARK = 0
C     INTEGRATE THE STATE EQUATIONS
      XP(1) = XDOT(1)*DIFFT+XP(1)
      XP(2) = XDOT(2)*DIFFT+XP(2)
      XP(3) = XDOT(3)*DIFFT+XP(3)
      YEST(IDATA,1) = XP(1)+PP(7)*XP(2)
      PI = PI + ABS(Y(IDATA,1)-YEST(IDATA,1))
      IDATA = IDATA + 1
      IF(IDATA-NDATA) 5,5,50
   50 PERFR = PI
      RETURN
      END
SUBROUTINE RNS
95DIMENSION RN(200 )9F(20)oDP(20)oPP<20> o3(20)oSGf20)
*** oEX{100»4) $>* T(100)»Y(100»5)»XI(20 > $XU(20)»XL< 20)tSSG(20>oGR(
**» 20)»YEST(100*5)COMMON T 9Y »NDAT A *MDATA »RN oNRNMXeXUeXL eXI *YES T sNPeP
*«•# *PISV*STEPS31 FORMAT(XI109 14H GLOBAL SEARCH *1PE20* 8*6H = PI*1PE2*** Oo 8 eSH » STEP)
32 FORMAT(/110914H LOCAL SEARCH*1PE20o806H = PI*1PE2*** 0 a 8 * 8H * STEP)
33 F0RMAT(12X»1P6E17*8)35 FORMAT(3X»9H SUCCESS 9lP6El?»8)50 FORMAT(/37H EXIT ON MAXIMUM NUMBER OF ITERATIONS95X
$ 13H ITERATION = 15)51 FORMAT(/25H EXIT ON MAXIMUM VARIANC£*5Xel3H HERAT! ***' ON = 15)52 FORMAT(/34H EXIT ON MINIMUM PERFORMANCE INDEX*5X9
$ 13H ITERATION =15)53 FORMAT(14H0BEST SOLUTlONelPE20e8»6H * PIsSXelPEZOe *** : 8 *8H = STEP $$ 3Xol2H RUN TIME * EHo4)
54 FORMAT(1HI9/ 23H RANDOM SEARCH SOLUTION)1013 FORMAT(6E20o8)1014 FORMAT(1H0*lOX e 4HTI ME 914X e 2HY{IlelH)»14X95HYEST(11*
1H)914X 9$ 5HERR0R///)CALL SECOND(RTI ME)PRINT 54
I PROGRAM RANDOM SEARCH08ASIC PHASEPIMIN * OoOOOS STMIN * 0e005NRN-0 NRNX ■ 2 STEP316 MD=0 NSS»0 NSF = 0 NFSC=0 NT=0 PI«06 FNP = HPGROW * la0+l*0/(2o0*FNP)SHRNK 3 IoO/GROW 00 36 N = 19NP BCN)*Oo DP(N )=0 0SSGtN) = 0»5 'P (N ) «= XI (N)IF(P (N ))719 70» 71
70 SGCN) = OoOl GO TO 36
9671 SGCn) = 0.25*ABSF(P{N)»36 CONTINUE
PRINT 33,(P (II)*11=1,NP)SSTEP * 0o5 PPISV « l00o0**4 PRT=06 75 PIP=100o**4 NTMX = 300*NP SGMX=2000o NSFMX » 2**«P+6 GO TO 1
2 003N=1,NP NRN=NRN^1IF(NRN-NRNMX)3*3*4
4 NRN = NRNXNR NX « NRNX * 1
3 DP(N) = SG(N)*RN(NRN) + B(N)1 NT »NT+1D05N=1,NPPP(N ) =P (N )4-DP( N )IF(PP(N)~XL{N))45*46,46
45 PP(N) * XU N)46 IF(XU(N)-PP(N))47*5,547 PP < N > * XU(N)5 CONTINUE
C EVALUATE PIPI * PERFR(PP) .IF(PIo6T«lo0e+200) GO TO 14
C SUCCESS PHASEIFiPIP-PI>14,14*6
6 PIPePIIF(MD)38,38,39
38 PRINT32*NT»PI,STEP GO TO 40
39 PRINT31*N7,PI,ST£P40 CONTINUE
PRINT 35 *.( PP (I I } » 11* 1 *NP)PISV * P!STEPS w STEPIF( C (PPISV~PISV) /PP!SV)*«0o01 )41$41#42
42 NSF=0 PPISV » PISV DO 43 11*1*NP
43 SSGt 11 ) « SGU I ) - SSTEP = STEP
41 NFSC=0 D07N*1#NP
7 P tN) « PP(N)IF(PISV-PIMIN) 62,30*30
30 IF(NT—NTMX)9,60,60 9 IF(MD)10*10*12
973.0 D011N*loNP11 B(N)«DP(N)4-PRT»{B(N?"DPtN3 )
IF(NSS >28»28*2928 NSS«1
60 TO 229 DO 34 N*1gNP34 SG<N > = GROW*SG(N)
STEP * GROW^STEPNSSaOGO TO 2
12 D013N=loNP13 8 <N3»Oti
M0*0GO TO 2
C GLOBAL FAILURE PHASE14 NSS=0
IF(NT~NTMX)37o60»6037 IF{MD>20o20o1515 IFtNFSC)16ol6*1716 Nf501
GO TO 217 SGS-0o
STEP = GROW*STEPD018N«loNPSG(N) » GROW*SG(N)
18 SGS=SGS+SG(N)*SG(N3 IFCSGS-SGMX)19o19961
19 NFSC=0 GO TO 2
C LOCAL FAILURE PHASE20 IFINSF~NSFMXJ21#26*2621 N5F=NSF+1
IFtNF5C)22»22o2422 NFSC=1
0023N«1»NP23 DP(N»«Oo*DP(N)
GO TO 124 D025N»1oNP25 SO(N) = SHRNK*SG(M3
STEP = 5HRNK»STEP IF<STEP-STMIN)26ol9*19
26 MC=1 NSF*0 NFSC=0 D027N=1#NP BtNOOo
27 SG(N) = SSG(N)STEP = 5STEP GO TO 2
60 PRINT 50 oNT GO TO 8
   61 PRINT 51,NT
      GO TO 8
   62 PRINT 52,NT
    8 CALL SECOND(FTIME)
RTIME = FTIME-RTIME PI = PERFR(P)PRINT 53»PISVpSTEPS®RTIME PRINT 33*(PCIIJoI!*l»NP)DO 100 J=1oMDATAPRINT 10149J»JDO 100 1=1oNDATAERROR « Y(1,J) » YEST(I-eJ)
100 PRINT 1013»T(H o8CI»JI»YEST(I#J)9ERROR RETURN END
      PROGRAM RANSRCH(INPUT,OUTPUT,TAPE1=INPUT)
C     PROGRAM TO READ IN DATA
C     RN    = GAUSSIAN DISTRIBUTED RANDOM VARIABLES
C     NP    = ORDER OF IDENTIFICATION PROBLEM
C     MDATA = NUMBER OF STATE VARIABLES OBSERVED
C     IR    = NUMBER OF PARAMETERS TO BE FOUND
C     NDATA = NUMBER OF OBSERVATION POINTS
C     XU    = UPPER LIMITS OF PARAMETER SPACE
C     XL    = LOWER LIMITS OF PARAMETER SPACE
C     XI    = INITIALLY GUESSED PARAMETERS
      DIMENSION RN(2000),P(20),T(100),Y(100,5),X(20),XL(20),XU(20),
     *XI(20),NAME(6),YEST(100,5)
      COMMON T,Y,NDATA,MDATA,RN,NRNMX,XU,XL,XI,YEST,NP,P,PISV,STEPS
135H ORDER OF IDENTIFICATION PROBLEM # 15 9/238H NUMBER OF STATE VARIABLES OBSERVED = 15*/336H.,NUMBER OF PARAMETERS TO 8E FOUND = 15$,/524H NUMBER OF DATA POINTS = 15®/)
54 FORMAT<13F603356 FORMAT(SElOoO)57 FORMAT (13HOUPPER LIMITStlOF10«2).58 FORMAT (13HOLOWER LIMITS®10F10®2)
NRNMX « 572READ' 54® ( RN (I > • I *1 ®NRNMX )READ 50®(NAME(I)=1=1*6)®NP*MDATAsIR9NDATA PRINT 53»(NAME(I? 9 I=1®6)»NP*MDATA®IRsNDATA DO 2 KK*lvNDATAREAD- 51®TIKK)t{Y(KKeKKK)»KKK«1»MDATA)
2 PRINT 52 sTOCK) ® ( 8 ( KK »KKK ) »KKK = 1 ®MDAT A )
READ 56g (XU(I ) <.1 = 1 «NP )PRINT 57»(XU(I)»I = 1»NPI READ 56*(XL(I)o1*1*NP)PRINT 569(XL(I>*1=1,NP)
707 READ 56*(XI(I> o1*1oNP)IF(EOF 91J 709 9 70 8
708 CALL RMS GO TO 707
709 STOP END
o330 0,530 1,196 0,812 2o030~0,l72 0,310 O0552~lo392 lo »*•» 594-06362 0,662
*072 0,150-0,026 1,874-0,986 1,668-2*028-1,048 0,006-0,794-0,810 1,422
1,108 1,296-0,694 0*3 0-1,898-1,138 0,036-0,840 0,250 1* *** 064“0,156 0*606
1*298 2,056 0*074 1,7 6 0*592 0*174-1*246 0*802-0,504 0, *** 056-0,186-0*712
*834 1*354-1,074 0*734-0,180-0*346-1,02 6-0,396-0,096-0* *** 970-0,720 0*242
* 138 1*772 1*118-0*634 0*364-0,106 0*196 0,274-0*488-0,*** 922-1,018-0*396
,612-0*708 0,778-0,696 0,214-1*378 0*748-0o262 0,064 Oo «•*«• 972^2*034-1 *518
*566-0,080 0*876 1»592-1,928-0*416 1,054 1,854 0,262-0, *** 100 1,308-2,014
,880 0*832-1,200-2,0 86-0*408 0*774-0,644 2,476-0,546 0,304-0*042 1,062
1.010-1*058 0*562-0*0 8-0.920 0*212-0,340-2,210-0,038 1*2 04—0* 594—0, 356
*966 1*404-0*148 0*776-1,432-0,132 0,052-1,266-0*770 Oe *** . 362-1,342 0,5 440
1.056-0,482-00942-l,882-0,154-0e540 0,308 1,038 1,944 0* *** 994 1,138-1,128
,330 0,350-0,730-0,362 0,632-0*536-0*056 0,580-1*822-06 *** 124-10 388-0 0246
* 832-0*050-1,034 0.746-1 * 144-1*294-10108 0,182-0*168 2**** 290-2,186 0,272
1,976-0*760-1,876 0*768-1,092-1*340 2,006-1,170 0*868-0,544-0,936 0,382-
0184 0,194-1,502-0* 816-0.874 0.350 0*064-1,350 0*022-0* *** ' - 990-0,044 0,208
,540-0*644-0.494-1,096 1.616-0,416-0,508-1d084-0O390-0o ***■ 744-0*652-0.710
.096 0.928-0*432 0»126-1«268-0 * 114-1,638 1,102 1,164-0, *** ' 634-0,762 0,106
1.176-1,418-0,720 0*048 0,298-0,00:4 0*410-20214-1.706 0, **•* 168-0,172 0,594
1*032 0,660 .9*130-0*310 1.854-0.600 0.980-0,822-1,022-1,774-1,192 0*170
100o752 1o070-0«396-0«362-0*060 0*318 0a708 2o736 0*558-0o
»*♦ 684 0o524-0o6102*282 1*890-1*902 0*334 2*226 1*086 0o748 OoSOO 0*000 O0
*«•* oOOO 0*000.558-0«382-0*104-0*388 0*762-0*230*0*494-04132-0*382-1*
*** 644 0*570 0*4806494-00498-0*500-0*1 6-0*928 0*900 0*836 1*146 2*242 10 *** 798 16622-1*952
*114-16488 0*534-0*168-0*422 0*970-0*242-1*418-0*768 1* *** 900 0*580 0*452
o774 0*194 0*928 0*796 0o526-0*868-0o624 lo2OO-Oo396-20 *** 078 0*006-0*644
o140 0o418 1*976 0*576-0*596 0*462 2o088-0*100-lo934 1* ***' 686-1*058 0*090
*824 O0894-O»924 0o014-0*396-1.104-1*092 0*2 54-0*674-0* *** 586 0*532 1o338
1*036-0*970-0*540 1.240 ■0«688-l*l90-2«408 Oo994-20048 10 *** 440 0*488-0.752
.724-0.652 0.832 0*276-0*580-1.788 1*602-0*690-0*446 2.*** 038-0*380-0.668
o H 8 - 0 o9C8 1.168 0*786-1*462 2.380 1.406-2o538.0*188 0* *** 688 lo652-0.612
1*112 lo528 2.082-0.884 C.450-1.090-0o356-1o142 0.368-0.460 0.016 0.896
1 a 628 0.832 1 o684-0o6.16 0.316-0.022 lo566 0*612-0*890-0* »«-«• 332 0.900 1 o002
2*428 1.734 1.104 0*090 0.712 0.152 0.214 0*486-0*292-0* *** 682 0ol78»Oe566
.976-1.262 0.932 1 * 140-0.746-0,262-0*028 1.226 0.566-0. *** 924 0*638-0.302
la926 0oll2-0o622 1 .060 0e262-0e738-2a072-Oo384-1«398 lo170-0*940 0.030
1.024 1*524-1*762 00796-1o330-0„562 1.530 0a430-0.306 lo ***• 016-1*12 0-0*828
o 840 1o154-1*394—2o0 20-0.222-0.164-0*15S—1o298 0.582 1* *** 080-1 6176-2 0'432
*912-0*576 0.618—0.274 lo258 0*976-0*140-0*630 1.464-0* **» 906 lo204 1*196
o 200 0*016-0*192-0*996 0*246 '1*790 0*222 2.028-2.180-10434 0.684 0*902
* 986 0*082-0*734-1*132-0*332-0*066 0.372 0*092 0.754 06 *** 518-0*060 0*192
«168 0o210-0e6l2 0*462 0e790-0*898-0e168-1o258 0.020 1*616-0*770-0*816
.786 0*410 2*234-0*274 1.098 0.612 0*224 1 o592-1 <,454-0.»»» . 534 1*504 0*570
a 268 2ol32 1.578-0O724 1*492-0*572 1.06.8-0*014 l*86S-0*010—0*754’ 1.594
3 RD ORDER LINEAR WITH ZERO 07 01 04 34a0 0 0 000 2 *0 0 0 0 0 0*120000 1*945788
o O
101d240000 lo796189 o360000 lb569651 6480000 le2e32329600000 0952671o 720-000 e592423o840000 0215&79o960000 -0165633
. 1*080000 ~«> 540853lo 200000 -£,9005891o320000 -lo236726 1 o44Q0G0 -1o542417 1o560000 -lo812072 1o680000 -2*041334 Is800000 -2*227046 1o920000 -2*367204 2 0040000 -2*460898 2 a 160000 —2 ©508244 Zo280000 -2*5103022 £>400000 -2o468992 2»520000 -2o3B6994 2*640000 -2 *267647 2*760000 -2*114843 2*880000 -1*932915 3o000000 -1o726533 3*120000 -1*500591 30240000 -1*2601013 *360000 -1*010094 3*480000 -*7553233 *60000.0 -.501173 3 *720000 -£>251385 3*840000 —*010979
960000 £>2168042*0 10*0 1Q 0 0 10*0 10*0 10*0
10*02 e 0 — 10 0 0 "10*0 — 20*0 -20*0 -20*0
—20*02*0 0*0 0*0 — 1*0 -1*0 —lo 0
2*0
APPENDIX 3
QUASILINEARIZATION INPUT DECK
Card 1:                  Problem Identity (6A5); N, M, IR, NDATA (I5 each); DELTA (F10.0)
Cards 2 to 1+M:          H matrix, one card per row (8E10.0)
Cards 2+M to 1+M+NDATA:  Observation data, one card per point: T(I), Y(1,I), Y(2,I), ... (8E10.0)
Card 2+M+NDATA:          Initially guessed parameters Z(1), Z(2), Z(3), ... (8E10.0)
APPENDIX 4
RANDOM SEARCH INPUT DECK
Cards 1 to 44:           Random numbers (13F6.3), the same deck for all problems
Card 45:                 Problem Identity (6A5); NP, MDATA, IR, NDATA (I5 each)
Cards 46 to 45+NDATA:    Observation data, one card per point: T(I), Y(I,1), Y(I,2), Y(I,3), ... (8E10.0)
Card 46+NDATA:           Upper limits of parameter space: XU(1), XU(2), XU(3), ... (8E10.0)
Card 47+NDATA:           Lower limits of parameter space: XL(1), XL(2), XL(3), ... (8E10.0)
Cards 48+NDATA, 49+NDATA, ...:  Initially guessed parameter values: XI(1), XI(2), XI(3), ... (8E10.0), one card per run
LIST OF REFERENCES
Balakrishnan, A. V. and Lucien W. Neustadt. Computing Methods in Optimization Problems. New York: Academic Press, 1964.

Bellman, Robert and Robert Kalaba. Quasilinearization and Non-Linear Boundary Value Problems. New York: American Elsevier, 1965.

Eveleigh, Vernon W. Adaptive Control and Optimization Techniques. New York: McGraw-Hill, 1967.

Gelopulos, Demosthenes P. Computation of Regions of Constrained Stability for Nonlinear Control Systems. Tucson, Arizona: University of Arizona, Ph.D. Dissertation, June, 1967.

Kalaba, Robert. "On Nonlinear Differential Equations, the Maximum Operation and Monotone Convergence," J. Math. Mech., Vol. 8, 1959.

Kumar, K. S. P. and R. Shridhar. "On the Identification of Control Systems by the Quasilinearization Method," I.E.E.E. Transactions on Automatic Control, Vol. AC-9, No. 3, April, 1964.

Melsa, James L., Rudolf J. Pillmeier, William W. Bottorff, and William J. Steinway. "Research in and Application of Modern Automatic Control Theory to Nuclear Rocket Dynamics and Control," Engineering Experimental Station, University of Arizona, Semiannual Progress Report, National Aeronautics and Space Administration Grant No. NsG 490, July, 1967.

Ohap, Robert F. and A. R. Stubberud. "A Technique for Estimating the State of a Nonlinear System," I.E.E.E. Transactions on Automatic Control, Vol. AC-10, No. 2, April, 1965.

Paine, George. "The Application of the Method of Quasilinearization to the Computation of Optimal Control," Los Angeles, California: U.C.L.A., Report No. 67-49, August, 1967.

Sabroff, A., R. Farrenkopf, A. Frew, and M. Gran. "Investigation of the Acquisition Problem in Satellite Attitude Control," Space Technology Laboratories, Report on Project No. 8219 under Contract No. AF33(615)-1535, prepared for A.F. Flight Dynamics Laboratory, Wright-Patterson Air Force Base, Ohio, March, 1965.

Sage, Andrew P. and B. R. Eisenberg. "Experiments in Nonlinear and Nonstationary System Identification via Quasilinearization and Differential Approximation," Proceedings of the Seventh Joint Automatic Control Conference, 1966.

Schultz, Donald G. and James L. Melsa. State Functions and Linear Control Systems. New York: McGraw-Hill, 1967.

Wilde, Douglas J. and Charles S. Beightler. Foundations of Optimization. Englewood Cliffs, New Jersey: Prentice-Hall, 1967.