7/28/2019 9a SN UIUC Short Course 1

System Identification

Satish Nagarajaiah, Prof., CEVE & MEMS, Rice
July 7, 2009
Outline I

1 Introduction
    Definition
    Objective
2 Classification
    Non-parametric Models
    Parametric Models
3 Least Squares Estimation
4 Recursive Least Squares Estimation
    Derivation
    Statistical Analysis of the RLS Estimator
5 Weighted Least Squares
6 Discrete-Time Kalman Filter
    Features
Outline II

    Derivations
7 State Space Identification
    Weighting Sequence Model
    State-space Observer Model
    Linear Difference Model
    ARX Model
    Pulse Response Model
    Pseudo-Inverse
    Physical Interpretation of SVD
    Approximation Problem
    Basic Equations
    Condition Number
    Eigen Realization Algorithm
Definition

System identification is the process of developing or improving the mathematical representation of a physical system using experimental data. There are three types of identification techniques: modal parameter identification, structural-model parameter identification (primarily used in structural engineering), and control-model identification (primarily used in mechanical and aerospace systems). The primary objective of system identification is to determine the system matrices A, B, C, D from measured/analyzed data, often contaminated with noise. The modal parameters are then computed from the system matrices.
Objective

The main aim of system identification is to determine a mathematical model of a physical/dynamic system from observed data. Six key steps are involved in system identification:
(1) develop an approximate analytical model of the structure,
(2) establish the levels of structural dynamic response that are likely to occur, using the analytical model and the characteristics of anticipated excitation sources,
(3) determine the instrumentation requirements needed to sense the motion with prescribed accuracy and spatial resolution,
(4) perform experiments and record data,
(5) apply system identification techniques to identify the dynamic characteristics such as system matrices, modal parameters, and excitation and input/output noise characteristics, and
(6) refine/update the analytical model based on the identified results.
Parametric and Non-parametric Models

Parametric models: choose the model structure and estimate the model parameters for the best fit.

Non-parametric models: the model structure is not specified a priori but is instead determined from the data. Non-parametric techniques rely on the cross-correlation function (CCF) R_yu, the auto-correlation function (ACF) R_uu, and the spectral density functions S_yu and S_uu (the Fourier transforms of the CCF and ACF) to estimate the transfer function / frequency response function of the model.
Non-parametric Models

Frequency Response Function (FRF):

    Y(jω) = H(jω) U(jω)

FRF, non-parametric estimate:

    H(jω) = S_yu(jω) / S_uu(jω)

Impulse Response Function (IRF):

    y(t) = ∫_0^t h(t − τ) u(τ) dτ

[Note: the IRF and the FRF form a Fourier transform pair.]
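The non-parametric estimate H(jω) = S_yu(jω)/S_uu(jω) can be sketched numerically. The example below is an illustration only: the two-tap FIR system and all numerical values are made up, and Welch-style averaging over non-overlapping segments stands in for a proper spectral estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# White-noise input through a known FIR filter: y(k) = 0.5 u(k) + 0.3 u(k-1)
u = rng.standard_normal(4096)
y = 0.5 * u + 0.3 * np.concatenate(([0.0], u[:-1]))

# Welch-style averaged spectral densities over non-overlapping segments
nseg, nfft = 16, 256
U = np.fft.rfft(u[: nseg * nfft].reshape(nseg, nfft), axis=1)
Y = np.fft.rfft(y[: nseg * nfft].reshape(nseg, nfft), axis=1)
S_uu = np.mean(U * np.conj(U), axis=0)   # auto spectral density of u
S_yu = np.mean(Y * np.conj(U), axis=0)   # cross spectral density
H_est = S_yu / S_uu                      # non-parametric FRF estimate

freqs = np.fft.rfftfreq(nfft)            # normalized frequency
H_true = 0.5 + 0.3 * np.exp(-2j * np.pi * freqs)
```

Averaging the spectral densities over segments is what makes the ratio usable in the presence of noise; a single-record ratio of FFTs would have high variance.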
Parametric Models

Transfer Function Models (SISO):

    Y(s) = (b_m s^m + b_{m−1} s^{m−1} + … + b_1 s + b_0) / (s^n + a_{n−1} s^{n−1} + … + a_1 s + a_0) · U(s)

In this model structure, we choose m and n and estimate the parameters b_0, …, b_m, a_0, …, a_{n−1}.

Time-domain Models (SISO):

    d^n y/dt^n + a_{n−1} d^{n−1} y/dt^{n−1} + … + a_1 dy/dt + a_0 y(t)
        = b_m d^m u/dt^m + b_{m−1} d^{m−1} u/dt^{m−1} + … + b_1 du/dt + b_0 u(t)
Parametric Models

Discrete Time-domain Models (SISO):

    y(k) + a_1 y(k−1) + … + a_n y(k−n) = b_1 u(k−1) + … + b_m u(k−m)

State Space Models (MIMO):

    x(k+1) = A x(k) + B u(k)
    y(k)   = C x(k) + D u(k)

where x is n×1, u is m×1, y is r×1, A is n×n, B is n×m, C is r×n, and D is r×m. The dimensions n, r, m are given, and the model parameters A, B, C, D are to be estimated.
Parametric Models

Transfer Function Matrix Models (MIMO):

    Y(s) = [ H_11(s)  …  H_1m(s) ]
           [   ⋮      ⋱    ⋮     ]  U(s)
           [ H_r1(s)  …  H_rm(s) ]

which can be written as:

    Y(s) = H(s) U(s) = [ C (sI − A)^{−1} B + D ] U(s)
Parametric Models

System identification methods can be grouped into frequency-domain identification methods and time-domain identification methods. We will focus mainly on discrete time-domain model identification and state-space identification:

1 Discrete Time-domain Models (SISO)
2 State Space Models (MIMO)
Least Squares Estimation

Consider a second-order discrete model of the form

    y(k) + a_1 y(k−1) + a_2 y(k−2) = a_3 u(k) + a_4 u(k−1)

The objective is to estimate the parameter vector p^T = [a_1  a_2  a_3  a_4] using the vector of input and output measurements. Making the substitution

    h^T = [−y(k−1)  −y(k−2)  u(k)  u(k−1)]

we can write

    y(k) = h^T p
Least Squares Estimation

Suppose we have k sets of measurements. Then we can write the above equation in matrix form as

    [ y_1 ]   [ h_11  h_12  …  h_1n ] [ p_1 ]
    [ y_2 ] = [ h_21   …        …   ] [ p_2 ]
    [  ⋮  ]   [  ⋮          ⋱   ⋮   ] [  ⋮  ]
    [ y_k ]   [ h_k1   …   …  h_kn  ] [ p_n ]

    y_i = h_i^T p,   i = 1, 2, …, k        (1)

In compact matrix form, we can write

    y = H^T p
Least Squares Estimation

In least-squares estimation, we minimize the following performance index:

    J = (y − H^T p)^T (y − H^T p)
      = y^T y − y^T H^T p − p^T H y + p^T H H^T p        (2)

Minimizing the performance index in eq. 2 with respect to p,

    ∂J/∂p = ∂/∂p [ y^T y − y^T H^T p − p^T H y + p^T H H^T p ] = 0
          = −Hy − Hy + 2 H H^T p = 0

which results in the expression for the parameter estimate:

    p = (H H^T)^{−1} H y        (3)
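Eq. 3 can be exercised numerically. The sketch below (NumPy) simulates the second-order discrete model from the earlier slide with hypothetical coefficients, stacks the regressor vectors h as columns of H, and solves the batch normal equations; the coefficient values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: y(k) + a1 y(k-1) + a2 y(k-2) = a3 u(k) + a4 u(k-1)
a1, a2, a3, a4 = -0.6, 0.08, 1.0, 0.5
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + a3 * u[k] + a4 * u[k - 1]

# Each column of H is h = [-y(k-1), -y(k-2), u(k), u(k-1)]
ks = np.arange(2, N)
H = np.vstack([-y[ks - 1], -y[ks - 2], u[ks], u[ks - 1]])   # 4 x (N-2)
yv = y[ks]

# Batch least-squares estimate, eq. 3: p = (H H^T)^{-1} H y
p_hat = np.linalg.solve(H @ H.T, H @ yv)
```

With noiseless data the recovered p_hat matches [a_1, a_2, a_3, a_4] to numerical precision; `solve` is used rather than forming the inverse explicitly.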
Derivation

Limitations of Least Squares Estimation: the parameter update law in eq. 3 operates in batch mode. For every (k+1)-th measurement, the matrix inverse (H H^T)^{−1} needs to be re-calculated. This is a cumbersome operation, and it is best avoided.

In a recursive estimator, there is no need to store all the previous data to compute the present estimate. Let us use the following simplified notation:

    P_k = (H H^T)^{−1}

and

    B_k = H y
Derivation

Hence, the parameter update law in eq. 3 can be written as:

    p_k = P_k B_k

In the recursive estimator, the matrices P_k, B_k are updated as follows:

    B_{k+1} = B_k + h_{k+1} y_{k+1}        (4)

To update P_k, the following update law is used:

    P_{k+1} = P_k − (P_k h_{k+1} h_{k+1}^T P_k) / (1 + h_{k+1}^T P_k h_{k+1})        (5)
Derivation

Note that the update for the matrix P_{k+1} does not involve matrix inversion. The updates for P_k, B_k can then be used to update the parameter vector as follows:

    p_{k+1} = P_{k+1} B_{k+1}
    p_k = P_k B_k        (6)

Combining these equations,

    p_{k+1} − p_k = P_{k+1} B_{k+1} − P_k B_k

Substituting eqs. 4 and 5 in the above equation, we get:

    p_{k+1} = p_k + P_k h_{k+1} (1 + h_{k+1}^T P_k h_{k+1})^{−1} (y_{k+1} − h_{k+1}^T p_k)        (7)
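Eqs. 5 and 7 can be sketched as a recursive loop. Note that no matrix inverse is formed; only the scalar 1 + h^T P h is divided by. The model coefficients and the large initial P (an uninformative prior) are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Same hypothetical second-order model as before
a1, a2, a3, a4 = -0.6, 0.08, 1.0, 0.5
N = 300
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + a3 * u[k] + a4 * u[k - 1]

# Recursive least squares, eqs. 5 and 7
p = np.zeros(4)            # initial parameter estimate
P = np.eye(4) * 1e6        # large initial P: weak prior on p
for k in range(2, N):
    h = np.array([-y[k - 1], -y[k - 2], u[k], u[k - 1]])
    denom = 1.0 + h @ P @ h                       # scalar 1 + h^T P h
    p = p + (P @ h) * (y[k] - h @ p) / denom      # eq. 7
    P = P - np.outer(P @ h, h @ P) / denom        # eq. 5
```

Each new measurement costs only a rank-one update of P and an inner product, which is the whole point of the recursive form.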
Statistical Analysis

Consider the scalar form of the equation again:

    y_i = h_i^T p,   i = 1, 2, …, k

In the presence of measurement noise, it becomes

    y_i = h_i^T p + n_i,   i = 1, 2, …, k

with the following assumptions:
1 The average value of the noise is zero, that is, E(n_i) = 0, where E is the expectation operator.
2 The noise samples are uncorrelated, that is, E(n_i n_j) = E(n_i) E(n_j) = 0 for i ≠ j.
3 E(n_i^2) = r, the covariance of the noise.
Statistical Analysis

Recalling eq. 6:

    p_k = P_k B_k        (8)

This can be expanded as:

    p_k = ( Σ_{i=1}^{k} h_i h_i^T )^{−1} Σ_{i=1}^{k} h_i y_i        (9)

Taking E(·) on both sides, we get

    E(p_k) = E(p) = p        (10)

This makes the estimator an unbiased estimator, that is, the expected value of the estimate is equal to that of the quantity being estimated.
Statistical Analysis

Now, let us look at the covariance of the error:

    Cov = E[ (p_k − p̂_k)(p_k − p̂_k)^T ]        (11)

which upon simplification gives

    Cov = P_k r        (12)

It can be shown that P_k decreases as k increases. Hence, as more measurements become available, the error reduces and the estimate converges to the true value of p. Hence, this is known as a consistent estimator.
Weighted Least Squares

Extension of the RLS Method: the scalar formulation can be extended to a MIMO (multi-input multi-output) system. A weighting matrix is introduced to emphasize the relative importance of one parameter over another.

Consider eq. 1. Extending this to the MIMO case and including measurement noise,

    y_i = H_i^T p + n_i,   i = 1, 2, …, k

where y_i is l×1, H_i is n×l, p is n×1, and n_i is l×1.
Weighted Least Squares

The performance index J is defined by

    J = Σ_{i=1}^{k} (y_i − H_i^T p)^T (y_i − H_i^T p)

Minimizing J with respect to p, we get

    p = ( Σ_{i=1}^{k} H_i H_i^T )^{−1} Σ_{i=1}^{k} H_i y_i

The above equation is a batch estimator. The recursive LS estimator can be obtained by proceeding the same way as was done for the scalar case. Defining,

    P_k = ( Σ_{i=1}^{k} H_i H_i^T )^{−1}

    B_k = Σ_{i=1}^{k} H_i y_i        (13)
Weighted Least Squares

The parameter update rule is given by

    p_{k+1} = p_k + P_{k+1} H_{k+1} ( y_{k+1} − H_{k+1}^T p_k )

Now, if we introduce a weighting matrix W into the performance index, we get

    J = Σ_{i=1}^{k} (y_i − H_i^T p)^T W (y_i − H_i^T p)        (14)

The minimization of eq. 14 leads to

    p = ( Σ_{i=1}^{k} H_i W H_i^T )^{−1} Σ_{i=1}^{k} H_i W y_i
Weighted Least Squares

Once again, defining

    P_k = ( Σ_{i=1}^{k} H_i W H_i^T )^{−1}

    B_k = Σ_{i=1}^{k} H_i W y_i        (15)

the recursive relationships are given by

    p_{k+1} = p_k + P_{k+1} H_{k+1} W ( y_{k+1} − H_{k+1}^T p_k )        (16)

and

    P_{k+1} = P_k − P_k H_{k+1} ( W^{−1} + H_{k+1}^T P_k H_{k+1} )^{−1} H_{k+1}^T P_k        (17)

Assume that the noise samples are uncorrelated, i.e.,

    E( n_i n_j^T ) = 0 for i ≠ j, and R for i = j

It can be shown that choosing W = R^{−1} produces the minimum-covariance estimator; in other words, the estimation error is minimized.
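Eqs. 16 and 17 can be sketched as a loop over measurements. All numbers below are hypothetical: a 3-parameter, 2-output setup with a made-up noise covariance R, and the weighting choice W = R^{−1} from the slide. Note that the only inverse taken inside the loop is l×l, not n×n.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical MIMO setup: n = 3 parameters, l = 2 outputs per measurement
n, l, K = 3, 2, 500
p_true = np.array([1.0, -2.0, 0.5])
R = np.diag([0.1, 0.4])          # output-noise covariance (made up)
W = np.linalg.inv(R)             # W = R^{-1}: minimum-covariance weighting

p = np.zeros(n)
P = np.eye(n) * 1e6              # weak prior
for _ in range(K):
    H = rng.standard_normal((n, l))                     # H_i is n x l
    y = H.T @ p_true + rng.multivariate_normal(np.zeros(l), R)
    # eq. 17: covariance update (matrix inversion lemma, l x l inverse only)
    S = np.linalg.inv(np.linalg.inv(W) + H.T @ P @ H)
    P = P - P @ H @ S @ H.T @ P
    # eq. 16: parameter update
    p = p + P @ H @ W @ (y - H.T @ p)
```

With W = R^{−1} the lower-variance output channel (variance 0.1) is automatically trusted more than the noisier one (variance 0.4).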
Discrete-Time Kalman Filter

The Kalman filter is the most widely used state-estimation tool in control and identification. LS, RLS, and WLS deal with the estimation of system parameters; the Kalman filter deals with the estimation of the states of a dynamical system.
Discrete-Time Kalman Filter

Consider the linear discrete-time system given by

    x_k = A x_{k−1} + G w_{k−1}
    y_k = H^T x_k + n_k

Note: the parameter vector p is replaced by x, consistent with the terminology we have adopted for representing states.
  w_k is an n×1 process-noise vector with E(·) = 0 and Cov(·) = Q
  x_k is the n×1 state vector
  A is the state matrix, assumed to be known
  n_k is an l×1 vector of output noise with E(·) = 0 and Cov(·) = R
  y_k is the l×1 vector of measurements
  G is n×n and H is n×l; both are assumed to be known

The objective is to estimate the states x_k based on k observations of y. A recursive filter is used for this purpose; this recursive filter is called the Kalman filter.
Fundamental Difference Between WLS for the Dynamic and Non-dynamic Cases

In the non-dynamic case, at time t_{k−1} an estimate x̂_{k−1} is produced and the covariance estimate P_{k−1} is updated. These quantities do not change between t_{k−1} and t_k because x_{k−1} = x_k.

In the dynamic case, x_{k−1} ≠ x_k, since the state evolves between the time steps k−1 and k. That means a prediction is needed of what happens to the state estimates and the covariance estimates between measurements.

Recall the WLS estimator in eqs. 16 and 17:

    x̂_k = x̂_{k−1} + P_k H_k W ( y_k − H_k^T x̂_{k−1} )

    P_k = P_{k−1} − P_{k−1} H_k ( W^{−1} + H_k^T P_{k−1} H_k )^{−1} H_k^T P_{k−1}

In this estimator we cannot simply carry over x̂_{k−1}, since x_k changes between t_{k−1} and t_k. The same applies to P_{k−1}.
Discrete-Time Kalman Filter

Consider the state estimate equation. If we know the state estimate based on k−1 measurements, x̂_{k−1|k−1}, and the state matrix A, then we can predict the quantity x̂_{k|k−1} using the relationship

    x̂_{k|k−1} = A x̂_{k−1|k−1}

We can write the state estimate equation as

    x̂_{k|k} = x̂_{k|k−1} + P_{k|k} H R^{−1} ( y_k − H^T x̂_{k|k−1} )        (18)

The above equation assumes that the weighting matrix W = R^{−1}. Similarly, it can be shown that the covariance estimate is

    P_{k|k} = P_{k|k−1} − P_{k|k−1} H ( R + H^T P_{k|k−1} H )^{−1} H^T P_{k|k−1}
Discrete-Time Kalman Filter

Note that the matrix H is constant. The quantity P_{k|k−1} can be calculated as

    P_{k|k−1} = E[ (x_k − x̂_{k|k−1})(x_k − x̂_{k|k−1})^T ]
              = A P_{k−1|k−1} A^T + G Q G^T

In summary, the steps of the discrete-time Kalman filter are:

1 Covariance prediction: P_{k|k−1} = A P_{k−1|k−1} A^T + G Q G^T
2 State prediction: x̂_{k|k−1} = A x̂_{k−1|k−1}
3 Covariance estimate: P_{k|k} = P_{k|k−1} − P_{k|k−1} H ( R + H^T P_{k|k−1} H )^{−1} H^T P_{k|k−1}
4 State estimate: x̂_{k|k} = x̂_{k|k−1} + P_{k|k} H R^{−1} ( y_k − H^T x̂_{k|k−1} )
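The four steps above can be sketched directly. The 2-state system below and all its matrices (A, G, H, Q, R) are hypothetical, chosen only so the filter has something to track; the update uses the slides' form P_{k|k} H R^{−1}, which equals the usual Kalman-gain form P_{k|k−1} H (R + H^T P_{k|k−1} H)^{−1}.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical system: x_k = A x_{k-1} + G w_{k-1}, y_k = H^T x_k + n_k
A = np.array([[1.0, 0.1], [0.0, 0.95]])
G = np.eye(2)
H = np.array([[1.0], [0.0]])          # n x l, so y = H^T x observes state 1
Q = np.eye(2) * 1e-4
R = np.array([[0.01]])

# Simulate the true system
N = 200
x = np.zeros(2)
xs, ys = [], []
for _ in range(N):
    x = A @ x + G @ rng.multivariate_normal(np.zeros(2), Q)
    y = H.T @ x + rng.multivariate_normal(np.zeros(1), R)
    xs.append(x.copy()); ys.append(y)

# Discrete-time Kalman filter: the four steps listed above
xh = np.zeros(2)
P = np.eye(2)
for y in ys:
    P_pred = A @ P @ A.T + G @ Q @ G.T                            # step 1
    x_pred = A @ xh                                               # step 2
    S = R + H.T @ P_pred @ H
    P = P_pred - P_pred @ H @ np.linalg.inv(S) @ H.T @ P_pred     # step 3
    xh = x_pred + P @ H @ np.linalg.inv(R) @ (y - H.T @ x_pred)   # step 4

err = np.linalg.norm(xh - xs[-1])
```

Even though only the first state is measured, the filter reconstructs the second state through the coupling in A.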
Discrete-Time Kalman Filter

If an input is present, such that the equations are of the form

    x_k = A x_{k−1} + B u_{k−1} + G w_{k−1}
    y_k = H^T x_k + n_k

then the state estimate becomes

    x̂_{k|k} = x̂_{k|k−1} + B u_{k−1} + P_{k|k} H R^{−1} ( y_k − H^T x̂_{k|k−1} )

Note: do not confuse the input matrix B with B_k!
State Space Identification

The objective is to identify the state matrices A, B, and C. The general state-space description of a dynamical system is given by:

    ẋ(t) = A_c x(t) + B_c u(t)
    y(t) = C x(t) + D u(t)        (19)

for a system of order n, with r inputs and q outputs. The discrete representation of the same system is given by:

    x(k+1) = A x(k) + B u(k)
    y(k) = C x(k) + D u(k)        (20)

Note the distinction in the state matrices between the continuous and discrete versions!
Weighting Sequence Model

Representing the output as a weighted sequence of inputs, start from the initial condition x(0):

    x(0) = 0;                 y(0) = C x(0) + D u(0)
    x(1) = A x(0) + B u(0);   y(1) = C x(1) + D u(1)
    x(2) = A x(1) + B u(1);   y(2) = C x(2) + D u(2)

If x(0) is zero, or k is sufficiently large so that A^k ≈ 0 (a stable system with damping), then

    y(k) = C B u(k−1) + … + C A^{k−1} B u(0) + D u(k)        (21)

    y(k) = Σ_{i=1}^{k} C A^{i−1} B u(k−i) + D u(k)        (22)
Weighting Sequence Model

Eq. 22 is known as the weighting-sequence model: it does not involve any state measurements and depends only on inputs. The output y(k) is a weighted sum of the input values u(0), u(1), …, u(k).
  The weights CB, CAB, CA^2 B, … are called Markov parameters.
  Markov parameters are invariant to state transformations.
  Since the Markov parameters are the pulse responses of the system, they must be unique for a given system.
  Note that the input-output description in eq. 22 is valid only under zero initial conditions (steady state). It is not applicable if transient effects are present in the system.
  In this model, there is no need to consider the exact nature of the state equations.
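The claim that the Markov parameters are the pulse response can be checked directly. The (A, B, C, D) below are a hypothetical stable 2-state SISO system invented for this check.

```python
import numpy as np

# Hypothetical discrete-time system (n = 2, one input, one output)
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.2]])

# Markov parameters: D, CB, CAB, CA^2 B, ...
markov = [D[0, 0]]
Ap = np.eye(2)
for _ in range(10):
    markov.append((C @ Ap @ B)[0, 0])
    Ap = A @ Ap

# Pulse response by direct simulation: u(0) = 1, u(k) = 0 for k > 0, x(0) = 0
x = np.zeros((2, 1))
pulse = []
for k in range(11):
    u = 1.0 if k == 0 else 0.0
    pulse.append((C @ x + D * u)[0, 0])
    x = A @ x + B * u
```

The two sequences agree term by term: the k-th pulse-response sample is exactly CA^{k−1}B (with D at k = 0), independent of any state coordinates.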
State-space Observer Model

If the system is asymptotically stable, then there are only a finite number of significant terms in the weighting sequence model. However, for lightly damped systems, the number of terms in the weighting sequence model can be too large. Under these conditions, the state-space observer model is advantageous. Consider the state space model:

    x(k+1) = A x(k) + B u(k)
    y(k) = C x(k) + D u(k)
State-space Observer Model

Add and subtract the term G y(k) in the state equation:

    x(k+1) = A x(k) + B u(k) + G y(k) − G y(k)
    y(k) = C x(k) + D u(k)

    x(k+1) = Ā x(k) + B̄ v(k)
    y(k) = C x(k) + D u(k)

where

    Ā = A + G C
    B̄ = [ B + G D   −G ]
    v(k) = [ u(k) ; y(k) ]
State-space Observer Model

Continuing from the previous slide, the objective is to find G so that Ā = A + G C is asymptotically stable. The weighting-sequence model in terms of the observer Markov parameters (from eq. 22) is:

    y(k) = Σ_{i=1}^{k} C Ā^{i−1} B̄ v(k−i) + D u(k)        (23)

where C Ā^{k−1} B̄ are known as the observer Markov parameters. If G is chosen appropriately, then Ā^p = 0 for finite p.
Linear Difference Model

Eq. 23 can be written as (proceeding the same way as for the weighting-sequence description):

    y(k) = Σ_{i=1}^{k} C Ā^{i−1} (B + GD) u(k−i) − Σ_{i=1}^{k} C Ā^{i−1} G y(k−i) + D u(k)

which can be written as:

    y(k) + Σ_{i=1}^{k} Y_i^{(2)} y(k−i) = Σ_{i=1}^{k} Y_i^{(1)} u(k−i) + D u(k)        (24)

where Y_i^{(1)} = C Ā^{i−1} (B + GD) and Y_i^{(2)} = C Ā^{i−1} G.

Eq. 24 is commonly referred to as the ARX (AutoRegressive with eXogenous part) model.
Linear Difference Model

The models discussed so far (weighting sequence, ARX, etc.) are related through the system matrices A, B, C, and D. If these matrices are known, all of the models describing the IO relationships can be derived. The system Markov parameters and the observer Markov parameters play an important role in system identification using IO descriptions. Starting from the initial conditions x(0), we get:

    x(l−1) = Σ_{i=1}^{l−1} A^{i−1} B u(l−1−i)

    y(l−1) = Σ_{i=1}^{l−1} C A^{i−1} B u(l−1−i) + D u(l−1)
Linear Difference Model

which can be written as

    [ y(0)  y(1)  …  y(l−1) ] = [ D  CB  …  CA^{l−2}B ] ×

    [ u(0)  u(1)  u(2)  …  u(l−1) ]
    [       u(0)  u(1)  …  u(l−2) ]
    [             u(0)  …  u(l−3) ]
    [                   ⋱   ⋮     ]
    [                       u(0)  ]

In compact form,

    Y_{q×l} = P_{q×rl} V_{rl×l}        (25)

Hence,

    P = Y V^+        (26)

where V^+ is called the pseudo-inverse of the matrix V. The matrix V becomes square in the case of a single-input system. ARX models can be expressed in this form.
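Eqs. 25 and 26 can be sketched for the single-input case, where V is the square upper-triangular Toeplitz matrix of inputs. The (A, B, C, D) below are hypothetical; the recovered row P contains the Markov parameters D, CB, CAB, ….

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stable SISO system
A = np.array([[0.4, 0.1], [0.0, 0.2]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])
D = np.array([[0.1]])

# Simulate from rest with a random input
l = 30
u = rng.standard_normal(l)
x = np.zeros((2, 1))
y = np.zeros(l)
for k in range(l):
    y[k] = (C @ x + D * u[k])[0, 0]
    x = A @ x + B * u[k]

# Upper-triangular Toeplitz input matrix V (eq. 25, r = 1)
V = np.zeros((l, l))
for i in range(l):
    V[i, i:] = u[: l - i]

# Recover the Markov parameters, eq. 26: P = Y V^+
P_hat = y @ np.linalg.pinv(V)

# True Markov parameters for comparison: D, CB, CAB, ...
P_true = [D[0, 0]]
Ap = np.eye(2)
for _ in range(l - 1):
    P_true.append((C @ Ap @ B)[0, 0])
    Ap = A @ Ap
```

Here V is invertible as long as u(0) ≠ 0, so the pseudo-inverse reduces to an ordinary inverse and the recovery is exact for noiseless data.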
Linear Difference Model: ARX Model

Consider the ARX model given in eq. 24. This can be written in a slightly modified form as:

    y(k) + α_1 y(k−1) + … + α_p y(k−p) = β_0 u(k) + β_1 u(k−1) + … + β_p u(k−p)        (27)

where p indicates the model order. This can be rearranged as:

    y(k) = −α_1 y(k−1) − … − α_p y(k−p) + β_0 u(k) + β_1 u(k−1) + … + β_p u(k−p)        (28)

which means that the output at any step k, y(k), can be expressed in terms of the p previous output and input measurements, i.e., y(k−1), …, y(k−p) and u(k), u(k−1), …, u(k−p).
Linear Difference Model: ARX Model

Let us define a vector v(k) as

    v(k) = [ y(k) ; u(k) ],   k = 1, 2, …, l

where l is the length of the data. Eq. 28 can be written as

    [ y_0  y ] = Θ [ V_0  V ]        (29)

where

    y_0 = [ y(1)  y(2)  …  y(p) ]
    y   = [ y(p+1)  y(p+2)  …  y(l) ]
    Θ   = [ β_0  (−α_1  β_1)  …  (−α_{p−1}  β_{p−1})  (−α_p  β_p) ]
Linear Difference Model: ARX Model

    V_0 = [ u(1)    u(2)    …  u(p)   ]
          [ v(0)    v(1)    …  v(p−1) ]
          [  ⋮       ⋮      ⋱   ⋮     ]
          [ v(2−p)  v(3−p)  …  v(1)   ]
          [ v(1−p)  v(2−p)  …  v(0)   ]

    V   = [ u(p+1)  u(p+2)  …  u(l)     ]
          [ v(p)    v(p+1)  …  v(l−1)   ]
          [  ⋮       ⋮      ⋱   ⋮       ]
          [ v(2)    v(3)    …  v(l−p+1) ]
          [ v(1)    v(2)    …  v(l−p)   ]

The parameters can then be solved as:

    Θ = [ y_0  y ] [ V_0  V ]^+        (30)

If the system does not start from rest, the quantities y_0 and V_0 are usually unknown, in which case the parameters are calculated as:

    Θ = y V^+        (31)
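Eq. 31 can be sketched for a small SISO case. The order-2 ARX coefficients below are made up; the regressor matrix stacks the current input and the p past (output, input) pairs, mirroring the structure of Θ and V above.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical SISO ARX system of order p = 2 (eq. 28):
# y(k) = -a1 y(k-1) - a2 y(k-2) + b0 u(k) + b1 u(k-1) + b2 u(k-2)
a1, a2 = -0.7, 0.12
b0, b1, b2 = 0.5, 0.3, 0.1
p, l = 2, 400
u = rng.standard_normal(l)
y = np.zeros(l)
for k in range(p, l):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b0 * u[k] + b1 * u[k - 1] + b2 * u[k - 2]

# Known-data block V of eq. 29: first row u(k), then pairs [y(k-i), u(k-i)]
ks = np.arange(p, l)
V = np.vstack([u[ks],
               y[ks - 1], u[ks - 1],
               y[ks - 2], u[ks - 2]])

# Theta = y V^+  (eq. 31, discarding the unknown initial-condition block V_0)
theta = y[ks] @ np.linalg.pinv(V)
# theta recovers [beta_0, -alpha_1, beta_1, -alpha_2, beta_2]
```

With noiseless data the pseudo-inverse solution reproduces the coefficients exactly; with noise it gives the least-squares fit.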
Linear Difference Model: Pulse Response Model

Given

    y(k) = D u(k) + CB u(k−1) + CAB u(k−2) + … + CA^{p−1}B u(k−p)        (32)

find D, CB, CAB, …, CA^{p−1}B using

    [ y(k)  y(k+1)  …  y(k+l−1) ] = [ D  CB  …  CA^{p−1}B ] ×

    [ u(k)    u(k+1)    …  u(k+l−1)   ]
    [ u(k−1)  u(k)      …  u(k+l−2)   ]
    [ u(k−2)  u(k−1)    …  u(k+l−3)   ]
    [  ⋮       ⋮        ⋱   ⋮         ]
    [ u(k−p)  u(k−p+1)  …  u(k+l−1−p) ]

In compact form,

    Y_{q×l} = P_{q×r(p+1)} V_{r(p+1)×l}        (33)

where q is the number of outputs, r the number of inputs, and l the length of the data. Hence,

    P = Y V^+        (34)

In MATLAB, the pseudo-inverse can be computed with the command pinv(V).
Linear Difference Model: Pseudo-Inverse
Say $A_{m \times n} X_{n \times 1} = b_{m \times 1}$, i.e., $m$ equations in $n$ unknowns, with solution $X = A^{+} b$.

- It has a unique (consistent) solution if $\mathrm{rank}[A, b] = \mathrm{rank}(A) = n$.
- It has an infinite number of solutions (fewer linearly independent equations than unknowns) if $\mathrm{rank}[A, b] = \mathrm{rank}(A) < n$.
- It has no solution (inconsistent) if $\mathrm{rank}[A, b] > \mathrm{rank}(A)$.

Note that $[A, b]$ is the augmented matrix, and rank is the number of linearly independent columns or rows. Due to the presence of noise, system identification mostly produces a set of inconsistent equations; these can be dealt with using what is known as the Singular Value Decomposition (SVD).
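The three cases can be checked numerically by comparing $\mathrm{rank}(A)$ with $\mathrm{rank}[A, b]$. The `classify` helper and the small test matrices below are illustrative, not from the slides:

```python
import numpy as np

# Classify A x = b as unique / infinite / inconsistent by rank comparison.
def classify(A, b):
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented [A, b]
    n = A.shape[1]
    if rAb == rA == n:
        return "unique"
    if rAb == rA < n:
        return "infinite"
    return "inconsistent"

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(classify(A, np.array([1.0, 2.0, 3.0])))             # unique
print(classify(np.array([[1.0, 2.0]]), np.array([3.0])))  # infinite
print(classify(A, np.array([1.0, 2.0, 0.0])))             # inconsistent
```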
Physical Interpretation of SVD: Approximation Problem
Let $A \in \mathbb{R}^{n \times m}$, where $n \le m$ and $\mathrm{rank}(A) = n$. Then find a matrix $X \in \mathbb{R}^{n \times m}$ with $\mathrm{rank}(X) = k < n$ such that $\|A - X\|_2$ is minimized; the minimum satisfies $\|A - X\|_2 = \sigma_{k+1}(A)$.
SVD addresses the question of rank and handles non-square matrices automatically:
1. If the system has a unique solution, SVD provides this unique solution.
2. For infinite solutions, SVD provides the solution with minimum norm.
3. When there is no solution, SVD provides a solution which minimizes the error.
Items 2 and 3 are called least-squares solutions.
Physical Interpretation of SVD: Basic Equations
If $A$ is $m \times n$, then there exist two orthonormal matrices $U$ ($m \times m$) and $V$ ($n \times n$) such that

$$A_{m \times n} = U_{m \times m}\, \Sigma_{m \times n}\, V^{T}_{n \times n} \tag{35}$$

where $\Sigma$ is a matrix with the same dimensions as $A$, but diagonal. The scalar values $\sigma_i$ are the singular values of $A$, with $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \cdots \ge \sigma_k > 0$ and $\sigma_{k+1} = \sigma_{k+2} = \cdots = 0$.

Example: Let $\Sigma = \mathrm{diag}[1,\ 0.3,\ 0.1,\ 0.0001,\ 10^{-12},\ 0,\ 0]$. Then the strong rank $= 3$, weak rank $= 4$, and very weak rank $= 5$.
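The strong/weak rank idea amounts to counting singular values above a chosen tolerance. A minimal sketch using the example values from the slide (the specific thresholds are illustrative):

```python
import numpy as np

# Example singular values from the slide.
s = np.array([1.0, 0.3, 0.1, 1e-4, 1e-12, 0.0, 0.0])

# Numerical rank = number of singular values above the tolerance.
def numerical_rank(singular_values, tol):
    return int(np.sum(singular_values > tol))

print(numerical_rank(s, 1e-2))    # 3  (strong rank)
print(numerical_rank(s, 1e-6))    # 4  (weak rank)
print(numerical_rank(s, 1e-14))   # 5  (very weak rank)
```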
Physical Interpretation of SVD: Basic Equations
The nonzero singular values are unique, but $U$ and $V$ are not. $U$ and $V$ are square matrices; the columns of $U$ are called the left singular vectors, and the columns of $V$ the right singular vectors, of $A$. Since $U$ and $V$ are orthonormal matrices, they obey the relationships

$$U^{T} U = I_{m \times m} = U^{-1} U, \qquad V^{T} V = I_{n \times n} = V^{-1} V \tag{36}$$

From eq. 35, if $A = U \Sigma V^{T}$, then

$$\Sigma = U^{T} A V, \qquad
\Sigma_{m \times n} = \begin{bmatrix} \Sigma_{k \times k} & 0_{k \times (n-k)} \\ 0_{(m-k) \times k} & 0_{(m-k) \times (n-k)} \end{bmatrix}$$
Physical Interpretation of SVD: Basic Equations
SVD is closely related to the eigen-solution of the symmetric positive semi-definite matrices $AA^{T}$ and $A^{T}A$:

$$A = U \Sigma V^{T}, \qquad A^{T} = V \Sigma^{T} U^{T}$$

Hence, the non-zero singular values of $A$ are the positive square roots of the non-zero eigenvalues of $A^{T}A$ or $AA^{T}$. The columns of $U$ are the eigenvectors corresponding to the eigenvalues of $AA^{T}$, and the columns of $V$ are the eigenvectors corresponding to the eigenvalues of $A^{T}A$. If $A$ consists of complex elements, the transpose is replaced by the complex-conjugate transpose. Definitions of condition number and rank are closely related to the singular values.
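The relation between singular values and eigenvalues stated above can be verified numerically; the random test matrix is assumed purely for the demonstration:

```python
import numpy as np

# Check: singular values of A equal the positive square roots of the
# eigenvalues of A^T A.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

svals = np.linalg.svd(A, compute_uv=False)   # descending singular values
evals = np.linalg.eigvalsh(A.T @ A)          # ascending eigenvalues of A^T A

print(np.allclose(svals, np.sqrt(evals)[::-1]))   # True
```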
Physical Interpretation of SVD: Condition Number
Rank: The rank of a matrix is equal to the number of non-zero singular values; this is the most reliable method of rank determination. Typically, a rank tolerance tied to the machine precision is chosen, and the singular values above it are counted to determine the rank. To calculate the pseudo-inverse of matrix $A$, denoted by $A^{+}$, using SVD:
$$A^{+} = V_1 \Sigma_1^{-1} U_1^{T} = V_1\, \mathrm{diag}\!\left[\tfrac{1}{\sigma_1},\ \tfrac{1}{\sigma_2},\ \cdots,\ \tfrac{1}{\sigma_k}\right] U_1^{T} \tag{37}$$

where

$$A = U \Sigma V^{T} = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^{T} \\ V_2^{T} \end{bmatrix}$$
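Eq. (37) can be sketched directly with NumPy's SVD; the test matrix is random, and the comparison against `numpy.linalg.pinv` is only a sanity check:

```python
import numpy as np

# Build A^+ from the rank-k truncated SVD, as in eq. (37).
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-10))                     # numerical rank
A_plus = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

print(np.allclose(A_plus, np.linalg.pinv(A)))  # True
```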
Eigen Realization Algorithm

Given the pulse-response histories (system Markov parameters), ERA is used to extract the state-space model of the system. Define the Markov parameters as follows:

$$Y_0 = D, \quad Y_1 = CB, \quad Y_2 = CAB, \quad Y_3 = CA^{2}B, \quad \ldots, \quad Y_k = CA^{k-1}B \tag{38}$$
Eigen Realization Algorithm
Start with a generalized $\alpha m \times \beta r$ Hankel matrix ($m$ outputs, $r$ inputs; $\alpha$, $\beta$ are integers):

$$H(k-1) = \begin{bmatrix}
Y_k & Y_{k+1} & \cdots & Y_{k+\beta-1} \\
Y_{k+1} & Y_{k+2} & \cdots & Y_{k+\beta} \\
\vdots & \vdots & \ddots & \vdots \\
Y_{k+\alpha-1} & Y_{k+\alpha} & \cdots & Y_{k+\alpha+\beta-2}
\end{bmatrix}$$

For the case when $k = 1$,

$$H(0) = \begin{bmatrix}
Y_1 & Y_2 & \cdots & Y_\beta \\
Y_2 & Y_3 & \cdots & Y_{1+\beta} \\
\vdots & \vdots & \ddots & \vdots \\
Y_\alpha & Y_{1+\alpha} & \cdots & Y_{\alpha+\beta-1}
\end{bmatrix}$$
Eigen Realization Algorithm

If $\alpha \ge n$ and $\beta \ge n$, the matrix $H(k-1)$ is of rank $n$. Substituting the Markov parameters from eq. 38 into $H(k-1)$, we can factorize the Hankel matrix as

$$H(k-1) = P_\alpha\, A^{k-1}\, Q_\beta \tag{39}$$

ERA starts with the SVD of the Hankel matrix,

$$H(0) = R \Sigma S^{T} \tag{40}$$

where the columns of $R$ and $S$ are orthonormal and

$$\Sigma = \begin{bmatrix} \Sigma_n & 0 \\ 0 & 0 \end{bmatrix}$$

in which the $0$s are zero matrices of appropriate dimensions, $\Sigma_n = \mathrm{diag}[\sigma_1, \sigma_2, \ldots, \sigma_i, \sigma_{i+1}, \ldots, \sigma_n]$, and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_i \ge \cdots \ge \sigma_n > 0$.
Eigen Realization Algorithm

Let $R_n$ and $S_n$ be the matrices formed by the first $n$ columns of $R$ and $S$, respectively. Then,

$$H(0) = R_n \Sigma_n S_n^{T} = [R_n \Sigma_n^{1/2}][\Sigma_n^{1/2} S_n^{T}] \tag{41}$$

and the following relationships hold: $R_n^{T} R_n = S_n^{T} S_n = I_n$. Now, examining eq. 39 for $k = 1$,

$$H(0) = P_\alpha Q_\beta \tag{42}$$

Equating eq. 42 and eq. 41, we get

$$P_\alpha = R_n \Sigma_n^{1/2}, \qquad Q_\beta = \Sigma_n^{1/2} S_n^{T} \tag{43}$$

That means $B =$ the first $r$ columns of $Q_\beta$, $C =$ the first $m$ rows of $P_\alpha$, and $D = Y_0$.
Eigen Realization Algorithm

In order to determine the state matrix $A$, we start with

$$H(1) = \begin{bmatrix}
Y_2 & Y_3 & \cdots & Y_{\beta+1} \\
Y_3 & Y_4 & \cdots & Y_{\beta+2} \\
\vdots & \vdots & \ddots & \vdots \\
Y_{\alpha+1} & Y_{\alpha+2} & \cdots & Y_{\alpha+\beta}
\end{bmatrix}$$

From eq. 38, we can then see that $H(1)$ can be factorized using the SVD as

$$H(1) = P_\alpha A Q_\beta = R_n \Sigma_n^{1/2} A \Sigma_n^{1/2} S_n^{T} \tag{44}$$

from which

$$A = \Sigma_n^{-1/2} R_n^{T} H(1) S_n \Sigma_n^{-1/2} \tag{45}$$
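The ERA steps in eqs. (38)-(45) can be collected into a compact numeric sketch for a SISO system; the true $(A, B, C)$ triple below is assumed purely for the demonstration:

```python
import numpy as np

# ERA sketch: Markov parameters -> H(0), H(1) -> SVD -> (A, B, C).
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])

# Markov parameters Y_k = C A^{k-1} B, k = 1, 2, ...  (eq. 38, D = 0 here)
def markov(k):
    return (C_true @ np.linalg.matrix_power(A_true, k - 1) @ B_true)[0, 0]

alpha = beta = 5
H0 = np.array([[markov(i + j + 1) for j in range(beta)] for i in range(alpha)])
H1 = np.array([[markov(i + j + 2) for j in range(beta)] for i in range(alpha)])

# SVD of H(0) and truncation to the model order n (eq. 40).
R, sig, St = np.linalg.svd(H0)
n = int(np.sum(sig > 1e-8 * sig[0]))          # n = 2 here
Rn, Sn, sq = R[:, :n], St[:n].T, np.sqrt(sig[:n])

A_id = (Rn / sq).T @ H1 @ (Sn / sq)           # eq. (45)
B_id = (sq[:, None] * Sn.T)[:, :1]            # first r columns of Q_beta
C_id = (Rn * sq)[:1, :]                       # first m rows of P_alpha

# The realization recovers the true eigenvalues (0.8 and 0.9).
print(np.allclose(np.sort(np.linalg.eigvals(A_id).real), [0.8, 0.9]))  # True
```

The identified $(A, B, C)$ is a similarity transform of the true triple, so it reproduces the Markov parameters exactly when the rank condition $\alpha, \beta \ge n$ holds.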
Acknowledgements

Dr. Sriram Narasimhan, Assistant Professor, Univ. of Waterloo, and Dharma Teja Reddy Pasala, Graduate Student, Rice Univ., assisted in putting together this presentation. The materials presented in this short course are a condensed version of lecture notes of a course taught at Rice University and at the Univ. of Waterloo.
References

1. Jer-Nan Juang, Applied System Identification, Prentice Hall.
2. Jer-Nan Juang and M. Q. Phan, Identification and Control of Mechanical Systems, Cambridge University Press.
3. DeRusso et al., State Variables for Engineers, Wiley-Interscience.