
Modeling and Analysis of Dynamic Systems

Dr. Guillaume Ducard

Fall 2017

Institute for Dynamic Systems and Control

ETH Zurich, Switzerland

G. Ducard © 1 / 36


Outline

1 Lecture 10: Model Parametrization
   - Planning Experiments
   - Least Squares Methods for Linear Systems
   - Solution of the Least Squares Problem

2 Lecture 10: Iterative Least Squares
   - Problem Definition
   - Least Squares with Exponential Forgetting
   - Simplified Recursive LS Algorithm



Introduction

You came up with a mathematical model of a system, which contains some parameters (e.g. mass, elasticity, specific heat, ...).

⇒ Now you need to run experiments to identify the model parameters.

How to proceed?



Introduction

Least Squares Methods:

Classical LS methods for static and linear systems ⇒ closed-form solutions available.

Nonlinear LS methods for dynamic and nonlinear systems ⇒ only numerical (optimization) solutions available.

Remark: there are closed-form approaches for linear dynamic systems as well. See master-level courses (e.g. “Introduction to Recursive Filtering and Estimation”).



Planning Experiments

Planning experiments is about deciding:

- the excitation of the system: choice of correct input signals
- what to measure in the system (choice of sensors, their location, etc.)
- measurements for linear or nonlinear model identification
- frequency content of the excitation signals
- noise level at the input and output of the system
- safety issues

so as to identify the system parameters efficiently.

Choose “signals such that all the relevant dynamics and static effects inside the plant are excited with the correct amount of input energy.” (script, pp. 75–76)
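As an illustration of excitation-signal design, a pseudo-random binary sequence (PRBS) is a common practical choice; the sketch below (function name and parameters are mine, not from the script) generates one in NumPy, where the hold length shapes the frequency content of the excitation:

```python
import numpy as np

def prbs(n_steps, hold=5, low=-1.0, high=1.0, seed=0):
    """Pseudo-random binary excitation: switches between two levels,
    holding each level for `hold` samples to shape the frequency content."""
    rng = np.random.default_rng(seed)
    switches = rng.integers(0, 2, size=int(np.ceil(n_steps / hold)))
    levels = np.where(switches == 1, high, low)
    return np.repeat(levels, hold)[:n_steps]

u = prbs(100, hold=5)  # 100-sample two-level excitation signal
```

A longer hold concentrates the input energy at lower frequencies; a shorter hold excites faster dynamics.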


Planning Experiments

The data obtained experimentally may be used for two purposes:

1 To identify unknown system structures and system parameters, using a first set of data: u1, yr,1.

[Figure: the input u1 is applied to both the real plant (output yr,1) and the modeled system (output ym).]

2 To validate the results of the system modeling and parameter identification, using a second set of data: u2, yr,2.

[Figure: the input u2 is applied to both the real plant (output yr,2) and the modeled system (output ym).]


A word of caution: It is of fundamental importance

not to use the same data set for both purposes.

The real quality of a parameterized model may only be assessed by:

comparing the prediction of that model

with measurement data that have not been used in the model parametrization.

[Figure: the input u2 is applied to both the real plant (output yr,2) and the modeled system (output ym).]

Remark: the model and its identification are validated if, for the same input signal u2, the output signals yr,2 and ym are sufficiently “similar”.




Introduction

Least Squares estimation

is used to fit the parameters of a linear, static model (a model that mathematically describes the inputs/outputs of the system).

The model is never exact ⇒ for the same input signals, there will be a difference between the outputs of the model and the true system outputs ⇒ modeling errors.

Remark: these errors may be considered as deterministic or stochastic variables. Both formulations are equivalent, as long as these errors are completely unpredictable and not correlated with the inputs.


LS Formulation

[Figure: elementary least-squares model structure — the input u drives the system’s model, whose output is summed with the error e to give y.]

It is assumed that the output of the real system may be approximated by the output of the system’s model with some model error e, according to the linear equation:

y(k) = h^T(u(k)) · π + e(k)

with k ∈ [1, ..., r] the discrete-time instant.


y(k) = h^T(u(k)) · π + e(k)

k: index of discrete time (discrete-time instant k)
u(k) ∈ R^m: input vector
y(k) ∈ R: output signal (measurement, scalar)
π ∈ R^q: vector of the q unknown parameters (those we want to estimate)
h(·) ∈ R^q: regressor; depends on u in a nonlinear but algebraic way
e(k): error (scalar)

Typically, there are more measurements than unknown parameters:(r ≫ q).
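To make the notation concrete, here is a small sketch (the model structure and all names are illustrative, not from the lecture) that builds the regressor h(u(k)), linear in π but nonlinear in u, and stacks r samples into a regression matrix:

```python
import numpy as np

# Hypothetical static model, linear in the parameters pi but nonlinear in u:
#   y(k) = pi_1 * u(k) + pi_2 * u(k)**2 + pi_3 * sin(u(k)) + e(k)
def regressor(u_k):
    """h(u(k)) in R^q for a single input sample (here q = 3)."""
    return np.array([u_k, u_k**2, np.sin(u_k)])

rng = np.random.default_rng(1)
r, pi_true = 50, np.array([2.0, -0.5, 1.5])
u = rng.uniform(-3, 3, size=r)
H = np.vstack([regressor(uk) for uk in u])       # r x q regression matrix
y = H @ pi_true + 0.01 * rng.standard_normal(r)  # measurements with error e(k)
```

Here r = 50 ≫ q = 3, as assumed above.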


LS Objective:

Estimate the vector of unknown parameters π ∈ R^q such that the model error e is minimized.

In order to do that, let us formulate the problem in matrix form (derived on the blackboard during class).

+ Example.




Least-squares solution and comments

πLS = [H^T·W·H]^{-1} · H^T·W·y

The regression matrix H must have full column rank, i.e., all qparameters (π1, π2, . . . πq) are required to explain the data.

Moore-Penrose inverse:

M† = (M^T·M)^{-1}·M^T,   M ∈ R^{r×q}, r > q, rank(M) = q
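The closed-form solution can be checked numerically; the following sketch (data and names are illustrative) computes πLS from the normal equations and, for W = I, compares it with the Moore-Penrose pseudoinverse solution:

```python
import numpy as np

rng = np.random.default_rng(0)
r, q = 100, 2
H = rng.standard_normal((r, q))          # full column rank (with prob. 1)
pi_true = np.array([1.0, -2.0])
y = H @ pi_true + 0.1 * rng.standard_normal(r)
W = np.eye(r)                            # identity weighting for simplicity

# pi_LS = (H^T W H)^{-1} H^T W y  -- solve, don't invert explicitly
pi_ls = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

# With W = I this coincides with the Moore-Penrose pseudoinverse solution
assert np.allclose(pi_ls, np.linalg.pinv(H) @ y)
```

In practice one uses `solve` (or a QR/SVD-based routine) rather than forming the inverse explicitly, for numerical robustness.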


Least-squares solution and comments

If the error e is an uncorrelated white-noise signal with:

mean value 0,

and variance σ²,

then

1 the expected value of the parameter estimate πLS is equal to its true value*: E(πLS) = πtrue

2 covariance matrix: Σ = σ² · (H^T·W·H)^{-1}

* Of course, only if the model perfectly describes the true system.


Least Squares Solution: geometric interpretation

Particular case: q = 2, r = 3

The result of the LS identification can be interpreted geometrically: the columns of H define the directions (projection vectors) that span a plane (here defined by the 2 vectors h1, h2), and therefore eLS is perpendicular to that plane.

y = HπLS + eLS

y = [h1 h2] · [πLS,1, πLS,2]^T + eLS

y = πLS,1 · h1 + πLS,2 · h2 + eLS
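This orthogonality can be verified numerically for the case q = 2, r = 3 (the data below are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((3, 2))          # r = 3 measurements, q = 2 parameters
y = rng.standard_normal(3)

pi_ls = np.linalg.solve(H.T @ H, H.T @ y)
e_ls = y - H @ pi_ls                     # residual of the LS fit

# The residual is perpendicular to the plane spanned by h1 and h2,
# i.e. H^T e_LS is numerically zero:
print(H.T @ e_ls)
```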


[Figure: y decomposed as πLS,1·h1 + πLS,2·h2 in the plane spanned by h1 and h2, with the residual eLS perpendicular to that plane.]




Iterative Least Squares: Problem Definition

Up to now, a batch-like approach1 has been assumed:

πLS = [H^T·W·H]^{-1} · H^T·W·y

Problems:

1 The matrix inversion is the most time-consuming computational step.

2 Assuming that r measurements have been taken and a solution has been computed, it is numerically very inefficient to repeat the full matrix inversion procedure whenever an additional measurement becomes available.

1 Batch-like approach: 1. all measurements are made, 2. the data are organized in the LS problem formulation, 3. the LS solution is computed once.


Instead, an iterative solution of the form

πLS(r+1) = f(πLS(r), y(r+1)),

initialized by πLS(0) = E(π),

would be much more efficient.

How do we build up a recursive Least-Squares algorithm?


Recursive LS Formulation

1 Start:

πLS = [H^T·W·H]^{-1} · H^T·W·y

2 Simplification: consider the weighting matrix simply as W = I (the extension with a general W is easily possible):

πLS = [H^T·H]^{-1} · H^T·y

3 Formulate the matrix products as sums:

πLS(r) = [Σ_{k=1}^{r} h(k)·h^T(k)]^{-1} · Σ_{k=1}^{r} h(k)·y(k)

4 Use the matrix inversion lemma.


Matrix Inversion Lemma

Suppose M ∈ R^{n×n} is a regular matrix (det(M) ≠ 0), and v ∈ R^n is a column vector which satisfies the condition 1 + v^T·M^{-1}·v ≠ 0. In this case:

[M + v·v^T]^{-1} = M^{-1} − (1 / (1 + v^T·M^{-1}·v)) · M^{-1}·v·v^T·M^{-1}

Remarks

Proof by inspection: multiply from the left with M + v·v^T.

Main advantage of this lemma: no matrix inversion other than M^{-1} is needed. The inverse of the new matrix M + v·v^T may thus be computed very efficiently.
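The lemma can be checked numerically (the matrix and vector below are arbitrary, illustrative choices; the diagonal shift just keeps M well conditioned):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned, invertible
v = rng.standard_normal(n)

Minv = np.linalg.inv(M)
denom = 1.0 + v @ Minv @ v                        # the scalar 1 + v^T M^{-1} v

# Right-hand side of the lemma: only M^{-1} is needed, no new inversion
lemma = Minv - np.outer(Minv @ v, v @ Minv) / denom

# Agrees with a direct inversion of the rank-one update M + v v^T
assert np.allclose(lemma, np.linalg.inv(M + np.outer(v, v)))
```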


πLS(r) = [Σ_{k=1}^{r} h(k)·h^T(k)]^{-1} · Σ_{k=1}^{r} h(k)·y(k)

To simplify the notation, a matrix Ω is defined as:

Ω(r) = [Σ_{k=1}^{r} h(k)·h^T(k)]^{-1}

Then compute Ω(r+1):

Ω(r+1) = [Σ_{k=1}^{r+1} h(k)·h^T(k)]^{-1} = [Σ_{k=1}^{r} h(k)·h^T(k) + h(r+1)·h^T(r+1)]^{-1}


Ω(r+1) = [Σ_{k=1}^{r} h(k)·h^T(k) + h(r+1)·h^T(r+1)]^{-1}

We use the inversion lemma:

[M + v·v^T]^{-1} = M^{-1} − (1 / (1 + v^T·M^{-1}·v)) · M^{-1}·v·v^T·M^{-1}

Recursive formulation of the matrix inverse

Ω(r+1) = Ω(r) − (1 / (1 + c(r+1))) · Ω(r)·h(r+1)·h^T(r+1)·Ω(r)

where c(r+1) = h^T(r+1)·Ω(r)·h(r+1) (scalar).


πLS(r) = Ω(r) · Σ_{k=1}^{r} h(k)·y(k)

How do we compute the estimate recursively?

πLS(r+1) = Ω(r+1) · Σ_{k=1}^{r+1} h(k)·y(k)

= [Ω(r) − (1 / (1 + c(r+1))) · Ω(r)·h(r+1)·h^T(r+1)·Ω(r)] · (Σ_{k=1}^{r} h(k)·y(k) + h(r+1)·y(r+1))


Expanding the product:

πLS(r+1) = Ω(r) · Σ_{k=1}^{r} h(k)·y(k)   [= πLS(r)]

+ Ω(r)·h(r+1)·y(r+1)

− (1 / (1 + c(r+1))) · Ω(r)·h(r+1)·h^T(r+1) · Ω(r)·Σ_{k=1}^{r} h(k)·y(k)   [= πLS(r)]

− (1 / (1 + c(r+1))) · Ω(r)·h(r+1) · [h^T(r+1)·Ω(r)·h(r+1)] · y(r+1)   [h^T(r+1)·Ω(r)·h(r+1) = c(r+1)]

and

−c(r+1) / (1 + c(r+1)) = 1 / (1 + c(r+1)) − 1


πLS(r+1) = πLS(r) + Ω(r)·h(r+1)·y(r+1)

− (1 / (1 + c(r+1))) · Ω(r)·h(r+1)·h^T(r+1) · πLS(r)

+ (1 / (1 + c(r+1)) − 1) · Ω(r)·h(r+1)·y(r+1)

Collecting terms:

πLS(r+1) = πLS(r) − (1 / (1 + c(r+1))) · Ω(r)·h(r+1) · h^T(r+1)·πLS(r) + (1 / (1 + c(r+1))) · Ω(r)·h(r+1) · y(r+1)


Recursive computation of the parameter vector πLS(r)

πLS(r+1) = πLS(r) + (1 / (1 + c(r+1))) · Ω(r)·h(r+1) · (y(r+1) − h^T(r+1)·πLS(r))

with

Recursive update of the gain matrix Ω

Ω(r+1) = Ω(r) − (1 / (1 + c(r+1))) · Ω(r)·h(r+1)·h^T(r+1)·Ω(r)

where c(r+1) = h^T(r+1)·Ω(r)·h(r+1) (scalar),

and

Initialization

πLS(0), Ω(0)
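The complete recursive algorithm can be sketched as follows (a minimal NumPy implementation; the data and the initialization values are illustrative) and compared against the batch solution:

```python
import numpy as np

rng = np.random.default_rng(4)
r, q = 200, 3
H = rng.standard_normal((r, q))
pi_true = np.array([0.5, -1.0, 2.0])
y = H @ pi_true + 0.05 * rng.standard_normal(r)

# Initialization: pi(0) arbitrary, Omega(0) "large" (little prior confidence)
pi = np.zeros(q)
Omega = 1e6 * np.eye(q)

for k in range(r):
    h = H[k]
    c = h @ Omega @ h                          # scalar c(r+1)
    gain = Omega @ h / (1.0 + c)               # correction direction
    pi = pi + gain * (y[k] - h @ pi)           # parameter update
    Omega = Omega - np.outer(gain, h @ Omega)  # gain-matrix update

# Matches the batch solution (up to the tiny regularization from Omega(0))
pi_batch = np.linalg.solve(H.T @ H, H.T @ y)
assert np.allclose(pi, pi_batch, atol=1e-3)
```

Note that no matrix inversion appears inside the loop: each step costs O(q²) instead of the O(q³) of a fresh batch solve.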


Recursive computation of the parameter vector πLS(r)

πLS(r+1) = πLS(r) + (1 / (1 + c(r+1))) · Ω(r)·h(r+1) · (y(r+1) − h^T(r+1)·πLS(r))

can be rewritten as:

πLS(r+1) = πLS(r) + δ(r+1) · (y(r+1) − h^T(r+1)·πLS(r))

Comments on the recursive formulation:

The term δ(r+1) is a vector indicating the correction direction applied to the innovation term (or prediction error).

It is interesting to note that the correction direction does not depend on the magnitude of the prediction error.




Exponential Forgetting

New error weighting for the recursive case

ε(r) = Σ_{k=1}^{r} λ^{r−k} · [y(k) − h^T(k)·πLS(k)]²,   λ < 1

This introduces an “exponential forgetting” process: older errors have a smaller influence on the result of the parameter estimation. The algorithm can thus cope with slowly varying parameters.

Update equations

πLS(r+1) = πLS(r) + (1 / (λ + c(r+1))) · Ω(r)·h(r+1) · [y(r+1) − h^T(r+1)·πLS(r)]

Ω(r+1) = (1/λ) · Ω(r) · [I − (1 / (λ + c(r+1))) · h(r+1)·h^T(r+1)·Ω(r)]




Simplified Recursive LS Algorithm

Kaczmarz’s projection algorithm

Each new prediction error e(r+1) = y(r+1) − h^T(r+1)·π(r) contains new information on the parameters π only in the direction of h(r+1). Therefore, the π(r+1) is sought which requires the smallest possible change π(r+1) − π(r) to explain the new observation.

Cost function to minimize:

J(π) = (1/2) · [π(r+1) − π(r)]^T · (π(r+1) − π(r)) + μ · [y(r+1) − h^T(r+1)·π(r+1)]

Necessary conditions for the minimum:

∂J/∂π(r+1) = 0,   ∂J/∂μ = 0



Solving these linear equations for π(r+1) and μ yields:

π(r+1) = π(r) + (h(r+1) / (h^T(r+1)·h(r+1))) · [y(r+1) − h^T(r+1)·π(r)]

Usually this solution is modified as:

π(r+1) = π(r) + (γ·h(r+1) / (λ + h^T(r+1)·h(r+1))) · [y(r+1) − h^T(r+1)·π(r)]

with 0 < γ < 2, 0 < λ < 1 to achieve the desired convergence and forgetting behavior.
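A minimal sketch of the modified algorithm (the noise-free data and the particular values of γ and λ are illustrative choices): note that only inner products appear, so no gain matrix Ω has to be stored or updated.

```python
import numpy as np

rng = np.random.default_rng(6)
q = 3
pi_true = np.array([1.0, -0.5, 0.25])
pi = np.zeros(q)
gamma, lam = 1.0, 0.1              # step size and regularization

for k in range(500):
    h = rng.standard_normal(q)     # regressor for the new sample
    y = h @ pi_true                # noise-free observation, for clarity
    err = y - h @ pi               # prediction error e(r+1)
    # Projection step: correct pi only along the direction of h
    pi = pi + gamma * h * err / (lam + h @ h)
```

Each iteration costs O(q), compared with O(q²) per step for the full recursive LS update — the price being the slower convergence noted below.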

Discussions

Kaczmarz’s projection algorithm requires less computational effort than regular LS.

However, it converges much more slowly than the regular LS algorithm.

The choice of algorithm depends on the resources at hand and the convergence-speed requirements.


Next lecture + Upcoming Exercise

Next lecture

Stability Analysis

Properties of Linear Systems

Next exercises:

Least squares

Parameter identification
