Parameter Estimation II - Adaptive Control



EE 264 SIST, ShanghaiTech



Contents

Vector Case

Gradient-based Algorithms

Least-square Algorithms


Generalized SPM

Consider an LTI SISO system

    y = G(s)u,   G(s) = Z(s)/R(s)   (1)

with

    R(s) = s^n + a_{n-1}s^{n-1} + ⋯ + a_1 s + a_0
    Z(s) = b_m s^m + ⋯ + b_1 s + b_0

and k_p = b_m, also known as the high-frequency gain. Expressing the system as an nth-order differential equation, we obtain

    y^(n) + a_{n-1}y^(n-1) + ⋯ + a_1 ẏ + a_0 y = b_m u^(m) + ⋯ + b_1 u̇ + b_0 u.


Generalized SPM

Lumping all the parameters into a vector and filtering both sides with 1/Λ(s), where

    Λ(s) = s^n + λ_{n-1}s^{n-1} + ⋯ + λ_1 s + λ_0

is a monic Hurwitz polynomial, we obtain the parametric model

    z = θ∗ᵀφ

where

    z = (1/Λ(s)) y^(n) = (s^n/Λ(s)) y
    θ∗ = [b_m, …, b_0, a_{n-1}, …, a_0]ᵀ ∈ R^{n+m+1}
    φ = [ (s^m/Λ(s))u, …, (1/Λ(s))u, −(s^{n-1}/Λ(s))y, …, −(1/Λ(s))y ]ᵀ
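The filters s^i/Λ(s) that generate z and φ are easy to realize in state space: with 1/Λ(s) in controllable canonical form, the i-th state of the filter driven by a signal w equals (s^i/Λ(s))w. Below is a minimal Python sketch of this construction; the step size, test input, and choice of Λ(s) are assumptions made only for illustration, and y would come from the measured plant output.

    import numpy as np

    def lambda_filter(lam, w, dt):
        """States of 1/Λ(s) realized in controllable canonical form.
        lam = [λ0, ..., λ_{n-1}] of the monic Hurwitz Λ(s); w = input samples.
        Column i of the result approximates (s^i/Λ(s)) w, i = 0, ..., n-1."""
        n = len(lam)
        A = np.eye(n, k=1)                 # companion matrix of Λ(s)
        A[-1, :] = -np.asarray(lam)
        b = np.zeros(n); b[-1] = 1.0
        x = np.zeros(n)
        X = np.empty((len(w), n))
        for k, wk in enumerate(w):
            X[k] = x
            x = x + dt * (A @ x + b * wk)  # forward-Euler integration
        return X

    # Assumed example with n = 2 and Λ(s) = s² + 2s + 1, i.e., lam = [1, 2]:
    dt = 1e-3
    t = np.arange(0.0, 10.0, dt)
    u = np.sin(t) + np.sin(3.0 * t)        # assumed test input
    Xu = lambda_filter([1.0, 2.0], u, dt)  # columns: (1/Λ)u, (s/Λ)u
    # Filtering y the same way gives the y-entries of φ, and the last state
    # equation gives z = (s²/Λ)y = y − 2·(s/Λ)y − 1·(1/Λ)y.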

Generalized SPM

The objective is to process the signals z and φ in order to generate an estimate θ(t) of θ∗ at each time t, as follows:

    θ(t) = Φ(z, φ)

Different choices of Φ(·) lead to a wide class of adaptive laws with, sometimes, different convergence properties, as demonstrated in the following lectures.

Questions:

1. What if a0, a1, b_m are known?

2. If we use the previous gradient-based method, what kind of u shall we choose to ensure exponential convergence?



Gradient-based Algorithms

Start with the SPM

    z = θ∗ᵀφ,

with θ∗ ∈ R^n and φ ∈ R^n. The estimation error is constructed as

    ε = (z − ẑ)/m_s² = (z − θᵀφ)/m_s²

where m_s² is the normalizing signal. The gradient algorithm is developed by using the gradient method to minimize some appropriate functional J(θ):

    θ̇ = −Γ∇J(θ)

Different choices for J(θ) lead to different algorithms.


Gradient-based Algorithms

1. Instantaneous cost function

    J(θ) = ε²m_s²/2 = (z − θᵀφ)²/(2m_s²)

The adaptive law for θ(t),

    θ̇ = Γεφ,   θ(0) = θ0,

guarantees the following properties:

- ε, εm_s, θ̇ ∈ L2 ∩ L∞ and θ ∈ L∞

- If there exists T0 > 0 such that φ/m_s is PE with level α0, then θ(t) converges to θ∗ exponentially fast.
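As a concrete illustration, here is a minimal simulation sketch of the law θ̇ = Γεφ with forward-Euler integration; the true parameter vector, the regressor, and m_s² = 1 + φᵀφ are assumptions made only for this example.

    import numpy as np

    # Gradient adaptive law θ̇ = Γεφ on a 2-parameter SPM (all values assumed).
    theta_star = np.array([2.0, -1.0])     # unknown θ*, used only to generate z
    Gamma = 10.0 * np.eye(2)               # adaptive gain Γ > 0
    theta = np.zeros(2)                    # θ(0) = θ0
    dt = 1e-3
    for k in range(200_000):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])  # PE regressor (2 freqs)
        z = theta_star @ phi                          # SPM: z = θ*ᵀφ
        ms2 = 1.0 + phi @ phi                         # m_s² = 1 + φᵀφ (assumed)
        eps = (z - theta @ phi) / ms2                 # normalized error ε
        theta = theta + dt * (Gamma @ (eps * phi))    # Euler step of θ̇ = Γεφ
    print(theta)                                      # ≈ [2, -1]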


Gradient-based Algorithms

In addition, for all t ≥ nT0, n = 0, 1, ..., it holds that

    (θ(t) − θ∗)ᵀ Γ⁻¹ (θ(t) − θ∗) ≤ (1 − γ1)ⁿ (θ(0) − θ∗)ᵀ Γ⁻¹ (θ(0) − θ∗)

with

    γ1 = 2α0T0λ_min(Γ) / (2 + β⁴λ_max²(Γ)T0²),   β = sup_t |φ/m_s|.

Furthermore, if the regressor signal φ is of the form

    φ = H(s)u,   H(s) = [ s^m/Λ(s), …, 1/Λ(s), −s^{n-1}G(s)/Λ(s), …, −G(s)/Λ(s) ]ᵀ

where G(s), defined in (1), is Hurwitz and has no pole-zero cancellation, then given a sufficiently rich input u of order n + m + 1, φ and φ/m_s are PE, and θ̃ = θ − θ∗, ε, and θ̇ all converge to zero exponentially fast.


Page 17: Parameter Estimation II - Adaptive Control

Remark:The rate of convergence can be improved if we choose the design

parameters so that 1− γ1 is as small as possible. Examining the

expression for γ1

γ1 = 2α0T0λmin(Γ)2 + β4λ2

max(Γ)T 20

The only free design parameter is the adaptive gain matrix Γ.

However, very small or very large values of Γ lead to slower

convergence rates. In general, the convergence rate depends on

the signal input and filters used in addition to Γ in a way that is

not understood quantitatively.

Parameter Estimation II 7-17
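A quick numerical look makes the remark concrete: for Γ = γI the formula reduces to γ1(γ) = 2α0T0γ/(2 + β⁴γ²T0²), which vanishes both as γ → 0 and as γ → ∞. The constants below are assumptions chosen only for illustration.

    # Evaluate γ1(γ) = 2·α0·T0·γ / (2 + β⁴·γ²·T0²) for Γ = γI (constants assumed).
    a0, T0, beta = 0.1, 5.0, 1.0
    for g in [0.01, 0.1, 1.0, 10.0, 100.0]:
        g1 = 2.0 * a0 * T0 * g / (2.0 + beta**4 * g**2 * T0**2)
        print(f"gamma = {g:7.2f}   gamma1 = {g1:.4f}")
    # γ1 peaks at an intermediate gain (γ = √2/(β²T0)) and is small at both extremes.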


Gradient-based Algorithms

2. Integral cost function

    J(θ) = (1/2) ∫₀ᵗ e^{−β(t−τ)} ε²(t, τ) m_s²(τ) dτ

where β > 0 is a design constant acting as a forgetting factor and

    ε(t, τ) = (z(τ) − θᵀ(t)φ(τ)) / m_s²(τ),   ε(t, t) = ε(t),   τ ≤ t.

Remark:

- The forgetting factor e^{−β(t−τ)} is used to put more weight on recent data by discounting earlier data.

- It is clear that J is a convex function of θ at each time t.


Gradient-based Algorithms

Applying the gradient method, we have the adaptive law

    θ̇ = Γ ∫₀ᵗ e^{−β(t−τ)} ((z(τ) − θᵀ(t)φ(τ)) / m_s²(τ)) φ(τ) dτ

which can be implemented as

    θ̇ = −Γ(R(t)θ + Q(t)),   θ(0) = θ0
    Ṙ = −βR + φφᵀ/m_s²,   R(0) = 0 ∈ R^{n×n}
    Q̇ = −βQ − zφ/m_s²,   Q(0) = 0 ∈ R^n

where n is the dimension of the vector θ∗.
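The implementable form is three coupled ODEs, which discretize directly. A minimal sketch under the same assumed setup as the earlier gradient example:

    import numpy as np

    # Integral-cost law: θ̇ = -Γ(Rθ + Q), Ṙ = -βR + φφᵀ/m_s², Q̇ = -βQ - zφ/m_s²
    # (forward Euler; θ*, input, and gains are assumed for illustration).
    theta_star = np.array([2.0, -1.0])
    Gamma, beta, dt = 5.0 * np.eye(2), 1.0, 1e-3
    theta, R, Q = np.zeros(2), np.zeros((2, 2)), np.zeros(2)  # θ(0), R(0), Q(0)
    for k in range(200_000):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        z = theta_star @ phi
        ms2 = 1.0 + phi @ phi
        R = R + dt * (-beta * R + np.outer(phi, phi) / ms2)
        Q = Q + dt * (-beta * Q - z * phi / ms2)
        theta = theta + dt * (-Gamma @ (R @ theta + Q))
    print(theta)   # ≈ [2, -1]; note R·θ* + Q → 0, so θ* is the equilibrium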


Gradient-based Algorithms

Lemma: For the SPM, the gradient-based algorithm with the integral cost function guarantees that

- ε, εm_s, θ̇ ∈ L2 ∩ L∞ and θ ∈ L∞

- lim_{t→∞} |θ̇(t)| = 0

- If φ/m_s is PE, then θ(t) → θ∗ exponentially fast. Furthermore, for Γ = γI, the rate of convergence increases with γ.

Example: Consider the plant

    y = ((b1 s + b0) / (s² + 2s + 1)) u

where the parameters b0 and b1 are unknown.
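Since only b0 and b1 are unknown here (cf. Question 1 earlier), the parametric model can be reduced below the generic n + m + 1 = 4 parameters. One convenient choice, sketched as an assumption rather than the unique answer: take Λ(s) = s² + 2s + 1, which is monic, Hurwitz, and equal to R(s). Then

    y = ((b1 s + b0)/Λ(s)) u = θ∗ᵀφ,   θ∗ = [b1, b0]ᵀ,   φ = [ (s/Λ(s))u, (1/Λ(s))u ]ᵀ,

so z = y and only two parameters remain. For exponential convergence, an input sufficiently rich of order 2 suffices, e.g., the single sinusoid u = sin t.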


Least-square Algorithms

The basic idea is to fit a mathematical model to a sequence of observed data by minimizing the sum of the squares of the differences between the observed and computed data.

Again for the SPM

    z = θ∗ᵀφ,

recall the instantaneous cost function

    J(θ) = ε²m_s²/2 = (z − θᵀφ)²/(2m_s²).

At each time, its minimum satisfies

    ∇J(θ) = −((z − θᵀφ)/m_s²) φ = 0.


LS Algorithms

However,

    z − θᵀφ = 0

is not solvable for non-scalar θ, since φφᵀ is singular at each time instant. For the scalar case, if the measurement is corrupted by an additive disturbance d_n,

    z = θ∗φ + d_n,

then the estimate given by

    θ(τ) = z(τ)/φ(τ) = θ∗ + d_n(τ)/φ(τ)

may be far off from the true value even for a small disturbance, since d_n(τ)/φ(τ) can be large whenever φ(τ) is small.


LS Algorithms

Consider the integral cost function

    J(θ) = (1/2) ∫₀ᵗ ε²(t, τ) m_s²(τ) dτ.

Then

    ∇J = −∫₀ᵗ ((z(τ) − θᵀ(t)φ(τ)) / m_s²(τ)) φ(τ) dτ = 0

admits the solution

    θ(t) = ( ∫₀ᵗ (φ(τ)φᵀ(τ)/m_s²(τ)) dτ )⁻¹ ∫₀ᵗ (z(τ)φ(τ)/m_s²(τ)) dτ

for any persistently exciting φ/m_s and t ≥ T0.
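In discrete time the two integrals become running sums and the estimate is a single linear solve. A small sketch (plant, input, and horizon assumed; t must be large enough that the accumulated Grammian is well conditioned):

    import numpy as np

    # Nonrecursive LS: θ(t) = (∫ φφᵀ/m_s² dτ)⁻¹ ∫ zφ/m_s² dτ, via Riemann sums.
    theta_star = np.array([2.0, -1.0])     # assumed true parameters
    dt = 1e-3
    G = np.zeros((2, 2))                   # accumulates ∫ φφᵀ/m_s² dτ
    h = np.zeros(2)                        # accumulates ∫ zφ/m_s² dτ
    for k in range(50_000):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        z = theta_star @ phi
        ms2 = 1.0 + phi @ phi
        G += np.outer(phi, phi) / ms2 * dt
        h += z * phi / ms2 * dt
    print(np.linalg.solve(G, h))           # ≈ [2, -1]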

Recursive LS Algorithm

Consider the cost function

    J(θ) = (1/2) ∫₀ᵗ e^{−β(t−τ)} ([z(τ) − θᵀ(t)φ(τ)]² / m_s²(τ)) dτ + (1/2) e^{−βt} (θ − θ0)ᵀ Q0 (θ − θ0)

where β > 0 and Q0 = Q0ᵀ > 0. J(θ) is a convex function of θ over R^n; hence the global minimum, i.e., the estimate of θ∗, is obtained by solving ∇J(θ) = 0.


Recursive LS Algorithm

Solving

    ∇J(θ) = e^{−βt} Q0 (θ(t) − θ0) − ∫₀ᵗ e^{−β(t−τ)} ((z(τ) − θᵀ(t)φ(τ)) / m_s²(τ)) φ(τ) dτ = 0

yields the nonrecursive LS algorithm

    θ(t) = P(t) [ e^{−βt} Q0 θ0 + ∫₀ᵗ e^{−β(t−τ)} (z(τ)φ(τ)/m_s²(τ)) dτ ]

where

    P(t) = [ e^{−βt} Q0 + ∫₀ᵗ e^{−β(t−τ)} (φ(τ)φᵀ(τ)/m_s²(τ)) dτ ]⁻¹.

To avoid computing the matrix inverse, we can express θ(t) and P(t) in a recursive way.

Recursive LS Algorithm

The recursive LS algorithm:

    θ̇ = Pεφ,   θ(0) = θ0
    Ṗ = βP − P(φφᵀ/m_s²)P,   P(0) = P0 = Q0⁻¹

Theorem: If φ/m_s is PE and β > 0, then the recursive LS algorithm with forgetting factor guarantees that P, P⁻¹ ∈ L∞ and that θ(t) → θ∗ as t → ∞ exponentially fast.

Remark:

1. Stability cannot be established unless φ/m_s is PE.

2. P(t) may grow without bound.
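A minimal simulation sketch of this law, again with forward Euler and the same assumed SPM setup as in the gradient examples; the regressor here is PE, as the theorem requires:

    import numpy as np

    # Recursive LS with forgetting: θ̇ = Pεφ, Ṗ = βP - P(φφᵀ/m_s²)P (setup assumed).
    theta_star = np.array([2.0, -1.0])
    beta, dt = 1.0, 1e-3
    theta = np.zeros(2)                    # θ(0) = θ0
    P = 10.0 * np.eye(2)                   # P(0) = P0 = Q0⁻¹
    for k in range(100_000):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        z = theta_star @ phi
        ms2 = 1.0 + phi @ phi
        eps = (z - theta @ phi) / ms2
        theta = theta + dt * (P @ phi) * eps
        P = P + dt * (beta * P - P @ np.outer(phi, phi) @ P / ms2)
    print(theta)   # ≈ [2, -1]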


Modified Recursive LS Algorithm

To avoid the unboundedness of P(t), the LS algorithm is modified as follows:

    θ̇ = Pεφ,   θ(0) = θ0

    Ṗ = βP − P(φφᵀ/m_s²)P   if ‖P(t)‖ ≤ R0
    Ṗ = 0                   otherwise

where P(0) > 0, ‖P(0)‖ ≤ R0, and R0 is a constant that serves as an upper bound for ‖P‖.

Theorem: The modified recursive LS algorithm guarantees that

(i) ε, εm_s, θ̇ ∈ L2 ∩ L∞ and θ ∈ L∞

(ii) If φ/m_s is PE, then θ(t) → θ∗ as t → ∞ exponentially fast.
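Relative to the previous sketch, the modification is just a gate on the covariance update. One way to express it, with the norm choice and R0 as assumptions:

    import numpy as np

    # One Euler step of the gated update: Ṗ = βP - P(φφᵀ/m_s²)P while ‖P‖ ≤ R0,
    # and Ṗ = 0 otherwise (spectral norm and R0 are assumed design choices).
    def covariance_step(P, phi, ms2, beta, dt, R0):
        if np.linalg.norm(P, 2) <= R0:     # ‖P(t)‖ ≤ R0: update normally
            P = P + dt * (beta * P - P @ np.outer(phi, phi) @ P / ms2)
        return P                           # else P is frozen, bounding ‖P‖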


Pure LS

When β = 0, the recursive LS estimation algorithm reduces to

    θ̇ = Pεφ,   θ(0) = θ0
    Ṗ = −P(φφᵀ/m_s²)P,   P(0) = P0

which is referred to as the pure LS algorithm.

Theorem: The pure LS algorithm guarantees that

(i) ε, εm_s, θ̇ ∈ L2 ∩ L∞ and θ, P ∈ L∞

(ii) lim_{t→∞} θ(t) = θ̄ for some constant vector θ̄. If φ/m_s is PE, then θ̄ = θ∗.

Note that only asymptotic convergence can be assured.


Pure LS

Drawbacks:

- Covariance wind-up problem: the covariance matrix P may become arbitrarily small and slow down adaptation in some directions.

- Non-exponential convergence speed; see the scalar example below (a numerical check follows):

    ṗ = −p²φ²,   p(0) = p0 > 0.

Let φ = 1, which is PE in this case; then we have

    p(t) = p0/(1 + p0 t)

and

    θ̃(t) = θ̃(0)/(1 + p0 t),

i.e., the parameter error decays only like 1/t rather than exponentially.
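A quick check of the claimed 1/t behavior (a sketch; φ = 1, m_s = 1, p0 = 1, and θ̃(0) = 1 are assumed). With ε = −θ̃φ, the scalar pure LS law gives dθ̃/dt = −pφ²θ̃, which the loop integrates alongside ṗ = −p²φ²:

    # Scalar pure LS: integrate dp/dt = -p²·φ², dθ̃/dt = -p·φ²·θ̃ with φ = 1,
    # and compare θ̃ against the closed form θ̃(0)/(1 + p0·t)  (values assumed).
    p0, dt, N = 1.0, 1e-3, 100_000
    p, err = p0, 1.0                       # p(0) = p0, θ̃(0) = 1
    for _ in range(N):
        err = err + dt * (-p * err)        # dθ̃/dt = -p·θ̃   (φ = 1)
        p = p + dt * (-p * p)              # dp/dt = -p²     (φ = 1)
    t = N * dt
    print(err, 1.0 / (1.0 + p0 * t))       # both ≈ 0.0099 at t = 100: 1/t decay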


Modified Pure LS

Avoid the covariance wind-up problem via resetting:

    θ̇ = Pεφ,   θ(0) = θ0
    Ṗ = −P(φφᵀ/m_s²)P,   P(t_r⁺) = P0 = ρ0I

where t_r is the time at which λ_min(P(t)) ≤ ρ1, and ρ0 > ρ1 > 0 are some design scalars. Therefore, P is guaranteed to be positive definite for all t > 0.

Theorem: The pure LS algorithm with covariance resetting guarantees that

(i) ε, εm_s, θ̇ ∈ L2 ∩ L∞ and θ ∈ L∞

(ii) If φ/m_s is PE, then θ(t) → θ∗ as t → ∞ exponentially fast.
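A minimal sketch of the resetting logic (ρ0, ρ1, and the rest of the setup are assumed): monitor λ_min(P) at each step and reinitialize P to ρ0·I whenever it drops to ρ1.

    import numpy as np

    # Pure LS with covariance resetting: Ṗ = -P(φφᵀ/m_s²)P, P(t_r⁺) = ρ0·I
    # whenever λ_min(P) ≤ ρ1   (ρ0 > ρ1 > 0 and the SPM setup are assumed).
    theta_star = np.array([2.0, -1.0])
    rho0, rho1, dt = 10.0, 0.1, 1e-3
    theta, P = np.zeros(2), 10.0 * np.eye(2)     # θ(0), P(0) = ρ0·I
    for k in range(100_000):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        z = theta_star @ phi
        ms2 = 1.0 + phi @ phi
        eps = (z - theta @ phi) / ms2
        theta = theta + dt * (P @ phi) * eps
        P = P - dt * (P @ np.outer(phi, phi) @ P / ms2)
        if np.linalg.eigvalsh(P).min() <= rho1:  # λ_min(P) ≤ ρ1 → reset
            P = rho0 * np.eye(2)
    print(theta)   # ≈ [2, -1]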
