
ECE5530: Multivariable Control Systems II. 8–1

$H_\infty$ FULL-INFORMATION CONTROL AND ESTIMATION

8.1: Differential games

■ $H_2$ control optimizes average-case performance. Assumes complete and perfect plant knowledge.

■ $H_\infty$ control optimizes worst-case performance (gain). Also assumes perfect and complete plant knowledge.

◦ $H_\infty$ does not guarantee robustness directly.

◦ We will see more next chapter re. $\mu$-synthesis to guarantee robustness.

■ LQR, LQE and LQG are all posed as $H_2$ optimization problems. The cost functions may be modified easily to pose them as $H_\infty$ optimization problems.

■ So, for $H_\infty$ control, we will have a similar set of problems to solve, but different cost functions to optimize.

■ The $H_\infty$ problem seeks to minimize the maximum gain of some system over the set of disturbance inputs.

◦ "Disturbance" inputs $w$ are true disturbances, reference inputs, etc.

◦ Performance "outputs" $y$ comprise the system state $x(t)$, input $u(t)$, etc.

Lecture notes prepared by Dr. Gregory L. Plett. Copyright © 2016, 2010, 2008, 2006, 2004, 2002, 2001, 1999, Gregory L. Plett


■ The $H_\infty$ problem may be posed as a two-player game: The designer seeks a control to minimize the system gain; Nature seeks a disturbance input to maximize the gain.

■ Dynamics specified by differential equations ➠ "Differential Game."

■ The state dynamics are modeled by
$$\dot{x}(t) = A x(t) + B_u u(t) + B_w w(t).$$

■ The objective function is a real function of the state, control, and disturbance: $J\{x(t), u(t), w(t)\}$.

■ The solution consists of the optimal control trajectory $u^*(t)$ and the worst-case disturbance input $w^*(t)$:
$$J\{x(u^*, w), u^*, w\} \le J\{x(u^*, w^*), u^*, w^*\} \le J\{x(u, w^*), u, w^*\}.$$

■ The control solution is then that of the mini-max problem
$$u^* = \arg\min_u \left( \max_w J\{x(u, w), u, w\} \right).$$

■ Lagrange multipliers may be used to convert a constrained mini-max problem into an unconstrained mini-max problem of higher order:
$$J_a(x, u, w, \lambda) = J(x, u, w) + \int_0^{t_f} \lambda^T(t)\{A x(t) + B_u u(t) + B_w w(t) - \dot{x}(t)\}\, dt.$$

■ A necessary condition for a saddle point is that the variation of $J_a$ must be zero.


8.2: Full-information control

■ Our first control design will assume "full information:"

◦ Plant state known,

◦ Input "disturbance" known.

■ The plant is modeled as
$$\begin{aligned} \dot{x}(t) &= A x(t) + B_u u(t) + B_w w(t) \\ y(t) &= C_y x(t) + D_{yu} u(t), \end{aligned}$$
where $D_{yu}^T C_y = 0$ and $D_{yu}^T D_{yu} = I$.

■ The output consists of two parts: linear combinations of the state $x(t)$ and linear combinations of the control $u(t)$.

◦ The conditions $D_{yu}^T C_y = 0$ and $D_{yu}^T D_{yu} = I$ enforce separation between these two parts, and normalize the contribution of $u(t)$ to the output.

◦ For example, $C_y = \begin{bmatrix} I_n & 0 \end{bmatrix}^T$ and $D_{yu} = \begin{bmatrix} 0 & I_m \end{bmatrix}^T$.

■ These restrictions may be relaxed at the expense of more difficult mathematics to follow (we won't relax them!).

■ The $H_\infty$ full-information control problem is to find a feedback controller, using the state and disturbance, that minimizes the closed-loop $\infty$-norm
$$J = \left\| G_{yw} \right\|_{\infty,[0,t_f]} = \sup_{\|w(t)\|_{2,[0,t_f]} \ne 0} \frac{\|y(t)\|_{2,[0,t_f]}}{\|w(t)\|_{2,[0,t_f]}}.$$

[Block diagram: the Plant has inputs $u(t)$ and $w(t)$ and outputs $x(t)$ and $y(t)$; the Controller $K(\cdot)$ maps $x(t)$ and $w(t)$ to $u(t)$.]

■ The controller is a linear system (not necessarily time-invariant), denoted $K(\cdot)$.


■ This objective function may not be used directly as a differential game since it is a function of only the controller, and not of any particular single disturbance signal.

◦ The $\sup(\cdot)$ makes the cost $J$ independent of any particular disturbance input.

◦ If we simply drop the $\sup(\cdot)$, we do not end up with a tractable optimization problem.

■ We modify the function in a few steps to get a more useful representation that does remove the $\sup(\cdot)$.

■ First, consider that the optimum controller must be better than some suboptimal controller having cost $\gamma$. So,
$$\left\| G_{yw} \right\|_{\infty,[0,t_f]} = \sup_{\|w(t)\|_{2,[0,t_f]} \ne 0} \frac{\|y(t)\|_{2,[0,t_f]}}{\|w(t)\|_{2,[0,t_f]}} < \gamma.$$

■ Controllers that satisfy this bound are suboptimal solutions to the $H_\infty$ full-information control problem (or simply suboptimal controllers).

■ The same result is obtained by squaring both sides of the inequality:
$$\left\| G_{yw} \right\|^2_{\infty,[0,t_f]} = \sup_{\|w(t)\|_{2,[0,t_f]} \ne 0} \left\{ \frac{\|y(t)\|^2_{2,[0,t_f]}}{\|w(t)\|^2_{2,[0,t_f]}} \right\} < \gamma^2.$$

■ To satisfy this strict inequality, the term in $\{\cdot\}$ must be bounded away from $\gamma^2$:
$$\frac{\|y(t)\|^2_{2,[0,t_f]}}{\|w(t)\|^2_{2,[0,t_f]}} \le \gamma^2 - \epsilon^2.$$

■ Regrouping,
$$\|y(t)\|^2_{2,[0,t_f]} - \gamma^2 \|w(t)\|^2_{2,[0,t_f]} \le -\epsilon^2 \|w(t)\|^2_{2,[0,t_f]}.$$


■ Satisfying this inequality means that the closed-loop infinity norm is bounded by $\gamma$ for all disturbance inputs and for some $\epsilon$.

■ We can then use the expression on the left as an objective function for differential game theory:
$$J_\gamma(x, u, w) = \|y(t)\|^2_{2,[0,t_f]} - \gamma^2 \|w(t)\|^2_{2,[0,t_f]}.$$

Bisection Algorithm

■ If $\gamma$ is initially set high enough (we will see how to test for this later), then a solution exists for $J_\gamma$ (to be developed in the next sections).

■ This solution is a suboptimal solution to the original $H_\infty$ optimization problem.

■ The goal is to find the smallest $\gamma$ and approximate the $H_\infty$ optimal solution as closely as possible.

■ The following algorithm may be used to search for a good suboptimal controller.

Bisection search algorithm steps

1. Select $\gamma_u$, $\gamma_l$ such that $\gamma_l \le \left\| G_{yw} \right\|_{\infty,[0,t_f]} \le \gamma_u$.

2. Test $(\gamma_u - \gamma_l)/\gamma_l < \text{tol}$.

   ■ If true, stop. $\left\| G_{yw} \right\|_{\infty,[0,t_f]} \approx \frac{1}{2}(\gamma_u + \gamma_l)$.

   ■ If false, continue.

3. With $\gamma = \frac{1}{2}(\gamma_l + \gamma_u)$, test if $\left\| G_{yw} \right\|_{\infty,[0,t_f]} < \gamma$ using a test TBD.

   ■ If true, set $\gamma_u = \gamma$ (test value too high) and go to step 2.

   ■ If false, set $\gamma_l = \gamma$ (test value too low) and go to step 2.
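The steps above can be sketched directly in code. Here `gamma_exists` is a placeholder for the still-to-be-developed existence test (the Riccati/Hamiltonian tests of Sections 8.3–8.4); the demo test below is purely hypothetical, standing in for a system whose true closed-loop infinity norm is 2.0.

```python
# Sketch of the bisection search for the performance bound gamma.

def bisect_gamma(gamma_l, gamma_u, gamma_exists, tol=1e-4):
    """Shrink [gamma_l, gamma_u] until (gamma_u - gamma_l)/gamma_l < tol."""
    while (gamma_u - gamma_l) / gamma_l >= tol:
        gamma = 0.5 * (gamma_l + gamma_u)
        if gamma_exists(gamma):   # norm bound achievable: test value too high
            gamma_u = gamma
        else:                     # norm bound not achievable: test value too low
            gamma_l = gamma
    return 0.5 * (gamma_u + gamma_l)

# Demo with a hypothetical plant whose closed-loop infinity norm is 2.0,
# so the (placeholder) existence test is simply gamma > 2.0:
gamma_min = bisect_gamma(1.0, 10.0, lambda g: g > 2.0)
print(round(gamma_min, 3))        # → 2.0
```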


8.3: The Hamiltonian equations

■ We now solve the suboptimal $H_\infty$ control problem.

■ We form the augmented cost function
$$J_{a,\gamma}(u, w, \lambda) = \int_0^{t_f} y^T(t) y(t) - \gamma^2 w^T(t) w(t) + 2\lambda^T(t)\{A x(t) + B_u u(t) + B_w w(t) - \dot{x}(t)\}\, dt.$$

■ Note that, by our assumptions on $C_y$ and $D_{yu}$,
$$y^T(t) y(t) = x^T(t) C_y^T C_y x(t) + \underbrace{x^T(t) C_y^T D_{yu} u(t)}_{0} + \underbrace{u^T(t) D_{yu}^T C_y x(t)}_{0} + u^T(t) \underbrace{D_{yu}^T D_{yu}}_{I} u(t).$$

■ Next, we form the increment of the cost function
$$\begin{aligned} \Delta J_{a,\gamma}&(u, w, \lambda, \delta u, \delta w, \delta\lambda) \\ &= J_{a,\gamma}(u + \delta u, w + \delta w, \lambda + \delta\lambda) - J_{a,\gamma}(u, w, \lambda) \\ &= \int_0^{t_f} (x + \delta x)^T C_y^T C_y (x + \delta x) + (u + \delta u)^T (u + \delta u) - \gamma^2 (w + \delta w)^T (w + \delta w) \\ &\qquad\quad + 2(\lambda + \delta\lambda)^T \{A(x + \delta x) + B_u(u + \delta u) + B_w(w + \delta w) - (\dot{x} + \delta\dot{x})\}\, dt \\ &\quad - \int_0^{t_f} x^T C_y^T C_y x + u^T u - \gamma^2 w^T w + 2\lambda^T \{A x + B_u u + B_w w - \dot{x}\}\, dt. \end{aligned}$$

■ Expanding and grouping,


$$\begin{aligned} \Delta J_{a,\gamma} = \int_0^{t_f} &\delta x^T C_y^T C_y \delta x + \delta u^T \delta u - \gamma^2 \delta w^T \delta w \\ &+ 2\delta\lambda^T \{A \delta x + B_u \delta u + B_w \delta w - \delta\dot{x}\} + 2 u^T \delta u \\ &+ 2 x^T C_y^T C_y \delta x - 2\gamma^2 w^T \delta w + 2\delta\lambda^T \{A x + B_u u + B_w w - \dot{x}\} \\ &+ 2\lambda^T \{A \delta x + B_u \delta u + B_w \delta w - \delta\dot{x}\}\, dt. \end{aligned}$$

■ The variation is (set to zero to find the optimum)
$$\begin{aligned} \delta J_{a,\gamma} = \int_0^{t_f} &2 x^T C_y^T C_y \delta x + 2 u^T \delta u - 2\gamma^2 w^T \delta w \\ &+ 2\delta\lambda^T \{A x + B_u u + B_w w - \dot{x}\} \\ &+ 2\lambda^T \{A \delta x + B_u \delta u + B_w \delta w - \delta\dot{x}\}\, dt = 0. \end{aligned}$$

■ Integration by parts yields
$$\int_0^{t_f} \lambda^T(t) \delta\dot{x}(t)\, dt = \lambda^T(t_f) \delta x(t_f) - \lambda^T(0) \delta x(0) - \int_0^{t_f} \dot{\lambda}^T(t) \delta x(t)\, dt.$$

■ We assume that the initial state is fixed so $\delta x(0) = 0$. Substituting:
$$\begin{aligned} \delta J_{a,\gamma} = -2\lambda^T(t_f) \delta x(t_f) + \int_0^{t_f} &(2\dot{\lambda}^T + 2 x^T C_y^T C_y + 2\lambda^T A)\delta x \\ &+ (2\lambda^T B_w - 2\gamma^2 w^T)\delta w + (2 u^T + 2\lambda^T B_u)\delta u \\ &+ 2\delta\lambda^T (A x + B_u u + B_w w - \dot{x})\, dt = 0. \end{aligned}$$

■ The conditions for optimality are then
$$\begin{aligned} \lambda(t_f) &= 0; \\ \dot{\lambda}(t) &= -C_y^T C_y x(t) - A^T \lambda(t); \\ u(t) &= -B_u^T \lambda(t); \\ w(t) &= \gamma^{-2} B_w^T \lambda(t); \\ \dot{x}(t) &= A x(t) + B_u u(t) + B_w w(t). \end{aligned}$$

■ Combining, we get a Hamiltonian equation
$$\begin{bmatrix} \dot{x}(t) \\ \dot{\lambda}(t) \end{bmatrix} = \underbrace{\begin{bmatrix} A & -B_u B_u^T + \gamma^{-2} B_w B_w^T \\ -C_y^T C_y & -A^T \end{bmatrix}}_{H_\infty \text{ Hamiltonian matrix}} \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix} = Z_\infty \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix}$$
with initial condition $x(0) = 0$ and final condition $\lambda(t_f) = 0$.

■ The solution to this system, at the final time, is
$$\begin{bmatrix} x(t_f) \\ \lambda(t_f) \end{bmatrix} = e^{Z_\infty (t_f - t)} \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix} = \begin{bmatrix} \Phi_{11}(t_f - t) & \Phi_{12}(t_f - t) \\ \Phi_{21}(t_f - t) & \Phi_{22}(t_f - t) \end{bmatrix} \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix}.$$

■ Substitute the final condition $\lambda(t_f) = 0$:
$$\begin{bmatrix} x(t_f) \\ 0 \end{bmatrix} = \begin{bmatrix} \Phi_{11}(t_f - t) & \Phi_{12}(t_f - t) \\ \Phi_{21}(t_f - t) & \Phi_{22}(t_f - t) \end{bmatrix} \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix}.$$

■ Solving the lower block row,
$$\lambda(t) = -\{\Phi_{22}(t_f - t)\}^{-1} \Phi_{21}(t_f - t) x(t) = P(t) x(t).$$

■ Concluding,
$$u(t) = -B_u^T P(t) x(t) = B_u^T \{\Phi_{22}(t_f - t)\}^{-1} \Phi_{21}(t_f - t) x(t) = -K(t) x(t).$$


EXAMPLE 9.1: We wish to control a plant that is modeled as
$$\begin{aligned} \dot{x}(t) &= x(t) + u(t) + 2 w(t) \\ y(t) &= \begin{bmatrix} 10 \\ 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t), \end{aligned}$$

where the objective function is
$$J_\gamma\{x(t), u(t), w(t)\} = \|y(t)\|_2^2 - \gamma^2 \|w(t)\|_2^2.$$

■ Note: State and control weightings are incorporated into the "output" equation, which is scaled to yield a unity weight on the control.

■ The Hamiltonian system is:
$$\begin{bmatrix} \dot{x}(t) \\ \dot{\lambda}(t) \end{bmatrix} = \begin{bmatrix} 1 & -1 + 4\gamma^{-2} \\ -100 & -1 \end{bmatrix} \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix},$$

which has state-transition matrix
$$e^{Z_\infty t} = \mathcal{L}^{-1}\left[ (sI - Z_\infty)^{-1} \right] = \mathcal{L}^{-1}\left[ \frac{1}{s^2 - 101 + 400\gamma^{-2}} \begin{bmatrix} s + 1 & -1 + 4\gamma^{-2} \\ -100 & s - 1 \end{bmatrix} \right].$$
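This matrix inversion is easy to check symbolically; a small sympy sketch (not part of the original notes) confirms the displayed resolvent:

```python
import sympy as sp

# Verify that (sI - Z_inf)^{-1} for the example's Hamiltonian matrix equals
# the displayed expression with denominator s^2 - 101 + 400*gamma^{-2}.
s, g = sp.symbols('s gamma', positive=True)
Z = sp.Matrix([[1, -1 + 4/g**2],
               [-100, -1]])
resolvent = (s*sp.eye(2) - Z).inv()

den = s**2 - 101 + 400/g**2
expected = sp.Matrix([[s + 1, -1 + 4/g**2],
                      [-100,  s - 1]]) / den

assert sp.simplify(resolvent - expected) == sp.zeros(2, 2)
print("resolvent verified")
```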

■ An $H_\infty$ full-information suboptimal controller exists if $\Phi_{22}$ is invertible throughout the desired time interval.

■ This element of the state-transition matrix is:
$$\Phi_{22}(t) = \mathcal{L}^{-1}\left[ \frac{s - 1}{s^2 - 101 + 400\gamma^{-2}} \right] = \begin{cases} \dfrac{a - 1}{2a} e^{a t} + \dfrac{a + 1}{2a} e^{-a t}, & \gamma > \sqrt{\dfrac{400}{101}} \\[1ex] 1 - t, & \gamma = \sqrt{\dfrac{400}{101}} \\[1ex] \dfrac{\sqrt{\omega^2 + 1}}{\omega} \sin(\omega t + \phi), & \gamma < \sqrt{\dfrac{400}{101}} \end{cases}$$


where $a = \sqrt{101 - 400\gamma^{-2}}$, $\omega = \sqrt{-101 + 400\gamma^{-2}}$, and $\phi = -\tan^{-1}(\omega)$.

■ Note that as $\gamma$ gets smaller, $a$ gets smaller, $\omega$ gets bigger, and $\phi$ gets more negative.

■ When $\gamma < \sqrt{400/101}$ the eigenvalues of the Hamiltonian matrix are imaginary, and the gain does not exist for long time intervals since $\Phi_{22}(t)$ periodically crosses zero.

■ The lack of a solution for the feedback gain means there is no solution of the differential game for the given bound.

■ For the suboptimal bound $\gamma = 2.03$ ($a = 2$) and the final time $t_f = 3$, the $H_\infty$ suboptimal feedback gain is
$$K(t) = \frac{100 e^{2(3 - t)} - 100 e^{-2(3 - t)}}{e^{2(3 - t)} + 3 e^{-2(3 - t)}}.$$

[Figure: feedback gain $K(t)$ versus time (s), $0 \le t \le 3$; the gain is near 100 over most of the interval and falls to zero as $t \to t_f = 3$.]
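This gain can be checked numerically against the Hamiltonian construction $K(t) = -B_u^T\{\Phi_{22}(t_f - t)\}^{-1}\Phi_{21}(t_f - t)$. A sketch, assuming $\gamma^{-2} = 97/400$ exactly (so that $a = 2$; the notes round $\gamma$ to 2.03):

```python
import numpy as np
from scipy.linalg import expm

tf = 3.0
gi2 = 97.0 / 400.0                      # gamma^{-2}; a^2 = 101 - 400*gi2 = 4
Z = np.array([[1.0, -1.0 + 4.0 * gi2],
              [-100.0, -1.0]])         # Z_inf for Example 9.1

def K_hamiltonian(t):
    Phi = expm(Z * (tf - t))            # [[Phi11, Phi12], [Phi21, Phi22]]
    return -Phi[1, 0] / Phi[1, 1]       # K(t) = -Bu^T Phi22^{-1} Phi21, Bu = 1

def K_closed_form(t):
    tau = tf - t
    return (100 * np.exp(2 * tau) - 100 * np.exp(-2 * tau)) / \
           (np.exp(2 * tau) + 3 * np.exp(-2 * tau))

for t in [0.0, 1.0, 2.0, 2.9]:
    assert abs(K_hamiltonian(t) - K_closed_form(t)) < 1e-6
print(round(K_closed_form(0.0), 2))     # gain near 100 at t = 0
```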


8.4: Riccati equations to find state feedback

The Riccati equation

■ Solving the Hamiltonian for the time-varying state feedback may be done via a Riccati equation.

■ Given that $\lambda(t) = P(t) x(t)$, differentiate to get
$$\dot{\lambda}(t) = \dot{P}(t) x(t) + P(t) \dot{x}(t).$$

■ Substitute for $\dot{\lambda}(t)$ and $\dot{x}(t)$ from the Hamiltonian system:
$$-C_y^T C_y x(t) - A^T \lambda(t) = \dot{P}(t) x(t) + P(t)\{A x(t) - (B_u B_u^T - \gamma^{-2} B_w B_w^T) \lambda(t)\}.$$

■ Then, substituting for $\lambda(t) = P(t) x(t)$,
$$\{\dot{P}(t) + P(t) A + A^T P(t) + C_y^T C_y - P(t)(B_u B_u^T - \gamma^{-2} B_w B_w^T) P(t)\} x(t) = 0.$$

■ Since this is valid for any $x(t)$,
$$\dot{P}(t) = -P(t) A - A^T P(t) - C_y^T C_y + P(t)(B_u B_u^T - \gamma^{-2} B_w B_w^T) P(t),$$
which is the Riccati equation for the $H_\infty$ suboptimal control problem, initialized with $P(t_f) = 0$.

■ A suboptimal $H_\infty$ controller exists for performance bound $\gamma$ iff there exists a solution $P \ge 0$ to
$$P A + A^T P - P(B_u B_u^T - \gamma^{-2} B_w B_w^T) P + C_y^T C_y = 0,$$
where the matrix $A - (B_u B_u^T - \gamma^{-2} B_w B_w^T) P$ is stable.

◦ This may be used as the test in the bisection search algorithm for existence of an $H_\infty$ controller for performance bound $\gamma$.

EXAMPLE 9.1 CONTINUED: Here, we use the Riccati equation to solve the same problem as in the prior example.


■ The Riccati equation is ($A = 1$, $B_u = 1$, $B_w = 2$, $C_y = \begin{bmatrix} 10 & 0 \end{bmatrix}^T$):
$$\dot{P}(t) = -2 P(t) + (1 - 4\gamma^{-2}) P^2(t) - 100,$$
with final condition $P(3) = 0$.

■ The solution is
$$P(t) = K(t) = \frac{100 e^{2(3 - t)} - 100 e^{-2(3 - t)}}{e^{2(3 - t)} + 3 e^{-2(3 - t)}},$$
which is the same as we found before.

■ (The solution can be verified by substituting $P(t)$ back into the Riccati equation.) Note, $P(t) = K(t)$ because $B_u = 1$.
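The verification can also be done numerically: a sketch that integrates the scalar Riccati equation backward in time (via the substitution $\tau = t_f - t$, with $1 - 4\gamma^{-2} = 0.03$ for $a = 2$) and compares against the closed form.

```python
import numpy as np
from scipy.integrate import solve_ivp

tf = 3.0
c = 1.0 - 4.0 * (97.0 / 400.0)           # 1 - 4*gamma^{-2} = 0.03 (a = 2)

def riccati(tau, P):                     # dP/dtau = -dP/dt, integrated forward in tau
    return 2.0 * P - c * P**2 + 100.0

sol = solve_ivp(riccati, [0.0, tf], [0.0], dense_output=True,
                rtol=1e-10, atol=1e-10)  # P = 0 at tau = 0, i.e. at t = tf

def P_closed(tau):
    return (100 * np.exp(2 * tau) - 100 * np.exp(-2 * tau)) / \
           (np.exp(2 * tau) + 3 * np.exp(-2 * tau))

for tau in [0.5, 1.0, 2.0, 3.0]:
    assert abs(sol.sol(tau)[0] - P_closed(tau)) < 1e-5
print(round(sol.sol(tf)[0], 2))          # → 100.0 (P at t = 0, near steady state)
```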

Steady-state result

■ The steady-state result may be found by setting $\dot{P}(t) = 0$ and solving the A.R.E.

■ An eigenvector method may again be used to solve for $P_{ss}$:
$$P = \Psi_{21} (\Psi_{11})^{-1},$$
where $\Psi$ is the eigensystem of the Hamiltonian system, and $\Psi_{21}$ and $\Psi_{11}$ are formed from the eigenvectors associated with the stable eigenvalues.

■ A suboptimal $H_\infty$ controller exists for performance bound $\gamma$ iff $Z_\infty$ has no eigenvalues on the imaginary axis, $\Psi_{11}$ is invertible, and $P = \Psi_{21}(\Psi_{11})^{-1} \ge 0$.

◦ This may also be used as the test in the bisection search algorithm for existence of an $H_\infty$ controller for performance bound $\gamma$.


■ The suboptimal controller is then given as
$$u(t) = -B_u^T \Psi_{21} (\Psi_{11})^{-1} x(t) = -K x(t).$$

EXAMPLE 9.2: An antenna is required to remain pointed at a satellite in the presence of disturbance torques caused by gravity, wind, squirrels, etc. The state model is
$$\begin{aligned} \dot{x}(t) &= \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} w(t) \\ e(t) &= \begin{bmatrix} 1 & 0 \end{bmatrix} x(t). \end{aligned}$$

■ As usual, $u(t)$ is the control torque, $w(t)$ is the disturbance torque, and $e(t)$ is the tracking error.

■ A tracking error of less than 0.1 rad is desired, even in the presence of disturbances up to 2 N·m.

■ The control magnitude is also required to be less than 10 N·m.

■ These specifications are appended to the plant as weighting functions to yield a model in standard form:
$$\begin{aligned} \dot{x}(t) &= \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 10 \end{bmatrix} u_1(t) + \begin{bmatrix} 0 \\ 2 \end{bmatrix} w_1(t) \\ y(t) &= \begin{bmatrix} 10 & 0 \\ 0 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_1(t), \end{aligned}$$
where $w_1(t)$ and $u_1(t)$ are the normalized disturbance and control inputs.

■ In this form, $w_1(t)$ is less than 1 and both elements of $y(t)$ are required to be less than 1. The original control input can be recovered from the normalized control as $u(t) = 10 u_1(t)$.
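The steady-state gain for this example can be reproduced with the eigenvector method: build $Z_\infty$, keep the stable eigenvectors, and form $P = \Psi_{21}(\Psi_{11})^{-1}$. A sketch (this numerical route is mine, not the notes'), using $\gamma = 1$:

```python
import numpy as np

A  = np.array([[0.0, 1.0], [0.0, -1.0]])
Bu = np.array([[0.0], [10.0]])
Bw = np.array([[0.0], [2.0]])
Cy = np.array([[10.0, 0.0], [0.0, 0.0]])
gamma = 1.0

R = Bu @ Bu.T - Bw @ Bw.T / gamma**2     # Bu Bu^T - gamma^{-2} Bw Bw^T
Z = np.block([[A, -R], [-Cy.T @ Cy, -A.T]])

w, V = np.linalg.eig(Z)
stable = V[:, w.real < 0]                # eigenvectors of the stable eigenvalues
Psi11, Psi21 = stable[:2, :], stable[2:, :]
P = np.real(Psi21 @ np.linalg.inv(Psi11))
K1 = Bu.T @ P                            # normalized-control gain u1 = -K1 x

assert np.allclose(K1, [[10.2, 1.36]], atol=0.05)
print(np.round(K1, 2))                   # gains ≈ [10.21, 1.36]
```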


■ A full-information controller is generated for $\gamma = 1$:
$$u_1(t) = -\begin{bmatrix} 10.2 & 1.36 \end{bmatrix} x(t) \quad \text{or} \quad u(t) = -\begin{bmatrix} 102 & 13.6 \end{bmatrix} x(t).$$

[Figure: frequency responses for $\gamma = 1$ and $\gamma = 0.2$. Left: tracking-error gain (dB) vs. frequency (rad/s). Right: control-effort gain (dB) vs. frequency (rad/s).]

■ The left plot shows the frequency response of the tracking error; the right plot shows the frequency response of the control effort.

■ The maximum gain from sinusoidal disturbance to tracking error is $10^{-40/20} = 0.01$, which is less than the spec of 0.05.

■ The maximum gain from disturbance input to control is 1.26, which is less than the spec of 5.

■ A second full-information controller is generated for $\gamma = 0.21$:
$$u_1(t) = -\begin{bmatrix} 32.8 & 7.39 \end{bmatrix} x(t) \quad \text{or} \quad u(t) = -\begin{bmatrix} 328 & 73.9 \end{bmatrix} x(t).$$

■ Performance is better, but the gains are higher. Disturbance rejection is better, but the control bandwidth is also higher.


8.5: $H_\infty$ estimation

■ When investigating $H_2$ control, we split the problem into a regulation part (LQR) and an estimation part (LQE).

■ When joined, the result turned out to be optimal as well (LQG).

■ One reason that this works is that the $H_2$ optimization of $-K(t)x(t)$ is equal to $-K(t)$ times the $H_2$ optimization of $x(t)$.

◦ This same property does not hold for $H_\infty$ estimation; we must be more careful when generating an $H_\infty$ output-feedback controller.

◦ $H_\infty$ estimation is still required, so we study it briefly here.

■ The plant model that is assumed looks like
$$\begin{aligned} \dot{x}(t) &= A x(t) + B_u u(t) + B_w w(t) \\ m(t) &= C_m x(t) + D_{mw} w(t), \end{aligned}$$
with constraints $D_{mw} B_w^T = 0$ and $D_{mw} D_{mw}^T = I$.

■ Goal is to estimate a linear combination of the state, $y(t) = C_y x(t)$.

◦ One example would be to estimate the state itself: $C_y = I$.

■ $H_\infty$ estimation is the dual problem to $H_\infty$ regulation. As you might expect, there is a Hamiltonian system involved:
$$\dot{x}_h(t) = \begin{bmatrix} -A^T & C_m^T C_m - \gamma^{-2} C_y^T C_y \\ B_w B_w^T & A \end{bmatrix} x_h(t) = Y_\infty x_h(t).$$

■ The $H_\infty$ suboptimal estimator gain may be found in terms of the state-transition matrix of the Hamiltonian system:


$$L(t) = Q(t) C_m^T = \Phi_{21}(t) \{\Phi_{11}(t)\}^{-1} C_m^T,$$
where the $\Phi_{ij}$ are blocks of the state-transition matrix:
$$e^{Y_\infty t} = \begin{bmatrix} \Phi_{11}(t) & \Phi_{12}(t) \\ \Phi_{21}(t) & \Phi_{22}(t) \end{bmatrix}.$$

■ A steady-state solution may be found from the eigensystem of $Y_\infty$ as
$$Q = \Psi_{22} (\Psi_{12})^{-1},$$
where $\begin{bmatrix} \Psi_{12} \\ \Psi_{22} \end{bmatrix}$ is a matrix whose columns are the eigenvectors of the estimator Hamiltonian associated with the unstable eigenvalues.

■ A suboptimal $H_\infty$ estimator exists if $Y_\infty$ has no eigenvalues on the imaginary axis, $\Psi_{12}$ is invertible, and $Q = \Psi_{22}(\Psi_{12})^{-1} \ge 0$.

◦ This may be used in the bisection search algorithm for existence of an $H_\infty$ estimator for performance bound $\gamma$.

■ These results may also be obtained from a Riccati equation
$$\dot{Q}(t) = Q(t) A^T + A Q(t) + B_w B_w^T - Q(t)\{C_m^T C_m - \gamma^{-2} C_y^T C_y\} Q(t),$$
with initial condition $Q(0) = 0$.

■ A suboptimal $H_\infty$ estimator exists if there is a solution to
$$A Q + Q A^T - Q(C_m^T C_m - \gamma^{-2} C_y^T C_y) Q + B_w B_w^T = 0$$
such that $Q \ge 0$ and the matrix $A - Q(C_m^T C_m - \gamma^{-2} C_y^T C_y)$ is stable.

◦ This may also be used in the bisection search algorithm for existence of an $H_\infty$ estimator for performance bound $\gamma$.

EXAMPLE 9.3: An $H_\infty$ estimator can be applied to the radar range tracking of an aircraft. A model for the range $r(t)$ and radial velocity of the aircraft is (range is measured):
$$\begin{aligned} \begin{bmatrix} \dot{r}(t) \\ \ddot{r}(t) \end{bmatrix} &= \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} r(t) \\ \dot{r}(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} w(t) \\ m(t) &= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} r(t) \\ \dot{r}(t) \end{bmatrix} + v(t). \end{aligned}$$

■ The output to be estimated is the entire state:
$$y(t) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} r(t) \\ \dot{r}(t) \end{bmatrix}.$$

■ The plant input and measurement error are assumed to be bounded:
$$\begin{bmatrix} |w(t)| \\ |v(t)| \end{bmatrix} \le \begin{bmatrix} 4 \text{ m/sec}^2 \\ 20 \text{ m} \end{bmatrix}.$$

■ Model equations using normalized inputs are:
$$\begin{aligned} \begin{bmatrix} \dot{r}(t) \\ \ddot{r}(t) \end{bmatrix} &= \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} r(t) \\ \dot{r}(t) \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 4 & 0 \end{bmatrix} \begin{bmatrix} w_1(t) \\ v_1(t) \end{bmatrix} \\ m(t) &= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} r(t) \\ \dot{r}(t) \end{bmatrix} + \begin{bmatrix} 0 & 20 \end{bmatrix} \begin{bmatrix} w_1(t) \\ v_1(t) \end{bmatrix}. \end{aligned}$$

■ Normalizing this equation so that the input-output coupling matrix satisfies $D_{mw} D_{mw}^T = I$ gives the measurement equation


$$m_1(t) = \begin{bmatrix} \tfrac{1}{20} & 0 \end{bmatrix} \begin{bmatrix} r(t) \\ \dot{r}(t) \end{bmatrix} + \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} w_1(t) \\ v_1(t) \end{bmatrix},$$
where $m(t) = 20 m_1(t)$.

■ The resulting $H_\infty$ estimator is given as
$$\begin{aligned} \dot{\hat{x}}(t) &= A \hat{x}(t) + L_1(t)\left[ m_1(t) - \begin{bmatrix} \tfrac{1}{20} & 0 \end{bmatrix} \hat{x}(t) \right] \\ \hat{y}(t) &= C_y \hat{x}(t). \end{aligned}$$

■ We can write this in terms of the original measurements,
$$\begin{aligned} \dot{\hat{x}}(t) &= A \hat{x}(t) + \frac{1}{20} L_1(t)\left[ 20 m_1(t) - 20 \begin{bmatrix} \tfrac{1}{20} & 0 \end{bmatrix} \hat{x}(t) \right] \\ &= A \hat{x}(t) + L(t)\left[ m(t) - \begin{bmatrix} 1 & 0 \end{bmatrix} \hat{x}(t) \right], \end{aligned}$$
where $L(t) = L_1(t)/20$.
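The forward Riccati recursion for this example can be sketched numerically. The notes do not state the bound used for the baseline results, so $\gamma = 100$ here is an assumed (mild) value chosen purely for illustration, and the forward-Euler integration is likewise my own simplification; the resulting gains are illustrative only.

```python
import numpy as np

A  = np.array([[0.0, 1.0], [0.0, 0.0]])
Bw = np.array([[0.0, 0.0], [4.0, 0.0]])
Cm = np.array([[1.0 / 20.0, 0.0]])       # normalized measurement matrix
Cy = np.eye(2)                           # estimate the full state
gamma = 100.0                            # ASSUMED performance bound

S = Cm.T @ Cm - Cy.T @ Cy / gamma**2
Q = np.zeros((2, 2))                     # Q(0) = 0
dt = 1e-3
for _ in range(int(25.0 / dt)):          # forward Euler over t in [0, 25]
    Q = Q + dt * (Q @ A.T + A @ Q + Bw @ Bw.T - Q @ S @ Q)

L1 = Q @ Cm.T                            # normalized estimator gain
L = L1 / 20.0                            # gain for the original measurement m(t)
assert np.all(np.isfinite(Q)) and np.allclose(Q, Q.T)
assert np.all(np.linalg.eigvalsh(Q) >= -1e-8)
print(np.round(L.ravel(), 3))
```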

■ Baseline results are plotted below.

[Figure: three plots vs. time (s), $0 \le t \le 25$: estimator gains; range (actual and estimate); range rate (actual and estimate).]


■ If the measurement error bound decreases from 20 to 2, we get the following results.

■ Estimation gains increase; estimates converge more quickly.

[Figure: gains, range, and range-rate plots as before, now with larger estimator gains.]

■ Finally, if the reference output is only the range rate (we still estimate range, but don't care about its accuracy), we get the following results.

■ The gains change (slightly)! The $H_\infty$ estimator depends on the output of interest!

[Figure: gains, range, and range-rate plots as before, with slightly different gains.]


Summary

■ $H_\infty$ controller and estimator design is an iterative procedure.

■ Perform the bisection algorithm to find the lowest value of the performance bound $\gamma$ that you can, within some tolerance.

◦ You will generally back off of this performance bound somewhat, as designs that are nearly optimal can have numeric instabilities.

■ For controller design, the conditions for existence of some suboptimal controller with bound $\gamma$ are:

◦ There exists a solution $P \ge 0$ to
$$P A + A^T P - P(B_u B_u^T - \gamma^{-2} B_w B_w^T) P + C_y^T C_y = 0,$$
where the matrix $A - (B_u B_u^T - \gamma^{-2} B_w B_w^T) P$ is stable; or,

◦ $Z_\infty$ has no eigenvalues on the imaginary axis, $\Psi_{11}$ is invertible, and $P = \Psi_{21}(\Psi_{11})^{-1} \ge 0$.

■ For estimator design, the conditions for existence of a suboptimal estimator with bound $\gamma$ are:

◦ There exists a solution $Q \ge 0$ to
$$A Q + Q A^T - Q(C_m^T C_m - \gamma^{-2} C_y^T C_y) Q + B_w B_w^T = 0,$$
where the matrix $A - Q(C_m^T C_m - \gamma^{-2} C_y^T C_y)$ is stable; or,

◦ $Y_\infty$ has no eigenvalues on the imaginary axis, $\Psi_{12}$ is invertible, and $Q = \Psi_{22}(\Psi_{12})^{-1} \ge 0$.

■ For controller implementation, with some $\gamma$, solve the differential Riccati equation
$$\dot{P}(t) = -P(t) A - A^T P(t) - C_y^T C_y + P(t)(B_u B_u^T - \gamma^{-2} B_w B_w^T) P(t)$$
backward in time, initialized with $P(t_f) = 0$. Then,
$$u(t) = -K(t) x(t) = -B_u^T P(t) x(t).$$

■ A steady-state suboptimal controller, with some $\gamma$, uses the eigensystem $\Psi$ of the $Z_\infty$ matrix to find the steady-state gain matrix:
$$u(t) = -K x(t) = -B_u^T \Psi_{21} (\Psi_{11})^{-1} x(t),$$
where $\Psi_{21}$ and $\Psi_{11}$ are formed from the eigenvectors associated with the stable eigenvalues of $Z_\infty$.

■ For estimator implementation, with some $\gamma$, solve the differential Riccati equation
$$\dot{Q}(t) = Q(t) A^T + A Q(t) + B_w B_w^T - Q(t)\{C_m^T C_m - \gamma^{-2} C_y^T C_y\} Q(t)$$
forward in time, initialized with $Q(0) = 0$. Then,
$$L(t) = Q(t) C_m^T.$$

■ A steady-state suboptimal estimator, with some $\gamma$, uses the eigensystem $\Psi$ of the $Y_\infty$ matrix to find the steady-state gain:
$$L = Q C_m^T = \Psi_{22} (\Psi_{12})^{-1} C_m^T,$$
where $\Psi_{22}$ and $\Psi_{12}$ are formed from the eigenvectors associated with the unstable eigenvalues of $Y_\infty$.
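The pieces above fit together as follows: the bisection search driven by the steady-state existence test (no imaginary-axis eigenvalues of $Z_\infty$, $\Psi_{11}$ invertible, $P \ge 0$) locates the smallest achievable steady-state bound. A sketch for the scalar plant of Example 9.1 ($A = 1$, $B_u = 1$, $B_w = 2$, $C_y^T C_y = 100$):

```python
import numpy as np

def gamma_exists(gamma):
    R = 1.0 - 4.0 / gamma**2                   # Bu Bu^T - gamma^{-2} Bw Bw^T
    Z = np.array([[1.0, -R], [-100.0, -1.0]])  # Z_inf for Example 9.1
    w, V = np.linalg.eig(Z)
    if np.any(np.abs(w.real) < 1e-9):          # eigenvalue on imaginary axis
        return False
    stable = V[:, w.real < 0]
    if stable.shape[1] != 1 or abs(stable[0, 0]) < 1e-12:
        return False                           # Psi11 not invertible
    P = (stable[1, 0] / stable[0, 0]).real     # P = Psi21 Psi11^{-1}
    return P >= 0.0

g_l, g_u = 1.0, 10.0                           # test fails at g_l, passes at g_u
while (g_u - g_l) / g_l >= 1e-6:
    g = 0.5 * (g_l + g_u)
    if gamma_exists(g):
        g_u = g                                # bound achievable: too high
    else:
        g_l = g                                # bound not achievable: too low
print(round(0.5 * (g_l + g_u), 3))             # → 2.0
```

Note the result 2.0 is slightly larger than $\sqrt{400/101} \approx 1.99$, where the eigenvalues first leave the imaginary axis: for $\gamma$ between the two values the Hamiltonian eigenvalues are real but $P < 0$, so the $P \ge 0$ condition is what binds.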
