Adaptive Control Systems
Real-time parameter estimation, Variations of least-squares methods
Announcement
Homework, solutions to exercises, etc. will be posted on the
course webpage.
Please follow the link
http://www.ee.nchu.edu.tw/main.asp?un=14&bn=2&pcid=&pgid=1
and find the course number G64213-II.
Real-time Parameter Estimation
Motivation from indirect adaptive control
Sample the signals u and y with a sampling period of T seconds.
Real-time Parameter Estimation
• Both the estimation and the controller design are performed
  every T seconds, i.e., performed online
• When T is very small, all of these computations must be done in real time
• This places a time constraint on the computations
• Need a fast algorithm for the parameter estimation
Standard Least-squares Method
• Estimate the parameters of the model $y = A\theta + v$
• Minimize the loss $\|y - A\theta\|^2$
• With N data points, the solution is
  $\hat{\theta}(N) = (A^T A)^{-1} A^T y$
Some issues:
• When N is large, computation of $\hat{\theta}(N)$ is time-consuming,
  and memory to store the large data set is also required
• Need a more efficient way to compute $\hat{\theta}(N)$
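To make the batch solution concrete, here is a minimal NumPy sketch of the normal-equation computation; the data dimensions, noise level, and variable names below are illustrative assumptions, not part of the lecture.

```python
import numpy as np

# Batch least squares: estimate theta in y = A theta + v
# (illustrative data; dimensions and noise level are assumptions)
rng = np.random.default_rng(0)
N, n = 200, 3                      # N data points, n parameters
A = rng.standard_normal((N, n))    # regressor matrix
theta_true = np.array([1.0, -2.0, 0.5])
y = A @ theta_true + 0.1 * rng.standard_normal(N)   # noisy measurements

# theta_hat(N) = (A^T A)^{-1} A^T y  -- solve the normal equations
theta_hat = np.linalg.solve(A.T @ A, A.T @ y)
print(theta_hat)                   # close to theta_true
```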
Recursive Least-squares Method
Idea:
At the 0th point: an initial guess $\hat{\theta}(0) \in \mathbf{R}^n$ is given
…
At the Nth point: new data $(u_N, y_N)$ are obtained. Then compute
the current estimate
  $\hat{\theta}(N) = F(\hat{\theta}(N-1), y_N, u_N, \ldots, u_{N-n})$
where $\hat{\theta}(N-1)$ is the estimate from the previous step.
The function F can be derived from the standard LS solution.
At the (N+1)th point: …
Recursive Least-squares Method
• $\hat{\theta}(N)$ is simply computed from $\hat{\theta}(N-1)$, which leads to a
  recursive computation
• Small requirement on memory, because not all the data are stored
• Used as a central part of adaptive control systems
• Easily modified into real-time algorithms
Recursive Least-squares Method
The least-squares estimate $\hat{\theta}(N)$ satisfies

  $\hat{\theta}(N) = \hat{\theta}(N-1) + P(N)\, a_N\, (y_N - a_N^T \hat{\theta}(N-1))$

  $P(N) = P(N-1) - \dfrac{P(N-1)\, a_N a_N^T\, P(N-1)}{1 + a_N^T P(N-1)\, a_N}$

• Interpret $y_N - a_N^T \hat{\theta}(N-1)$ as a prediction error
• When $y_N \neq a_N^T \hat{\theta}(N-1)$, i.e., the prediction error is nonzero,
  then $K(N) = P(N)\, a_N$ is interpreted as a gain factor used to
  adjust the parameter estimate
Recursive Least-squares Method
The gain $K(N) = P(N)\, a_N$ can also be computed from

  $K(N) = \dfrac{P(N-1)\, a_N}{1 + a_N^T P(N-1)\, a_N}$

Moreover,

  $P(N) = P(N-1) - \dfrac{P(N-1)\, a_N a_N^T\, P(N-1)}{1 + a_N^T P(N-1)\, a_N} = (I_n - K(N)\, a_N^T)\, P(N-1)$
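The equivalence of the two expressions for $P(N)$ can be checked numerically; the following sketch uses randomly generated test data (an assumption made purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((n, n))
P_prev = M @ M.T + np.eye(n)       # a symmetric positive-definite P(N-1)
a = rng.standard_normal(n)         # regressor a_N

K = P_prev @ a / (1.0 + a @ P_prev @ a)                                 # K(N)
P_direct = P_prev - np.outer(P_prev @ a, a) @ P_prev / (1.0 + a @ P_prev @ a)
P_factored = (np.eye(n) - np.outer(K, a)) @ P_prev

print(np.allclose(P_direct, P_factored))   # True: both forms give the same P(N)
```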
RLS Algorithm
Given $\hat{\theta}(N-1)$, $P(N-1)$, $y_N$, and $a_N$, the LS estimate $\hat{\theta}(N)$
can be computed from

  $\hat{\theta}(N) = \hat{\theta}(N-1) + K(N)\,(y_N - a_N^T \hat{\theta}(N-1))$

  $K(N) = \dfrac{P(N-1)\, a_N}{1 + a_N^T P(N-1)\, a_N}$

  $P(N) = (I_n - K(N)\, a_N^T)\, P(N-1)$
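The three equations above translate directly into code. Below is a minimal Python/NumPy sketch of one RLS step together with an illustrative identification run; the simulated data, the function name rls_update, and the chosen initial values are assumptions for illustration only.

```python
import numpy as np

def rls_update(theta, P, a, y):
    """One RLS step: given theta_hat(N-1), P(N-1) and new data (a_N, y_N),
    return theta_hat(N), P(N)."""
    K = P @ a / (1.0 + a @ P @ a)                 # gain K(N)
    theta = theta + K * (y - a @ theta)           # correct by the prediction error
    P = (np.eye(len(a)) - np.outer(K, a)) @ P     # covariance update
    return theta, P

# Illustrative use: identify theta in y_N = a_N^T theta + noise
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
theta_hat = np.zeros(3)            # theta_hat(0) = 0
P = 100.0 * np.eye(3)              # P(0) = alpha * I
for _ in range(500):
    a = rng.standard_normal(3)
    y = a @ theta_true + 0.05 * rng.standard_normal()
    theta_hat, P = rls_update(theta_hat, P, a, y)
print(theta_hat)                   # should be close to theta_true
```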
Initial Conditions for RLS
• Need to choose $\hat{\theta}(0)$ and $P(0)$ to start the algorithm
• If $P(0)$ is small (in its elements), then $K(N)$ will be small
  and $\hat{\theta}(N)$ will not change much
• If $P(0)$ is large, $\hat{\theta}(N)$ will quickly change from $\hat{\theta}(0)$
• In practice, it is common to choose
  $\hat{\theta}(0) = 0, \quad P(0) = \alpha I$
  where $\alpha$ is a positive constant, due to the fact that $P(0)$
  can be factored as
  $P(0) = (A_0^T A_0)^{-1}, \quad A_0$: full rank
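A small sketch of how the size of $P(0)$ affects the very first update (the data point and the two values of $\alpha$ are illustrative assumptions):

```python
import numpy as np

a, y = np.array([1.0, 0.5, -0.3]), 2.0     # one illustrative data point
theta0 = np.zeros(3)                       # theta_hat(0) = 0

for alpha in (0.01, 100.0):                # small vs. large P(0) = alpha * I
    P0 = alpha * np.eye(3)
    K = P0 @ a / (1.0 + a @ P0 @ a)        # gain K(1)
    theta1 = theta0 + K * (y - a @ theta0)
    print(alpha, theta1)                   # large alpha -> the estimate moves much more
```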
RLS with Exponential Forgetting
Given $\hat{\theta}(N-1)$, $P(N-1)$, $y_N$, and $a_N$, the LS estimate $\hat{\theta}(N)$
can be computed from

  $\hat{\theta}(N) = \hat{\theta}(N-1) + K(N)\,(y_N - a_N^T \hat{\theta}(N-1))$

  $K(N) = \dfrac{P(N-1)\, a_N}{\lambda + a_N^T P(N-1)\, a_N}$

  $P(N) = \dfrac{1}{\lambda}\,(I_n - K(N)\, a_N^T)\, P(N-1)$

where $\lambda$ ($0 < \lambda \le 1$) is the forgetting factor.
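Compared with the plain RLS sketch earlier, only the two places where $\lambda$ enters change. A minimal sketch, assuming a forgetting factor of 0.98 and the hypothetical function name rls_forgetting_update:

```python
import numpy as np

def rls_forgetting_update(theta, P, a, y, lam=0.98):
    """One RLS step with exponential forgetting factor lam (0 < lam <= 1)."""
    K = P @ a / (lam + a @ P @ a)                      # gain K(N)
    theta = theta + K * (y - a @ theta)                # correct by prediction error
    P = (np.eye(len(a)) - np.outer(K, a)) @ P / lam    # discount old information
    return theta, P
```

Smaller values of lam discount old data faster, which helps track slowly drifting parameters at the cost of a noisier estimate.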
Simplified Algorithms
Projection Algorithm (Kaczmarz's algorithm):

  $\hat{\theta}(N) = \hat{\theta}(N-1) + \dfrac{a_N}{a_N^T a_N}\,(y_N - a_N^T \hat{\theta}(N-1))$

• Avoids computation of $P(N)$, because $P(N)$ is replaced by $1/(a_N^T a_N)$
• Convergence is slower than that of the RLS algorithm

Gradient Algorithm:

  $\hat{\theta}(N) = \hat{\theta}(N-1) + \dfrac{\gamma\, a_N}{\alpha + a_N^T a_N}\,(y_N - a_N^T \hat{\theta}(N-1)), \quad \alpha > 0, \; 0 < \gamma < 2$

• Avoids division by zero by adding $\alpha > 0$
• Convergence is guaranteed by bounding $\gamma$ ($0 < \gamma < 2$)
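For comparison, minimal sketches of both simplified updates; the default values gamma = 1.0 and alpha = 0.1 are illustrative assumptions.

```python
import numpy as np

def projection_update(theta, a, y):
    """Kaczmarz / projection step: the gain is a_N / (a_N^T a_N)."""
    return theta + a / (a @ a) * (y - a @ theta)

def gradient_update(theta, a, y, gamma=1.0, alpha=0.1):
    """Gradient step: alpha > 0 avoids division by zero; 0 < gamma < 2 for convergence."""
    return theta + gamma * a / (alpha + a @ a) * (y - a @ theta)
```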
Learning Materials
Reading:
• K. J. Astrom and B. Wittenmark. Adaptive Control
- Chapter 2, Sections 2.2 and 2.5
References:
• Lecture slide no. 13 on System Identification
http://jitkomut.lecturer.eng.chula.ac.th/ee531.html
• Lecture slide no. 3 on Iterative Learning and Adaptive Control http://www8.tfe.umu.se/forskning/Control_Systems/Courses/IterativeLearningAndAdaptiveControl/