
2014 American Control Conference (ACC), June 4-6, 2014, Portland, OR, USA: Iterative Learning Control for Image Based Visual Servoing

Abstract— Fabrication of nano/micro-scale functional devices, in the context of a continuous or semi-continuous manufacturing process, is often performed via successive processes in multiple localized zones. As the substrate traverses downstream in the process flow, proper registration of the pre-existing features is necessary prior to entering the next fabrication zone in order to accurately complement previous manufacturing steps. In this work, we consider a 2D planar arrangement in which the substrate can be panned and oriented, and we apply a direct visual servoing technique to correct both the pose and the translational alignment of a pre-existing feature. Based on the recorded image data, Iterative Learning Control (ILC) is implemented on top of the feedback controller to simultaneously improve the position and orientation tracking precision of the feature.

I. INTRODUCTION

Iterative Learning Control (ILC) was first introduced by Arimoto in 1984 to improve the performance of robotic manipulators performing a repetitive task [1]. ILC is a data-driven feedforward control method by which errors from previous trials are mapped to the input signals in the current trial. At a glance, the repetition requirement of ILC may appear restrictive. However, ILC is particularly useful for manufacturing systems that, by definition, perform repeated actions. A good overview of ILC is given in [2]. Significant work has been performed in the field of ILC, both in the theoretical space [3], [4] and in practice [5]–[7]. The interested reader is referred to [8] for a detailed background on the various approaches available.

In this paper, we utilize the ILC approach to complement a slightly modified Image Based Visual Servoing (IBVS) technique in an attempt to improve the feature registration precision for multistep manufacturing processes. IBVS is largely used in robotics applications, where a camera is mounted on a 6 degree of freedom robotic manipulator to directly servo an object using the information extracted from the image acquired by the camera [9]–[11]. Current camera technology allows for image acquisition at several hundred frames per second (fps), making it suitable for precision motion control applications [12], [13].

In this work, we consider a 2D planar arrangement in which the substrate can be panned and oriented using a 3 DOF electromechanical system (x, y, θ). The camera is assumed to be mounted on a frame with infinite stiffness and directed normal to the surface of the substrate. We perform a direct visual servoing technique to correct both the pose and the translational alignment of pre-existing features. Based on recorded image data, Iterative Learning Control (ILC) is implemented on top of the feedback controller to simultaneously improve the position and orientation tracking precision of the feature.

It is assumed that each degree of freedom (x, y, θ) is actuated by an electromechanical system exhibiting first order open loop dynamics. First, the workspace is transformed to a camera frame perspective in which the ILC will be performed. Then a dual rate feedback controller is implemented on each axis. An inner loop is closed on the servo-positioning of each degree of freedom in the camera frame of reference. In an outer loop, a parallel ILC is applied as a reference to the inner-loop controller. The ILC is designed and tuned using a simple frequency domain approach. Simulation results are presented to demonstrate the benefit of this approach.

The remainder of the paper is organized as follows. Section II describes the coordinate setup as well as the dynamic model of the fiducial marker. A two layer visual servoing feedback structure is discussed in Section III. Section IV presents the ILC implementation and simulation results. Section V provides concluding remarks and future directions.

II. SYSTEM DESCRIPTION

In this section we describe the coordinate setup of the system that is used in this work, as well as the dynamic model of the fiducial relative to the camera reference frame.

A. Coordinate Setup and Fiducial Kinematics

Unlike the “eye-in-hand” configuration found in many robot manipulators, it is assumed in this work that the camera is mounted stationary to an infinitely stiff structure and directed normal to the surface of the substrate. Figure 1 illustrates the physical system setup, and the inset describes the coordinate system used from this point onward. The columns and the rows of the image, similar to the standard Cartesian coordinate system, define the x̂_e and ŷ_e axes, respectively. Moreover, the center of the camera's field of view (FOV) is where the origin of the global inertial reference frame, O, resides.

Iterative Learning Control for Image Based Visual Servoing Applications

Erick Sutanto, Student Member, ASME and Andrew G. Alleyne, Fellow, ASME

This work was supported in part by the NanoCEMMS Research Center under Grant CMI 07-49028.
E. Sutanto is with the Mechanical Science and Engineering Department, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA.
A. G. Alleyne is a faculty member in the Mechanical Science and Engineering Department, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA ([email protected]*).

2014 American Control Conference (ACC), June 4-6, 2014. Portland, Oregon, USA

978-1-4799-3274-0/$31.00 ©2014 AACC


Figure 1. The coordinate setup of the IBVS formulation. The camera is assumed to be mounted on an infinitely stiff structure, and the inset describes the coordinate setup of the visual servoing system.

A massless planar substrate is securely mounted on a 3 DOF (x, y, θ) electromechanical system, constituting one moving body, ℬ. In Figure 1, P defines the local origin and also the pivot point of ℬ. Here, f_x and f_y are two orthogonal linear forces which translate ℬ along the x̂_e and ŷ_e axes respectively, while f_θ rotates ℬ about the pivot point, P. On the substrate is a set of features which may come from the preceding fabrication processes or may have been intentionally patterned to serve as a fiducial for the visual servoing purpose. The fiducial's center of mass with respect to the camera frame is denoted by r_OC = [x_OC, y_OC]^T, and θ, the angle between ê_1 and x̂_e, defines the orientation, or pose, of the fiducial. Both r_OC and θ at any given point in time constitute the instantaneous configuration of the fiducial, ξ(t), which is formally described by (1).

\xi(t) = \{ r_{OC}(t), \theta(t) \}    (1)

In this work, the primary objective of the direct visual servoing action is to minimize the configuration error, e(t), defined by (2),

e(t) = \xi^*(t) - \xi(t)    (2)

where ξ*(t) is a finite time configuration reference. Motions performed by ℬ indirectly alter the configuration of the fiducial, and it is therefore necessary to understand the kinematics of ξ(t) as a function of the body kinematics. As depicted in Figure 1, r_OC can be described as

r_{OC} = r_{OP} + r_{PC}    (4)

Since the fiducial is attached to ℬ, the angle θ can be expressed in terms of the body orientation, θ_ℬ, and the relationship is given by

\theta = \theta_{\mathcal{B}} + \alpha    (5)

where α is the constant angular offset between the fiducial and ℬ. By defining ℓ as ‖r_PC‖, we can rewrite (4) as

x_{OC} = x_{OP} + \ell\cos\theta
y_{OC} = y_{OP} + \ell\sin\theta    (6)

The first and second derivatives of x_OC and y_OC are further described by (7) and (8).

\dot{x}_{OC} = \dot{x}_{OP} - \ell\dot{\theta}\sin\theta
\dot{y}_{OC} = \dot{y}_{OP} + \ell\dot{\theta}\cos\theta    (7)

\ddot{x}_{OC} = \ddot{x}_{OP} - \ell\ddot{\theta}\sin\theta - \ell\dot{\theta}^2\cos\theta
\ddot{y}_{OC} = \ddot{y}_{OP} + \ell\ddot{\theta}\cos\theta - \ell\dot{\theta}^2\sin\theta    (8)
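As a quick numerical sanity check on the kinematics in (6) and (7), the analytic velocity of the fiducial center can be compared against a finite-difference derivative of its position. The offset ℓ and the body trajectories below are illustrative values, not the paper's parameters:

```python
import math

# Sketch: verify the fiducial kinematics (6)-(7) numerically.
# ell, theta(t) and x_OP(t) are illustrative assumptions.
ell = 2.0

def x_OC(t):
    # eq. (6) with body translation x_OP = 0.5*t and rotation theta = 0.3*t
    return 0.5 * t + ell * math.cos(0.3 * t)

def xdot_OC(t):
    # eq. (7): xdot_OP - ell * thetadot * sin(theta)
    return 0.5 - ell * 0.3 * math.sin(0.3 * t)

t, h = 1.2, 1e-6
fd = (x_OC(t + h) - x_OC(t - h)) / (2 * h)   # central difference
assert abs(fd - xdot_OC(t)) < 1e-6           # analytic and numeric agree
```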

B. System Dynamic Models

It is assumed that each degree of freedom (x, y, θ) is actuated by an electromechanical system that resembles a simple mass-damper system. The dynamics equation for each axis of the electromechanical system is given by (9),

m_x\ddot{x}_{OP} + b_x\dot{x}_{OP} = f_x(t)
m_y\ddot{y}_{OP} + b_y\dot{y}_{OP} = f_y(t)
J\ddot{\theta} + b_\theta\dot{\theta} = f_\theta(t)    (9)

where m, b and J respectively denote the mass, the damping coefficient and the rotational moment of inertia of each axis. By substituting (7) and (8) into (9), we obtain a set of equations that describes the dynamics of the fiducial, (10).

m_x\ddot{x}_{OC} + b_x\dot{x}_{OC} = f_x(t) + g_x(f_\theta, t)
m_y\ddot{y}_{OC} + b_y\dot{y}_{OC} = f_y(t) + g_y(f_\theta, t)
J\ddot{\theta} + b_\theta\dot{\theta} = f_\theta(t)    (10)

In (10), g_x and g_y are both non-linear terms introduced primarily by the rotational motion of the θ-axis. In practice, both g_x and g_y will depend on how the substrate is placed and oriented relative to the electromechanical system. Consequently, it is rather challenging to define g_x and g_y accurately, making it impractical to design non-linear feedback controllers to compensate for the non-linear dynamics. In the context of direct image based visual servoing, we assume no prior knowledge of the fiducial's dynamics, and therefore simple feedback controllers are preferred.
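To make the rotation-to-translation coupling in (10) concrete, the following sketch integrates the body dynamics (9) under a pure torque input and then evaluates (6). All numerical parameters are illustrative, not the paper's; the point is simply that the fiducial's x-position shifts even though no x-force is applied:

```python
import math

# Sketch of the coupling in (10); masses, damping, inertia and the
# offset ell are illustrative assumptions.
m, b, J, b_th, ell = 1.0, 2.0, 0.5, 1.0, 0.1

def simulate(f_th=0.2, t_end=2.0, dt=1e-3):
    x_OP = v_x = th = w = 0.0
    for _ in range(int(t_end / dt)):
        v_x += dt * (0.0 - b * v_x) / m      # eq. (9) with f_x = 0
        w   += dt * (f_th - b_th * w) / J    # eq. (9), torque input only
        x_OP += dt * v_x
        th   += dt * w
    return x_OP + ell * math.cos(th), x_OP   # eq. (6): fiducial x vs body x

x_OC, x_OP = simulate()
# The body never translates (x_OP stays 0), yet the fiducial's
# x-coordinate drops below its initial value ell = 0.1 as theta grows.
```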

III. FEEDBACK CONTROL ARCHITECTURE

Most high precision electromechanical systems operate at sampling frequencies higher than 1 kHz, much faster than the maximum sampling frequencies attainable by most vision sensors. As such, the machine vision cannot be used directly to close the loop of the electromechanical systems. In this work, we implement a two layer feedback control loop on each axis, as depicted in Figure 2. The inner loop samples data every 1 ms and uses an encoder for the feedback signal, whereas the outer loop samples data every 10 ms and uses the image features for the feedback signal.
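A minimal sketch of such a dual rate structure is given below. The inner closed loop is abstracted as a first order lag, and the outer-loop integral gain and inner-loop bandwidth are our own illustrative choices, not the paper's values; the integral action anticipates the internal model argument made later in this section:

```python
# Sketch of the dual rate loop of Figure 2: the inner loop updates every
# 1 ms on encoder feedback, while the outer loop updates every 10 ms on
# vision feedback and applies integral action. The 20 rad/s servo lag
# and the gain K_I are illustrative assumptions.

def run(steps=5000, dt=1e-3, K_I=5.0, ref=1.0):
    pos = 0.0        # axis position as measured in the camera frame
    r_inner = 0.0    # reference handed to the inner loop
    integ = 0.0
    for k in range(steps):
        if k % 10 == 0:                 # outer loop tick: every 10 ms
            e = ref - pos               # image-based configuration error
            integ += 10 * dt * e        # integral action
            r_inner = K_I * integ
        # inner loop tick: every 1 ms, modeled as a first order servo lag
        pos += dt * 20.0 * (r_inner - pos)
    return pos

print(run())   # converges toward ref = 1.0 with zero steady-state error
```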


Figure 2. A dual rate feedback control architecture. The inner loop samples every 1 ms and uses the encoder as the feedback signal. The outer loop samples every 10 ms and uses information extracted from image data.

In Figure 2, the index i ∈ {x, y, θ} refers to the i-th axis of the electromechanical system. Each pixel within the field of view of the camera corresponds to a physical distance of 1 µm. G_i(s) is the input-position transfer function of the electromechanical system which, according to the equations of motion described in (9), can be written as

G_i(s) = \frac{X_i(s)}{U_i(s)} = \frac{K_i}{s(\tau_i s + 1)}    (11)

where K_i and τ_i represent the gain and time constant of each axis, respectively. C_i(s) is the feedback controller applied to each axis and is defined in (12). It assumes the form of a double lead compensator to improve the inner loop bandwidth of each axis. The numerical parameters of G_i(s) and C_i(s) are tabulated in Table I and Table II, respectively. Figure 3 presents step response plots of the inner loop of each individual axis. The control architecture presented in Figure 2 can also be applied to electromechanical systems with a closed architecture, such as a CNC machine.

C_i(s) = \gamma_i \frac{(s + z_i^1)(s + z_i^2)}{(s + p_i^1)(s + p_i^2)}    (12)

TABLE I: NUMERICAL PARAMETERS OF G_i(s)

Symbol | Definition    | x    | y    | θ
K_i    | Gain constant | 5    | 5    | 20
τ_i    | Time constant | 0.01 | 0.01 | 0.01

TABLE II: NUMERICAL PARAMETERS OF C_i(s)

Symbol | Definition               | x     | y     | θ
γ_i    | Controller gain          | 4.8   | 4.8   | 2.4
z_i^1  | Lead controller 1 - zero | 25    | 25    | 25
p_i^1  | Lead controller 1 - pole | 40    | 40    | 40
z_i^2  | Lead controller 2 - zero | 23.81 | 23.81 | 23.81
p_i^2  | Lead controller 2 - pole | 35.71 | 35.71 | 35.71

Figure 3. Step response plot of the inner loop of each axis. Double lead compensators are used on each axis to improve the bandwidth of the electromechanical systems.
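The inner loop step response can be reproduced approximately with a forward-Euler simulation of (11) under the double lead compensator (12), using the x-axis entries of Tables I and II. The solver step, horizon, and the state-space realization of each lead section are our own choices:

```python
# Sketch: unity-feedback step response of the x-axis inner loop.
# Plant G(s) = K/(s(tau*s+1)) and compensator
# C(s) = gamma*(s+z1)(s+z2)/((s+p1)(s+p2)), values from Tables I and II.
K, tau = 5.0, 0.01
gamma, z1, p1, z2, p2 = 4.8, 25.0, 40.0, 23.81, 35.71

def step_response(t_end=2.0, dt=1e-4):
    v = x = c1 = c2 = 0.0              # plant velocity/position, lead states
    for _ in range(int(t_end / dt)):
        e = 1.0 - x                    # unit step reference
        # lead section (s+z)/(s+p) realized as 1 + (z-p)/(s+p)
        y1 = e + (z1 - p1) * c1
        c1 += dt * (-p1 * c1 + e)
        y2 = y1 + (z2 - p2) * c2
        c2 += dt * (-p2 * c2 + y1)
        u = gamma * y2
        # plant: tau*vdot = -v + K*u,  xdot = v
        v += dt * (-v + K * u) / tau
        x += dt * v
    return x

print(step_response())   # settles near 1.0: the free integrator in G(s)
                         # gives zero steady-state error to a step
```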

On each axis, the outer loop compares the reference configuration, ξ*, with the image measurements and generates a reference trajectory for the inner loop such that ξ can track ξ* closely. Based on the structure of G_i(s) and C_i(s) described in (11) and (12), the closed loop transfer function of the inner loop does not have a free integrator. As such, based on the internal model principle, it is necessary to introduce integral action into the outer loop to achieve zero steady state error. The machine vision produces the measurement signal for all three axes. The tracking performance of the proposed control architecture is presented in Figure 4. Here, we observe the non-linear effect on both the x and y axes, resulting from the rotational motion of ℬ.

Figure 4. The ramp tracking performance of the visual servo system. The non-linear dynamics introduced by the rotational motion are apparent on the x and y axes.


Figure 5 presents the time evolution of ξ on the x-y plot, simulating what the camera observes underneath. It corresponds to the tracking performance results presented in Figure 4. Each corner of the fiducial is color coded to provide some intuition of the fiducial's pose. Referring to the coordinate setup in Figure 1, the black lines represent the vector r_PC, while the black hollow circles represent the center of mass, indicating the trajectory which the fiducial underwent. Perfect tracking of ξ* would correspond to a straight-line motion on the x-y plane.

Figure 5. The x-y plot showing the time traces of the fiducial configuration. The trajectory corresponds to the tracking results presented in Figure 4.

IV. ITERATIVE LEARNING CONTROL IMPLEMENTATION

To improve the tracking performance of the visual servoing system, the control structure presented in Figure 2 is augmented with ILC. In the outer loop of each axis, a parallel ILC is applied as a feedforward reference generator for the inner loop. Figure 6 presents the control block diagram of the parallel ILC architecture, where j denotes the iteration index and k denotes the discrete time step index.

Figure 6. The control block diagram of the parallel ILC architecture.

The ILC collects and stores the error signal, e_i^j(k), and the input signal, u_i^j(k), from each axis during the current iteration in the system memory and uses them to modify the control input for the next iteration, u_i^{j+1}(k). In this work, we use a simple P-type ILC, which is mathematically defined in (13), where L_{U,i} and L_{E,i} denote the learning gains. The updated ILC input signal is then filtered using a zero-phase first order low pass filter to improve the robustness of the learning process.

u_i^{j+1}(k) = L_{U,i} u_i^j(k) + L_{E,i} e_i^j(k)    (13)
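The trial-to-trial mechanics of (13), including the zero-phase filtering step, can be sketched on a toy first order plant. The plant, the gains L_U and L_E, the filter constant, and the ramp reference are all illustrative assumptions, not the paper's system:

```python
import numpy as np

def plant_run(u, dt=0.01, a=5.0):
    """One trial of a toy first order servo, reset to zero each trial."""
    y, yk = np.zeros_like(u), 0.0
    for k in range(len(u)):
        yk += dt * a * (u[k] - yk)
        y[k] = yk
    return y

def zero_phase_lpf(x, alpha=0.3):
    """Zero-phase first order low pass: filter forward, then backward."""
    def fwd(v):
        out, acc = np.empty_like(v), v[0]
        for i, s in enumerate(v):
            acc += alpha * (s - acc)
            out[i] = acc
        return out
    return fwd(fwd(x)[::-1])[::-1]

def ilc(iterations=20, L_U=1.0, L_E=0.8, dt=0.01):
    t = np.arange(0.0, 2.0, dt)
    ref = np.minimum(t, 1.0)         # ramp-then-hold reference
    u = np.zeros_like(ref)
    for _ in range(iterations):      # eq. (13) plus zero-phase filtering
        e = ref - plant_run(u, dt)
        u = zero_phase_lpf(L_U * u + L_E * e)
    return ref, u

ref, u = ilc()
rms0 = np.sqrt(np.mean(ref**2))                      # error with no learning
rms20 = np.sqrt(np.mean((ref - plant_run(u))**2))    # error after 20 trials
```

As in Section IV, the residual error shrinks substantially over the 20 trials but does not vanish, partly because the low pass filter limits how much high-frequency content the learned input can carry.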

As presented in Figure 7, we can observe a significant improvement in the tracking performance of the fiducial configuration for the x and y axes. After 20 iterations, the non-linear dynamics of the system are alleviated, though not completely, by the proposed ILC formulation. Figure 8 presents the input signals applied to the inner loop during the 0th iteration and the 20th iteration. Comparing the two, the learned feedforward input in the 20th iteration commands motion noticeably earlier than the purely feedback-driven input of the 0th iteration.

Figure 9 presents the time evolution of ξ on the x-y plot during the 20th iteration. The black lines presented in Figure 5, which indicate the vector r_PC, are intentionally removed for visual clarity. Here, we can observe that the trajectory taken by the fiducial does not meander as much as the trajectory previously presented in Figure 5. The normalized RMS error convergence is presented in Figure 10. On each individual axis, the RMS error is normalized against the RMS error from the 0th iteration. The RMS error on each axis asymptotically converges to approximately 20 percent of its original RMS value.

Figure 7. ILC ramp tracking performance of the visual servo system. The tracking performance is significantly improved on each axis. The non-linear dynamics on the x and y axes are compensated through the learning process.



Figure 8. The input signals of the outer loop, i.e., the reference trajectories for the inner loop. The feedforward nature of ILC commands motion much earlier compared to the feedback input signal.

Figure 9. The x-y plot showing the time traces of the fiducial configuration after ILC is implemented. The trajectory corresponds to the tracking results presented in Figure 7.

While keeping the rest of the simulation parameters constant, the authors alter the sampling frequency of the outer loop to evaluate whether the proposed control scheme can be implemented on a slower vision system. Figures 11 and 12 present the normalized RMS error of a visual servo system running at 30 Hz and 20 Hz, respectively. The asymptotic RMS errors are relatively similar. However, we can observe that the learning transient lengthens as the sampling frequency of the outer loop decreases. Visual servoing systems with slower sampling frequencies may necessitate the use of a Norm Optimal design framework, in which monotonic convergence of the RMS errors can be guaranteed.

Figure 10. Normalized RMS error plot of the visual servo system operating at 100 Hz. On each individual axis, the RMS error is normalized against the RMS error from the 0th iteration.

Figure 11. Normalized RMS error plot of the visual servo system operating at 30 Hz. On each individual axis, the RMS error is normalized against the RMS error from the 0th iteration.

Figure 12. Normalized RMS error plot of the visual servo system operating at 20 Hz. On each individual axis, the RMS error is normalized against the RMS error from the 0th iteration.



V. CONCLUSION AND FUTURE WORK

This paper presents the implementation of a P-type ILC for a direct image based visual servoing application. A dual rate feedback controller is implemented on each axis to visually servo the configuration of a fiducial marker on a substrate. ILC complements the proposed visual servo architecture and significantly improves the tracking performance of the system. The simulation results demonstrate the benefits of the proposed approach and provide motivation for a transition to the experimental testbed presented in Figure 13. The system presented in Figure 13 is a Roll to Roll manufacturing platform that is designed to improve the scalability of the Electrohydrodynamic-Jet (E-Jet) printing process [14], [15]. The visual servoing approach discussed in this article is helpful for aligning preexisting features on the web to the E-Jet printing station.

Figure 13. A Roll to Roll manufacturing system to improve the scalability of the E-Jet printing process.

ACKNOWLEDGMENT

The author would like to acknowledge the contribution

and support of the NSF Nano-CEMMS Center under award

numbers CMI 07-49028.

REFERENCES

[1] S. Arimoto, S. Kawamura, and F. Miyazaki, “Bettering operation of dynamic systems by learning: A new control theory for servomechanism or mechatronics systems,” in Conference on Decision and Control, 1984, pp. 1–6.
[2] H.-S. Ahn, Y. Chen, and K. L. Moore, “Iterative learning control: Brief survey and categorization,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 37, no. 6, pp. 1099–1121, Nov. 2007.
[3] N. Amann, D. H. Owens, and E. Rogers, “Iterative learning control for discrete-time systems with exponential rate of convergence,” IEE Proc. Control Theory Appl., 1996.
[4] S. Gunnarsson and M. Norrlöf, “On the design of ILC algorithms using optimization,” Automatica, vol. 37, pp. 2011–2016, 2001.
[5] D. A. Bristow and A. G. Alleyne, “A high precision motion control system with application to microscale robotic deposition,” IEEE Trans. Control Syst. Technol., vol. 14, no. 6, pp. 1008–1020, 2006.
[6] K. Barton and A. Alleyne, “Precision coordination and motion control of multiple systems via iterative learning control,” in American Control Conference, 2010, pp. 1272–1277.
[7] E. Sutanto and A. G. Alleyne, “Norm optimal iterative learning control for a roll to roll nano/micro-manufacturing system,” in American Control Conference, 2013.
[8] D. A. Bristow, M. Tharayil, and A. G. Alleyne, “A survey of iterative learning control,” IEEE Control Syst. Mag., vol. 26, no. 3, pp. 96–114, Jun. 2006.
[9] P. Corke, “Visual control of robot manipulators - a review,” in Visual Servoing, 1993.
[10] S. J. Ralis, B. Vikramaditya, and B. J. Nelson, “Micropositioning of a weakly calibrated microassembly system using coarse-to-fine visual servoing strategies,” IEEE Trans. Electron. Packag. Manuf., vol. 23, no. 2, pp. 123–131, Apr. 2000.
[11] F. Chaumette and S. Hutchinson, “Visual servo control, Part I: Basic approaches,” IEEE Robot. Autom. Mag., vol. 13, no. 4, pp. 82–90, Dec. 2006.
[12] J. de Best, R. van de Molengraft, and M. Steinbuch, “High speed visual motion control applied to products with repetitive structures,” IEEE Trans. Control Syst. Technol., vol. 20, no. 6, pp. 1450–1460, 2012.
[13] P. J. White and D. A. Bristow, “Vision based iterative learning control of a MEMS micropositioning stage with intersample estimation and adaptive model correction,” in American Control Conference, 2011, pp. 4293–4298.
[14] J. U. Park, M. Hardy, S. J. Kang, K. Barton, K. Adair, D. K. Mukhopadhyay, C. Y. Lee, M. S. Strano, A. G. Alleyne, J. G. Georgiadis, P. M. Ferreira, and J. A. Rogers, “High-resolution electrohydrodynamic jet printing,” Nat. Mater., vol. 6, no. 10, pp. 782–789, Oct. 2007.
[15] E. Sutanto, K. Shigeta, Y. K. Kim, P. G. Graf, D. J. Hoelzle, K. L. Barton, A. G. Alleyne, P. M. Ferreira, and J. A. Rogers, “A multimaterial electrohydrodynamic jet (E-jet) printing system,” J. Micromechanics Microengineering, vol. 22, no. 4, p. 045008, Apr. 2012.
