
Page 1: Vision-Based Control of Robot Manipulators - PRISMA Lab · The first real-time visual servo systems were reported in • Agin, 1979 • Weiss et al., 1984, 1987 • Feddema et al.,

Vision-Based Control

of Robot Manipulators

Nicholas Gans — University of Florida

Peter Corke — CSIRO, Australia

Sourabh Battacharya — University of Illinois

Seth Hutchinson — University of Illinois

Napoli - Dec 2006 – p. 1


Visual Servo Control

Visual servo control is merely the use of computer vision data to control the motion of a robot.

In some sense, Shakey [SRI, 1966-1972 or so] was an example of a visual servo system, but with a very, very slow servo rate.

The first real-time visual servo systems were reported in

• Agin, 1979
• Weiss et al., 1984, 1987
• Feddema et al., 1989

In each of these, simple image features (e.g., centroids of binary objects) were used, primarily due to limitations in computation power.

There are two basic paradigms: Position-Based and Image-Based Visual Servo Control.

Napoli - Dec 2006 – p. 2


POSITION-BASED VISUAL SERVO CONTROL

Napoli - Dec 2006 – p. 3


POSITION-BASED VISUAL SERVO

• Computer vision data are used to compute the pose of the camera (d, R) relative to the world frame.
• The error e(t) is defined in the pose space, again in terms of d, R.
• The control signal ξ = (v, ω) is a camera body velocity.
• The camera velocity ξ is specified w.r.t. the camera frame.

If the goal pose is given by d = 0, R = I, the role of the computer vision system is to provide, in real time, a measurement of the pose error.

Napoli - Dec 2006 – p. 4


PBVS (cont.)

Let u, θ be the axis/angle parameterization of R. The error is given by

e(t) = [ d ; uθ ]

and its derivative is given by

ė = [ R  0 ; 0  Lω(u, θ) ] ξ = L_PBVS(u, θ) ξ

in which [Malis 98]

Lω(u, θ) = I − (θ/2) [u]× + (1 − sinc θ / sinc²(θ/2)) [u]×²

Napoli - Dec 2006 – p. 5


PBVS (cont.)

Since Lω is nonsingular when θ ≠ 2kπ [Malis, Chaumette, Boudet 99], to achieve the error dynamics ė = −λe we can use

−λe = ė = L_PBVS(u, θ) ξ  →  ξ = −λ L_PBVS⁻¹(u, θ) e

and using the Lyapunov function L = ½ ‖e(t)‖² we obtain

L̇ = eᵀ ė
   = eᵀ L_PBVS(u, θ) ξ
   = −λ eᵀ L_PBVS(u, θ) L_PBVS⁻¹(u, θ) e
   = −λ ‖e‖²

and we have, not surprisingly, asymptotic stability.
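As a concrete sketch of the derivation above (an illustration, not the authors' code; all function names and gains are mine), the fragment below builds Lω, assembles the block-diagonal L_PBVS, and computes ξ = −λ L_PBVS⁻¹ e. Note that sinc here is the unnormalized sin(x)/x; numpy.sinc is the normalized variant, so we define our own.

```python
import numpy as np

def skew(u):
    """[u]x, the matrix with skew(u) @ w == np.cross(u, w)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def sinc(x):
    """Unnormalized sinc: sin(x)/x, with sinc(0) = 1."""
    return 1.0 if abs(x) < 1e-12 else np.sin(x) / x

def L_omega(u, theta):
    """Rotational block of the PBVS interaction matrix [Malis 98]."""
    ux = skew(u)
    return (np.eye(3) - (theta / 2.0) * ux
            + (1.0 - sinc(theta) / sinc(theta / 2.0) ** 2) * (ux @ ux))

def rodrigues(u, theta):
    """Rotation by theta about the unit axis u (Rodrigues' formula)."""
    ux = skew(u)
    return np.eye(3) + np.sin(theta) * ux + (1.0 - np.cos(theta)) * (ux @ ux)

def pbvs_velocity(d, u, theta, R, lam):
    """One PBVS step: xi = -lam * L_PBVS^{-1} e, with e = (d, u*theta)."""
    e = np.concatenate([d, theta * u])
    L = np.block([[R, np.zeros((3, 3))],
                  [np.zeros((3, 3)), L_omega(u, theta)]])
    return -lam * np.linalg.solve(L, e)

# closed-loop check: L_PBVS @ xi should equal -lam * e
u = np.array([0.0, 0.0, 1.0])
theta = np.pi / 3
R = rodrigues(u, theta)
d = np.array([0.1, -0.2, 0.3])
xi = pbvs_velocity(d, u, theta, R, lam=0.5)
```

One quick numerical sanity check: since [u]× u = 0, Lω(u, θ) u = u, i.e. the rotation axis is left untouched by the rotational block.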

Napoli - Dec 2006 – p. 6


PBVS Task Example

160° rotation about the optical axis

[Figure: camera trajectory in the 3D workspace (left) and image-plane feature trajectories in a 500 × 500 pixel image (right).]

Napoli - Dec 2006 – p. 7


Why not just use PBVS ?

• Feedback is computed using estimated quantities that are a function of the system calibration parameters. Thus, with L̂_PBVS denoting the estimated interaction matrix,

L̇ = −λ eᵀ L_PBVS(u, θ) L̂_PBVS⁻¹(u, θ) e

and we need L_PBVS(u, θ) L̂_PBVS⁻¹ to be positive definite.

Even small errors in computing the orientation of the cameras can lead to reconstruction errors that can significantly impact the accuracy of the system.

• Position-based control requires an accurate model of the target object (a form of calibration).

• In task space, the robot will move a minimal distance. In the image, features may move a non-minimal distance during execution; features may even leave the field of view.

Napoli - Dec 2006 – p. 8


PBVS Task Example

Large translation and rotation about all axes

[Figure: camera trajectory in the 3D workspace (left) and image-plane feature trajectories in a 500 × 500 pixel image (right).]

Napoli - Dec 2006 – p. 9


IMAGE-BASED VISUAL SERVO CONTROL

Napoli - Dec 2006 – p. 10


Image-Based Visual Servo Control

For Image-Based Visual Servo (IBVS)

• Features s(t) are extracted from computer vision data.
• Camera pose is not explicitly computed.
• The error is defined in the image feature space, e(t) = s(t) − s∗.
• The control signal ξ = (V, Ω) is again a camera body velocity specified w.r.t. the camera frame, but in this case it is computed directly using s(t).

For example, if the feature is a single image point with image plane coordinates u and v, we have s(t) = (u(t), v(t)).

Since ė(t) = ṡ(t), we'll need to know the relationship between ṡ and ξ to design a controller that achieves the error dynamics ė = −λe.

Napoli - Dec 2006 – p. 11


The Image Jacobian (for a point feature)

Consider a point P with coordinates (x, y, z) w.r.t. the camera frame. Using perspective projection, P's image plane coordinates are given by

u = λx/z,  v = λy/z  →  x = uz/λ,  y = vz/λ

in which λ is the camera focal length. Using the quotient rule,

u̇ = λ (ẋz − xż)/z²,  v̇ = λ (ẏz − yż)/z²

The velocity of (the fixed point) P relative to the camera frame is given by

Ṗ = −Ω × P − V

which gives equations for each of ẋ, ẏ, and ż. Now it's just algebra, solving for u̇ and v̇ in terms of u, v and z...

Napoli - Dec 2006 – p. 12


The Image Jacobian (cont.)

At the end of the algebra, we obtain

At the end of the algebra, we obtain

[ u̇ ]   [ −λ/z    0     u/z    uv/λ           −(λ² + u²)/λ    v  ]
[ v̇ ] = [  0     −λ/z   v/z    (λ² + v²)/λ    −uv/λ          −u ] ξ

which can be written more compactly as

ṡ = L(s, z)ξ.

The matrix L is known as the Interaction Matrix [Espiau, et al., 1992], or theimage Jacobian.

Weiss et al. [1987] used the term feature sensitivity matrix, while Feddema etal. [1989] merely used the term Jacobian to describe this matrix.
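A quick way to check the signs and terms of this matrix is to compare it against a finite-difference derivative of the projection, using Ṗ = −Ω × P − V from the previous slide. The sketch below is my own illustration (arbitrary names and numbers), not code from the talk.

```python
import numpy as np

def interaction_matrix(u, v, z, lam):
    """2x6 image Jacobian of a point feature; xi = (V, Omega) in the camera frame."""
    return np.array([
        [-lam / z, 0.0,      u / z, u * v / lam,           -(lam**2 + u**2) / lam,  v],
        [0.0,      -lam / z, v / z, (lam**2 + v**2) / lam, -u * v / lam,           -u],
    ])

def project(P, lam):
    """Perspective projection: u = lam*x/z, v = lam*y/z."""
    return lam * P[:2] / P[2]

lam = 1.0
P = np.array([0.2, -0.1, 2.0])          # point in the camera frame
V = np.array([0.05, -0.02, 0.03])       # camera translational velocity
Omega = np.array([0.01, 0.02, -0.03])   # camera angular velocity
xi = np.concatenate([V, Omega])

Pdot = -np.cross(Omega, P) - V          # velocity of the (fixed) point P
dt = 1e-6
s_dot_fd = (project(P + dt * Pdot, lam) - project(P, lam)) / dt

u0, v0 = project(P, lam)
s_dot = interaction_matrix(u0, v0, P[2], lam) @ xi
# s_dot and s_dot_fd agree to first order in dt
```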

Napoli - Dec 2006 – p. 13


The Image Jacobian for Multiple Image Points

• Since L(s, z) has a nonzero null space, we cannot control six degrees of camera freedom using only a single image point.
• One solution is simply to use multiple image points.
• In this case, we merely stack the Jacobians to obtain

ṡ(t) = [ ṡ₁(t) ; … ; ṡₙ(t) ] = [ L₁(s₁, z₁) ; … ; Lₙ(sₙ, zₙ) ] ξ

• Using this approach, three points provide sufficient information to control the camera's six degrees of freedom.
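The stacking itself is mechanical; a minimal sketch (my own illustrative names, using the single-point interaction matrix from the previous slide). For three generic, non-degenerate points the stacked matrix is 6 × 6 and of full rank:

```python
import numpy as np

def interaction_matrix(u, v, z, lam):
    """2x6 image Jacobian of a single point feature."""
    return np.array([
        [-lam / z, 0.0,      u / z, u * v / lam,           -(lam**2 + u**2) / lam,  v],
        [0.0,      -lam / z, v / z, (lam**2 + v**2) / lam, -u * v / lam,           -u],
    ])

def stacked_jacobian(features, lam):
    """Stack the 2x6 Jacobians of n image points into a 2n x 6 matrix."""
    return np.vstack([interaction_matrix(u, v, z, lam) for (u, v, z) in features])

# three generic points (u, v, z); known degenerate cases (e.g. collinear
# image points) are avoided here
features = [(0.1, 0.2, 1.0), (-0.3, 0.1, 2.0), (0.2, -0.2, 1.5)]
L = stacked_jacobian(features, lam=1.0)
```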

Napoli - Dec 2006 – p. 14


Proportional Image-Based Control

To achieve the error dynamics ė = −λe,

−λe = ė = ṡ = L(s, z)ξ  →  ξ = −λ L⁺(s, z) e

in which L⁺ = (LᵀL)⁻¹Lᵀ. Using the Lyapunov function L = ½ ‖e(t)‖² we obtain

L̇ = eᵀ ė
   = eᵀ L ξ
   = −λ eᵀ L L⁺ e

and we have asymptotic stability when the matrix LL⁺ is positive definite. Unfortunately, this condition is rarely achieved; e.g., when dim(s) > 6, LL⁺ has rank at most 6 and is at best positive semidefinite.
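For the special case of a square, invertible stacked Jacobian (three points), the proportional law reproduces the desired dynamics ṡ = −λe exactly. A sketch under that assumption (illustrative names and numbers only):

```python
import numpy as np

def interaction_matrix(u, v, z, lam):
    """2x6 image Jacobian of a point feature."""
    return np.array([
        [-lam / z, 0.0,      u / z, u * v / lam,           -(lam**2 + u**2) / lam,  v],
        [0.0,      -lam / z, v / z, (lam**2 + v**2) / lam, -u * v / lam,           -u],
    ])

def ibvs_velocity(s, s_star, L, lam):
    """xi = -lam * L^+ (s - s_star); np.linalg.pinv realizes (L^T L)^{-1} L^T."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)

features = [(0.1, 0.2, 1.0), (-0.3, 0.1, 2.0), (0.2, -0.2, 1.5)]
L = np.vstack([interaction_matrix(u, v, z, 1.0) for (u, v, z) in features])
s = np.array([coord for f in features for coord in f[:2]])  # current features
s_star = s + 0.05                                           # arbitrary goal
xi = ibvs_velocity(s, s_star, L, lam=0.8)
# with L square and invertible, s_dot = L @ xi = -lam * (s - s_star)
```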

Napoli - Dec 2006 – p. 15


IBVS Task Example

Large translation and rotation about all axes

[Figure: camera trajectory in the 3D workspace (left) and image-plane feature trajectories in a 500 × 500 pixel image (right).]

Napoli - Dec 2006 – p. 16


Why not just use IBVS?

Image-based approaches have some very nice properties:
• No pose estimation is required, which can save significant computation.
• IBVS is remarkably robust to errors in calibration.
• Even though depth estimates are required, these often enter the closed-loop dynamics as gains, and thus uncertainties can be dealt with by choosing conservative control gains.

BUT... there are some problems:
• There may be singularities in the image Jacobian.
• Some 3D information is still required (e.g., z), and must be estimated.
• The Cartesian trajectory of the camera is unpredictable and often suboptimal.

The last of these can cause fairly serious problems...

Napoli - Dec 2006 – p. 17


IBVS Task Example

160° rotation about the optical axis

[Figure: camera trajectory in the 3D workspace (left) and image-plane feature trajectories in a 500 × 500 pixel image (right).]

Napoli - Dec 2006 – p. 18


Coping with PBVS and IBVS performance problems

It seems that neither PBVS nor IBVS is an ideal choice for visual servo control. There are (at least) three ways to cope with these problems:

1. find completely new, very clever control schemes,

2. partition the system's degrees of freedom, controlling some with IBVS, some with PBVS, some with other new methods,

3. partition the system along the time axis, switching between IBVS and PBVS (or other) controllers.

We'll investigate options 2 and 3...

Napoli - Dec 2006 – p. 19


PARTITIONED METHODS

Napoli - Dec 2006 – p. 20


Partitioned Control

Suppose we have a control for ω. We can partition the image Jacobian:

ṡ = L(s, z)ξ = [ Lv(s, z) | Lω(s) ] [ v ; ω ] = Lv(s, z)v + Lω(s)ω

Now, again setting ė = −λe, we obtain

−λe = ė = ṡ = Lv(s, z)v + Lω(s)ω  →  v = −Lv⁺ (λe(t) + Lω ω)

We can think of (λe(t) + Lω ω) as an error term that combines the original error with the error that would be induced by ω.
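A sketch of the solve for v given a chosen ω (illustrative only; Lv and Lω are simply the first three and last three columns of the stacked point-feature Jacobian). Because Lv⁺ is a least-squares pseudoinverse, the resulting v satisfies the normal equations exactly:

```python
import numpy as np

def interaction_matrix(u, v, z, lam):
    """2x6 image Jacobian of a point feature."""
    return np.array([
        [-lam / z, 0.0,      u / z, u * v / lam,           -(lam**2 + u**2) / lam,  v],
        [0.0,      -lam / z, v / z, (lam**2 + v**2) / lam, -u * v / lam,           -u],
    ])

features = [(0.1, 0.2, 1.0), (-0.3, 0.1, 2.0), (0.2, -0.2, 1.5), (0.0, 0.3, 1.2)]
L = np.vstack([interaction_matrix(u, v, z, 1.0) for (u, v, z) in features])
Lv, Lw = L[:, :3], L[:, 3:]             # translational / rotational columns

e = 0.05 * np.ones(8)                   # some image feature error
omega = np.array([0.0, 0.0, 0.1])       # rotation command chosen elsewhere
lam = 1.0
v = -np.linalg.pinv(Lv) @ (lam * e + Lw @ omega)
# v is the least-squares solution: Lv^T (Lv v + lam*e + Lw omega) = 0
```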

Napoli - Dec 2006 – p. 21


2.5D Visual Servo [Malis, et al. 1999]

Let m and m∗ denote the homogeneous image coordinates for a point in the current and desired images. Then m is related to m∗ by

m = Hm∗

If all feature points lie on a 3D plane, the homography matrix H can be decomposed as

H = R + (t/d∗) n∗ᵀ,

in which:
H: homography relating the current and desired images
R: rotation matrix relating the orientations of the current and desired frames
n∗: normal to the 3D feature plane
d∗: distance to the 3D feature plane
t: translation between the current and desired frames
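The decomposition can be checked numerically: build H from a chosen (R, t, n∗, d∗), take a point on the plane, and verify that its normalized coordinates map through H up to the projective scale. All numbers below are arbitrary illustrations of mine:

```python
import numpy as np

def skew(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def rodrigues(u, theta):
    """Rotation by theta about the unit axis u."""
    ux = skew(u)
    return np.eye(3) + np.sin(theta) * ux + (1.0 - np.cos(theta)) * (ux @ ux)

R = rodrigues(np.array([0.0, 1.0, 0.0]), 0.3)   # current <- desired rotation
t = np.array([0.1, 0.05, -0.2])                 # translation between frames
n_star = np.array([0.0, 0.0, 1.0])              # plane normal, desired frame
d_star = 2.0                                    # plane distance, desired frame
H = R + np.outer(t, n_star) / d_star

X_star = np.array([0.4, -0.3, 2.0])             # on the plane: n* . X* = d*
X = R @ X_star + t                              # same point, current frame
m_star = X_star / X_star[2]                     # normalized homogeneous coords
m = X / X[2]
Hm = H @ m_star
# m is proportional to H m*: m == Hm / Hm[2]
```

The key step is that n∗ᵀX∗/d∗ = 1 for any point on the plane, so X = R X∗ + t = (R + t n∗ᵀ/d∗) X∗ = H X∗.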

Napoli - Dec 2006 – p. 22


2.5D Visual Servo (cont.)

• It is possible to recover R directly when the feature points are coplanar.
• Let uθ be the equivalent axis/angle representation for R.
• The 2.5D control law is given by

ω = λω uθ   (1)

v = −Lv⁺ (λe(t) + Lω ω)   (2)

The main problem with this approach is that the computations required to recover R from the homography matrix are not particularly robust.

Napoli - Dec 2006 – p. 23


An Image-Based Partitioned Method

[Corke and Hutchinson 2001]

We can also partition the image Jacobian to isolate motion related to the optic axis:

ṡ = [ Lvx  Lvy  Lvz  Lωx  Lωy  Lωz ] ξ

  = [ Lvx  Lvy  Lωx  Lωy ] [ vx ; vy ; ωx ; ωy ] + [ Lvz  Lωz ] [ vz ; ωz ]

  = Lxy ξxy + Lz ξz

  = ṡxy + ṡz

Here, ṡz = Lz ξz gives the component of ṡ due to the camera motion along and rotation about the optic axis, while ṡxy = Lxy ξxy gives the component of ṡ due to velocity along and rotation about the camera x and y axes.

Napoli - Dec 2006 – p. 24


The Partitioned Control Law

As usual, setting ė = −λe, we obtain

−λe = ė = ṡ = Lxy ξxy + Lz ξz

which leads to

ξxy = −Lxy⁺ (λe(t) + Lz ξz)

We can consider (λe(t) + Lz ξz) as a modified error that incorporates the original image feature error while taking into account the feature error that will be induced by the translation along and rotation about the optic axis due to ξz.

All that remains is to choose ξz...

Napoli - Dec 2006 – p. 25


Two Image Features Used to Determine ξz

Consider a collection of image points.

• Define θij, with 0 ≤ θij < 2π, as the angle between the horizontal axis of the image plane and the directed line segment joining two feature points.

• Define σ² to be the area of the polygon defined by these points.

[Figure: four feature points forming a polygon of area σ², with segment angle θ12 marked.]

vz = λvz ln(σ∗/σ)        ωz = λωz (θ∗ij − θij)
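These two features are straightforward to compute from the point set; a sketch of mine (shoelace formula for the polygon area, atan2 for the segment angle; names and gains are illustrative):

```python
import numpy as np

def polygon_area(pts):
    """Shoelace area of a polygon given as an (n, 2) array of ordered vertices."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def segment_angle(p_i, p_j):
    """Angle in [0, 2*pi) from the image horizontal axis to segment i -> j."""
    d = p_j - p_i
    return np.arctan2(d[1], d[0]) % (2.0 * np.pi)

def z_axis_control(pts, pts_star, lam_vz, lam_wz):
    """vz = lam_vz * ln(sigma*/sigma), wz = lam_wz * (theta*_12 - theta_12)."""
    sigma = np.sqrt(polygon_area(pts))        # sigma^2 is the polygon area
    sigma_star = np.sqrt(polygon_area(pts_star))
    vz = lam_vz * np.log(sigma_star / sigma)
    wz = lam_wz * (segment_angle(pts_star[0], pts_star[1])
                   - segment_angle(pts[0], pts[1]))
    return vz, wz

square = np.array([[100.0, 100.0], [400.0, 100.0],
                   [400.0, 400.0], [100.0, 400.0]])
# current polygon smaller than the goal: ln(sigma*/sigma) > 0, so vz > 0
vz, wz = z_axis_control(0.5 * square, square, lam_vz=1.0, lam_wz=1.0)
```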

Napoli - Dec 2006 – p. 26


Simulation Results

A general target motion.

[Figure: image-plane feature motion and feature trajectory (left); time histories of point feature error (pix), σ, and θ over 200 s (right).]

The dashed lines in the left figure show IBVS feature motion.

Napoli - Dec 2006 – p. 27


Simulation Results

A comparison of the X,Y,Z trajectory of the camera for the traditional IBVS(left) and the partitioned system (right).

[Figure: X, Y, Z camera coordinates vs. sample number for traditional IBVS (left) and the partitioned system (right).]

Napoli - Dec 2006 – p. 28


Keeping Features in the Field of View

• Even with our partitioned approach, it is sometimes the case that features can leave the field of view.

• One way to deal with this problem is to add a control input that explicitly works to keep features in the field of view.

• Motivated by work in path planning, we use an artificial potential field to repel feature points from the boundary of the image.

• The potential should go to infinity at the image boundary, and be zero in the interior of the image (i.e., the repulsive potential should have no effect when feature points are far from the image boundary).

• The potential function should be continuously differentiable.

Napoli - Dec 2006 – p. 29


A Potential Function to Keep Features in the Field of View

Such a potential is given by

Urep(u, v) = ½ η (1/ρ(u, v) − 1/ρ0)²   if ρ(u, v) ≤ ρ0
Urep(u, v) = 0                         if ρ(u, v) > ρ0

• ρ(u, v) is the shortest distance to the edge of the image plane from the image point with coordinates (u, v)

• ρ0 specifies the zone of the image in which Urep affects the control

• η is a scalar gain coefficient

For an Nr × Nc image, the value of ρ is easily computed as

ρ(u, v) = min{ u, v, Nr − u, Nc − v }.

Napoli - Dec 2006 – p. 30


The Repulsive Force

• Traditional potential-based methods use the gradient of the potential (a force) to define the direction of motion.

• In our case, if n is the unit vector directed from the nearest boundary to the image feature point with coordinates (u, v), then ∇Urep = F n, with F given by

F(u, v) = η (1/ρ(u, v) − 1/ρ0) (1/ρ²(u, v))   if ρ(u, v) ≤ ρ0
F(u, v) = 0                                   if ρ(u, v) > ρ0
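A sketch of both Urep and F (my own illustration, assuming the squared form of the potential, whose radial derivative matches the F above); a finite-difference check ties the two together near the image boundary:

```python
def rho(u, v, Nr, Nc):
    """Shortest distance from (u, v) to the image boundary (slide convention)."""
    return min(u, v, Nr - u, Nc - v)

def U_rep(u, v, Nr, Nc, eta, rho0):
    """Repulsive potential: active only within rho0 of the boundary."""
    r = rho(u, v, Nr, Nc)
    return 0.5 * eta * (1.0 / r - 1.0 / rho0) ** 2 if r <= rho0 else 0.0

def F_rep(u, v, Nr, Nc, eta, rho0):
    """Magnitude of the gradient of U_rep with respect to rho."""
    r = rho(u, v, Nr, Nc)
    return eta * (1.0 / r - 1.0 / rho0) / r ** 2 if r <= rho0 else 0.0

Nr = Nc = 512
eta, rho0 = 1.0, 20.0
# near the left edge (rho = u = 10 <= rho0) the potential is active;
# deep in the interior it vanishes
U_edge = U_rep(10.0, 256.0, Nr, Nc, eta, rho0)
F_edge = F_rep(10.0, 256.0, Nr, Nc, eta, rho0)
U_mid = U_rep(256.0, 256.0, Nr, Nc, eta, rho0)
# along the left edge zone rho == u, so F equals the magnitude of -dU/du
```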

Napoli - Dec 2006 – p. 31


The Resulting Control

• Since a pure translation in the negative z-direction will cause feature points to move toward the center of the image, F is mapped directly to the Tz component of the velocity command.

• Because of chatter effects (where the feature points oscillate in and out of the potential field), we smooth and clip the resulting Tz, yielding the discrete-time controller

T′z(k) = μ T′z(k − 1) + (1 − μ) (ln(σ⋆/σ) − F)   (3)

Tz = min{ max{ T′z(k), Tzmin }, Tzmax }   (4)

in which μ is a parameter determined empirically, and the choice of Tzmin and Tzmax allows implementation of asymmetric velocity clipping.
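Equations (3) and (4) are a first-order low-pass filter followed by a clip; a minimal sketch with made-up gains and limits (names are mine):

```python
import math

def tz_command(tz_prev, sigma, sigma_star, F, mu, tz_min, tz_max):
    """Smooth (eq. 3) then clip (eq. 4) the optic-axis translation command."""
    tz_raw = mu * tz_prev + (1.0 - mu) * (math.log(sigma_star / sigma) - F)
    return tz_raw, min(max(tz_raw, tz_min), tz_max)

# with mu = 0 there is no memory: the raw command is ln(sigma*/sigma) - F;
# here it exceeds tz_max, so the output saturates at the upper clip limit
raw, clipped = tz_command(0.0, sigma=10.0, sigma_star=80.0, F=0.1,
                          mu=0.0, tz_min=-0.5, tz_max=0.5)
```

Choosing tz_min ≠ −tz_max gives the asymmetric clipping mentioned on the slide, e.g. allowing faster retreat than approach.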

Napoli - Dec 2006 – p. 32


Simulation Results

In this example, the original partitioned controller would cause the feature points to leave the field of view.

[Figure: image-plane feature motion and feature error trajectory (left); time histories of point feature error (pix), σ, and θ over 450 s (right).]

On the left, the straight lines indicate feature motion for the traditional IBVS controller.

Napoli - Dec 2006 – p. 33


HYBRID SWITCHED VISUAL SERVO CONTROL

Napoli - Dec 2006 – p. 34


Hybrid Switched System Visual Servoing

• Designed to moderate between IBVS and PBVS to control distance in both Cartesian and image space

• State-based switching switches between IBVS and PBVS based on surfaces in Cartesian and/or image space

• Time-based switching switches between methods based on time intervals

• Random switching is a time-based method: at each time period IBVS is selected with probability p and PBVS is chosen with probability q = 1 − p

Napoli - Dec 2006 – p. 35


Switched System Control

A hybrid, switched system comprises a number of continuous subsystems fj and a discrete system σ that switches between them:

ẋ(t) = f_σ(t)(x, t, u_σ(t)),  σ ∈ {1, …, n}

• The switching signal σ can be determined by the state x(t), time, external input, or any combination.

• Stability of the switched system depends on the switching signal σ.

• For a deterministic switching rule, [Branicky 1997] gives sufficient conditions for the stability of a switched control system.

Napoli - Dec 2006 – p. 36


Switched System VS for Improved Stability

IBVS and PBVS seem complementary in their strengths and weaknesses.

We have designed a state-based switched-system visual servo controller with switching surfaces based on level sets of the two Lyapunov functions Vp and Vi.

• These level sets are defined by the constants:
  γp > 0 (which defines a maximum pose error)
  γi > 0 (which defines a maximum feature error)

• The specific switching rule is given by:
  In IBVS mode, if Vp > ½ γp², switch to PBVS mode.
  In PBVS mode, if Vi > ½ γi², switch to IBVS mode.
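The rule is a two-state automaton; a sketch (mode names, thresholds, and function names are my own illustration):

```python
def select_mode(mode, V_p, V_i, gamma_p, gamma_i):
    """Leave a mode when the *other* error's Lyapunov value crosses its level set."""
    if mode == "IBVS" and V_p > 0.5 * gamma_p ** 2:
        return "PBVS"
    if mode == "PBVS" and V_i > 0.5 * gamma_i ** 2:
        return "IBVS"
    return mode

# in IBVS mode the pose error grows past the gamma_p level set -> switch to PBVS
mode = select_mode("IBVS", V_p=0.9, V_i=0.0, gamma_p=1.0, gamma_i=1.0)
```

Each controller thus runs until it threatens the objective the other controller is good at, which is the sense in which the scheme "moderates" between the two.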

Napoli - Dec 2006 – p. 37


Switched System Simulation

160° rotation about the optical axis

[Figure: camera trajectory in the 3D workspace (left) and image-plane feature trajectories in a 500 × 500 pixel image (right).]

Napoli - Dec 2006 – p. 38


Switched System Simulation

Large translation and rotation about all axes

[Figure: camera trajectory in the 3D workspace (left) and image-plane feature trajectories in a 500 × 500 pixel image (right).]

Napoli - Dec 2006 – p. 39


Conclusions

• Traditional visual servo schemes, while typically locally asymptotically stable, can have performance problems.

• These problems can be ameliorated by partitioning the control system based on its controllable degrees of freedom, or by partitioning with respect to time using a hybrid switched approach.

• We have introduced a partitioned image-based visual servo control system that also uses potential fields to keep features in the field of view.

• We have introduced a state-based, switched-system visual servo controller and proved its stability in both the image and pose spaces.

Napoli - Dec 2006 – p. 40