Chapter 1

Robot Modelling

1.1 Introduction

The aim of this chapter is to present a concise review of the concepts and principles involved in the modelling and motion analysis of robotic manipulators. The chapter is not a comprehensive study of the subject; it has been written with a focus on the application of such principles to the design of robot control systems based upon visual feedback.

The first section starts with basic concepts regarding the pose determination of a rigid object with respect to the world coordinate frame. The concept of homogeneous transformation matrices is presented and extended to characterize the kinematic behavior of the serial links forming a robotic arm. From there, it is possible to describe the link positions with respect to time, introducing concepts such as link velocity, kinematic derivatives, the Jacobian matrix, etc.

These concepts lead naturally to the definition of other relevant notions such as robot motion and trajectory generation. The chapter thus provides a complete set of robotic tools, which are also implemented in a real-time library that enables the design of the relevant robot control routines. The first step in the chapter is to define the notation used in robotic analysis throughout this book.

1.2 Notation and introductory remarks

The aim of this section is to establish a clear and concise notation and so avoid any sort of misunderstanding in the discussions ahead. This is an important starting point because several disciplines, such as image processing, computer vision, robot control and real-time control systems, converge in the study of visual servoing schemes. Each discipline might be explained using its own symbols and notation, which represents a considerable risk of losing clarity and uniformity. This section is thus devoted to defining the notation and to drawing clear rules regarding the meaning and coverage of each symbol.

In this book, italicized lowercase letters (x, y, z) are used to denote scalars, bold-faced lowercase letters (x, y, z) denote vectors of any dimension, and bold-faced uppercase letters (A, B, C) represent matrices. Estimated values of a given variable are denoted with a hat on top of the variable (x̂), and (x*) stands for desired values.

Since robotic concepts will be used extensively, some special notation is used to refer to the robot's links q_n, with n being the link number; the link Cartesian position (χ) is represented by a three-component position vector. Let's develop this concept further.

In a 3-D context, the location of a point is defined in a Cartesian space ℝ³ with respect to a given coordinate system. Such an axis set is composed of orthogonal unit vectors i, j, k, which obey the classic right-hand rule to define their orientation.¹ If we consider the plane formed by the unit vectors i and j, then the unit vector k points in the direction normal to this plane according to the right-hand convention [1]. The starting point to describe the position and orientation of a rigid body is the definition of a general coordinate axis attached to a fixed point labelled as the origin.

1.3 Rigid body position and pose determination

In general, for computer vision applications and in the scope of this book, the visual servoing task space is a subset of the three-dimensional Euclidean space, Γ ⊂ SE(3) = ℝ³ × SO(3) (the Special Euclidean group) [2].

A pose of a rigid body with respect to a given coordinate system involves three translational and three rotational degrees of freedom and therefore constitutes a six-dimensional differential manifold. The translational component is expressed in a Cartesian space, and the position of a point p is then expressed in vectorial terms as:

$$
\mathbf{X} = p_x\,\mathbf{i} + p_y\,\mathbf{j} + p_z\,\mathbf{k} \tag{1.1}
$$

Very often this position vector is expressed in matrix form as

$$
\mathbf{X} = \mathbf{p}^T \mathbf{C} = \begin{bmatrix} p_x & p_y & p_z \end{bmatrix} \begin{bmatrix} \mathbf{i} \\ \mathbf{j} \\ \mathbf{k} \end{bmatrix} \tag{1.2}
$$

Three unit vectors, one per component of a given coordinate system, are used to describe the orientation. This may be expressed in terms of a rotation matrix R, which is composed as follows:

¹This is a classic rule that has been used in many subjects. Well-known examples are the determination of the magnetic flux direction and the vector analysis of static forces.

$$
\mathbf{R} = \begin{bmatrix} n_x & o_x & a_x \\ n_y & o_y & a_y \\ n_z & o_z & a_z \end{bmatrix} = \begin{bmatrix} \mathbf{n} & \mathbf{o} & \mathbf{a} \end{bmatrix} \tag{1.3}
$$

The column vectors n, o, and a of matrix R are mutually orthogonal since they represent the unit vectors of an orthonormal frame. It then follows that:

$$
\mathbf{n}^T\mathbf{o} = 0 \qquad \mathbf{o}^T\mathbf{a} = 0 \qquad \mathbf{a}^T\mathbf{n} = 0 \tag{1.4}
$$

From the same property of orthogonality, they have unit norm:

$$
\mathbf{n}^T\mathbf{n} = 1 \qquad \mathbf{o}^T\mathbf{o} = 1 \qquad \mathbf{a}^T\mathbf{a} = 1 \tag{1.5}
$$

Taking further the fact that R is orthogonal, the product R^T R yields two handy properties. The first one concerns the identity matrix:

$$
\mathbf{R}^T\mathbf{R} = \mathbf{I} \tag{1.6}
$$

from which, by post-multiplying both sides of 1.6 by the inverse matrix R⁻¹, we obtain the inverse of a rotation matrix as

$$
\mathbf{R}^T = \mathbf{R}^{-1} \tag{1.7}
$$

This property will come in handy because the computational burden required to compute the inverse of the robot matrix is significantly reduced by simply using the transpose of such a matrix. Moreover, the rotation matrix may also be computed on-line in real-time control schemes, as discussed later in this chapter. For now, let's continue presenting the use of the rotation matrix to describe rotational movement between rigid bodies.

Let's discuss further the components n, o, a of the rotation matrix, extending the discussion up to its application to robot modelling.

1.3.1 Basic rotations

The use of rotation matrices allows a compact representation of the rotational movement of a rigid body. A coordinate frame is attached to the object of interest and its rotation R with respect to a pre-defined reference frame n can be written as ^nR. Sometimes this reference frame is chosen to be the world coordinate frame, giving ^wR.

In this book, rotations are represented using a superscript character for the frame which acts as the reference frame and a subscript character for the new frame. That is, ^wR_r represents the rotation of frame r with respect to the world reference frame w.

Let's start with a simple but illustrative example. Suppose we rotate a given reference frame n with respect to the world reference frame, yielding ^wR_n. Both frames have a common origin O and hence the relationship between them is purely rotational. This is shown in Figure 1.1.

Figure 1.1: Rotation of a frame (x′, y′, z′) with respect to the (x, y, z) world reference frame

The unit vectors of the rotated frame n (x′, y′, z′) can be described with respect to the world frame w (x, y, z) through their components as follows:

$$
x' = \begin{bmatrix} \cos\theta \\ \sin\theta \\ 0 \end{bmatrix} \qquad
y' = \begin{bmatrix} -\sin\theta \\ \cos\theta \\ 0 \end{bmatrix} \qquad
z' = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \tag{1.8, 1.9}
$$

To see this, simply remember that the unit vector x′ can be decomposed into its components v_i = cos θ and v_j = sin θ along i and j. Then, given that each of the column vectors in Equation 1.8 represents the projection onto the orthogonal components (i, j, k), building the matrix for a rotation θ about the z axis is straightforward: we just gather all the columns, yielding

$$
{}^{w}R_{n}(\theta)_z = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{1.10}
$$

Now it is possible to define the rotations about the y and x axes as follows:

$$
{}^{w}R_{n}(\theta)_y = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \tag{1.11}
$$

$$
{}^{w}R_{n}(\theta)_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} \tag{1.12}
$$
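
As a quick numerical illustration of the elementary rotations in Equations 1.10–1.12 and of the orthogonality properties 1.6–1.7, the following Python sketch (NumPy is assumed here; it is not part of the text) builds the three matrices and checks that the transpose acts as the inverse.

```python
import numpy as np

def rot_z(theta):
    """Elementary rotation about the z axis (Equation 1.10)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_y(theta):
    """Elementary rotation about the y axis (Equation 1.11)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def rot_x(theta):
    """Elementary rotation about the x axis (Equation 1.12)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s,  c]])

if __name__ == "__main__":
    R = rot_z(0.3) @ rot_y(-0.8) @ rot_x(1.1)   # a composite rotation
    # Orthogonality: R^T R = I (Equation 1.6), so R^T acts as R^{-1} (Equation 1.7)
    assert np.allclose(R.T @ R, np.eye(3))
    assert np.allclose(R.T, np.linalg.inv(R))
```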

In the following chapters, we use these descriptions to build up complex rotation relations from one frame to another. The next section reviews some methods to express such rotations through a more compact representation.

1.3.2 Angle-Axis Representation

A different form of representing the rotation of a rigid body is often used in trajectory planning for the robot end-effector. Such a representation is based on defining a rotation axis r with respect to a given reference frame and the required angle of rotation. This results in a non-minimal but compact representation [3].

Let r = [r_x r_y r_z]^T be the unit vector representing such a rotation axis and θ the required rotation angle. Finding the rotation matrix may proceed as follows:

• Rotate by an angle −α about the z axis until the vector is coplanar with the (x, z) plane, i.e. it lies over the x axis. Then rotate about the y axis by an angle −β to align the vector with the z axis. This defines the values of the required angles α and β.

• Rotate the recently aligned vector by angle θ about axis z.

• Finally, return the vector r to its original position by rotating it about the y axis by β and subsequently about the z axis by an angle α.

If this procedure is expressed using rotation matrices, as studied in the last section, we have:

$$
R(\theta, r) = R_z(\alpha)\, R_y(\beta)\, R_z(\theta)\, R_y(-\beta)\, R_z(-\alpha) \tag{1.13}
$$

Note that the rotations are applied in right-to-left order. In order to compute the rotation matrix R(θ, r), a significant simplification can be made if the angles α and β are expressed as functions of the components of vector r. This results in the following expressions:

$$
\sin\alpha = \frac{r_y}{\sqrt{r_x^2 + r_y^2}} \qquad
\cos\alpha = \frac{r_x}{\sqrt{r_x^2 + r_y^2}} \qquad
\sin\beta = \sqrt{r_x^2 + r_y^2} \qquad
\cos\beta = r_z \tag{1.14}
$$

Then, writing cos θ as cθ and sin θ as sθ, the rotation matrix for a given angle θ about an axis r can be defined as:

$$
R(\theta, r) = \begin{bmatrix}
r_x^2(1-c\theta) + c\theta & r_x r_y(1-c\theta) - r_z s\theta & r_x r_z(1-c\theta) + r_y s\theta \\
r_x r_y(1-c\theta) + r_z s\theta & r_y^2(1-c\theta) + c\theta & r_y r_z(1-c\theta) - r_x s\theta \\
r_x r_z(1-c\theta) - r_y s\theta & r_y r_z(1-c\theta) + r_x s\theta & r_z^2(1-c\theta) + c\theta
\end{bmatrix} \tag{1.15}
$$

In robot modelling it is often required to compute the inverse procedure, that is, given a rotation matrix, to determine the axis vector and the corresponding rotation angle. Re-writing the rotation matrix 1.3 with simple indexes for clarity, we have

$$
R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \tag{1.16}
$$

from which the following expressions can be obtained:

$$
\theta = \cos^{-1}\!\left(\frac{r_{11}+r_{22}+r_{33}-1}{2}\right) \qquad
r = \frac{1}{2\sin\theta}\begin{bmatrix} r_{32}-r_{23} \\ r_{13}-r_{31} \\ r_{21}-r_{12} \end{bmatrix} \tag{1.17}
$$

Finally, let's review some important remarks about the angle-axis representation:

• When obtaining the four parameters (three vector components and the angle value) from a rotation matrix, the three components of the resulting r vector are constrained by the condition:

$$
r_x^2 + r_y^2 + r_z^2 = 1 \tag{1.18}
$$

• Also, if θ = 0 then Equation 1.17 is undetermined and another procedure is required to obtain the inverse. This procedure involves exploring particular components of matrix R in expression 1.16 and finding a set of equations to solve for θ = 0 and θ = π. An important singularity of this representation also appears here: when θ = 0, the unit vector r is arbitrary.

• Finally, note that for a given matrix R the following property is found:

$$
R(-\theta, -r) = R(\theta, r) \tag{1.19}
$$

This means that a rotation in the opposite direction (−θ) about an axis −r cannot be distinguished from a rotation (θ) about an axis r.
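
As a sketch of how Equations 1.15 and 1.17 can be used in practice (Python with NumPy assumed; the helper names are illustrative and not from the text), the following fragment converts an angle-axis pair to a rotation matrix and back, and round-trips the result.

```python
import numpy as np

def angle_axis_to_rot(theta, r):
    """Rotation matrix for an angle theta about a unit axis r (Equation 1.15)."""
    rx, ry, rz = r
    c, s, v = np.cos(theta), np.sin(theta), 1.0 - np.cos(theta)
    return np.array([
        [rx*rx*v + c,    rx*ry*v - rz*s, rx*rz*v + ry*s],
        [rx*ry*v + rz*s, ry*ry*v + c,    ry*rz*v - rx*s],
        [rx*rz*v - ry*s, ry*rz*v + rx*s, rz*rz*v + c],
    ])

def rot_to_angle_axis(R):
    """Inverse computation (Equation 1.17); undefined when sin(theta) = 0."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    r = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta, r

if __name__ == "__main__":
    axis = np.array([1.0, 2.0, 2.0]) / 3.0      # unit vector, satisfies Eq. 1.18
    R = angle_axis_to_rot(0.7, axis)
    theta, r = rot_to_angle_axis(R)
    assert np.isclose(theta, 0.7) and np.allclose(r, axis)
```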

1.3.3 Representation of Rotation using Quaternions

Now it is time to introduce a useful tool to overcome many of the drawbacks imposed by the angle-axis representation reviewed in the last section. Quaternions were first introduced as the result of a search for a new way of representing complex numbers [4]. In the 1980s, Shoemake [5] presented a more efficient method to compute the rotation of a rigid body in computer graphics using quaternions. The method increases computational efficiency because fewer additions and products are required to define the pose of a rigid object.

Quaternions provide a more complete representation of rotations, distinguishing between rotations of different sign and avoiding the singularity when the angle θ approaches π. They later proved to be a useful tool in robotics to generate smooth trajectories without exceeding the physical limits of the robot's links. Moreover, they play an active role in some methods of path planning and trajectory generation. For all these reasons, they are included in this overview.

A quaternion Q is defined by 4 parameters: one scalar value ς and a three-component vector υ.

$$
Q(\varsigma, \upsilon) = (\varsigma, \upsilon_x, \upsilon_y, \upsilon_z) \tag{1.20}
$$

Starting from the angle-axis representation studied in section 1.3.2, with θ and r being the angle and the rotation vector respectively, we can easily define the scalar and vector components of Q as:

$$
\varsigma = \cos\frac{\theta}{2} \qquad
\upsilon = \sin\frac{\theta}{2}\, r = \sin\frac{\theta}{2}\,(r_x, r_y, r_z) \tag{1.21}
$$

By construction, the scalar and vector parameters of a quaternion are constrained by the condition:

$$
\varsigma^2 + \upsilon_x^2 + \upsilon_y^2 + \upsilon_z^2 = 1 \tag{1.22}
$$

which gives origin to the unit quaternion.

Let's first present some insights into basic quaternion algebra to explain how the drawbacks of the angle-axis (Euler angle-vector) notation can be avoided. Define a pair of quaternions Q1 and Q2:

$$
\begin{aligned}
Q_1 &= (\varsigma_1, \upsilon_1) = (\,\varsigma_1, [\,\upsilon_{1x}, \upsilon_{1y}, \upsilon_{1z}\,]\,) \\
Q_2 &= (\varsigma_2, \upsilon_2) = (\,\varsigma_2, [\,\upsilon_{2x}, \upsilon_{2y}, \upsilon_{2z}\,]\,)
\end{aligned} \tag{1.23}
$$

Quaternion Q1 is equal to Q2 if and only if all their components are equal, that is (ς1 = ς2), (υ1x = υ2x), (υ1y = υ2y), (υ1z = υ2z). The addition and difference of quaternions are calculated element-wise, as follows:

$$
\begin{aligned}
Q_1 + Q_2 &= (\varsigma_1 + \varsigma_2,\; \upsilon_{1x} + \upsilon_{2x},\; \upsilon_{1y} + \upsilon_{2y},\; \upsilon_{1z} + \upsilon_{2z}) \\
Q_1 - Q_2 &= (\varsigma_1 - \varsigma_2,\; \upsilon_{1x} - \upsilon_{2x},\; \upsilon_{1y} - \upsilon_{2y},\; \upsilon_{1z} - \upsilon_{2z})
\end{aligned} \tag{1.24}
$$

The quaternion multiplication resembles the cross product between two vectors. Starting with a multiplication between two quaternions whose scalar part is equal to 0, we have

$$
\begin{aligned}
Q_1 &= (0, \upsilon_1) \qquad Q_2 = (0, \upsilon_2) \\
Q_1 * Q_2 &= -\upsilon_{1x}\upsilon_{2x} - \upsilon_{1y}\upsilon_{2y} - \upsilon_{1z}\upsilon_{2z} +
\begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \upsilon_{1x} & \upsilon_{1y} & \upsilon_{1z} \\ \upsilon_{2x} & \upsilon_{2y} & \upsilon_{2z} \end{vmatrix} \\
Q_1 * Q_2 &= (-\upsilon_1 \cdot \upsilon_2,\; \upsilon_1 \times \upsilon_2)
\end{aligned} \tag{1.25}
$$

The multiplication of two general quaternions is completed with the terms corresponding to the scalar product ς1 ς2 and the scalar-vector products of each scalar times the opposite vector component, i.e. ς1(υ2x, υ2y, υ2z) and ς2(υ1x, υ1y, υ1z), as follows

$$
\begin{aligned}
Q_1 * Q_2 &= \varsigma_1\varsigma_2 - \upsilon_{1x}\upsilon_{2x} - \upsilon_{1y}\upsilon_{2y} - \upsilon_{1z}\upsilon_{2z}
+ \varsigma_1(\upsilon_{2x}, \upsilon_{2y}, \upsilon_{2z})
+ \varsigma_2(\upsilon_{1x}, \upsilon_{1y}, \upsilon_{1z})
+ \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \upsilon_{1x} & \upsilon_{1y} & \upsilon_{1z} \\ \upsilon_{2x} & \upsilon_{2y} & \upsilon_{2z} \end{vmatrix} \\
Q_1 * Q_2 &= (\varsigma_1\varsigma_2 - \upsilon_1 \cdot \upsilon_2,\; \varsigma_1\upsilon_2 + \varsigma_2\upsilon_1 + \upsilon_1 \times \upsilon_2)
\end{aligned} \tag{1.26}
$$

It is time to define how a rotation matrix can be computed from quaternion components. Using the results in Equation 1.21, the rotation matrix takes the form

$$
R(\varsigma, \upsilon) = \begin{bmatrix}
2(\varsigma^2 + \upsilon_x^2) - 1 & 2(\upsilon_x\upsilon_y - \varsigma\upsilon_z) & 2(\upsilon_x\upsilon_z + \varsigma\upsilon_y) \\
2(\upsilon_x\upsilon_y + \varsigma\upsilon_z) & 2(\varsigma^2 + \upsilon_y^2) - 1 & 2(\upsilon_y\upsilon_z - \varsigma\upsilon_x) \\
2(\upsilon_x\upsilon_z - \varsigma\upsilon_y) & 2(\upsilon_y\upsilon_z + \varsigma\upsilon_x) & 2(\varsigma^2 + \upsilon_z^2) - 1
\end{bmatrix} \tag{1.27}
$$

As usual in this study, we also provide the inverse computation: given a rotation matrix R as in Equation 1.16, compute the corresponding quaternion as

$$
\begin{aligned}
\varsigma &= \tfrac{1}{2}\sqrt{r_{11} + r_{22} + r_{33} + 1} \\
\upsilon &= \tfrac{1}{2}\begin{bmatrix}
\operatorname{sgn}(r_{32} - r_{23})\sqrt{r_{11} - r_{22} - r_{33} + 1} \\
\operatorname{sgn}(r_{13} - r_{31})\sqrt{r_{22} - r_{33} - r_{11} + 1} \\
\operatorname{sgn}(r_{21} - r_{12})\sqrt{r_{33} - r_{11} - r_{22} + 1}
\end{bmatrix}
\end{aligned} \tag{1.28}
$$

with:

$$
R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \qquad
\operatorname{sgn}(x) = \begin{cases} 1 & x \ge 0 \\ -1 & x < 0 \end{cases}
$$

Finally, some important remarks about quaternions. First, notice in the last expression that ς ≥ 0 corresponds to an angle within the interval θ ∈ [−π, π]. This means that, through quaternions, any rotation can be represented and therefore the singularity found in Equation 1.17 is overcome.

Second, in spite of the convenience of using quaternions as an elementary operator to characterize rotations, they had not been a popular choice until recent years. Shoemake reckoned that this is perhaps a result of quaternions not being included in standard mathematics and physics curricula [5].

Finally, there are abundant sources for expanding on quaternion theory. A handy overview is presented by Shoemake in [5]. Some textbooks include useful reviews of quaternions applied to robotics, such as Brady [6], Crane [7] and, more recently, Siciliano [3]. Funda and Paul also published in [8] a comprehensive comparison of quaternions and homogeneous transformations in robotics.

It is now time to use the concepts presented above to introduce homogeneous transforms in the next section, clearing the way for a complete kinematic model of a robotic arm later in this chapter.

1.3.4 Homogeneous Transform Representation

The concept of the homogeneous transformation emerges naturally from expressing rotation and translation in one entity. This study begins by analyzing an orthonormal coordinate system which can be attached to a point lying on the surface of an object. The object's position and orientation are thus defined by the relationship between this coordinate frame and some other coordinate frame.

The description of the position and orientation of such an object is commonly called the object's pose. The pose can be defined with respect to different coordinate systems, such as the world reference frame w, by means of a three-component position vector t defined as in Equation 1.1 and a rotation matrix like that in Equation 1.3. If they are integrated into the same mathematical expression, then a homogeneous transform matrix like that in Equation 1.29 emerges.

$$
{}^{m}T_{n} = \begin{bmatrix} {}^{m}R_{n} & {}^{m}p_{n} \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix}
n_x & o_x & a_x & p_x \\
n_y & o_y & a_y & p_y \\
n_z & o_z & a_z & p_z \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.29}
$$

Notice again that a homogeneous transformation matrix is composed of three orthonormal vectors describing the rotation (n, o, a) and a positional vector (p). As mentioned earlier, the superscript m and the subscript n of the transformation T denote, respectively, the frame in which the resultant pose is expressed and the original frame.

The last row has nearly all its elements equal to zero. As clearly explained by Paul in his well-known book [9], this row may be used to produce scaling, to which the homogeneous transformations are invariant.²

They are commonly known as transformations because the object's position can be transported or "transformed" to its equivalent with respect to a different coordinate system. In terms of matrix algebra, it is possible to express the transformation of a point ^oP defined in the object coordinate system O, with respect to the world³ reference frame W, as follows.

$$
{}^{w}P = {}^{w}R_{o} \cdot {}^{o}P + {}^{w}t_{o} \tag{1.30}
$$

A better picture of this transformation can be obtained if these expressions are expanded as follows (allowing an abuse of notation for clarity in the result):

$$
\begin{bmatrix} {}^{w}p_{ox} \\ {}^{w}p_{oy} \\ {}^{w}p_{oz} \end{bmatrix} =
\begin{bmatrix} n_x & o_x & a_x \\ n_y & o_y & a_y \\ n_z & o_z & a_z \end{bmatrix}
\begin{bmatrix} p_{ox} \\ p_{oy} \\ p_{oz} \end{bmatrix} +
\begin{bmatrix} p_{wx} \\ p_{wy} \\ p_{wz} \end{bmatrix} \tag{1.31}
$$

It is important at this point to present the inverse transform matrix, because it is often used in subsequent sections. To calculate such an inverse, we start by considering again Equation 1.7, where the inverse of the rotation matrix R was introduced. Notice that computing R⁻¹ from the rotational component R is straightforward, but some calculations need to be done on the translation vector p. The whole expression to compute the inverse transform is shown in Equation 1.32.

²It is too early to discuss projections, but notice that this term is also used in projective geometry, because a transformation in Euclidean space is actually a subset of the transformations of projective geometry. This will be further discussed in the following chapters.

³In this book, terms like world coordinate system and world reference frame refer to the same concept. They are used interchangeably.

$$
{}^{m}T_{n}^{-1} = \begin{bmatrix}
n_x & n_y & n_z & -p \cdot n \\
o_x & o_y & o_z & -p \cdot o \\
a_x & a_y & a_z & -p \cdot a \\
0 & 0 & 0 & 1
\end{bmatrix} =
\begin{bmatrix}
n_x & n_y & n_z & -p_x n_x - p_y n_y - p_z n_z \\
o_x & o_y & o_z & -p_x o_x - p_y o_y - p_z o_z \\
a_x & a_y & a_z & -p_x a_x - p_y a_y - p_z a_z \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.32}
$$

Before carrying on with the use of homogeneous transforms to describe robotic chains, we need to clarify some points. Matrix transforms are frequently used to express sequential rotations among several frames by simply constructing a chain matrix product, representing each frame with the corresponding transform matrix. Equation 1.33 shows a multiple transformation to "transport" a given point ^5p, defined with respect to the 5th coordinate frame, to a new point defined in terms of the base 0th coordinate frame (sometimes the 0th coordinate system is assigned to the world coordinate frame W). In this case each frame has simply been numbered. The transformation can thus be expressed as

$$
{}^{0}p = {}^{0}T_{1} \cdot {}^{1}T_{2} \cdot {}^{2}T_{3} \cdot {}^{3}T_{4} \cdot {}^{4}T_{5} \cdot {}^{5}p \tag{1.33}
$$

The transformation is compacted through a simple matrix product of the corresponding set of homogeneous transforms, as shown in Equation 1.33. A useful extension of this idea is to manipulate the factors of this product through the inverse transform so as to obtain the value of one unknown matrix in the chain. For instance, suppose that the chain of transforms required to take a point ^3p from coordinate system 3 to frame 0 is equal to ^0T_3, as illustrated below.

$$
{}^{0}T_{3} = {}^{0}T_{1} \cdot {}^{1}T_{2} \cdot {}^{2}T_{3} \tag{1.34}
$$

Now, imagine that for some reason we do not know the value of ^1T_2. Then, using simple matrix algebra and computing the inverse transform matrices as before, it yields

$$
\begin{aligned}
{}^{0}T_{3} &= {}^{0}T_{1} \cdot {}^{1}T_{2} \cdot {}^{2}T_{3} \\
({}^{0}T_{1})^{-1} \cdot {}^{0}T_{3} \cdot ({}^{2}T_{3})^{-1} &= {}^{1}T_{2} \cdot {}^{2}T_{3} \cdot ({}^{2}T_{3})^{-1} = {}^{1}T_{2} \cdot I \\
{}^{1}T_{2} &= ({}^{0}T_{1})^{-1} \cdot {}^{0}T_{3} \cdot ({}^{2}T_{3})^{-1}
\end{aligned} \tag{1.35}
$$
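
A small numerical sketch of this manipulation is given below (Python with NumPy assumed; the helper names are not from the text). It builds three homogeneous transforms, applies the inverse of Equation 1.32 and recovers the middle transform of the chain as in Equation 1.35.

```python
import numpy as np

def homogeneous(R, p):
    """Assemble a 4x4 homogeneous transform from R (3x3) and p (3,) (Equation 1.29)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def inverse_transform(T):
    """Inverse of a homogeneous transform (Equation 1.32)."""
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3], Ti[:3, 3] = R.T, -R.T @ p
    return Ti

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

if __name__ == "__main__":
    T01 = homogeneous(rot_z(0.3), [1.0, 0.0, 0.5])
    T12 = homogeneous(rot_z(-0.8), [0.2, 0.4, 0.0])
    T23 = homogeneous(rot_z(1.1), [0.0, 0.3, 0.1])
    T03 = T01 @ T12 @ T23                       # Equation 1.34
    # Recover the "unknown" middle transform as in Equation 1.35
    T12_recovered = inverse_transform(T01) @ T03 @ inverse_transform(T23)
    assert np.allclose(T12_recovered, T12)
```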

This simple procedure will often be used to manipulate the kinematic model of the robot. The nature of homogeneous transforms leads intuitively to using them to model robotic chains, attaching a coordinate frame to each robotic link and then expressing their relationships through transform matrices. This concept is the foundation of well-known disciplines such as robot kinematics and robot dynamics. The next section is devoted to the kinematic analysis of robotic chains.

1.4 Robot Kinematics

The last section presented several methods to describe the pose of a rigid body. This section develops the mathematical framework for using such representations to describe the position, velocity and acceleration of the links forming a robot. In general, the study of the relationships between links without regard to the forces which cause such motion is known as robot kinematics [10].

Although there exist many different kinds of robots, this section focuses on serial revolute robotic manipulators. For the kinematics of other kinds of robotic elements, the reader is referred to sources such as [7].

This study starts by describing the most fundamental components of a robotic structure, the links and joints, whereas more advanced kinematic properties are considered in subsequent sections.

1.4.1 Robot Links and Joints

The links of a robotic structure are considered to be rigid bodies forming a mechanical chain or assemblage which is attached at one end to the ground and holds an end-effector tool or gripper at the other. Usually this kind of setup is named a serial manipulator, or also an open-chain manipulator. In this manipulator class, n + 1 links are connected through n joints, where link 0 is usually firmly attached to the ground.

A joint is the mechanical unit which provides the mechanical linkage between two links. The nature of the joint determines the type of connection between those links. Several kinds of joints are usually considered: revolute, prismatic, cylindric and screw. For practical reasons, only revolute and prismatic joints are considered in this work. Figure 1.2 sketches the representation used in this work for each of these kinds of links.

Figure 1.2: Representation of Revolute and Prismatic Links

The revolute joint is one of the simplest and most common. Two configurations are commonly found among revolute joints: collinear and orthogonal. In the first, the axis of rotation of the link is coincident with the central axis of the proximal link, as shown in Figure 1.3(a). In the orthogonal revolute joint, the rotation axis or principal joint axis is normal to the proximal link (see Figure 1.3(b)).

Figure 1.3: Revolute Joints, (a) Collinear, (b) Orthonormal

On the other hand, a prismatic joint can only provide sliding motion, with the joint axis coincident with the sliding section of the link. As with revolute joints, there are collinear and orthonormal prismatic joints based on the same principle (see Figure 1.4). Prismatic joints are not developed further in this book given that the robot used in our research belongs to the revolute class.


Figure 1.4: Prismatic Joints, (a) Collinear, (b) Orthonormal

These arrangements therefore provide only one degree of freedom (DOF) per joint in the robotic chain, so the overall DOF of a serial-link robot is considered equal to the number of joints n, usually written n-DOF. Every single degree of mobility provided by each revolute joint can thus be captured through a joint variable θ_n, where n stands for the n-th joint.

It is now time to study how the position of a robotic link can be defined in terms of the homogeneous transform matrices presented before. The main goal is to define, in a compact way, the position and orientation of the robot's last actuator with respect to a given reference coordinate frame. This is also known as direct kinematics.

1.4.2 Kinematics Analysis

The aim of kinematic analysis is to determine the pose, i.e. the location and orientation, of the robot's end-effector as a function of the joint variables. Let's analyze Figure 1.5 to discuss the desired final pose of the end-effector. In the graph, albeit not visible, a robotic chain lies fixed to the ground. The origin is the world coordinate frame W(x, y, z). Another coordinate frame E(x′, y′, z′) is attached to the end-effector. This frame E may be expressed with respect to the world coordinate frame through a positional vector ^wp_e and a rotation matrix ^wR_e which specifies the orientation. This can be compacted into a transform matrix, as explained in section 1.3.4, to define the relationship between frames W and E.

Figure 1.5: End-effector Coordinate frame

It is important to define a standard frame orientation at the end-effector, as shown in Figure 1.5. A common practice (see [9]) is to define the tool orientation with the three unit vectors directed as follows. The z vector points towards the object of interest, signalling the movement required to reach such an object; it is therefore known as the approach vector a. The component in direction y is known as the orientation vector o and is usually set to point from fingertip to fingertip, defining the orientation of the last actuator. Finally, the vector in the direction normal to the other two is known as the normal vector n. Given the orthogonality property, this vector may be calculated as the cross product n = o × a. Notice also that, for practical reasons, the same notation n, o, a was used to identify each column of the homogeneous transformation matrix; this facilitates the interpretation through homogeneous matrices.

1.4.3 Denavit-Hartenberg Parameters

Figure 1.5 illustrates the last step of the kinematic analysis, whose aim is to define the relation between the world frame and the end-effector frame. This will enable us to design specific controllers to drive the arm over a set of points or a desired trajectory, which may be specified in terms of directions, velocities and accelerations for each robot link. Hence the kinematic analysis gains its relevance as a way to analyze intermediate links and joints in a compact way.

The first part of the method is based on the classical paper of Denavit and Hartenberg, which discusses a kinematic notation for mechanisms [11]. This is a well-known standard which is explained in almost every robotic engineering book (for instance see [12, 13, 14]). Essentially, the method describes the motion of each robot link as a rigid body whose interaction with the other links is characterized using the homogeneous transforms reviewed in the last section. The first step is to assign a coordinate system to each of the links forming the robotic arm. The overall arm motion is expressed as the sequence of homogeneous relations between all frames previously assigned to each link. The motion of a frame attached to a given link (n) is defined with respect to the motion of the frame attached to the previous link (n − 1).

Basically, the method requires assigning one vector, called the Joint Vector, to each joint in the chain. Such a vector is normally oriented along the joint axis, i.e. the axis of rotation in a revolute joint. When the direction of each joint vector is established, a coordinate frame may be attached to the joint taking the joint vector as the third component of the coordinate frame (usually identified as z) and following the right-hand rule.

Considering that the robotic mechanism is attached at one end to the ground and has an end-effector device at the other end, the method should be developed link by link following the mechanical chain, starting from the ground-fixed link. Obviously, this is only valid for robots formed by serial links. For a wider discussion including different kinds of robotic elements, the interested reader can review [9, 15, 3, 7].

This short description certainly does not account for all the insights of the method. Let's now review the method in more detail on a step-by-step basis, following the scheme sketched in [7].

Step 1: Define Joint Vectors

Given that the first link is usually attached to the ground and each joint should have a coordinate frame attached to it, the DH convention establishes that the joint vector of a revolute joint usually coincides with its axis of rotation.⁴ Some authors refer to this component as the axis vector; both terms are used interchangeably in this text.

Normally, this vector is taken as the z axis of the coordinate frame. In Figure 1.6, the joint vector is shown for two revolute joints.

Figure 1.6: Joint vector in two revolute joints

Step 2: Define Link Vectors

The link vector is commonly defined along the common normal drawn between the axes of two adjacent joints, those defined in step one. In the special case where the two joint axes are parallel, the location, but not the direction, of this vector is chosen arbitrarily.

⁴The same applies for cylindric and screw-pair joints, not reviewed here.

As shown in Figure 1.7, for a simple planar revolute joint, the direction of the link vector determines θ_n, which accounts for the Joint Angle measured with respect to the x axis of the attached frame. The x axis is in fact rotated by the angle θ, generating x′, which determines the analytical expression relating one coordinate frame to the following frame attached to the next robot link. This is the aim of the next step.

Figure 1.7: Joint vector in two revolute joints

Step 3: Name Joint and Twist Angles

This step is crucial for the proper definition of the attached frames. We have already seen that the Joint Angle is defined as the angle measured between the link vector and the x axis of the attached frame, following the right-hand convention. In typical revolute joints, this is the variable to be controlled. Recall that in order to proceed to the next link, the frame should be rotated about the z axis by an angle θ. This matches the x_{n−1} axis of link n − 1 to the link vector x_n of the next link n.

So far these conventions work because Figure 1.7 considers two links with parallel joint axes, but this is not normally the case with real industrial robot links. The parameter which accounts for the displacement angle between two consecutive joint axes, after the initial rotation of the x axis (generating the x′ axis), is called the Twist Angle. It is defined as the angle required to match the joint axis z_{n−1} of link n − 1 to that of joint n, named z_n, by means of a rotation about the displaced x axis (renamed x′). This can be better understood in Figure 1.8, where a twist angle is represented by the variable α_{n−1}. Notice that the twist angle α_n is a rotation about axis x_n and refers to the next joint (n + 1), not visible in the image. From the sketch, it is evident that link n + 1, which can be seen incomplete in the figure, has its joint vector pointing vertically, as the first joint n − 1 does.

Figure 1.8: Joint vector in two revolute joints

Before continuing to the next section, we should gather some remarks. As indicated in the graphs, the axis pointing through the centre of rotation is known as the Joint Axis and is usually assigned to the frame component z. The x axis points normal to the z axis by definition and, together with the Link Vector, forms the Joint Angle. The y axis is located normal to the x–z plane following the right-hand convention.

Step 4: Determine offset and link length

This step completes the kinematic parameter definition. Figure 1.9 shows three links of the mechanism where the Joint vector (z) and the Link vector (x) are assigned according to the rules discussed before. To complete the kinematic description, it is necessary to define the Link Offset (d) as the distance between the link vectors of two subsequent links (for instance n − 1 and n). This parameter is calculated as the length of the normal vector between axes x_{n−1} and x_n, after applying the rotation specified by the Twist Angle to match the direction of both z axes in consecutive links.

The Link Length (a) is easily defined as the normal distance between two consecutive Joint Vectors, for instance z_{n−1} and z_n, along the link vector x′; i.e. the magnitude of such a vector is considered the link length.

Step 5: Build the table of Kinematic parameters

This is the concluding step in the model construction. The aim is to gather all the parameters obtained in the previous steps into one parameter set. It is common practice among the robotics community to use this format to compact the kinematic information. Furthermore, most of the commercial software available to simulate kinematic arrangements requires the robot parameters to be supplied in this way. The parameters are usually organized in a format similar to Table 1.1.

Link | Length a_i | Twist angle α_i | Offset d_i | Variable θ_i/φ_i
L1   | a_1        | α_1             | d_1        | θ_1
L2   | a_2        | α_2             | d_2        | θ_2
...  | ...        | ...             | ...        | ...
Ln   | a_n        | α_n             | d_n        | θ_n

Table 1.1: Denavit-Hartenberg Parameter Table

It is important to notice that these parameters are essential for obtaining the kinematic expressions. For instance, in the next section the DH convention naturally supports the computation of the homogeneous matrices representing the frames attached to each of the robot links.

To close the description of the DH convention, Table 1.2 summarizes every step of the method. The determination of the kinematic matrices, called forward kinematics, is the subject of the next section.

Figure 1.9: Denavit-Hartenberg parameters defined for three links of a serial manipulator

1.4.4 Direct Kinematics

Step | Description
I    | Assign the Joint Vectors matching each joint axis.
II   | Define the Link Vectors between every two consecutive links.
III  | Name each joint in the robotic chain (θ_n).
IV   | Find the Twist Angle (α_n).
V    | Determine the Offset (d_n) and Link Length (a_n).
VI   | Complete Table 1.1 with all kinematic parameters.

Table 1.2: Denavit-Hartenberg Method, Steps Overview

As discussed in section 1.4.2, the aim of the kinematic analysis is to determine the pose of the robot's last actuator. By attaching a coordinate frame to each robot joint according to the DH convention, it is possible to define the overall direct kinematic relationship between the base link and the end-effector.

This section presents the mathematical expression which relates the coordinate frame attached to a given link to the coordinate frames attached to the links before and after it.⁵ The step-by-step procedure was presented in the last section; the aim of this section is to take those assumptions and build the kinematic matrix. Therefore, this discussion assumes that the kinematic parameters have been fully determined and the corresponding link transform matrix can be built for each link.

Kinematics Matrix

Taking a second look at Figure 1.8, let's assume that the assignment of frames starts at link n − 1. Following the DH convention, the z axis points along the Joint Axis and the x axis is simply normal to this vector. As discussed, a first rotation about the newly defined z axis is required to match the x axis to the link vector. Recalling the notation presented in [9], this can be written as Rot(z, θ), where the first parameter indicates the axis of rotation and the second the rotation angle, represented by θ.

Now, displacing the attached frame in the direction of the rotated z axis by the distance defined in the offset parameter d, the "height" from one frame to the next is matched. Using again the notation from [9], and changing the operation to a translation along the third axis (z), this can be represented by Trans(0, 0, d).

The next step is to displace the resulting frame along the link vector, by the length defined in the parameter a_{n−1}, which accounts for the length of the link (see step 4). Using again the pseudo-notation, this can be written as Trans(a, 0, 0).

Finally, the last step is to rotate the frame about the x axis to match the z axes of both frames. The required angle is specified by the twist angle (α) and can be represented by Rot(x, α). A graphical view of these steps applied to a generic serial anthropomorphic arm of 3-DOF is presented in Figure 1.10, while Table 1.3 shows the corresponding kinematic parameters.

⁵They can also be named the distal and proximal links respectively.

Figure 1.10: Graphic example to show how to locate and displace the coordinate axes for a generic 3-DOF robot

Back to modelling, each of these steps can be compacted into an equation describing the relationship between two consecutive frames. If such frames are named n and m, we can express this as

$$
{}^{m}T_{n} = \mathrm{Rot}(z_m, \theta_m)\,\mathrm{Trans}(0, 0, d_m)\,\mathrm{Trans}(a_m, 0, 0)\,\mathrm{Rot}(x_m, \alpha_m) \tag{1.36}
$$

Link | Length a_i | Twist angle α_i | Offset d_i | Variable θ_i/φ_i
L1   | 0          | π/2             | d_1        | θ_1
L2   | a_2        | 0               | 0          | θ_2
L3   | a_3        | 0               | 0          | θ_3

Table 1.3: DH Parameters for the anthropomorphic arm shown in Figure 1.10

Expressing each component in terms of homogeneous transforms, as presented in section 1.3.4, yields

$$
{}^{m}T_{n} =
\begin{bmatrix}
\cos\theta & -\sin\theta & 0 & 0 \\
\sin\theta & \cos\theta & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & a \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & d \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\alpha & -\sin\alpha & 0 \\
0 & \sin\alpha & \cos\alpha & 0 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.37}
$$

from which emerges the classical homogeneous transform relating the coordinate frame of link n to that of link m, as follows

$$
{}^{m}T_{n} =
\begin{bmatrix}
\cos\theta & -\sin\theta\cos\alpha & \sin\theta\sin\alpha & a\cos\theta \\
\sin\theta & \cos\theta\cos\alpha & -\cos\theta\sin\alpha & a\sin\theta \\
0 & \sin\alpha & \cos\alpha & d \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.38}
$$
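
The transform of Equation 1.38 is straightforward to implement. The following Python function (NumPy assumed; the name dh_transform is illustrative, not from the text) builds the 4×4 matrix from one row of a DH parameter table.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Classical DH link transform (Equation 1.38)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
```

Each row of a DH table such as Table 1.1 then maps to one call of this function, and the product of the resulting matrices gives the chain of Equation 1.39 below.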

Given that the robotic arm is formed by a serial chain of links, it is possible to represent the relationship between the end-effector frame and the base frame through a chain of homogeneous transforms of the form of 1.38, as follows:⁶

$$
{}^{0}T_{6} = {}^{0}T_{1}\,{}^{1}T_{2}\,{}^{2}T_{3}\,{}^{3}T_{4}\,{}^{4}T_{5}\,{}^{5}T_{6} \tag{1.39}
$$

Notice that in this example the robot has 6 transforms, one per joint, which means that each joint can be represented by an independent variable θ_n as long as such a joint is actuated, i.e. has its own actuator.

Given that a robot manipulator is usually an open kinematic chain, whose joint positions are defined by a vector of single variables θ_n, the number of joints generally equals the number of degrees of freedom.

The number of Degrees of Freedom (DOF) of a robot manipulator is the number of independent position variables that have to be specified in order to define the location of all the links in the arm. Generally speaking, it defines in how many ways the manipulator is able to move.

⁶There exists a well-known modified DH convention, presented by Craig in [16], which results in a different matrix. For more information, refer to the final remarks of this section.

In order to cover the entire 3D workspace with a 3-component orientation vector, six degrees of freedom are thus sufficient.

For a given robot developing a given task, if there are more degrees of mobility in the robot than degrees of freedom required by the task, then the manipulator is said to be kinematically redundant for that task.

It is too early in this work to discuss redundancy, but notice for now that this concept will be recalled later to develop more advanced ideas regarding inverse robot kinematics and motion control.

Although the idea of expressing the whole robot kinematics in just one entity is attractive, such a matrix is generally formed by elements with a considerable number of terms, which makes it difficult to visualize and understand. Hence, for most robot models, the kinematics is usually shown separately for each of the transform matrices, as in Equation 1.39.

Notice here the relevance of the kinematic parameter table, which eases the process of defining each link transform. Taking the data from Table 1.3, it is relatively easy to build each transform as follows:

$$
{}^{0}T_{1} =
\begin{bmatrix}
\cos\theta_1 & 0 & \sin\theta_1 & 0 \\
\sin\theta_1 & 0 & -\cos\theta_1 & 0 \\
0 & 1 & 0 & d_1 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.40}
$$

$$
{}^{1}T_{2} =
\begin{bmatrix}
\cos\theta_2 & -\sin\theta_2 & 0 & a_2\cos\theta_2 \\
\sin\theta_2 & \cos\theta_2 & 0 & a_2\sin\theta_2 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.41}
$$

$$
{}^{2}T_{3} =
\begin{bmatrix}
\cos\theta_3 & -\sin\theta_3 & 0 & a_3\cos\theta_3 \\
\sin\theta_3 & \cos\theta_3 & 0 & a_3\sin\theta_3 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.42}
$$

Considering just these three links, it is still possible to visualize the whole transform as follows:

$$
{}^{0}T_{3} = {}^{0}T_{1}\,{}^{1}T_{2}\,{}^{2}T_{3} =
\begin{bmatrix}
c_1 c_{23} & -c_1 s_{23} & s_1 & c_1(a_2 c_2 + a_3 c_{23}) \\
s_1 c_{23} & -s_1 s_{23} & -c_1 & s_1(a_2 c_2 + a_3 c_{23}) \\
s_{23} & c_{23} & 0 & a_2 s_2 + a_3 s_{23} + d_1 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.43}
$$

where a simplified notation is used as follows: c_1 = cos θ_1, c_2 = cos θ_2, s_1 = sin θ_1, s_2 = sin θ_2, c_{23} = cos(θ_2 + θ_3), and s_{23} = sin(θ_2 + θ_3).
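
As a quick numerical check of the closed form in Equation 1.43, the sketch below (Python with NumPy assumed; the dh_transform helper from the earlier listing is repeated so the fragment is self-contained, and the link values used are illustrative only) compares the chain product of the three DH transforms with the explicit position expression.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Classical DH link transform (Equation 1.38)."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [0.0,    sa,     ca,    d],
                     [0.0,   0.0,    0.0,  1.0]])

def fk_3dof(q, d1=0.26, a2=0.23, a3=0.24):
    """Forward kinematics 0T3 of the 3-DOF arm of Table 1.3 (illustrative link values)."""
    T01 = dh_transform(0.0, np.pi / 2, d1, q[0])
    T12 = dh_transform(a2, 0.0, 0.0, q[1])
    T23 = dh_transform(a3, 0.0, 0.0, q[2])
    return T01 @ T12 @ T23

if __name__ == "__main__":
    q = np.array([0.2, -0.5, 0.9])
    T03 = fk_3dof(q)
    # Closed-form end-effector position from Equation 1.43
    c1, s1 = np.cos(q[0]), np.sin(q[0])
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    c23, s23 = np.cos(q[1] + q[2]), np.sin(q[1] + q[2])
    p = np.array([c1*(0.23*c2 + 0.24*c23),
                  s1*(0.23*c2 + 0.24*c23),
                  0.23*s2 + 0.24*s23 + 0.26])
    assert np.allclose(T03[:3, 3], p)
```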

Notice that in the case of revolute joints, θ_1, θ_2, θ_3 are the independent variables of the matrix, given that all the others are completely defined in the parameter table. We can thus express this as:

$$
\chi = {}^{0}T_{6}(q) \qquad \text{with} \qquad
q = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_n \end{bmatrix} \tag{1.44}
$$

Link | Length a_i | Twist angle α_i | Offset d_i | Variable θ_i/φ_i
L1   | 0          | π/2             | 0.26       | θ_1
L2   | -0.23      | 0               | -0.013     | θ_2
L3   | -0.24      | 0               | 0          | θ_3
L4   | 0          | −π/2            | 0.075      | θ_4
L5   | 0          | π/2             | 0.044      | θ_5
L6   | 0          | 0               | 0.080      | θ_6

Table 1.4: Complete Denavit-Hartenberg parameter table for the TQ MA2000 6-DOF manipulator

The role of the vector q here is similar to that played by a state-space vector in linear systems. This concept is the main subject discussed below, which develops the idea of the kinematic spaces.

Kinematic Spaces

As mentioned earlier, the aim of robot kinematics is to determine the pose of the robot's end-effector, using such information to perform several robotic tasks such as trajectory following, picking up objects, welding, tracking, etc.

The kinematic information is concentrated in the ^0T_6 transform matrix, which takes the current state of each link, q, and determines the end-effector's Cartesian position and orientation (χ). Expressing this in mathematical terms yields

$$
\chi = {}^{0}T_{6}(q) = K(q) \tag{1.45}
$$

Notice that the joint vector (q) represents each link's current state and can be formed either by a set of θ_n or d_n values, depending on the nature of each link, i.e. revolute or prismatic. In the scope of this book such a vector is always composed of angle values θ_n, because all experiments run on a 6-DOF revolute robot manipulator, forming q as

$$
q = \begin{bmatrix} \theta_1 & \theta_2 & \theta_3 & \theta_4 & \theta_5 & \theta_6 \end{bmatrix}^T \tag{1.46}
$$

Henceforth this vector is called the Joint Space or configuration space vector, because it denotes the space over which each joint variable is defined.

On the other hand, recall that the direct kinematic equation ^mT_n expresses the end-effector's pose with respect to the base. If such a pose is defined as a function of time then it becomes a trajectory. Expressing such a trajectory in terms of the complete transform matrix implies ensuring the orthogonality of the vectorial components n, o, and a at every step. An option is to use a minimal representation such as those discussed in sections 1.3.2 and 1.3.3, or the so-called Euler representation, not reviewed here but very similar to the angle-axis scheme (see Paul [9] for details).

Therefore it is possible to express the robot posture using an m × 1 vector, with m ≤ n, by means of a minimal representation which includes positional and rotational elements as follows:

$$
\chi = \begin{bmatrix} p \\ w \end{bmatrix}
= \begin{bmatrix} p_x & p_y & p_z & w_x & w_y & w_z \end{bmatrix}^T \tag{1.47}
$$

In this representation the parameters are naturally independent and defined in the space over which the task is performed; hence this space is commonly named the Operational Space.
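
A minimal sketch of such a mapping is given below (Python with NumPy assumed; the choice of the angle-axis parametrization of section 1.3.2 for the rotational part w is an assumption made for illustration, since the text leaves the minimal representation open). It packs a homogeneous transform into a 6-component operational-space vector.

```python
import numpy as np

def pose_vector(T):
    """Pack a 4x4 homogeneous transform into the 6x1 vector of Equation 1.47.
    The rotational part w uses the angle-axis representation (theta * r, Equation 1.17)."""
    R, p = T[:3, :3], T[:3, 3]
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):          # singular case: the axis is arbitrary, take w = 0
        w = np.zeros(3)
    else:
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
        w = theta * axis
    return np.concatenate([p, w])
```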

Final Remarks

Before closing this section, it is important to discuss two relevant issues: one regarding the modelling of prismatic joints, and a second one about the modified Denavit-Hartenberg convention.

Transformation Matrix for Prismatic Joints

In the case of revolute joints it is assumed that θ is the controlled variable, while in prismatic joints this role is taken by the offset variable d, given that the motion is performed in the direction pointed by the z axis or axis vector. The parameters θ and α remain in the matrix to account for the link's geometry, while the positional vector components p_x = a · cos θ and p_y = a · sin θ become zero in the transformation matrix (see matrix 1.48 below and compare it with 1.38).

$$
{}^{m}T_{n} =
\begin{bmatrix}
\cos\theta & -\sin\theta\cos\alpha & \sin\theta\sin\alpha & 0 \\
\sin\theta & \cos\theta\cos\alpha & -\cos\theta\sin\alpha & 0 \\
0 & \sin\alpha & \cos\alpha & d \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.48}
$$
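
In code, the only practical difference is which DH parameter is treated as the joint variable. A small sketch (Python, NumPy assumed, names illustrative) extends the earlier dh_transform helper with a joint-type flag so that Equation 1.48 is obtained when the joint is prismatic (a = 0, d variable).

```python
import numpy as np

def dh_link(a, alpha, d, theta, joint="revolute", q=0.0):
    """DH transform where q is the joint variable: theta for revolute joints
    (Equation 1.38) or d for prismatic joints (Equation 1.48)."""
    if joint == "revolute":
        theta = q
    elif joint == "prismatic":
        d = q
    else:
        raise ValueError("joint must be 'revolute' or 'prismatic'")
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [0.0,    sa,     ca,    d],
                     [0.0,   0.0,    0.0,  1.0]])
```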

Modified Denavit-Hartenberg convention

There exists a modified Denavit-Hartenberg convention. Although this is not always explicitly discussed in textbooks, it is often used, for instance in [17], [16], [18] and [7]. This method results in quite different matrix values and therefore attention should be paid to this fact. In the scope of this book only the classic DH convention is used, but it is worth mentioning that the robotics toolbox for Matlab used in the simulations can cope effectively with both conventions [19].

1.4.5 Inverse Kinematics

The inverse kinematic problem consists of the computation of the joint space vector q given the end-effector's position and orientation ^0T_6.

Solving this problem is of special importance because the manipulator motion can thus be specified in terms of the operational space, which defines the trajectory. The joint values required to follow such a trajectory are then calculated from the inverse kinematic solution.

The inverse kinematics may be solved either by finding the analytical equations for each link variable or through an iterative method. Both approaches are discussed in this book, with a simple example to illustrate the analytical method and a practical implementation of an iterative solution.

Analytical Solution

The problem can be solved analytically for a simple manipulator, but the solution for some 6-DOF robotic structures may be fairly complex or difficult. Some references present the analytical solution for classical robot examples such as the Puma robot [14] or the Stanford manipulator [15].

However, the inverse kinematic problem is not an easy one, given that multiple solutions may exist, especially for kinematically redundant robots which are able to reach any point inside the workspace. Also, the kinematic expressions are in general non-linear, which implies that a closed-form solution is not always possible. Moreover, a solution could be inadmissible because it might violate the robot's physical constraints, such as joint operational ranges or mechanical postures.

The problem of multiple solutions depends basically on the number of degrees of mobility and on the non-null Denavit-Hartenberg parameters. That is to say, the number of solutions grows as the number of non-null kinematic parameters increases. Of course, a solution exists as long as the end-effector's pose belongs to the reachable, dexterous workspace of the robot.

In the case of a 6-DOF robot with no physical or mechanical limitations, which is clearly an ideal case, it is possible to find up to 16 admissible solutions for a given location and orientation [3]. Evidently, the existence of physical constraints reduces the number of possible solutions.

To calculate the analytical solution for a given robot, some sort of geometric and algebraic intuition is required. On one side, a skilful algebraic analysis can eventually lead to the significant equations; on the other, some geometric assumptions may be really useful to determine handy points and robot poses which successfully solve practical problems.

Just as an example, the inverse kinematic solution for a 3-link generic robot like the one presented in the last section is illustrated below. This is a simple but illustrative example, with just the final expressions included in this work. A detailed procedure is clearly explained in several sources such as [9] and [3].

The inverse kinematic problem is solved following the method proposed in [14], where, starting from the desired transform T_d, it is possible to apply some matrix algebra to find equivalences from which the resultant joint angle values are obtained, as follows:

$$
\begin{aligned}
{}^{0}T_{3} &= {}^{0}T_{1} \cdot {}^{1}T_{2} \cdot {}^{2}T_{3} \\
({}^{0}T_{1})^{-1} \cdot {}^{0}T_{3} &= ({}^{0}T_{1})^{-1} \cdot {}^{0}T_{1} \cdot {}^{1}T_{2} \cdot {}^{2}T_{3} = I \cdot {}^{1}T_{2} \cdot {}^{2}T_{3} \\
({}^{0}T_{1})^{-1} \cdot {}^{0}T_{3} &= {}^{1}T_{3}
\end{aligned} \tag{1.49}
$$

In order to build the last expression, we calculate the inverse⁷ of expression 1.40 and the matrix ^1T_3. Considering the elements of ^0T_3, it is possible to write

$$
\begin{bmatrix}
c_1 & s_1 & 0 & 0 \\
0 & 0 & 1 & -d_1 \\
s_1 & -c_1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
n_x & o_x & a_x & p_x \\
n_y & o_y & a_y & p_y \\
n_z & o_z & a_z & p_z \\
0 & 0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
c_{23} & s_{23} & 0 & l_3 c_{23} + l_2 c_2 \\
s_{23} & -c_{23} & 0 & l_3 s_{23} + l_2 s_2 \\
0 & 0 & -1 & d_2 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{1.50}
$$

Focus attention on element [3, 4], which corresponds to the resultant element d_2. It is thus easy to verify that

$$
s_1 p_x - c_1 p_y = d_2 \tag{1.51}
$$

Given the fact that s_1 p_x and c_1 p_y are orthogonal components, it is possible to introduce a new variable r to express them as

$$
p_x = r\cos\phi \qquad p_y = r\sin\phi \qquad
\tan\phi = \frac{p_y}{p_x} \qquad \phi = \arctan\frac{p_y}{p_x} \tag{1.52}
$$

with r = +(p_x^2 + p_y^2)^{1/2}, and θ_1 can be calculated as

$$
\theta_1 = \arctan\!\left(\frac{d_2}{\sqrt{r^2 - d_2^2}}\right) + \arctan\!\left(\frac{p_y}{p_x}\right) \tag{1.53}
$$
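
Because the layout of Equation 1.53 was reconstructed here, a quick numerical sanity check is worthwhile. The Python sketch below (NumPy assumed; the test values are arbitrary) evaluates the expression and verifies that the resulting θ_1 satisfies Equation 1.51.

```python
import numpy as np

# Arbitrary test values (illustrative only)
px, py, d2 = 0.35, 0.20, 0.10
r = np.hypot(px, py)

# Equation 1.53 as given above
theta1 = np.arctan2(d2, np.sqrt(r**2 - d2**2)) + np.arctan2(py, px)

# Check against Equation 1.51: s1*px - c1*py should equal d2
assert np.isclose(np.sin(theta1) * px - np.cos(theta1) * py, d2)
```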

A detailed determination of these expressions is carefully exposed in [9], where, following the same procedure, it is possible to determine the expressions for angles θ_2 and θ_3 as

⁷Recall that in the case of orthogonal matrices the inverse is equivalent to the transpose.

$$
\theta_2 = \arcsin\!\left(\frac{p_z - d_1 - l_3 n_z}{l_2}\right) \tag{1.54}
$$

$$
\theta_3 = \arcsin(n_z) - \arcsin\!\left(\frac{p_z - d_1 - l_3 n_z}{l_2}\right) \tag{1.55}
$$

Iterative Inverse Kinematic Solution

An iterative method to solve the inverse kinematic problem, presented by Chiaverini et al. in [20], is discussed in this section. This method uses the end-effector pose (^0T_6) for which the corresponding joint vector q is to be calculated.

The solution is completely general, though much less efficient than a specific kinematic solution derived analytically. Although the method requires elements which have not been reviewed so far, it is presented here to preserve the continuity of this chapter. All the new computational elements, such as the Jacobian matrix and its pseudo-inverse, are thoroughly discussed in the next sections.

The method receives the homogeneous transform T_d for which the joint vector has to be determined. Given that an inverse kinematics problem may have multiple solutions, an optional initial state q_0 for the joint vector may also be supplied as a second input. Such an initial state allows a shorter search for the solution; if it is not given, the algorithm starts from a vector with all components equal to zero.

The algorithm seeks to minimize the difference between the given desired transform T_d and the transformation resulting from the forward kinematics computation F(q_0). This can be expressed as

$$
e = F(q) - T_d \tag{1.56}
$$

The algorithm basically resembles a control algorithm introduced in [13] and extended in [20] to minimize iteratively the error expression of Equation 1.56. Essentially, the error expression is multiplied by the pseudo-inverse of the Jacobian matrix, which represents the first time derivative of the transformation matrices, yielding

$$
\dot{q} = J^{+} \cdot (F(q) - T_d) \tag{1.57}
$$

Finally, the joint state vector (q) is calculated by integrating the resultant vector of joint velocities (q̇) in expression 1.57. The algorithm runs until the maximum number of iterations is reached or until the error e of expression 1.56 becomes smaller than a given threshold ε.

Due to the iterative nature of the algorithm, the returned angles (considering all joints as revolute) may be in a non-minimum form, i.e. (q + 2nπ). Consider also that this algorithm obtains a solution even at singularity points. A more complete discussion of the minimization algorithm is presented later, when the kinematic motion and the Jacobian are analyzed, characterized and implemented in real-time.
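
The listing below is a minimal sketch of such an iterative scheme (Python with NumPy assumed). It is not the exact algorithm of [20]; it only illustrates the loop behind Equation 1.57, using a finite-difference Jacobian and the pseudo-inverse for a simple planar 2-link arm, with the error reduced to the position part for brevity.

```python
import numpy as np

def fk_planar(q, l1=0.4, l2=0.3):
    """End-effector position of a planar 2-link arm (illustrative forward kinematics)."""
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

def numeric_jacobian(f, q, eps=1e-6):
    """Finite-difference Jacobian of f at q."""
    f0 = f(q)
    J = np.zeros((f0.size, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (f(q + dq) - f0) / eps
    return J

def iterative_ik(p_des, q0, max_iter=200, tol=1e-6, gain=0.5):
    """Iterate q <- q + gain * J^+ (p_des - f(q)) until the error is below tol."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        e = p_des - fk_planar(q)
        if np.linalg.norm(e) < tol:
            break
        J = numeric_jacobian(fk_planar, q)
        q = q + gain * np.linalg.pinv(J) @ e
    return q

if __name__ == "__main__":
    target = np.array([0.45, 0.25])             # reachable point (|target| < l1 + l2)
    q_sol = iterative_ik(target, q0=[0.5, 0.5])  # initial guess away from singularities
    assert np.allclose(fk_planar(q_sol), target, atol=1e-5)
```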

Final remarks

Two remarks close this section. First, it is important to notice that the calculation of the direct and inverse kinematics may imply a large computational cost, even when an analytical solution is used. Considerable research effort has been focused on minimizing the number of required mathematical operations. For instance, in [21] Goldenberg et al. presented a complete generalized solution to the inverse kinematics of robots with an arbitrary number of DOFs using the concept of residuals. Wang et al. used in [22] vector equations combined with recursive computations, which result in computational advantages over other existing methods for calculating kinematic and dynamic expressions.

Finally, careful attention should be paid to the fact that the dimensional units used for the last column of the T matrix must agree with the units used in the robot definition. This is a very important issue in keeping the kinematic model consistent.

1.4.6 Motion analysis

In the last section, the kinematic equations defining the relationship between the joint state and the end-effector's pose were developed in terms of the direct and inverse kinematics solutions. This section takes the study one step further by analyzing the mapping between the robot's joint velocities and the corresponding end-effector translational and angular velocities.

As in forward kinematics, such a relation may be expressed by a Jacobian matrix. It is one of the most important tools for robot modelling because it is used to analyze redundancies, to detect singularities and as an active component in some iterative inverse kinematics algorithms, such as the one introduced in Section 1.4.5. The discussion which follows focuses mainly on a numerical solution to obtain the Jacobian, given our interest in real-time visual servoing schemes. However, it is also possible to derive such a relationship from analytical equations, as shown in [23].

The determination of the elements forming the Jacobian matrix starts from the direct kinematics expression in Equation 1.45, but using q as defined in expression 1.46. Given that the aim of differential kinematics is to define the mapping from joint velocities to the end-effector's motion, a minimal representation for the velocity can be used, expressing χ̇ = υ and yielding

$$
\upsilon = \begin{bmatrix} \dot{p} \\ \omega \end{bmatrix} \tag{1.58}
$$

with the Cartesian translational velocity vector ṗ = [ṗ_x, ṗ_y, ṗ_z] embedded in the upper half and the angular velocity vector ω = [ω_x, ω_y, ω_z] in the lower half. This vector is commonly known as the screw vector.

It is easy to see that the Jacobian J relates the joint velocities to the translational motion and the angular velocity of the end-effector. Hence the Jacobian matrix may be split into two components:

$$
\upsilon = \begin{bmatrix} J_p \\ J_\omega \end{bmatrix} \dot{q} \tag{1.59}
$$

which decomposes the matrix into two parts, yielding ṗ = J_p(q) q̇ and ω = J_ω(q) q̇. Notice the use of q, which indicates that the Jacobian depends on the joint space vector.

Let us study the nature of the differentials generated from a rotation matrix and a position vector, which in principle give origin to the translational and rotational differential motion. In Section 1.3.1 it was discussed that a rotation can be expressed with respect to each axis x, y and z through matrix transformations similar to

$$Rot(x, \theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (1.60)$$

which accounts for the rotation about the x axis (see Equation 1.12). In order to define a differential motion, it is assumed that differential changes imply $\sin\theta \rightarrow d\theta$ and $\cos\theta \rightarrow 1$; hence the differential matrices corresponding to $R_{x,\theta}$, $R_{y,\theta}$ and $R_{z,\theta}$ are

$$R_{x,\theta} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -\delta_x & 0 \\ 0 & \delta_x & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad
R_{y,\theta} = \begin{bmatrix} 1 & 0 & \delta_y & 0 \\ 0 & 1 & 0 & 0 \\ -\delta_y & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad
R_{z,\theta} = \begin{bmatrix} 1 & -\delta_z & 0 & 0 \\ \delta_z & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (1.61)$$

Hence an important skew-symmetric matrix $\Delta(q)_R$ can be obtained from the product of the matrices in expression 1.61 (subtracting the identity and neglecting second-order terms). Notice that this result is independent of the order in which the differential rotations are performed.

$$\Delta(q)_R = \begin{bmatrix} 0 & -\delta_z & \delta_y \\ \delta_z & 0 & -\delta_x \\ -\delta_y & \delta_x & 0 \end{bmatrix}
\;\rightarrow\;
\Delta(q) = \begin{bmatrix} 0 & -\delta_z & \delta_y & d_x \\ \delta_z & 0 & -\delta_x & d_y \\ -\delta_y & \delta_x & 0 & d_z \\ 0 & 0 & 0 & 0 \end{bmatrix} \qquad (1.62)$$



On the other hand, a differential translation is simply expressed by means of the derivatives at each coordinate direction, $[dp_x, dp_y, dp_z]$, which completes the $\Delta(q)$ matrix shown on the right side of 1.62.

Notice that this matrix is entirely formed by elements which belong to the operational space χ. Therefore we can make use of the minimal representation vector from Equation 1.58, the so-called screw vector $\Delta(q) = [d_x\; d_y\; d_z\; \delta_x\; \delta_y\; \delta_z]^T$. The Jacobian is determined by the expressions in 1.59, which analyze separately the translational and rotational components of the differential motion.
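
As a small illustration of this construction, the sketch below assembles the $\Delta(q)$ matrix of expression 1.62 from a screw vector; the function name and argument ordering are illustrative choices rather than part of the library interface.

    import numpy as np

    def delta_matrix(screw):
        """Build the differential-motion matrix of expression 1.62 from a
        screw vector [dx, dy, dz, delta_x, delta_y, delta_z]."""
        dx, dy, dz, rx, ry, rz = screw
        return np.array([[0.0, -rz,  ry, dx],
                         [ rz, 0.0, -rx, dy],
                         [-ry,  rx, 0.0, dz],
                         [0.0, 0.0, 0.0, 0.0]])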

Differential rotational motion

Let us now determine the derivative of the rotation matrix as a way of computing the differential rotational motion ω. First, assume an orthogonal rotation matrix, which implies that $R(t) \cdot R^T(t) = I$; calculating the first derivative yields $\dot{R}(t) \cdot R^T(t) + R(t) \cdot \dot{R}^T(t) = 0$. Now focus on the first term $\dot{R}(t) \cdot R^T(t)$. As elegantly shown in [3], this product generates a skew-symmetric matrix, named here $S(t)$. It is possible to extract $\dot{R}(t)$ by post-multiplying both sides by $R(t)$, and a very important expression emerges

$$\dot{R}(t) = S(t) \cdot R(t) \qquad (1.63)$$

which allows the derivative of R(t) to be expressed in terms of R(t) itself. This can also be explained from a geometric point of view which supplies relevant physical insight, see for instance [3]. Do not lose sight of the fact that S(t) is a skew-symmetric matrix which represents rotational differentials. Relating this fact to mechanics, it is known that the skew-symmetric matrix S(t) can be represented as a set of angular velocities ω, yielding

$$\dot{p}(t) = \omega(t) \times R(t)\, p' \qquad (1.64)$$

where ω(t) corresponds to the rotational part of the screw vector. Notice that sometimes this is also represented by $S(t) = S(\omega(t))$ [3].
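
The following sketch gives a numerical flavour of expression 1.63 for a rotation about the z axis; the helper skew and the sample rate, time and step values are illustrative assumptions.

    import numpy as np

    def skew(w):
        """Skew-symmetric matrix S(w) such that skew(w) @ v == np.cross(w, v)."""
        wx, wy, wz = w
        return np.array([[0.0, -wz,  wy],
                         [ wz, 0.0, -wx],
                         [-wy,  wx, 0.0]])

    # Numerical check of R_dot = S(omega) R for a rotation about z at constant rate wz.
    Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0.0],
                             [np.sin(a),  np.cos(a), 0.0],
                             [0.0,        0.0,       1.0]])
    wz, t, h = 0.7, 0.3, 1e-6
    R_dot_numeric = (Rz(wz * (t + h)) - Rz(wz * t)) / h      # finite-difference derivative
    assert np.allclose(R_dot_numeric, skew([0.0, 0.0, wz]) @ Rz(wz * t), atol=1e-4)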

Translational differential motion

It is time to complete this study by analyzing the translational differential motion as well. Consider again a point p which is to be transformed from frame i to frame i − 1. It is possible to use expression 1.30 as follows

$${}^{i-1}p = {}^{i-1}R_i\, {}^{i}p + {}^{i-1}o_i \qquad (1.65)$$

with ${}^{i-1}o_i$ being the positional component of the transformation. Calculating the derivative with respect to time yields

$${}^{i-1}\dot{p} = {}^{i-1}\dot{R}_i\, {}^{i}p + {}^{i-1}R_i\, {}^{i}\dot{p} + {}^{i-1}\dot{o}_i \qquad (1.66)$$



Recalling expression 1.63, we have

$${}^{i-1}\dot{p} = {}^{i-1}R_i\, {}^{i}\dot{p} + S({}^{i-1}\omega_i) \cdot {}^{i-1}R_i\, {}^{i}p + {}^{i-1}\dot{o}_i \qquad (1.67)$$

Introducing the variable ${}^{i-1}r_i$ to represent the transformation ${}^{i-1}r_i = {}^{i-1}R_i\, {}^{i}p$, the expression becomes

$${}^{i-1}\dot{p} = {}^{i-1}R_i\, {}^{i}\dot{p} + S({}^{i-1}\omega_i) \cdot {}^{i-1}r_i + {}^{i-1}\dot{o}_i \qquad (1.68)$$

From linear algebra we know that a skew-symmetric matrix multiplied by a vector can be expressed in terms of a cross product (this is possible, as shown later, if and only if both elements of the cross product are defined in the same coordinate frame), yielding

$${}^{i-1}\dot{p} = {}^{i-1}R_i\, {}^{i}\dot{p} + {}^{i-1}\omega_i \times {}^{i-1}r_i + {}^{i-1}\dot{o}_i \qquad (1.69)$$

Considering that ${}^{i}p$ is static in frame i, the term ${}^{i-1}R_i\, {}^{i}\dot{p}$ vanishes, which confirms Equation 1.64, recalling that 1.64 considers only rotational terms and that the term ${}^{i-1}\dot{o}_i$ incorporates the translational component.

1.4.7 Robot’s link motion

The relationship between translational and rotational velocities inside a robotic chain can now be calculated by considering the Denavit-Hartenberg parameters. In order to do so, let us consider two vectors locating the origins of two consecutive frames (i − 1, i) attached to one particular robot link, as shown in Figure 1.11. Such vectors are named $x_{i-1}$ and $x_i$ for each frame respectively.

Figure 1.11: Modelling the position of a kinematic robot's link.

• Translational Velocity. Once again, let us define ${}^{i-1}r_{i-1,i} = {}^{i-1}R_{i-1,i} \cdot {}^{i}x$ as the vector accounting for the origin of frame i with respect to frame i − 1, expressing this result in the same frame i − 1. Thus a point with respect to the base frame can be expressed as follows



$${}^{0}x_i = {}^{0}R_{i-1}\, {}^{i-1}x_{i-1,i} + {}^{0}x_{i-1} \qquad (1.70)$$

and recalling expression 1.66, its derivative can be defined as

$${}^{0}\dot{x}_i = {}^{0}R_{i-1}\, ({}^{i-1}\dot{x}_{i-1,i}) + {}^{0}\dot{R}_{i-1}\, {}^{i-1}x_{i-1,i} + {}^{0}\dot{x}_{i-1} \qquad (1.71)$$

Using 1.69, this last expression can be expressed as

$${}^{0}\dot{x}_i = {}^{0}R_{i-1}\, ({}^{i-1}\dot{x}_{i-1,i}) + {}^{0}\omega_{i-1} \times {}^{0}r_{i-1,i} + {}^{0}\dot{x}_{i-1} \qquad (1.72)$$

The first term ${}^{0}R_{i-1}\, ({}^{i-1}\dot{x}_{i-1,i})$ represents the translational velocity of the frame attached to link i as a function of the translational and rotational velocities of link i − 1. Substituting it by ${}^{0}v_{i-1,i}$ for clarity yields

$${}^{0}\dot{x}_i = {}^{0}v_{i-1,i} + {}^{0}\omega_{i-1} \times {}^{0}r_{i-1,i} + {}^{0}\dot{x}_{i-1} \qquad (1.73)$$

• Angular Velocity. On the other hand, it can be shown that an angular velocity ${}^{0}\omega_i$ is calculated from the simple addition of the angular velocities of the other links if and only if they are expressed with respect to a common frame, as shown below.

$${}^{0}\omega_i = {}^{0}\omega_{i-1} + {}^{0}R_{i-1}\, {}^{i-1}\omega_{i-1,i} = {}^{0}\omega_{i-1} + {}^{0}\omega_{i-1,i} \qquad (1.74)$$

In other words, this is equivalent to defining the angular velocity of link i as a function of the angular velocity of link i − 1 plus the angular velocity of link i with respect to link i − 1.

Let us gather the results from expressions 1.73 and 1.74 with some DH considerations for revolute joints only. The study of prismatic joints is not considered, to keep this overview as brief as possible.

Revolute joints

In the case of a revolute joint, the angular velocity directly depends on the rate $\dot{\theta}_i$ of the joint angle. Such velocity is initially expressed in the local frame i, and its relationship with the previous link i − 1 must be determined. Following the criteria from the last section, the angular velocity produced by link i can thus be expressed in terms of the i − 1 frame but with respect to the base frame as follows

$${}^{0}\omega_{i-1,i} = \dot{\theta}_i\, z_{i-1} \qquad (1.75)$$



From the DH convention we know that the z axis points along the joint axis, and therefore $z_{i-1}$ represents the unit vector in the direction of that axis. On the other hand, the linear velocity can be obtained from Equation 1.64 as

$${}^{0}v_{i-1,i} = {}^{0}\omega_{i-1,i} \times {}^{0}r_{i-1,i} \qquad (1.76)$$

These last two expressions for ${}^{0}\omega_{i-1,i}$ and ${}^{0}v_{i-1,i}$ can be used in Equations 1.74 and 1.73 to obtain the rotational and translational velocities as follows. First, the angular velocity with respect to the base frame can be calculated as in Equation 1.74:

$${}^{0}\omega_i = {}^{0}\omega_{i-1} + \dot{\theta}_i\, z_{i-1} \qquad (1.77)$$

In the case of the translational velocity, a general expression can be deduced by substituting Equation 1.76 into expression 1.73, yielding

$${}^{0}\dot{x}_i = {}^{0}\omega_{i-1,i} \times {}^{0}r_{i-1,i} + {}^{0}\omega_{i-1} \times {}^{0}r_{i-1,i} + {}^{0}\dot{x}_{i-1} \qquad (1.78)$$

from which, applying some matrix algebra, we get

$${}^{0}\dot{x}_i = ({}^{0}\omega_{i-1,i} + {}^{0}\omega_{i-1}) \times {}^{0}r_{i-1,i} + {}^{0}\dot{x}_{i-1} \qquad (1.79)$$

and finally, considering again Equation 1.74, we conclude that

$${}^{0}\dot{x}_i = {}^{0}\omega_i \times {}^{0}r_{i-1,i} + {}^{0}\dot{x}_{i-1} \qquad (1.80)$$

Expressions 1.80 and 1.77 are of particular importance in the next section, where an iterative method to calculate the Jacobian matrix is presented.
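
To illustrate how these two recursions combine, the sketch below propagates angular and translational velocities outwards along a revolute chain using Equations 1.77 and 1.80. It assumes the joint axes and frame origins, expressed in the base frame, have already been obtained from the direct kinematics; the function name and argument layout are illustrative.

    import numpy as np

    def link_velocities(z_axes, origins, theta_dot):
        """Propagate velocities along a revolute chain (base link assumed fixed).

        z_axes[k]  : unit vector along the axis of joint k+1 (z_k), in the base frame
        origins[k] : origin of frame k in the base frame (origins[0] is the base,
                     so len(origins) == len(theta_dot) + 1)
        theta_dot  : joint rates
        Returns the translational and angular velocity of the last frame.
        """
        omega = np.zeros(3)                           # the base frame does not rotate
        v = np.zeros(3)                               # the base frame does not translate
        for k, td in enumerate(theta_dot):
            omega = omega + td * np.asarray(z_axes[k])                   # Eq. 1.77
            r = np.asarray(origins[k + 1]) - np.asarray(origins[k])      # 0_r_{i-1,i}
            v = v + np.cross(omega, r)                                   # Eq. 1.80
        return v, omega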

1.4.8 Jacobian computation

Let us now proceed to determine a consistent method to compute the Jacobian matrix given the current robot joint vector q. Considering that in this book the main interest focuses on revolute planar manipulators, the analysis of prismatic joints is not further documented. However, an interested reader can review this topic in many sources such as [7, 15, 3, 24].

Let us define the linear velocity first. Recall expression 1.76 from the last section, and consider the term $\dot{q}\, J_{L_i}$, the product of the generic input variable $\dot{q}$ and the ith column of the Jacobian, which in this case contributes only to the three upper components of the velocity vector, given that only the linear motion is under study at the moment.

$$\dot{q}\, J_{L_i} = {}^{0}\omega_{i-1,i} \times {}^{0}r_{i-1,i} \qquad (1.81)$$

Including Equation 1.75 in this expression, it is not difficult to show that



$${}^{0}\omega_{i-1,i} = \dot{\theta}_i\, z_{i-1}, \qquad \dot{q}\, J_{L_i} = \dot{\theta}_i\, z_{i-1} \times (p - p_{i-1}) \qquad (1.82)$$

which includes the vector difference between p and $p_{i-1}$. To understand the nature of this difference, let us consider again the components of ${}^{i-1}r_{i-1,i}$. From the development of expression 1.70, it is possible to rewrite

$${}^{0}r_{i-1,i} = {}^{0}R_{i-1,i} \cdot {}^{i}x \qquad (1.83)$$

Taking a second look at Figure 1.11, it is easy to see that ${}^{0}r_{i-1,i}$ represents the point ${}^{i}x$, which accounts for the origin of the frame attached to joint i. Such a point is expressed with respect to the base frame, but considering the velocity contribution in terms of the frame attached to joint i − 1. For a better understanding, analyze the vector triangle in the first graph of Figure 1.12, which is similar to that in Figure 1.11. It is easy to see that

$${}^{0}r_{i-1,i} = {}^{0}x_{i-1} + {}^{i-1}x_i \qquad (1.84)$$

and therefore

$${}^{i-1}x_i = {}^{0}r_{i-1,i} - {}^{0}x_{i-1} = p - p_{i-1} \qquad (1.85)$$

where ${}^{i-1}x_i$ represents the virtual link going from each joint to the end-effector, upon which it is possible to calculate the contribution of each link to the motion of the end-effector. Thus the vector difference can be explained graphically in both Figures 1.11 and 1.12, considering vector p as ${}^{0}r_{i-1,i}$ and vector $p_{i-1}$ as ${}^{0}x_{i-1}$. Do not forget that such an analysis should be performed for every link with respect to the robot's end-effector. Take a second look at Figure 1.12, where a graphical view of the Jacobian computation is shown. Although Figure 1.12 only shows the 6th, 5th and 4th links of a hypothetical 6-DOF robot, it is easy to appreciate how these ideas can be extended to more complex robotic chains. This is precisely the principle behind Paul's Jacobian computation method [23], presented in the next section.

It is already known that the vector $\dot{q}$ is an input to the algorithm and that, in the case of revolute joints, $\dot{\theta}_i$ plays the role of $\dot{q}$. Therefore the translational section of the Jacobian can be computed as follows.

$$J_{L_i} = z_{i-1} \times (p - p_{i-1}) \qquad (1.86)$$

On the other hand, the angular motion is easy to deduce because the only requirement is to consider the lower three rows of the Jacobian, named here $J_{A_i}$.



Figure 1.12: Graphic representation of the recursive algorithm to determine the Jacobian matrix.

$J_{A_i}$ accounts for the angular motion of each link. From expression 1.75 and redefining ${}^{0}\omega_{i-1,i}$ as $\dot{q}\, J_{A_i}$, we have

$$\dot{q}\, J_{A_i} = \dot{\theta}_i\, z_{i-1} \qquad (1.87)$$

which clearly accounts for the differential rotation in the $z_{i-1}$ direction, i.e. the joint axis direction. Excluding $\dot{q}_i$ and $\dot{\theta}_i$, which represent the input variables, the lower three-component vector of each column of the Jacobian is formed by the angular component $J_{A_i}$ as follows

$$J_{A_i} = z_{i-1} \qquad (1.88)$$

Now it is time to formalize these concepts into a proper method to compute the Jacobian matrix.
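
Before doing so, a compact sketch shows how Equations 1.86 and 1.88 already suffice to assemble the Jacobian column by column once the joint axes and frame origins are available in the base frame; the helper name and its inputs are illustrative assumptions.

    import numpy as np

    def geometric_jacobian(z_axes, origins, p_end):
        """Assemble a 6xN Jacobian for a revolute arm, column by column.

        z_axes[k]  : joint axis z_{i-1} of joint i = k+1, in the base frame
        origins[k] : origin p_{i-1} of frame i-1, in the base frame
        p_end      : end-effector position p, in the base frame
        """
        cols = []
        for z, p_prev in zip(z_axes, origins):
            JL = np.cross(z, np.asarray(p_end) - np.asarray(p_prev))  # Eq. 1.86
            JA = np.asarray(z)                                        # Eq. 1.88
            cols.append(np.hstack((JL, JA)))
        return np.column_stack(cols)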

1.4.9 Jacobian computation in real time

One method to compute the elements of the Jacobian matrix is developed in this section. This algorithm is widely used in different robot control applications and has proved to perform acceptably. It is easy to implement but needs a good amount of calculation, including matrix computations such as products and inverses. This is not a relevant issue, given that an average computer can effectively cope with these requirements.

Equations 1.86 and 1.88 are the foundation of the method. Figure 1.12 suggests that, in order to calculate each column of the Jacobian matrix applying the concepts from the last section, an iterative method to calculate new vector positions is required. Starting from the end-effector's frame, each column of the Jacobian is determined following the chain backwards until the base link is reached. The idea is to calculate the differential motion recursively, creating the special vectors ${}^{0}r_{i-1,i}$ and ${}^{0}x_{i-1}$ as presented previously in Figure 1.12. The method is based on the fact that the relationship between each link and the end-effector can be completely defined by the DH parameters. Each column is thus calculated sequentially until the whole matrix is determined when the last joint is reached. Let us take a closer view of this method, which was elegantly developed by Paul in [23].

In the case of a manipulator, the direct kinematics can be expressed in terms of the components of the general transform ${}^{0}T_6$. From Equation 1.63 it is clear that it is possible to express a differential motion $\dot{R}(t)$ in terms of the product between a skew-symmetric matrix and the transform itself. This is easily extended to fully defined transforms, considering the $\Delta(q)$ matrix as in Equation 1.62, thus $dT = \Delta(q) \ast {}^{m}T_n$. Let us suppose the computation for the nth link of a 6-DOF manipulator. The differential can thus be calculated as $dT = \Delta_n \ast {}^{n}T_6$. The method in [23] begins by expressing the differential with respect to the base frame as

$${}^{0}T_6 \ast {}^{6}\Delta = {}^{0}T_{n-1} \ast \Delta_n \ast {}^{n}T_6 = ({}^{0}T_1 \ast {}^{1}T_2 \cdots T_{n-1}) \ast \Delta_n \ast (T_n \cdots {}^{5}T_6) \qquad (1.89)$$

where ${}^{6}\Delta$ represents the differential change with respect to the end-effector's frame. Thus, simplifying by pre-multiplying by the inverse $({}^{0}T_6)^{-1}$, it is possible to define

$${}^{6}\Delta = ({}^{0}T_6)^{-1} \ast {}^{0}T_{n-1} \ast \Delta_n \ast {}^{n}T_6 = ({}^{0}T_1 \ast {}^{1}T_2 \cdots {}^{5}T_6)^{-1} \ast ({}^{0}T_1 \ast {}^{1}T_2 \cdots T_{n-1}) \ast \Delta_n \ast (T_n \cdots {}^{5}T_6) \qquad (1.90)$$

which yields $(T_n \cdots {}^{5}T_6)^{-1} \ast \Delta_n \ast (T_n \cdots {}^{5}T_6)$. Naming the new set of transforms $U_n = (T_n \cdots {}^{5}T_6)$, the whole expression can be rewritten as ${}^{6}\Delta = U_n^{-1} \ast \Delta_n \ast U_n$, which is the main foundation of the algorithm. Considering $\Delta_n$ as a differential rotation about the joint axis direction, it is easy to see that



$$\Delta_n = \begin{bmatrix} 0 & -d\theta_n & 0 & 0 \\ d\theta_n & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \qquad (1.91)$$

Using the same structure for $U_n^{-1}$ as for a standard homogeneous transform in Equation 1.29, ${}^{6}\Delta$ can be built according to ${}^{6}\Delta = U_n^{-1} \ast \Delta_n \ast U_n$, which yields

$${}^{6}\Delta = \begin{bmatrix} 0 & o_x n_y - o_y n_x & a_x n_y - a_y n_x & p_x n_y - p_y n_x \\ n_x o_y - n_y o_x & 0 & a_x o_y - a_y o_x & p_x o_y - p_y o_x \\ n_x a_y - n_y a_x & o_x a_y - o_y a_x & 0 & p_x a_y - p_y a_x \\ 0 & 0 & 0 & 0 \end{bmatrix} d\theta_n \qquad (1.92)$$

The elements of this last matrix can be identified as the z components of cross-product operations. Considering also the orthogonality property among the vectors n, o and a, it is possible to conclude that

$${}^{6}\Delta = \begin{bmatrix} 0 & (o \times n)_z & (a \times n)_z & (p \times n)_z \\ (n \times o)_z & 0 & (a \times o)_z & (p \times o)_z \\ (n \times a)_z & (o \times a)_z & 0 & (p \times a)_z \\ 0 & 0 & 0 & 0 \end{bmatrix} d\theta_n
= \begin{bmatrix} 0 & -a_z & o_z & (p \times n)_z \\ a_z & 0 & -n_z & (p \times o)_z \\ -o_z & n_z & 0 & (p \times a)_z \\ 0 & 0 & 0 & 0 \end{bmatrix} d\theta_n \qquad (1.93)$$

The rotational part of this last matrix is skew-symmetric, and the matrix can be rewritten in a minimal form through a screw vector of the form

$$\Delta_{R_n} = \begin{bmatrix} (p \times n)_z & (p \times o)_z & (p \times a)_z & n_z & o_z & a_z \end{bmatrix}^T \qquad (1.94)$$

The subindex z refers to the component in the z direction, which confirms the findings of our study in Section 1.4.8, in Equations 1.88 and 1.86, where $z_{i-1}$ appears for the same reason. The Jacobian matrix consists of columns of the form of 1.94, one per iteration, starting from the end-effector and going backwards until the base joint is reached. The vectors n, o, a and p are the columns of the matrix $U_n$ defined at each step of the algorithm.

Final remarks

Notice that Equations 1.86 and 1.88 are satisfied by the method. The vector difference in expression 1.86 is implemented iteratively through the column components of the matrix $U_n$. On the other hand, Equation 1.88 is satisfied by considering only the z component of the cross product result, i.e. that in the joint axis direction. Finally, notice that the algorithm changes if prismatic joints are considered. In such a case the column takes the form $\Delta_{P_n} = [n_z\; o_z\; a_z\; 0\; 0\; 0]^T$.
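
A minimal sketch of this backward recursion is given below, assuming the per-link homogeneous transforms have already been evaluated from the DH parameters for the current joint vector; the function name and its input list are illustrative and do not reproduce the interface of the real-time library.

    import numpy as np

    def jacobian_paul(link_transforms):
        """End-effector Jacobian by the recursion of Section 1.4.9 (revolute joints).

        link_transforms[i] is the 4x4 homogeneous transform from frame i to
        frame i+1. Working backwards from the end-effector, U accumulates the
        product of the remaining transforms and each column follows Eq. 1.94.
        """
        n_links = len(link_transforms)
        U = np.eye(4)
        cols = [None] * n_links
        for i in reversed(range(n_links)):
            U = link_transforms[i] @ U                      # U_n for the current joint
            n, o, a, p = U[:3, 0], U[:3, 1], U[:3, 2], U[:3, 3]
            cols[i] = np.array([np.cross(p, n)[2],          # (p x n)_z
                                np.cross(p, o)[2],          # (p x o)_z
                                np.cross(p, a)[2],          # (p x a)_z
                                n[2], o[2], a[2]])          # n_z, o_z, a_z
        return np.column_stack(cols)                        # 6 x N, end-effector frame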

1.4.10 Transforming the Jacobian between frames

The algorithm in the last section calculates the Jacobian components with respect to the end-effector frame. However, the solution to very common problems in robot engineering sometimes requires the Jacobian matrix in terms of a different coordinate frame, frequently the base frame. The new frame can be represented by a transformation matrix $T_{new}$, and the problem consists in defining the corresponding Jacobian components with respect to this transform. The demonstration of this method can be found in many robotics textbooks such as [9, 16, 3]. In [9], the problem is approached in terms of the screw vector. Considering that each column of the Jacobian matrix represents the motion contribution from each link to the end-effector frame, the Jacobian itself is just a collection of differential vectors, one per column. So the method can be directly applied to transform the whole Jacobian to a new coordinate frame.

With $R_T$ being the rotational sub-matrix of the transform matrix $T_{new}$ and J(q) the Jacobian matrix with respect to the end-effector, the Jacobian matrix expressed in terms of $T_{new}$ is calculated as follows

$${}^{T}J = \begin{bmatrix} R_T & 0 \\ 0 & R_T \end{bmatrix} \ast J \qquad (1.95)$$

Expressing the result in terms of Equation 1.58, the whole transformation can finally be written as

$$\begin{bmatrix} {}^{T}\dot{p} \\ {}^{T}\omega \end{bmatrix} = \begin{bmatrix} R_T & 0 \\ 0 & R_T \end{bmatrix} J(q)\, \dot{q} = \begin{bmatrix} R_T & 0 \\ 0 & R_T \end{bmatrix} \begin{bmatrix} \dot{p} \\ \omega \end{bmatrix} \qquad (1.96)$$

The real-time controller written for this book provides the computation of the robot's Jacobian with respect to the end-effector by means of a useful dependency on the DH parameters. The transformation of the Jacobian with respect to a different frame is also provided in the toolbox. This sort of function is very useful when kinematic resolved-rate controllers are employed in the control structure, as discussed below.
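
The frame change of Equation 1.95 reduces to a block-diagonal rotation applied to the Jacobian. A small sketch follows; the function name is chosen here only for illustration.

    import numpy as np

    def transform_jacobian(J, T_new):
        """Re-express a Jacobian in the frame described by T_new (Eq. 1.95).
        R_T is the 3x3 rotational sub-matrix of the homogeneous transform."""
        R_T = np.asarray(T_new)[:3, :3]
        block = np.block([[R_T, np.zeros((3, 3))],
                          [np.zeros((3, 3)), R_T]])
        return block @ np.asarray(J)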

1.5 Resolved-rate methods

Robot control is commonly divided into kinematic and dynamic control. Robot kinematics gathers all the algorithms related to the regulation of the geometrical relationships between the links conforming a robot, in such a way that desired trajectories can be executed to complete a given task. But in order to generate genuinely controlled motion, a realistic regulator for the highly coupled chain of actuated links is required. This set of control algorithms is concerned with the dynamics of such links, especially with respect to the actuator, usually a servo motor, which is responsible for the movement of each link. The problem of controlling a robot can also be seen as the determination of the time history of generalized torques or forces to be developed by each joint actuator in order to guarantee the proper execution of the commanded task, while satisfying dynamic constraints such as transient and steady-state requirements.

As discussed in Section 1.4, the end-effector motion and forces are specified in the operational space whereas control actions are performed in the joint space. The inverse kinematics problem consists of computing the joint motion corresponding to a given end-effector motion. Recall Equation 1.45, in which q represents the vector of joint variables and χ the vector of task variables, often expressed as Euler angles or position coordinates, with K being the direct kinematic function.

The application of the direct kinematics function was already presented in Section 1.4.5, where, by using the Jacobian of the robot links, it is possible to find an approximate solution to the inverse kinematics problem with sufficient precision. As pointed out by Corke in [19], the Jacobian can also be understood as a weak kinematic model after all. This idea is precisely the foundation of the resolved-rate method introduced by Whitney in [13].

The robot Jacobian matrix relates velocities in the joint space to their corresponding counterpart with respect to the end-effector in the operational space. Such velocity can be expressed with respect to the base frame as ${}^{0}v_e = {}^{0}J_e \cdot \dot{q}$ or with respect to the end-effector coordinate frame as ${}^{e}v = {}^{e}J \cdot \dot{q}$, with $\dot{q}$ being the joint velocities. It is already known that for a 6-DOF manipulator the Jacobian is a square matrix, which can be inverted if the robot is not near a singular state. The motion can therefore be "resolved" by inverting the Jacobian and multiplying it by the required velocity vector. So the corresponding joint velocity vector is computed as follows.

$$\dot{q} = ({}^{0}J_e)^{-1} \cdot {}^{0}v_e \qquad (1.97)$$

The required joint velocity vector can then be integrated using any of the classical methods available to solve ODEs, such as the Euler method or a higher-order procedure such as 4th-order Runge-Kutta. These last two methods are included in the real-time toolkit of the CSC VS library.
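
The following sketch condenses one resolved-rate update with a simple Euler integration step; the function and its arguments are illustrative assumptions rather than the CSC VS interface, and a pseudo-inverse would typically replace the plain inverse near singular configurations.

    import numpy as np

    def resolved_rate_step(jacobian, q, v_desired, dt):
        """One resolved-rate update for a 6-DOF arm (Eq. 1.97).

        jacobian(q) -> 6x6 Jacobian with respect to the base frame
        v_desired   -> commanded end-effector velocity screw in the base frame
        """
        q_dot = np.linalg.solve(jacobian(q), v_desired)   # q_dot = J^{-1} * v
        return q + dt * q_dot                             # Euler integration of the joint rates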

1.6 The TQ MA2000 robot: joint analysis.

Following the overview of robotic concepts, this section presents some technical details about the interface to each joint of the TQ MA2000 robot. These parameters are fundamental to the computation of the subsequent algorithms in this book. As this information is not provided by the manufacturer in the standard documentation [25], it had to be determined experimentally.

Figure 1.13: Operative ranges of the second link of the TQ MA2000 robot.

Each joint is actuated by means of a DC servo motor and sensed by a separate resistive positional sensor, which implies a considerable noise level in the data. The scale of each potentiometer is fixed by mechanical stops at each end. A digital system [26] is in charge of sensing the resistive value of each pot, relating its value to the link's angular position. This therefore requires a calibration stage to adjust the range of digital values in the electronic interface to match the physical limits of each potentiometer. This was achieved by using an old firmware interface provided by the manufacturer together with some other special tools to adjust internal mechanical parts of the robot.

Apart from the low-level interface, the digital value of each potentiometer must be transferred to the main computer which hosts the regulation algorithms in order to exert real-time control. This process implies the use of further software to translate the potentiometer value, which has been scaled to encoder steps, into radians. These are the units used by the control algorithm. However, encoder-step counts are also used alternatively in this book to refer directly to the potentiometer value.

The required conversion into radians was not supplied by the manufacturer and had to be determined experimentally by defining the real operative ranges for each link. From these ranges, a proper conversion to radians can be found by scaling the encoder steps into the equivalent angle value in radians.



Link   Zero (steps)   Step (rad / deg)     Range (rad)                    Range (deg)
Θ1     500            0.004712 / 0.27°     −2.3562 ≤ θ1 ≤ 2.3562          −135° ≤ θ1 ≤ 135°
Θ2     169            0.004712 / 0.27°     −3.14159 ≤ θ2 ≤ 0.78525        −180° ≤ θ2 ≤ 45°
                                           2.35634 ≤ θ2 ≤ 3.14159         135° ≤ θ2 ≤ 180°
Θ3     500            0.004712 / 0.27°     −2.35634 ≤ θ3 ≤ 2.35634        −135° ≤ θ3 ≤ 135°
Θ4     500            0.003141 / 0.18°     −1.57079 ≤ θ4 ≤ 1.57079        −90° ≤ θ4 ≤ 90°
Θ5     500            0.003141 / 0.18°     −3.14159 ≤ θ5 ≤ 0.00000        −180° ≤ θ5 ≤ 0°
Θ6     500            0.003141 / 0.18°     −3.14159 ≤ θ6 ≤ 0.00000        −180° ≤ θ6 ≤ 0°

Table 1.5: TQ MA2000 Robot: Zero Positions, Step/Angle and Operational Ranges

Table 1.5 presents the zero positions, the value of one step in radians and degrees, and the operational ranges determined experimentally for each link.

Notice the special case of the second link, whose initial position has been set 45° below the horizontal line as shown in Figure 1.13. While the rotation of the other links runs over only one interval, Link 2 operates in two non-continuous intervals, which requires the use of three numeric intervals to perform the step-to-radian conversion. The exact conversion expressions are detailed below, followed by a short code sketch; they are included in the real-time toolkit developed in this book.

MA2000-Simulink Conversion Expressions

Link 1

θ1 = (0.0047115)(500− E1) (1.98)

$$E_1 = 500 - \frac{\theta_1}{0.0047115} \qquad (1.99)$$



Link 2

$$\theta_2 = \begin{cases} (E_2)(-0.00464) + 0.78525 & 0 \le E_2 \le 169 \\ -(E_2 - 169)(0.00471) & 169 < E_2 < 836 \\ \pi - (E_2 - 836)(0.00471) & 836 \le E_2 \le 1000 \end{cases} \qquad (1.100)$$

$$E_2 = \begin{cases} \dfrac{\theta_2 - 0.78525}{-0.00464} & 0 \le \theta_2 \le 0.78539 \\ \dfrac{-\theta_2}{0.00471} + 169 & -\pi \le \theta_2 < 0 \\ 836 - \dfrac{\theta_2 - \pi}{0.00479} & 0.78539 \le \theta_2 \le \pi \end{cases} \qquad (1.101)$$

Link 3

$$\theta_3 = \begin{cases} (500 - E_3)(-0.00471) & 0 \le E_3 \le 500 \\ (E_3 - 500)(0.00471) & 500 < E_3 \le 1000 \end{cases} \qquad (1.102)$$

$$E_3 = \frac{\theta_3}{0.00471} + 500 \qquad (1.103)$$

Link 4

$$\theta_4 = \begin{cases} (500 - E_4)(-0.00314) & 0 \le E_4 < 500 \\ (E_4 - 500)(0.00314) & 500 \le E_4 \le 1000 \end{cases} \qquad (1.104)$$

$$E_4 = \frac{\theta_4}{0.00314} + 500 \qquad (1.105)$$

Link 5

θ5 = E5(−0.00314) (1.106)

$$E_5 = \frac{\theta_5}{-0.00314} \qquad (1.107)$$

Link 6

θ6 = E6(−0.00314) (1.108)

$$E_6 = \frac{\theta_6}{-0.00314} \qquad (1.109)$$
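
A minimal code sketch of these conversions is given below for Links 1 and 2; the function names are illustrative, while the numeric constants are taken directly from Equations 1.98 and 1.100.

    import math

    def link1_steps_to_rad(E1):
        """Link 1 encoder-step to radian conversion, Equation 1.98."""
        return 0.0047115 * (500.0 - E1)

    def link2_steps_to_rad(E2):
        """Link 2 conversion over its three numeric intervals, Equation 1.100."""
        if 0 <= E2 <= 169:
            return E2 * (-0.00464) + 0.78525
        if 169 < E2 < 836:
            return -(E2 - 169) * 0.00471
        if 836 <= E2 <= 1000:
            return math.pi - (E2 - 836) * 0.00471
        raise ValueError("encoder value outside the 0-1000 range")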



Link   Length a_i   Twist Angle α_i   Offset d_i   Variable θ_i/φ_i
L4     0            −π/2              0            θ4
L5     0            π/2               0            θ5
L6     0            0                 d6           θ6

Table 1.6: DH Parameters for the last three links of an anthropomorphic arm.
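
For completeness, the sketch below evaluates the wrist transform implied by the parameters in Table 1.6 using the standard DH homogeneous transform, assumed here to match the convention adopted earlier in the chapter; the joint angles and the offset value d6 are arbitrary illustrative numbers.

    import numpy as np

    def dh_matrix(a, alpha, d, theta):
        """Standard DH homogeneous transform built from (a, alpha, d, theta)."""
        ca, sa = np.cos(alpha), np.sin(alpha)
        ct, st = np.cos(theta), np.sin(theta)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    # Wrist transform 3T6 from the parameters in Table 1.6, with illustrative values.
    d6 = 0.1                                         # hypothetical tool offset [m]
    T36 = (dh_matrix(0.0, -np.pi / 2, 0.0, 0.2)      # L4, theta4 = 0.2 rad
           @ dh_matrix(0.0,  np.pi / 2, 0.0, -0.4)   # L5, theta5 = -0.4 rad
           @ dh_matrix(0.0,  0.0,       d6,  0.6))   # L6, theta6 = 0.6 rad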

1.7 Chapter overview

In this chapter the position and orientation of a rigid body with respect to a given coordinate frame has been introduced. Different pose representation conventions are discussed to introduce the homogeneous transforms. Such a representation is used to initiate the analysis of robot kinematics. The most relevant concepts regarding motion relationships between the links are discussed by means of the Denavit-Hartenberg convention. The discussion also portrays resolved-rate control, and the chapter finishes by presenting some information regarding the interface to the links of the TQ MA2000 robot. All the topics are arranged in such a way that the discussion of visual control of robots in subsequent chapters naturally matches the concepts discussed in this chapter. Relevant tools are also mentioned in this chapter, such as the Robot Toolbox for Matlab written by Corke [19] and the real-time CSC Visual Servoing library (CSC VS), which is formally presented as one of the most relevant contributions of this book in the following chapters. The CSC VS library is used to implement all the visually-guided robotic experiments developed in the next chapters of this book.



Bibliography

[1] P.J. McKerrow. Introduction to Robotics. Addison-Wesley, 1990. ISBN 0201182408.

[2] B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3):313–326, June 1992.

[3] L. Sciavicco and B. Siciliano. Modelling and Control of Robot Manipulators. Advanced Textbooks in Control and Signal Processing. Springer-Verlag, 2nd edition, 2002. ISBN 1852332212.

[4] W.R. Hamilton. On quaternions or on a new system of imaginaries in algebra. Philosophical Magazine, XXV:10–13, July 1844.

[5] Ken Shoemake. Animating rotation with quaternion curves. ACM SIGGRAPH, 19(3):245–254, 1985.

[6] Michael Brady. Robot Motion: Planning and Control, chapter Trajectory Planning, pages 221–243. Series in Artificial Intelligence. The MIT Press, 1982. ISBN 0-262-02182-X.

[7] C.D. Crane and Joseph Duffy. Kinematic Analysis of Robot Manipulators. Cambridge University Press, 1998.

[8] J. Funda and R.P. Paul. A comparison of transforms and quaternions in robotics. Proceedings of the IEEE International Conference on Robotics and Automation, 2:886–891, April 1988.

[9] Richard P. Paul. Robot Manipulators: Mathematics, Programming and Control. MIT Press, 1981. ISBN 026216082X.

[10] Peter Corke. Visual Control of Robots: High Performance Visual Servoing. Research Studies Press, 1996. ISBN 0863802079.

[11] J. Denavit and R.S. Hartenberg. A kinematic notation for lower pair mechanisms based on matrices. Journal of Applied Mechanics, 22:215–221, June 1955.



[12] C.S.G. Lee. Robot arm kinematics, dynamics and control. Computer, 15(12):62–80, 1982.

[13] D.E. Whitney. The mathematics of coordinated control of prosthetic arms and manipulators. ASME Journal of Dynamic Systems, Measurement and Control, 94:303–309, 1972.

[14] R.P. Paul, B.E. Shimano, and G. Mayer. Kinematic control equations for simple manipulators. IEEE Transactions on Systems, Man and Cybernetics, 11:449–455, 1981.

[15] M. Spong and M. Vidyasagar. Robot Dynamics and Control. John Wiley, 1989. ISBN 047161243X.

[16] J.J. Craig. Introduction to Robotics: Mechanics and Control. Addison-Wesley Pub. Co., 1989. ISBN 0201095289.

[17] C. Samson, M. Le Borgne, and B. Espiau. Robot Control, a Task Function Approach. Oxford Engineering Series, 22. Oxford Science Publications, 1990. ISBN 0-198-53805-7.

[18] Wolfram Stadler. Analytical Robotics and Mechatronics. Series on Electrical and Computer Engineering. McGraw-Hill, 1995. ISBN 0070606080.

[19] P.I. Corke. A robotics toolbox for MATLAB. IEEE Robotics and Automation Magazine, 3(1):24–32, March 1996.

[20] S. Chiaverini, L. Sciavicco, and B. Siciliano. Control of robotic systems through singularities. International Workshop on Adaptive and Nonlinear Control: Issues in Robotics, Grenoble, France, 1990.

[21] A.A. Goldenberg, B. Benhabib, and R.G. Fenton. A complete generalized solution to the inverse kinematics of robots. IEEE Journal of Robotics and Automation, RA-1(1):14–20, March 1985.

[22] L.T. Wang and B. Ravani. Recursive computations of kinematic and dynamic equations for mechanical manipulators. IEEE Journal of Robotics and Automation, RA-1(3):124–131, September 1985.

[23] R.P. Paul, B.E. Shimano, and G. Mayer. Differential kinematic control equations for simple manipulators. IEEE Transactions on Systems, Man and Cybernetics, 11:456–460, 1981.

[24] Yoram Koren. Robotics for Engineers. McGraw-Hill, 1985. ISBN 0070353999.



[25] TQ Equipment. TQ MA2000 six-DOF robot, user manual. Nottingham, UK, 1985.

[26] TQ Equipment. TQ MA2000 robot, digital electronics schematic. Nottingham, UK, 1985.
