Workspace Generation for Multifingered Manipulation


  • 7/27/2019 Workspace Generation for Multifingered Manipulation


This article was downloaded by: [81.161.248.93] On: 13 August 2013, At: 10:47. Publisher: Taylor & Francis. Informa Ltd, Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Advanced Robotics
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/tadr20

Workspace Generation for Multifingered Manipulation
Yisheng Guan a, Hong Zhang b, Xianmin Zhang c & Zhangjie Guan d

a School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, Guangdong 510640, P. R. China; Email: [email protected]

b School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, Guangdong 510640, P. R. China; Department of Computing Science, University of Alberta, Edmonton, AB, Canada

c School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, Guangdong 510640, P. R. China

d School of Aeronautics Science and Engineering, Beijing University of Aeronautics and Astronautics, Beijing, P. R. China

    Published online: 02 Apr 2012.

To cite this article: Yisheng Guan, Hong Zhang, Xianmin Zhang & Zhangjie Guan (2011) Workspace Generation for Multifingered Manipulation, Advanced Robotics, 25:18, 2293-2317, DOI: 10.1163/016918611X603837

    To link to this article: http://dx.doi.org/10.1163/016918611X603837

    PLEASE SCROLL DOWN FOR ARTICLE

Taylor & Francis makes every effort to ensure the accuracy of all the information (the "Content") contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by


Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.

    This article may be used for research, teaching, and private study purposes.

Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions


Advanced Robotics 25 (2011) 2293-2317 brill.nl/ar

    Full paper

    Workspace Generation for Multifingered Manipulation

Yisheng Guan a,*, Hong Zhang a,b, Xianmin Zhang a and Zhangjie Guan c

a School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, Guangdong 510640, P. R. China
b Department of Computing Science, University of Alberta, Edmonton, AB, Canada
c School of Aeronautics Science and Engineering, Beijing University of Aeronautics and Astronautics, Beijing, P. R. China

    Received 30 November 2010; accepted 11 January 2011

    Abstract

In this paper, we propose a novel numerical approach and algorithm to compute and visualize the workspace of a multifingered hand manipulating an object. Based on feasibility analysis of grasps, the proposed approach uses an optimization technique to first compute discretely the position boundary of the grasped object and then calculate the rotation ranges of the object at specified positions within the boundary. In other words, workspace generation with the approach is fulfilled by obtaining reachable boundaries of the grasped object in the sense of both position and orientation, and the discrete boundary points are computed by a series of optimization models. Unlike in workspace generation of other robotic systems, where only geometric and kinematic parameters of the robots are considered, all factors, including geometric, kinematic and force-related factors, that affect the workspace of a hand-object system can be taken into account in our approach to generate the workspace of multifingered manipulation. Since various constraints can be integrated into the optimization models, our method is general and complete, with adaptability to various grasps and manipulations. Workspace generation with the approach in both planar and spatial cases is illustrated with examples. The approach provides an effective and general solution to the long-term open and challenging problem of workspace generation of multifingered manipulation. Part of the work has been published in the Proceedings of IEEE/RSJ IROS 2008 and IEEE/ASME AIM 2008. © Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2011

    Keywords

    Workspace generation, reachable boundary, multifingered manipulation, grasp feasibility

    1. Introduction

    Knowledge about the workspace of a robot is crucial in motion planning and control

    of the robot in various tasks. The importance of the workspace was first realized in

    * To whom correspondence should be addressed. E-mail: [email protected]



the early robotics research for manipulators, which can be traced back to 20 years ago. Many methods have been proposed for the workspace of conventional robots, among which are geometric analysis [1, 2], random search by the Monte Carlo method [3, 4] and polynomial discriminant [5, 6]. Research on the workspace of a parallel manipulator can also be found in the literature [7-9]. The workspace of a space manipulator was studied in Ref. [10] using the conventional analytic or geometric method based on the concept of a virtual manipulator for a space robotic system consisting of a manipulator and a carrying vehicle. The convolution product of real-valued functions on the special Euclidean group was introduced and applied to the determination of the workspace of manipulators that have a finite number of joint states [11]. A diffusion-based algorithm was developed for snake-like hyper-redundant robots in Ref. [12]. Actually, the importance of the workspace lies not only in the planning and control of robotic motion and manipulation, but also in robot design, where the workspace is used as a criterion so that the designed robot has maximum workspace [13-15]. In addition to robotic systems, workspace analysis has also been carried out for human systems. The reachable envelope of the human upper extremity was studied in Ref. [16], where the upper extremity was modeled as a 9-d.o.f. system and its reachable envelope was generated by analytical formulation. Since there are a total of 232 singular surfaces for the workspace of the 7-d.o.f. arm/hand, it is impossible or impractical (if applicable) to use this method for complex robots with more d.o.f. and three-dimensional (3-D) multifingered hands grasping an object, which is a complicated system usually with more than 9 d.o.f. and a lot of constraints.

Although workspaces of many robots have been extensively studied, little work has been done on those of multifingered hands or multiple robot systems. This may be in part because of the difficulty in this case. The earliest and most notable work on the workspace of a multifingered hand should be credited to Ref. [17], in which the workspace of a restricted set of hands was analyzed using geometric methods. The workspace of each finger was assumed and the relationship between them was considered in the derivation of the workspace of the hand. Unfortunately, the proposed solution has certain critical limitations, including some restricted assumptions such as fixed-point contact, not considering rotation d.o.f. for each position in the workspace, and, especially, not taking into account collisions or intersections between the object and the fingers and those between the finger links themselves. Due to so many affecting factors, the workspace of a multifingered hand manipulating an object is not the intersection or union of the workspaces of the fingers in the hand. Therefore, geometric, analytic or other conventional methods or algorithms presented in the literature could not provide a good solution to this problem. Instead, as pointed out in Ref. [17], for actual hands with irregularly shaped workspaces, "it is likely that numerical methods will be the best way to determine the total workspace".

In this paper, we propose precisely such a numerical or computational method to obtain the workspace of a multifingered manipulation in a static or quasi-static state.


The presented method originates from our research on feasibility analysis of multifingered grasps [18-20]. The method uses a numerical optimization technique to first find the boundary of the object position without restrictions on its orientation and then to compute the rotation ranges of the grasped object at specific feasible positions within the boundary, from which the workspaces are built. Both the 2-D or planar and 3-D or spatial cases of multifingered manipulation are addressed for workspace generation. The evaluation of grasp constraints, the optimization models, the visualization of the workspaces and, hence, the corresponding algorithms are similar, but different. While the workspace of 2-D multifingered manipulation is visualized in a 3-D coordinate frame, that of 3-D multifingered manipulation is decomposed into two categories: the position workspace, meaning the reachable space of the grasped object, and the orientation workspace, representing the rotational angles of the object at a fixed position within the position workspace about various axes through the specific position, depicted in different 3-D coordinate frames. Separation of the workspace is a common method for visualization of high-dimensional space, and has also been used for workspace analysis of parallel robots [9], a double parallel manipulator using an analytic method [8] and in the Piano Mover's Problem [21, 22].

Constraints in the force domain, including force equilibrium, limits of joint torques and frictional constraints, are crucial and essential in a multifingered hand-object system. The workspace of such a system hence has quite different characteristics from those of other conventional robots and, hence, needs a novel method to obtain it. Since our optimization technique for workspace generation can incorporate various constraints on multifingered grasps, all factors that affect the workspace of a robotic hand-object system (including the geometry of the object, the kinematics of the multifingered hand and force-related factors, such as force equilibrium of the grasped object, grasp force, the limits of driving torques of the hand and frictional constraints) can be taken into account in the workspace generation of multifingered manipulation. The presented method takes into account the factors ignored in Ref. [17] and, thus, is more general. It is adaptive to various polygonal objects to be manipulated, to various contact states between the fingers and the object, and to any number of fingers in the hand. As a result, the proposed numerical method provides a good and complete solution to the problem of the workspace of multifingered manipulation or coordination of multiple robots. We also point out that the optimization-based numerical framework or methodology for workspace generation may be widely applied to parallel robots and other sophisticated robotic systems, including humanoid robots [23].

    2. Algorithm

    2.1. Definitions and Notations

    Consider the grasp of an object using a multifingered hand (Fig. 1). Assume that

    the hand has m fingers, with n joints in total. Denote the joint angles by a vector



Figure 1. Multifingered hands and objects: (a) 2-D case, (b) 3-D case and (c) hand grasping a box.

θ = (θ1, θ2, . . . , θn). θ is the parameter describing the hand configuration. The

    hand structurally consists of tips and links of the fingers. Topologically, we regard

    a fingertip as a point t and a link as a line segment l, and call them the topological

    features of the hand. Then, the set of topological features for the hand is defined as

H = {l1, l2, . . . , ln, t1, t2, . . . , tm}, where li (i = 1, 2, . . . , n) and tj (j = 1, 2, . . . , m) represent the ith link and the jth fingertip, respectively.

    In the 2-D or planar case (Fig. 1a), the configuration of an object may be de-

scribed by the position (x, y) and orientational angle φ of the object frame O, with

    respect to the palm frame P. Topologically, a polygonal object consists of features

    including vertices and edges. Denote the sets of vertices and edges of the object by

V = {vi} and E = {ej}, respectively, where vi and ej represent the ith vertex and jth edge in turn. Each edge has two vertices. We use ei(vj, vk) to denote the edge formed by two vertices vj and vk. The set of topological features for the object is therefore O = {V, E}.

    In the 3-D or spatial case (Fig. 1b), the configuration of the object is described by

the position (x, y, z) and orientation (α, β, γ), in RPY or Euler angles. A polyhedral

    object consists topologically of faces in addition to vertices and edges. Similar to

    the sets of edges and vertices (E and V), the set of faces of the object is denoted

by U = {uk}, where uk represents the kth face. (We do not use F and fk here for the face feature, since we reserve them for force notation later on.) A face contains several vertices and edges. We use ui(v1, v2, . . . , vk) to denote the face containing vertices v1, v2, . . . , vk. Therefore, the set of topological features for the object is O = {V, E, U}.

Define a contact pair, c = (h, o), between the hand and the object to be a set of two topological features in contact, one from the hand and the other from the object, where h ∈ H and o ∈ O. In the 3-D case there are six typical contact pairs between the hand and the object, which are: (t, u), tip-face; (l, u), link-face; (l, e), link-edge; (t, v), tip-vertex; (t, e), tip-edge; and (l, v), link-vertex. We consider only the first three types of contact pairs, and discard the last three types, since they are degenerate cases and seldom take place in a grasp. In the 2-D case, there are four typical contact pairs, without (t, u) and (l, u) in the above list, and the pair (t, v), which is a kind of point-point or fixed-point contact, seldom takes place.

    The state of contact in a grasp can be defined by a set of contact pairs, and a

    canonical grasp is defined as a set of contact pairs, g = {c1, c2, . . . , cnc }, which

    involves at least two fingers and can be achieved kinematically by the hand. With


    the above definitions and notations, we now define the workspace of a multifingered

    hand as follows:

Workspace: For a given grasp, the workspace of a multifingered manipulation is the range of possible motion in both position and orientation of the object that is being manipulated by the hand maintaining the grasp.

    Note that in the definition, the given grasp should be kept, which means that gaits

    are not involved in the workspace of a multifingered manipulation. The workspaces

    of multifingered manipulation have quite different characteristics from those of ma-

    nipulators or other conventional robots and depend essentially on many affecting

    factors, including the geometry (sizes and shapes) of the grasped object, the kine-

    matic properties of the hand, the rolling and sliding that occur at the contacts, and

the locations of the contacts on the object and the hand [17]. To emphasize these factors, we prefer the term "workspace of multifingered manipulation" to "workspace of a multifingered hand".

    2.2. Algorithm Description

    Workspace generation is to find the reachable range of the object motion in manipu-

    lation by a multifingered hand maintaining a given grasp. Using the above notations,

    we now propose the algorithm to obtain the workspace. This algorithm is based on

kinematic feasibility analysis in our previous work [18-20], which uses a method of global optimization modeling. The ideas of the algorithms for the 2-D and 3-D cases are the same, but the steps are different, with different visualization due to the different

    dimensions.

    2.2.1. In the 2-D Case

    As stated before, in the 2-D case, the configuration of an object is described by

a linear vector (x, y) and an angular scalar φ; the workspace is therefore of three

    dimensions and hence can be depicted in a 3-D coordinate frame, although the

    units of the three components are not the same. The algorithm (Algorithm 1) can be

    described as follows:

    (i) Maintaining the given grasp, find the maxima and minima of the object posi-

    tion in X and Y directions xmax, xmin and ymax, ymin with respect to the hand

    coordinate frame.

(ii) Construct the rectangle in the XY plane bounded by xmax, xmin and ymax, ymin obtained in Step (i), and divide it into a grid with reasonable resolution using horizontal and vertical lines, as shown in Fig. 2a.

(iii) Along the horizontal or vertical grid lines, find the boundary points of the workspace. Connect adjacent boundary points in turn to form boundary curves of the workspace.

(iv) For each node within the workspace boundary, find the maximum and minimum of the object orientation angle φ maintaining the grasp feasibility, and


Figure 2. Local coordinates and their slicing/gridding for workspace generation: (a) 2-D case, (b) (α, β, γ), (c) local coordinate grid and (d) 3-D case.

check whether the grasp remains feasible for the intermediate values of φ between the extrema.

(v) Upon completing the above two steps for all the nodes in the grid, render the workspace graphically in a 3-D frame with coordinates (x, y, φ).

    2.2.2. In the 3-D Case

Algorithm 1 can be extended to the 3-D case. In the 3-D case, since the configuration of an object is described by a position vector P(x, y, z) and an angular vector (α, β, γ), as stated previously, the workspace is 6-D and, hence, cannot be drawn in one 3-D coordinate frame. Nevertheless, we can visualize the ranges of position and orientation of the manipulated object separately in different frames. For

    convenience, we call the space of the possible object position and that of the possi-

    ble object orientation at a specific position the position workspace and orientation

    workspace, respectively. Note that, unlike the workspace with constant orientation

    as defined in Ref. [9], the orientation of the grasped object is not fixed or prede-

    fined in the generation of the position workspace, although the orientation can be

    fixed to get a subset of the position workspace. However, the orientation workspace

depends on the object location; that is, only at one specified point in the position workspace is the orientation workspace computed and visualized, although the total orientation workspace can be also generated and visualized in a similar manner,

    without specifying or fixing the object location in the position workspace. For the

convenience of visualization, we redefine the orientation parameter (α, β, γ) as follows. Let (α, β) ∈ Ω be the local coordinates of a unit spherical surface determining a unique axis about which the object is to rotate by γ, as shown in Fig. 2b, where Ω = {(α, β): -π < α ≤ π, -π/2 ≤ β ≤ π/2} and 0 ≤ γ < 2π or -π < γ ≤ π [25]. The algorithm (Algorithm 2) for the 3-D case then consists of the following steps:

(i) Find the extreme (maximal and minimal) positions of the origin O of the object frame in the X, Y and Z directions, xmin, xmax, ymin, ymax, zmin and zmax, respectively, with respect to the hand base frame.


    (ii) Use the extreme positions obtained in Step (i) to construct a box whose six

    faces are defined by x = xmin, x = xmax, y = ymin, y = ymax, z = zmin and

    z = zmax. The workspace is clearly enclosed by this box, as shown in Fig. 2d.

    (iii) Uniformly slice the above box along the Z direction using horizontal planes

    with a reasonable resolution determined according to the size of the box.

    (iv) Uniformly slice the box along the Y direction using a series of vertical planes

    parallel to the plane XOZ with a reasonable resolution determined by the box

    size.

    (v) The horizontal and vertical slices obtained above intersect to give rise to hor-

    izontal line segments parallel to the X-axis of the world frame, as illustrated

    by ab in Fig. 2d. On these segments, we find the maximal and minimal values

    of the x coordinate of the reference point, if they exist. Thus, we obtain two

    points of the reachable boundary on each intersected line segment. All such

    points form a boundary curve on one horizontal slice.

    (vi) Similarly, the extreme points on each vertical slice form a boundary curve on

    that slice. All these horizontal and vertical boundary curves form the position

    workspace.

(vii) At a specified point in the position workspace, there is a corresponding range of possible rotation of the object. To obtain this rotation range, divide the rectangular area Ω of the local coordinates (α, β) into a grid with a reasonable resolution to obtain a set of nodes, as shown in Fig. 2c. At each node, obtain its corresponding rotation vector.

(viii) For each rotation vector obtained in Step (vii), calculate the maximal rotation angle γmax about it, using the optimization model based on feasibility analysis.

(ix) On the rotation axis, plot a point whose distance from the origin of the object local frame is equal to the maximum γmax obtained in Step (viii).

(x) Every three adjacent points on the axes drawn in Step (ix) form a facet. All such facets form approximately the orientation workspace at the given object position.

(xi) For the rotation workspace at other points within the position workspace, repeat Steps (vii)-(x).

Note that in Step (viii) it is not necessary to calculate the minimal rotation angle γmin about the rotation vector, since to do so is in fact to obtain the maximal one about the inverse rotation vector. The local coordinate plane Ω covers the whole surface of a sphere and, equivalently, all rotation vectors.


    2.2.3. Remarks on the Algorithms

    In the algorithms, we use an optimization technique to get the extreme points of the

    workspace based on the kinematic feasibility of the given grasp. If the position of

the object is out of the position workspace, or if an orientation is out of the orientation workspace, then the grasp is not feasible. In other words, for any configuration

    of the object within its workspace, the grasp is feasible. Therefore, the feasibility

    analysis and the associated optimization model play a key role in the algorithms.

    They take into account the following factors for the workspace of a multifingered

    manipulation, as will be illustrated in the following sections:

Hand kinematics, such as link lengths and the locations/distribution of fingers in the hand.

    Object geometry, including the shape and size of the object.

    The grasp itself, including the grasp type (e.g., fixed-point grasp) and contact

    positions.

    Force constraints, including force equilibrium, frictional forces, grasp forces

    and joint torques.

    Other constraints, including joint limits, contact constraints and collision-free

    constraints.

In determining the kinematic feasibility of a grasp, there are two central constraints: contact constraints, which describe the contact between two topological features of the hand and the object, and collision-free constraints, which are required for other topological features not to make contact. They are the key to the optimization models. In the following sections, we first formulate them and then set up the optimization models used in the algorithms.

    3. Constraint Formulation

Since we use points/vertices, segments/edges and faces as the topological features of the hand and the object, a contact constraint or a collision-free constraint can be evaluated by the spatial relationship among them. It was found that these constraints can be conveniently and concisely evaluated by the computation of signed triangular areas and tetrahedral volumes from computational geometry [24]. In this section, we review these constraint formulations. The details can be found in our previous work on the feasibility analysis of multifingered grasps [18-20].

    3.1. Basic Lemmas

    It is known from computational geometry that the relationship between three points

    or a point and a directed line segment can be completely described by the signed

    area of the triangle formed by them, with the following lemma.


    Figure 3. Signed triangular area and tetrahedral volume, and their application in constraints: (a) 2-D

    area, (b) 3-D area, (c) tetrahedral volume and (d) collision avoidance.

Lemma 1. Given three points p1, p2 and p3, p3 is to the left (right) of the directed line p1p2 if and only if Ap1p2p3 > 0 (< 0); p3 is collinear with p1p2 if and only if Ap1p2p3 = 0,

where Ap1p2p3 is the area of the triangle formed by the three ordered points p1(x1, y1), p2(x2, y2) and p3(x3, y3), and is calculated easily as Ap1p2p3 = (1/2)(x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2)). This area may be positive, zero or negative, depending on the relative positions of these three ordered points, as shown in Fig. 3a.

Correspondingly, the spatial relationship between three points p1, p2 and p3 in the 3-D case can be determined by the cross product: sp1p2p3 = (p2 - p1) × (p3 - p1). The magnitude of the vector sp1p2p3 is equal to twice the area A(p1, p2, p3) of the triangle formed by p1, p2 and p3, and the direction of sp1p2p3 indicates the orientation of the plane in which these points lie and the location (the left or right) of p3 relative to the vector p1p2, as illustrated in Fig. 3b. The volume of the tetrahedron formed by four ordered points p1(x1, y1, z1), p2(x2, y2, z2), p3(x3, y3, z3) and p4(x4, y4, z4) can be calculated by:

Vp1p2p3p4 = (1/6) det
| x1 y1 z1 1 |
| x2 y2 z2 1 |
| x3 y3 z3 1 |
| x4 y4 z4 1 |.

This volume may be positive, negative or zero, depending on the order of the points in the calculation. We define the above side of a plane p1p2p3 to be along (p2 - p1) × (p3 - p1), and the below side to be the opposite. Then the relationship among four points, or between a point and an oriented plane, can be described by the following lemma (Fig. 3c).

Lemma 2. Given four points p1, p2, p3 and p4, p4 is below (above) the plane p1p2p3 if and only if Vp1p2p3p4 > 0 (< 0); p4 is coplanar with p1, p2 and p3 if and only if Vp1p2p3p4 = 0.

Based on the above lemmas, contact constraints and collision-free constraints can be easily evaluated, as shown in the following subsections.


    Figure 4. Evaluation of contact and collision avoidance in the 2-D case.

    3.2. Contact and Collision-Free Constraints in the 2-D Case

3.2.1. Vertex-Link Contact Constraint

For a vertex (point) vi to make contact with a link (line segment) lk(pk, pk+1) (Fig. 4a), it must be satisfied that: (i) vi and pk, pk+1 are collinear and (ii) vi is within the segment pkpk+1. These conditions can be formulated as

A(pk, pk+1, vi) = 0 & max(A(vi-1, vi, pk), A(vi, vi+1, pk+1)) < 0,

where max(·) is the function finding the maximum among its input parameters, used here to describe the conjunction of several conditions. The second formula also guarantees that the line segment would not penetrate into the object (the penetration case is shown in Fig. 4b).
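A direct transcription of this test as a sketch; the helper `signed_area` computes the Lemma 1 area, and the function names, the counter-clockwise vertex order and the tolerance `eps` are our assumptions:

```python
def signed_area(p1, p2, p3):
    """Signed triangle area of Lemma 1."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

def vertex_link_contact(v_prev, v, v_next, pk, pk1, eps=1e-9):
    """Vertex-link contact: the vertex v must be collinear with the link
    pk-pk1, and both neighbouring triangle areas must be negative, which
    keeps the link outside the (counter-clockwise) object boundary."""
    collinear = abs(signed_area(pk, pk1, v)) < eps
    outside = max(signed_area(v_prev, v, pk), signed_area(v, v_next, pk1)) < 0
    return collinear and outside
```

For the counter-clockwise unit square, the corner (1, 0) touching a link that passes it from outside satisfies the test, while a link through the same corner that would cut into the square fails it.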

3.2.2. Edge-Link Contact Constraint
The condition that an edge ei(vi, vi+1) makes contact with a link lk(pk, pk+1) can be similarly derived and translated into the following set of equalities and inequalities (Fig. 4c):

A(pk, pk+1, vi) = 0 & A(pk, pk+1, vi+1) = 0 &
min{A(vi-1, vi, pk) · A(vi-1, vi, pk+1), A(vi+1, vi+2, pk) · A(vi+1, vi+2, pk+1)} < 0,

where min(·) is the function to find the minimum among its input parameters, used here to describe the disjunction of several conditions, while the products inside describe the conjunction of conditions.

3.2.3. Collision-Free Constraint

Collision-free constraints can be modeled in a similar way. For example, the condition for two line segments not to intersect is that both endpoints of at least one segment lie on the same side of the other, as shown in Fig. 4d. Therefore, the condition for a link p1p2 and an edge v1v2 not to collide with each other is:

    max(A_{p1 p2 v1} · A_{p1 p2 v2}, A_{v1 v2 p1} · A_{v1 v2 p2}) > 0.
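This condition is a one-line predicate once the signed area is available; a Python sketch (illustrative names):

```python
# Sketch of the 2-D collision-free test: a link p1p2 and an edge v1v2 do not
# cross if both endpoints of at least one segment lie strictly on the same
# side of the other, i.e. the max of the two signed-area products is positive.

def signed_area(a, b, c):
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def segments_disjoint(p1, p2, v1, v2):
    s1 = signed_area(p1, p2, v1) * signed_area(p1, p2, v2)
    s2 = signed_area(v1, v2, p1) * signed_area(v1, v2, p2)
    return max(s1, s2) > 0
```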

    3.3. Contact and Collision-Free Constraints in the 3-D Case

3.3.1. Point–Face Contact Constraint

Point–face contact occurs when a fingertip keeps in touch with a surface of the object. The necessary and sufficient condition for a point t to be on a convex face u(v1, v2, . . . , vk) can be described as

    V_{v1 v2 v3 t} = 0 & min(s_{v1 v2 t} · s_{v2 v3 t}, . . . , s_{v1 v2 t} · s_{vk v1 t}) > 0,

where the first formula guarantees that the point is on the plane in which the face lies, and the second guarantees that the point is furthermore within the face. The minimum function min(·) is used here to describe the conjunction of several conditions.
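A numerical sketch of this test follows (Python, illustrative names). The signed areas are taken relative to the face normal, following the s_{vi vi+1 t} terms above, and the strict inequality means boundary points are excluded:

```python
# Sketch of the point-face contact test: t must be coplanar with the convex
# face and strictly inside it; names are illustrative.

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def point_on_convex_face(t, verts, eps=1e-9):
    v1, v2, v3 = verts[0], verts[1], verts[2]
    n = _cross(_sub(v2, v1), _sub(v3, v1))          # face normal
    if abs(_dot(n, _sub(t, v1))) > eps:             # coplanarity (V = 0)
        return False
    # s_i: signed area of (v_i, v_{i+1}, t) w.r.t. n; all must share a sign.
    s = [_dot(_cross(_sub(verts[(i + 1) % len(verts)], verts[i]),
                     _sub(t, verts[i])), n)
         for i in range(len(verts))]
    return min(s[0] * si for si in s[1:]) > 0
```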


3.3.2. Two-Segment Contact Constraint

For two line segments to contact each other, they must be coplanar and intersect, which can be described by the proposition: two line segments l(p1, p2) and e(v1, v2) intersect if and only if:

    V_{p1 p2 v1 v2} = 0 & max(s_{p1 p2 v1} · s_{p1 p2 v2}, s_{v1 v2 p1} · s_{v1 v2 p2}) < 0.

This proposition can be used to describe the contact between a finger link and an edge of the object.
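The proposition can be checked numerically as follows (a Python sketch with illustrative names). The signed areas are taken relative to a normal of the common plane, which assumes the two segments are not parallel:

```python
# Sketch of the two-segment contact test in 3-D: the segments properly cross
# when they are coplanar (V = 0) and mutually separate each other; otherwise
# they are collision-free. Non-parallel segments are assumed so that the
# common-plane normal n is well defined; names are illustrative.

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def segments_touch(p1, p2, v1, v2, eps=1e-9):
    d1, d2 = _sub(p2, p1), _sub(v2, v1)
    V = _dot(_cross(d1, _sub(v1, p1)), _sub(v2, p1))   # ~ V_{p1p2v1v2}
    if abs(V) > eps:
        return False                                   # not coplanar
    n = _cross(d1, d2)                                 # common-plane normal
    s1 = _dot(_cross(d1, _sub(v1, p1)), n)             # ~ s_{p1p2v1}, etc.
    s2 = _dot(_cross(d1, _sub(v2, p1)), n)
    s3 = _dot(_cross(d2, _sub(p1, v1)), n)
    s4 = _dot(_cross(d2, _sub(p2, v1)), n)
    return max(s1 * s2, s3 * s4) < 0                   # strict crossing
```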

3.3.3. Segment Collision-Free Constraint

When any of the conditions in the above proposition is false, there is no collision between the two line segments. That is, the constraint for two segments not to intersect is (i) V_{p1 p2 v1 v2} ≠ 0, i.e., |V_{p1 p2 v1 v2}| > 0, or (ii) s_{p1 p2 v1} · s_{p1 p2 v2} > 0, or (iii) s_{v1 v2 p1} · s_{v1 v2 p2} > 0. The disjunction of these conditions can be expressed as the corollary: two line segments l(p1, p2) and e(v1, v2) do not intersect if and only if:

    max(|V_{p1 p2 v1 v2}|, s_{p1 p2 v1} · s_{p1 p2 v2}, s_{v1 v2 p1} · s_{v1 v2 p2}) > 0.

3.3.4. Segment–Face Contact Constraint

The conditions for a line segment l(p1, p2) to make contact with a face u(v1, v2, . . . , vk) are that (i) the line segment and the face are coplanar, and (ii) at least one endpoint of the line segment lies on the face or (iii) the segment intersects at least one edge of the face. The conditions are described as:

    V_{v1 v2 v3 p1} = 0 & V_{v1 v2 v3 p2} = 0 & min{max(min_{p1 u}, min_{p2 u}), min(max_{ei l})} < 0.

The first two formulae are equivalent to condition (i) above. In the last formula, max(min_{p1 u}, min_{p2 u}) corresponds to condition (ii), and min(max_{ei l}) to condition (iii), where min_{pj u} (j = 1, 2) and max_{ei l} (i = 1, 2, . . . , k) are defined as

    min{s_{v1 v2 pj} · s_{vi vi+1 pj} (i = 2, . . . , k)}

and

    max(s_{p1 p2 vi} · s_{p1 p2 vi+1}, s_{vi vi+1 p1} · s_{vi vi+1 p2}).

3.3.5. Segment–Face Collision-Free Constraint

The satisfaction of the conditions in the preceding subsection guarantees segment–face contact; however, the violation of any one of them does not mean that the segment and the face do not collide with each other. For example, even if they are not coplanar, the link may still penetrate the face.

The sufficient condition for collision avoidance between a face u(v1, v2, . . . , vk) and a line segment l(p1, p2) can be expressed as:

    max{V_{v1 v2 v3 p1} · V_{v1 v2 v3 p2}, V_{v1 v2 v3 p1} · V_{vi vi+1 p1 p2} (i = 1, 2, . . . , k)} > 0,

where V_{v1 v2 v3 p1} · V_{v1 v2 v3 p2} > 0 means the two endpoints of the segment l are on the same side of the plane v1v2v3 in which the face u lies. If the two endpoints of l are on different sides of the plane, V_{v1 v2 v3 p1} · V_{vi vi+1 p1 p2} > 0 guarantees that the point where


l penetrates the plane is to the right side of the edge ei(vi, vi+1) and, therefore, outside of u (Fig. 3d). The above condition, together with the dissatisfaction of those in the previous subsection, forms the sufficient and necessary conditions of segment–face collision-free constraints.

    3.4. Constraints in the Force Domain

Under the static or quasi-static condition, the force constraints include force equilibrium, contact forces without slipping and joint torque limits. Suppose that the hand–object system is in a static or quasi-static state; then all forces acting on the object must sum to zero, and all forces applied to each finger must be balanced by the finger joint torques, which are always bounded. In addition, with frictional contact, the normal force must be sufficient to counter the tangential force in order for slipping not to occur.

The static equilibrium of the object means

    GF + W = 0,

where G = [G1 · · · Gnc] is the grasp matrix of dimension 3 × k (2-D case) or 6 × k (3-D case), nc is the number of contacts on the object, k = k1 + · · · + knc, and ki is the dimension of the wrench basis or friction cone at the ith contact. For frictionless point contact, ki = 1; for point contact with friction, ki = 2 (2-D case) or ki = 3 (3-D case); and for a soft finger in the 3-D case, ki = 4. W = [w1 w2 w3]T ∈ R3 (2-D case) or W = [w1 · · · w6]T ∈ R6 (3-D case) is the external wrench (including gravity) exerted on the object by the environment, F = [f1 · · · fnc]T ∈ Rk is the set of contact forces generated by the hand, and fi ∈ Rki is the ith contact force [25].

With a frictional point contact (vertex–link or fingertip–edge contact), to avoid slipping the contact force must lie within its friction cone, i.e., ‖fti‖ ≤ μ fni, where μ is the coefficient of friction, and fti ∈ R1 (2-D case) or fti ∈ R2 (3-D case) and fni ∈ R1 are the tangential and normal components of the contact force fi exerted at the ith contact position. The contact force is unilateral, i.e., the normal force must be compressive: fni > 0. With link–edge contact, the grasp forces are indeterminate, and it is reported that some of them are infeasible; they may be equivalent to a set of grasp forces with frictional point contacts. A comprehensive analysis of the indeterminate grasp forces in power grasps with a rigid-body model can be found in Ref. [26].

The condition of static equilibrium of the hand requires that the contact forces applied to the hand be balanced by the joint torques, which can be described as Jh^T F′ = τ, where Jh is the hand Jacobian, consisting of as many columns as joint torques and as many rows as contact force components, both normal and tangential. Jh depends on the locations of contact on the hand and is a function of the hand configuration θ. F′ is the reaction of F, i.e., the force exerted on the hand by the object, but described in the palm frame. τ is the set of joint torques of the hand. In practice, joint torques are limited, i.e., τ ∈ [TL, TU], where TL and TU are the lower and upper bounds of τ, respectively. Considering the torque limits and a zero lower bound TL = 0, we have the constraint 0 ≤ Jh^T F′ ≤ TU.
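As an implementation aside, for given numerical values these force-domain constraints can be checked directly. The sketch below (Python, 2-D frictional point contacts, plain nested lists as matrices, illustrative names) bundles the equilibrium, torque-limit, unilaterality and friction-cone checks:

```python
# Sketch of the force-domain checks of Section 3.4 for a candidate 2-D grasp
# with frictional point contacts: object equilibrium G F + W = 0, torque
# bounds 0 <= Jh^T F' <= TU, unilateral normal forces and friction cones.
# All names are illustrative.

def _matvec(M, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

def force_constraints_ok(G, F, W, JhT, TU, mu, eps=1e-6):
    """F is ordered [fn1, ft1, fn2, ft2, ...]; JhT is Jh transposed."""
    # (1) static equilibrium of the object: G F + W = 0
    if any(abs(g + w) > eps for g, w in zip(_matvec(G, F), W)):
        return False
    # (2) joint torque limits: 0 <= Jh^T F' <= TU (zero lower bound)
    tau = _matvec(JhT, F)
    if any(t < -eps or t > tu + eps for t, tu in zip(tau, TU)):
        return False
    # (3) unilateral, non-slipping contact forces
    fn, ft = F[0::2], F[1::2]
    return all(n > 0 and abs(t) <= mu * n for n, t in zip(fn, ft))
```

For example, two opposing frictional contacts squeezing a weightless object pass the test with μ = 0.3 but fail it when μ is too small for the applied tangential forces.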


    3.5. Other Constraints

When a grasp requires the exact locations of contact between two topological features (e.g., a fixed-point contact), only two points are involved and the constraint is trivial:

    (xi − xi′)² + (yi − yi′)² = 0 or (xi − xi′)² + (yi − yi′)² + (zi − zi′)² = 0

in the 2-D and 3-D cases, respectively, where (xi, yi) or (xi, yi, zi) and (xi′, yi′) or (xi′, yi′, zi′) are the coordinates of the desired locations of the contact points on the hand and on the object, respectively. All coordinates are with respect to a common coordinate frame. The condition implies that each term in the above equations is zero and hence each pair of corresponding coordinate components is equal, i.e., xi = xi′, and so on.

For the joint limits of the hand, let

    ΘL = [θ1L θ2L · · · θnL]T and ΘU = [θ1U θ2U · · · θnU]T ∈ Rn

be the lower and upper bounds of the joint angles, respectively. Then the constraint on the joint angles θ is ΘL ≤ θ ≤ ΘU.

4. Optimization Models

The workspace of a robot may be formed and discretely described by its boundary surfaces or curves. The goal of workspace generation is hence to obtain these boundaries, which can be achieved by optimization techniques as follows.

4.1. In the 2-D Case

In Step (i) of Algorithm 1, we need to find the extrema of the object position x and y, considering all constraints in a grasp. This can be cast as a problem of constrained optimization. Based on the constraint evaluation in the previous section, the global optimization model has the form:

    max ζ (or min ζ)
    s.t.
        ξL ≤ ξ ≤ ξU
        θL ≤ θ ≤ θU
        Ai = 0 (i = 1, 2, . . . )
        Aj > 0 (< 0) (j = 1, 2, . . . )
        mink{·} > 0 (< 0) (k = 1, 2, . . . )
        maxl{·} > 0 (< 0) (l = 1, 2, . . . )
        GF + W = 0
        0 ≤ Jh^T F′ ≤ TU
        −μ fni ≤ fti ≤ μ fni
        fni > 0 (i = 1, 2, . . . , nc),    (1)


where ζ ∈ {x, y} represents the position component to be maximized or minimized, and ξ ∈ {x, y} \ {ζ}, i.e., if ζ = x then ξ = y, and if ζ = y then ξ = x (when the extreme value of one position coordinate is computed, the constraint on the other coordinate should be considered). The superscripts L and U denote the lower and upper bounds of the corresponding variable, respectively. Ai is a signed triangular area coming from the contact constraints in the grasp. mink{·} and maxl{·} in the inequalities come from contact constraints and/or collision-free constraints. They, together with Ai and Aj, are functions of the joint angles θ, the object position (x, y) and the orientation φ. The four formulas in the lower part of the above model are constraints in the force domain, namely force equilibrium of the object, driving torque limits, slipping avoidance and unilateral contact force.

In Step (iii) of Algorithm 1, the search for the maxima or minima of one position variable (x or y) is performed along a horizontal or vertical gridding line. This is done in the optimization by fixing one position variable at some value. Using ξ = a (where a is the value associated with the search line) to replace the first constraint in (1), we get the required optimization model for this step.
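The search along a gridding line can be mimicked by a simple feasibility bisection when the feasible set on the line is an interval. The Python sketch below is an illustrative stand-in for this step: the paper itself solves model (1) globally with the LGO solver, and the feasibility predicate would bundle all grasp constraints, whereas here a toy region is used:

```python
# Illustrative stand-in for the Step (iii) search: along the gridding line
# x = a, find the largest feasible y by bisection, assuming feasibility is
# an interval on the line. 'feasible' stands in for the full set of grasp
# constraints; names are illustrative.

def max_feasible_y(feasible, a, y_lo, y_hi, tol=1e-6):
    if not feasible(a, y_lo):
        raise ValueError("lower end of the search line must be feasible")
    if feasible(a, y_hi):
        return y_hi
    while y_hi - y_lo > tol:
        mid = 0.5 * (y_lo + y_hi)
        if feasible(a, mid):
            y_lo = mid
        else:
            y_hi = mid
    return y_lo

# Toy stand-in for the feasibility test: the unit disc. Along the line
# x = 0.6 the boundary point is y = 0.8.
unit_disc = lambda x, y: x * x + y * y <= 1.0
y_boundary = max_feasible_y(unit_disc, 0.6, 0.0, 2.0)   # ~0.8
```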

In Step (iv) of Algorithm 1, at each node (within the boundary) of the grid, where the location of the object is given, we need to determine the range of object orientation that maintains the feasibility of the grasp. The objective function in the optimization model to fulfil this is max φ (or min φ), and the constraints are almost the same as in (1), but with (x − xd)² + (y − yd)² = 0 replacing the position constraint in (1). (xd, yd) is the coordinate of a grid node within the boundary of the workspace obtained in Step (iii) of Algorithm 1.

4.2. In the 3-D Case

In Step (i) of Algorithm 2, we need to find the maxima and minima of the object position x, y and z, respectively, considering the constraints on contact, collision avoidance and joint limits. This can be realized as a problem of constrained global optimization. Based on the constraint evaluation in the previous section, the optimization model can be described as:

    max w or min w
    s.t.
        Pw̄ ∈ [Pw̄L, Pw̄U]
        θ ∈ [θL, θU]
        Φ ∈ [ΦL, ΦU]
        Vi = 0 (i = 1, 2, . . . )
        maxj(·) > 0 (< 0) (j = 1, 2, . . . )
        mink(·) > 0 (< 0) (k = 1, 2, . . . ),    (2)

where w ∈ {x, y, z} represents the position variable to be maximized or minimized, w̄ ∈ {x, y, z} \ {w}, and Pw̄ stands for the position coordinates excluding w, e.g., Px̄ = (y, z). θL and θU are the vectors of the lower and upper bounds of the joint angles, respectively; Pw̄L, Pw̄U and ΦL, ΦU are the lower and upper bounds of the object position and orientation with respect to the palm frame, respectively. Vi is a tetrahedral volume and comes from the contact constraints in the grasp. The indices j and k run over a number of inequalities, with maxj(·) and mink(·) coming from contact constraints and/or collision-free constraints. They, together with Vi, are functions of θ and/or P, Φ. Note that the constraints in the force domain described in Section 3.4 can be directly incorporated into the optimization models in the same manner as in the 2-D case.

In Step (v) of Algorithm 2, the search for the maxima or minima of one position variable (x, y or z) is performed along a line (the intersection of two perpendicular slices). This is done in the optimization by fixing the other two variables at the values associated with the line.

In Step (viii), at each node of the grid, where the location of the object is given, we need to determine the range of the object rotation (a rotation angle, denoted ψ here, about an axis defined by two angles (α, β) on the unit sphere), which can be fulfilled using the optimization model:

    max ψ or min ψ
    s.t.
        (α, β) ∈ Ω
        ψ ∈ [ψL, ψU]
        (x − xd)² + (y − yd)² + (z − zd)² = 0
        Vi = 0 (i = 1, 2, . . . )
        maxj(·) > 0 (< 0) (j = 1, 2, . . . )
        mink(·) > 0 (< 0) (k = 1, 2, . . . ),    (3)

where (xd, yd, zd) is the position of the given grid node. If these constraints are violated in the computation, then the grasp is not kinematically feasible at this position, which means that the node is outside the manipulation workspace. Otherwise, the node is within the workspace and the maximum or minimum of the rotation angle ψ can be obtained at this node with the given grasp. Optimization model (3) is also employed in the later steps of the algorithm to find the rotation range of the object at a given position in the workspace, but with a specified value of (α, β) (a node in Ω) in the model.

Note that, in a canonical grasp, the grasp positions are not specified and the previous optimization models are amenable to this situation. If the grasp positions are defined (e.g., in a fixed-point grasp), then the following contact constraint should be incorporated in the optimization models:

    (xp − xq)² + (yp − yq)² + (zp − zq)² = 0,    (4)

where p(xp, yp, zp) and q(xq, yq, zq) are the contact points of the hand and the object, respectively.

    It can be seen that the optimization models for workspace computation are very

    complicated with a number of nonlinear functions. An efficient algorithm or soft-

    ware tool is required to solve them.


    5. Illustrative Examples

    To illustrate and verify the effectiveness of the proposed approach and the cor-

    responding algorithms, we provide several examples. In the examples, the opti-

    mization models are solved with the commercial software tool LGO (GUI version),

    which is a model development system for Lipschitz-continuous global optimization

    [27], developed by Pinter Consulting Services. The computation time with LGO

    depends on the box size (intervals) of variables and the algorithms used in the

    optimization solution (two algorithms, branch-and-bound and random search, are

    available in LGO), and it may vary a little bit in different runs for the same model.

It is possible to develop a system with LGO at its core to automatically solve all optimization models and visualize the workspaces. On a PC with an Intel Core 2 CPU (1.66 GHz) and 1 GB memory, the typical running time with LGO for each optimization model is 0.20 s in the 2-D case and 0.7 s in the 3-D case, including I/O actions. Since the analysis is off-line, this time is quite acceptable.

    5.1. In the 2-D Case

In this example, we generate the workspace of a hand grasping and manipulating a rectangular object of size 70 × 50 mm. Provided that the hand–object system is in a vertical plane, the object weight must be balanced. The hand is assumed to consist of two fingers with 2 d.o.f. each. Suppose the lengths of the links are l1 = 50, l2 = 40, l3 = 50, l4 = 40 and d = 60 (d is the distance between the two finger base points) (all in mm). Let θ1, θ3 ∈ [π/6, 2π/3] and θ2, θ4 ∈ [0, 5π/12] (in rad); their positive directions are shown in Fig. 1a. Assume that the limits of the driving torques of the hand are T1 = 400, T2 = 500, T3 = 500 and T4 = 400 (all in N·mm); the normal contact forces are fn1 ≤ 10 N and fn2 ≤ 10 N; the coefficient of friction is μ = 0.3. In the manipulation, the two fingertips t1 and t2 contact the two middle points, q1 and q2, of the left and right short edges, respectively; no sliding is allowed. The fixed-point contact constraints can then be expressed as (xt1 − xq1)² + (yt1 − yq1)² + (xt2 − xq2)² + (yt2 − yq2)² = 0, where the coordinates can be easily obtained from

the finger kinematics and the object configuration.

From feasibility analysis, it is found that the maximum weight of the object that can be grasped by the hand is wmax = 4.11 N. Now suppose that the object weight is wg = 3.5 N. From the optimization models in the form of (1), the extreme positions of the object are found, as listed in the second row of Table 1, and the correspond-

Table 1.
Extreme positions of the grasped object

    Weight      xmin      xmax     ymin     ymax     φmin,max
    wg = 4.0    −8.80      8.80    70.94    73.51    ∓1.36
    wg = 3.5   −42.89     42.89    64.41    84.23    ∓18.23
    wg = 0.0   −47.52     47.52    64.41    89.75    ∓18.58



    Figure 5. Hand-object configurations at the four extreme positions: (a) xmin, (b) xmax, (c) ymin and

    (d) ymax.

ing configurations of the hand–object system are shown in Fig. 5, where one of the base joints reaches its limit (Fig. 5a–c) or the base joints reach their driving capability (Fig. 5d; in other words, the base joints would need larger driving torques to lift the object higher without slipping). It can be seen from Fig. 5 that the configurations at xmin and xmax are symmetric about the Y-axis, which is due to the symmetry of the hand–object system about this axis.

The enclosing rectangle bounded by xmin, xmax, ymin and ymax is divided into a grid of 84 × 20 nodes (with intervals of 1 mm). Along the vertical gridding lines, a series of boundary points is obtained from the models in the form of (1), from which the boundary curve in the XY plane is plotted, as in Fig. 6a. At each node within the boundary curve, we then further calculate the minimum and maximum of the object orientation φ using models similar to (1), and check the grasp feasibility for the intermediate values of φ between its two extrema at a 1° resolution. In this example, the workspace along the φ-axis is continuous for each (x, y) node, as is intuitively obvious. Finally, the complete workspace is visualized in a 3-D frame, as shown in Fig. 6c, whose coordinates are the position (x, y) and the orientation φ. From Fig. 6, it can be seen that the projection of the workspace onto the XY plane is symmetric and the visualized 3-D workspace is axis-symmetric about the Y-axis (due to the symmetry of the hand–object system).

    For comparison, more calculations are performed for the workspaces of objects

    with the same geometry (size and shape), but different weights. The extreme posi-



Figure 6. Workspace of a rectangular object with different weights (wg = 3.5 N and wg = 0): (a) top view (wg = 3.5 N), (b) top view (wg = 0), (c) 3-D (x, y, φ) frame (wg = 3.5 N) and (d) 3-D (x, y, φ) frame (wg = 0).

tions of the object with weight 4.0 N (near the maximum of 4.11 N that can be held by the hand) are listed in the first row of Table 1. It can be deduced that this workspace is much smaller than the previous one. Consider further the manipulation of an object without weight (i.e., wg = 0; say, the hand–object system is on a friction-free horizontal plane, with the force capability of the hand still under consideration). The extreme positions of the object are computed as listed in the third row of Table 1. The enclosing box formed by xmin, xmax, ymin and ymax is divided into a grid of 102 × 26 nodes (the intervals of the x and y coordinates are 0.95 and 1.01 mm, respectively). The corresponding workspace is shown in Fig. 6b and d. It is the same as that obtained with only the kinematic constraints in the grasp taken into account (constraints in the force domain ignored). It is clearly observed that the workspace of the manipulation with full constraints is smaller than that with only kinematic constraints considered; it is a subset of the latter.

It can be seen from Fig. 6c and d that the part of the workspace near the position boundary becomes thinner. This indicates that, when the object is near the boundary curve in the XY plane, the range of rotation becomes small, and on the curve no rotation is allowed with the grasp; in other words, the orientation of the object is unique at any specified position on the boundary.

As stated before, the formulae and algorithm for workspace generation are general. They accommodate various contact types, and are not limited to the fixed-point contacts of the preceding example. We have obtained the workspace for the case in which sliding is allowed, i.e., the grasp is defined by two contact pairs, g = {(t1, e1), (t2, e3)}, without specifying the contact locations. It was found that the workspace in this case is much larger than those with fixed-point contacts [28]. The result is consis-


    tent with the prediction in Ref. [17] that if the contact points are allowed to move

    across the surfaces through rolling or sliding, it is possible to increase the size of

    the workspace greatly.

    5.2. In the 3-D Case

    The example is modeled according to a three-fingered hand manipulating a box of

    35 25 75 mm3, as shown in Fig. 1c. The hand consists of three identical fin-

    gers with 3 d.o.f. each (finger f1 is hidden by f2 in Fig. 1c). The axes of the two

    outer joints in one finger are parallel and perpendicular to that of the first joint.

    The link lengths of one finger are 18, 50 and 40 mm, respectively. The finger co-

    ordinate frames are located at (7.2, 30, 19.2), (7.2, 30, 19.2) and (7.2, 0, 19.2),

    respectively, in the palm frame OXYZ, with angles of 22.5 between their Xi -

    axes and the palm X-axis. The joint ranges of the first joints of the fingers are:1 [45, 60], 4 [60, 45] and 7 [60, 60], and those of the second and

    third joints are 2, 5, 8 [60, 90] and 3, 6, 9 [0, 90] (all in deg).

We consider the manipulation with fixed-point contacts {(t1, q1), (t2, q2), (t3, q3)}, where t1, t2 and t3 are the fingertips of the three fingers, q1(0, 30, 12.5) and q2(0, −30, 12.5) are two points on the top face of the box (defined in the object frame), and q3(0, 0, −12.5) is the central point of the bottom face. The contact constraint is in the form of (4), and the collision-free constraints are mainly of the two-segment and segment–face types. For the sake of computational simplicity, we treat each link as an ideal line segment with zero cross-sectional area and do not consider the factors in the force domain. Even in this simplified case, the optimization models have 15 variables (nine for the joint angles and six for the object position and orientation) and up to 55 constraints in total (one equality for the contact constraints and 54 inequalities for the collision-free constraints). From the optimization models in the form

of (2), the extreme object positions are obtained as xmin = 56.59, xmax = 113.67, ymin = −78.23, ymax = 78.26, zmin = −57.38 and zmax = 57.38 (all in mm). The corresponding configurations of the hand–object system are shown in Fig. 7. It is found that ymax and ymin, and zmax and zmin, are approximately equal in magnitude, respectively; xmax, zmax and zmin are on the plane XOZ of the palm frame. This observation can also be derived from the symmetry of the hand–object system about the plane XOZ. xmin, ymax and ymin are neither on the plane XOZ nor on the plane XOY, which is also reasonable, as will be seen shortly in the position workspace.

We use the horizontal planes z = 0, ±15, ±30 and ±45 and the vertical planes y = 0, 15, 30, 45 and 60 (parallel to the XOY and XOZ planes of the palm frame, respectively) to slice the enclosing box constructed by the planes x = xmin, x = xmax, y = ymin, y = ymax, z = zmin and z = zmax. On each slicing plane, we use the optimization model in a form similar to (2) to get extreme points by fixing two position variables and calculating the extreme values of the third (e.g., on the horizontal plane z = 30, set y = i·Δy (i = 1, 2, . . . , 27) and solve for the minimal and maximal values of x, respectively, where Δy = 3 mm is the resolution for the boundary points; in other words, we search for the boundary points


Figure 7. Grasp configurations at the extreme positions: (a) xmax (113.7, 0.0, 0.0, 38.5, 37.8, 0.0), (b) ymax (74.0, 78.3, 22.3, 138.6, 24.0, 22.6), (c) zmax (85.1, 0.4, 57.4, 90.0, 0.0, 7.0), (d) xmin (56.6, 60.6, 27.6, 142.5, 34.5, 17.3), (e) ymin (73.7, 78.2, 23.2, 41.8, 24.5, 21.6) and (f) zmin (85.1, 0.4, 57.4, 88.9, 1.0, 7.0).

along lines such as ab in Fig. 2, and the distance between two adjacent search lines is 3 mm). Connecting adjacent boundary points, we obtain boundary curves on these slicing planes, as shown in Fig. 8a and b. These horizontal and vertical curves roughly construct the position workspace, as shown in Fig. 8c. The part of the position workspace in the −Y direction is obtained according to the symmetry of the hand–object system about the vertical plane XOZ of the palm

frame.

For the orientation workspace, we discretize the local coordinate plane of a sphere by using a grid with a resolution of 0.2094 rad (12°) to get a series of nodes (31 × 16 = 496 nodes in total) corresponding to rotation vectors (Fig. 2b and c).

The orientation workspace depends on the object position. As an example, let the object be at position (95.0, 10.0, 0.0) in the palm frame. At each node, from the optimization model in the form of (3), we solve for the maximal rotation angle ψmax. In a 3-D frame with (α, β, ψ) as the three coordinates, where (α, β) define a rotation axis and ψ indicates the maximal angle through which the object can rotate about that axis, the rotation range is depicted in Fig. 9a. For better intuition, the rotation vector corresponding to each (α, β, ψ) is depicted in a coordinate frame, as shown in Fig. 9b, where the rotation direction is shown by that of the vector (ray segment) and the maximal rotation angle by the magnitude of the vector. All endpoints of the vectors roughly span the orientation workspace, as shown in Fig. 9c and d. Note



    Figure 8. Position workspace. (a) On horizontal planes. (b) On vertical planes. (c) In the 3-D palm

    coordinate frame.


Figure 9. Orientation workspaces at positions (95.0, 10.0, 0.0) (top) and (115.0, 25.0, 10.0) (bottom). (a) In the (α, β, ψ) frame. (b) Rotation vectors. (c) Rotation vectors (views 1 and 2).

that the finer the gridding resolution is, the smoother the rotation workspace looks.



    Figure 9. (Continued.)

    Each point on the surface of the rotation workspace corresponds to a rotation axis

    and the maximal rotation around it.
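For concreteness, a node (α, β, ψ) can be realized as an object rotation via Rodrigues' formula; the Python sketch below assumes a spherical (azimuth, elevation) parameterization of the rotation axis, which is illustrative rather than the paper's exact convention:

```python
import math

# Sketch of how a node (alpha, beta, psi) maps to an object rotation:
# (alpha, beta) pick a unit axis on the sphere and psi the rotation angle;
# Rodrigues' formula then gives the rotation matrix. The spherical
# (azimuth, elevation) parameterization of the axis is an assumption here.

def rotation_matrix(alpha, beta, psi):
    kx = math.cos(beta) * math.cos(alpha)   # unit rotation axis k
    ky = math.cos(beta) * math.sin(alpha)
    kz = math.sin(beta)
    c, s = math.cos(psi), math.sin(psi)
    v = 1.0 - c
    # R = c I + s [k]_x + (1 - c) k k^T
    return [
        [c + kx * kx * v,      kx * ky * v - kz * s, kx * kz * v + ky * s],
        [ky * kx * v + kz * s, c + ky * ky * v,      ky * kz * v - kx * s],
        [kz * kx * v - ky * s, kz * ky * v + kx * s, c + kz * kz * v],
    ]
```

Sampling such matrices over the (α, β) grid and the feasible ψ range reproduces the rotation vectors whose endpoints span the orientation workspace.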

For comparison, consider the orientation workspace at position (105.0, 20.0, 15.0), which is nearer to the boundary of the position workspace than (95.0, 10.0, 0.0). The corresponding orientation workspace is also shown in Fig. 9 (bottom row). From the results, we see that the shapes and volumes of the orientation workspaces at different positions are quite different. The results show, as might be expected, that as the object approaches the boundary of the position workspace, the orientation workspace becomes smaller and smaller, or thinner and thinner. On the boundary, the orientation workspaces degenerate to vectors corresponding to the orientations at those positions (Fig. 7).

The examples clearly illustrate the algorithms and verify the effectiveness of the proposed method, although the finger links and tips are idealized as line segments and points, respectively, for the sake of simplicity. In a realistic hand, the actual shapes and sizes of the finger links and tips cannot be ignored. In this case, the links and fingertips may be approximated by polyhedra. The contact and collision-free constraints are then evaluated between these approximating polyhedra and the grasped object using their corresponding topological features, and the algorithm can still be applied in the same manner. However, the number of constraints will


    increase greatly, resulting in much larger models and demanding more powerful

    software tools to solve them.
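One simple (and deliberately conservative) form such a polyhedral collision-free constraint can take is a vertex/half-space test between a link and a convex object. The sketch below assumes the object is given as an intersection of half-spaces; all names are illustrative, and a production check would use a full convex intersection test:

```python
import numpy as np

def outside_object(link_vertices, object_planes, tol=1e-9):
    """Conservative collision-free test for one polyhedral finger link.

    `object_planes` describes a convex object as an intersection of
    half-spaces n . x <= d (outward normals). A vertex lies inside the
    object only if it satisfies every inequality, so the link passes this
    test when each of its vertices violates at least one half-space.
    Note: this checks vertices only; edge/face penetration would need a
    separating-axis or GJK-style test.
    """
    for v in np.asarray(link_vertices, dtype=float):
        if all(np.dot(n, v) <= d + tol for n, d in object_planes):
            return False  # this vertex is inside the object
    return True

# Unit cube centred at the origin as the intersection of six half-spaces.
cube = [(np.array([1.0, 0.0, 0.0]), 0.5), (np.array([-1.0, 0.0, 0.0]), 0.5),
        (np.array([0.0, 1.0, 0.0]), 0.5), (np.array([0.0, -1.0, 0.0]), 0.5),
        (np.array([0.0, 0.0, 1.0]), 0.5), (np.array([0.0, 0.0, -1.0]), 0.5)]
```

Each link contributes one such test per grasp configuration, which is why approximating links by polyhedra multiplies the number of constraints in the optimization models.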

    6. Conclusions

    In this paper, we have systematically and thoroughly studied the problem of gen-

    erating the workspace of a multifingered hand manipulating an object, which is a

    difficult problem when using analytical or geometric methods for the solution. Us-

    ing an optimization technique, we have presented a novel numerical approach to and

    algorithms for this problem in both planar and spatial cases. All factors that affect

    the workspace have been taken into account, which include the kinematics of the

    hand, geometry of the object to be manipulated, force equilibrium, frictional force

and driving torque limits. The approach is based on feasibility analysis of grasps, where the central constraints, i.e., contact constraints, collision-free constraints and

    force constraints, play a key role. We have formulated optimization models for the

    computation of the workspace and then visualized the workspace in 3-D coordinate

    frames.
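The force side of this feasibility analysis can be sketched for the planar case: with linearized friction cones, checking whether bounded contact forces inside the cones can balance an external wrench reduces to a linear-programming feasibility problem. The function and parameters below are an illustrative sketch, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

def grasp_feasible(points, normals, w_ext, mu=0.5, f_max=10.0):
    """Planar force-feasibility check as a linear program.

    Each contact force is a nonnegative combination of the two edges of
    its linearized friction cone; the grasp is feasible if such bounded
    combinations can balance the external wrench (fx, fy, tau).
    """
    half = np.arctan(mu)  # half-angle of the friction cone
    cols = []
    for p, n in zip(points, normals):
        n = np.asarray(n, float) / np.linalg.norm(n)
        for s in (-half, half):
            c, si = np.cos(s), np.sin(s)
            e = np.array([c * n[0] - si * n[1], si * n[0] + c * n[1]])
            # Wrench column: the force plus its moment p x f about the origin.
            cols.append([e[0], e[1], p[0] * e[1] - p[1] * e[0]])
    G = np.array(cols).T  # 3 x (2 * n_contacts) grasp map
    res = linprog(c=np.zeros(G.shape[1]),
                  A_eq=G, b_eq=-np.asarray(w_ext, float),
                  bounds=[(0.0, f_max)] * G.shape[1])
    return res.status == 0  # 0 means a feasible point was found
```

For example, two opposing frictional contacts on the sides of an object can support its weight, while a single contact cannot resist a wrench directed out of its friction cone.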

    It has been observed from the examples in the 2-D case that the factors or con-

    straints in the force domain play an important role in the workspace generation of

multifingered manipulation. This is an important characteristic and is quite different from the workspaces of other robots, where only geometric and kinematic factors are involved. The workspace with all constraints taken into account is a subset of that with only kinematic constraints considered and is more practical. It has been verified that the more numerous and stricter the constraints on a grasp, the smaller the workspace of the manipulation.

Our approach provides an effective, general and complete solution to the long-standing open problem of the workspace of multifingered manipulation, or the coordination of multiple robots, for which analytical methods cannot be effectively applied. With its consideration of the full set of grasp constraints and its generality, the presented approach represents a significant improvement over previous methods. Although only examples of manipulating rectangular and cubic objects are provided, our method is not limited to these. It is adaptive to a variety of hand–object systems: it can be used

    to generate the workspace of a multifingered hand with any number of fingers ma-

    nipulating various polygonal objects in various grasp configurations with various

contact types, owing to the adaptability of our feasibility analysis.

    Acknowledgement

This work was supported in part by the National Science Fund for Distinguished Young Scholars of China (50825504).


    About the Authors

    Yisheng Guan received his BS degree from Hunan University of Agriculture, in

    1987, his MS from Harbin Institute of Technology, in 1990, and his PhD from

    Beijing University of Aeronautics and Astronautics, in 1998, all in P. R. China.

He conducted research in the Department of Computing Science, University of Alberta, Canada, as a Postdoctoral Fellow from 1998 to 2000, and in the Intelligent

    Systems Institute, AIST, Japan, with a Fellowship of the Japan Society for the

    Promotion of Science (JSPS), from 2003 to 2005. He is currently a Professor in

    the School of Mechanical and Automotive Engineering, South China University of

    Technology, Guangzhou, P. R. China. His research interests include humanoid robotics, multifingered

    hands, biomimetic robotics and medical robotics.

    Hong Zhang received his BS degree from Northeastern University, Boston, MA,

    USA, in 1982, and his PhD degree from Purdue University, West Lafayette, IN,

USA, in 1986, both in Electrical and Computer Engineering. He is currently a Professor in the Department of Computing Science, University of Alberta, and

    the Director of the Centre for Intelligent Mining Systems. He is the holder of

    the NSERC Industrial Research Chair in Intelligent Sensing Systems. His current

    research interests include robotics, computer vision and image processing.

    Xianmin Zhang received his PhD degree from Beijing University of Aeronau-

    tics and Astronautics, P. R. China, in 1993. After that he conducted research as

a Postdoctoral Fellow at Northwestern Polytechnical University, P. R. China, and the University of Minnesota, USA. He is currently a Professor and a Vice Dean of the School of Mechanical and Automotive Engineering, South China University of

    Technology, Guangzhou, P. R. China. He has authored or coauthored over 150

    papers, and successfully applied for more than 20 patents. His current research

    interests lie in elastic and compliant mechanisms, topological optimization, dy-

    namics and vibration control of mechanical systems, precision manufacturing, and automation.

    Zhangjie Guan received his BS degree from Hunan Institute of Science and Tech-

nology, P. R. China, in 2009. He is currently pursuing a Master's degree in the

    School of Aeronautics Science and Engineering, Beijing University of Aeronau-

    tics and Astronautics.
