
Source: sirslab.dii.unisi.it › papers › 2013 › Gioioso.TRO.2013.Grasping.Fin.pdf

Mapping Synergies from Human to Robotic Hands with Dissimilar Kinematics: an Approach in the Object Domain

G. Gioioso, Student Member, IEEE, G. Salvietti, Member, IEEE, M. Malvezzi, Member, IEEE, and D. Prattichizzo, Member, IEEE

Abstract—One of the major limitations to the use of advanced robotic hands in industry is the complexity of the control system design due to the large number of motors needed to actuate their degrees of freedom (DoFs). It is our belief that the development of a unified control framework for robotic hands will make it possible to extend the use of these devices to many areas. Borrowing terminology from software engineering, there is a need for middleware solutions that control robotic hands independently of their specific kinematics, focusing only on the manipulation task. To simplify and generalize the control of robotic hands, we take inspiration from studies in neuroscience concerning the sensorimotor organization of the human hand. These studies demonstrated that, notwithstanding the complexity of the hand, a few variables account for most of the variance in the patterns of configurations and movements. The reduced set of parameters that humans effectively use to control their hands, known in the literature as synergies, can represent the vocabulary of a unified control language for robotic hands, provided that we solve the problem of mapping human hand synergies to actions of robotic hands. In this work we propose a mapping designed in the manipulated-object domain in order to ensure a high level of generality with respect to the many dissimilar kinematics of robotic hands. The role of the object is played by a virtual sphere, whose radius and center position change dynamically, and the role of the human hand is played by a hand model, referred to as the “paradigmatic hand,” able to capture the idea of synergies in human hands.

Index Terms—Robotic grasping, sensory and motor human hand synergies, dexterous robotic hands, human skill transfer.

I. INTRODUCTION

A. Motivation and objective

ROBOTIC hands have many degrees of freedom (DoFs) distributed among several kinematic chains: the fingers.

The complexity of the mechanical design is needed to adapt hands to the many kinds of tasks required in unstructured environments, such as surgical rooms, industry, houses, space and other domains, where robotic grasping and manipulation have become crucial. Some remarkable examples of robotic hand design are the UTAH/MIT hand [1] with 16 actuated joints, four per finger, the Shadow hand with 20 actuated joints [2], and the DLR hand II with 12 actuated joints [3]. One of the main issues in designing and controlling robotic

G. Gioioso, M. Malvezzi and D. Prattichizzo are with the Department of Information Engineering and Mathematics, University of Siena, via Roma 56, 53100 Siena, Italy. [gioioso, malvezzi, prattichizzo]@dii.unisi.it

G. Gioioso, G. Salvietti and D. Prattichizzo are with the Dept. of Advanced Robotics, Istituto Italiano di Tecnologia, via Morego 30, 16163 Genova, Italy. [email protected]

Manuscript received April 02, 2012; revised February 01, 2013.

Fig. 1. The larger the number of DoFs of the device, the lower the number of industrial applications where these devices are used. (Axes: N. of DoFs vs. N. of apps in industries.)

hands is that a large number of motors is needed to fully actuate the DoFs. This makes both the mechanical and the control system design of robotic hands dramatically more complex when compared to the simple grippers often used in industrial applications [4], as pictorially represented in Fig. 1: the larger the number of DoFs, the lower the number of possible applications of robotic hands in industry. This is one of the major limitations to the use of advanced robotic hands in flexible automation, together with the rise in costs and the decrease in robustness of such devices.

As far as control is concerned, it is our belief that the development of a unified framework for programming and controlling robotic hands will make it possible to extend the use of these devices to many areas. Borrowing the terminology of software engineering, we believe that there is a need for middleware solutions for manipulation and grasping tasks to seamlessly integrate robotic hands in flexible cells and in service robot applications.

This paper is a first contribution in the direction of developing a unified framework for programming and controlling robotic hands based on a number of fundamental primitives, abstracting, to the extent possible, from the specifics of their kinematics and mechanical construction. The ultimate goal of our approach is to define a device-independent control framework for robotic hands, based on synergies of human hands, and this work presents a possible solution to project the synergy-based control of human hands onto robotic hands with dissimilar kinematics.

B. Synergies

This work is inspired and supported by neuroscience studies which have shown that the description of how the human hand performs grasping actions is dominated by postures in a configuration space of much smaller dimension than the kinematic structure would suggest. Such configuration space


Fig. 2. Mapping from human hand motions to motions of robotic hands with dissimilar kinematics using a virtual object. (Block labels: paradigmatic hand, synergy control, robotic hands, virtual object.)

is referred to as the space of postural synergies. Santello et al. [5] investigated this hypothesis by collecting a large set of data containing grasping poses from subjects who were asked to shape their hands in order to mime grasps for a large set (N = 57) of familiar objects. Principal Component Analysis (PCA) of these data revealed that the first two principal components account for more than 80% of the variance, suggesting that a satisfying characterization of the recorded data can be obtained using a much lower-dimensional subspace of the hand DoF space. These and similar results seem to suggest that, out of the more than 20 DoFs of a human hand, only two or three combinations can be used to shape the hand for the basic grasps used in everyday life [6]. These ideas can be brought to use in robotics, since they suggest a new and principled way of simplifying the design and analysis of hands, different from other more empirical, sometimes arbitrary design attempts, which have been the main roadblock for research in artificial hands in the past [4]. The application of synergy concepts in robotics has been pioneered by [7], [8]. In [7], and later on in [9], the idea was exploited in the dimensionality reduction of the search space in problems of automated grasp synthesis, and has been applied effectively to derive pre-grasp shapes for a number of complex robotic hands. In [8], the authors designed a robotic hand consisting of groups of mechanically interconnected joints, with a priority inspired by resemblance to postural synergies observed in human hands. More recently, in [10] a synergy impedance controller was derived and implemented on the DLR Hand II.
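The dimensionality-reduction argument above can be illustrated with a small numerical sketch. The data here are synthetic (generated from two latent components plus noise), not the measurements from [5]; all names and values are illustrative.

```python
import numpy as np

# Illustrative sketch (not the original data from [5]): synthetic grasp
# postures generated so that most variance lies along two latent "synergies".
rng = np.random.default_rng(0)
n_postures, n_joints, n_latent = 57, 20, 2

S_true = rng.standard_normal((n_joints, n_latent))   # latent synergy shapes
z = rng.standard_normal((n_postures, n_latent))      # synergy activations
Q = z @ S_true.T + 0.05 * rng.standard_normal((n_postures, n_joints))

# PCA via SVD of the mean-centred posture matrix.
Qc = Q - Q.mean(axis=0)
_, s, Vt = np.linalg.svd(Qc, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(f"variance captured by first two PCs: {explained[:2].sum():.1%}")
# With data generated this way the first two components dominate,
# mirroring the >80% figure reported for human grasp postures.
```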

The synergy approach to the analysis and design of artificial hands is the focus of some recent works on grasp theory. In [11] the authors investigated to what extent a hand with many DoFs can exploit postural synergies to control force and motion of the grasped object. In [12] numerical results were presented showing quantitatively the role played by different synergies (from the most fundamental to those of higher order) in making possible a number of different grasps. In [13] the analysis of underactuated hands was integrated by extending existing manipulability definitions [14]–[16]. The main aspect considered there is that in underactuated hands the force problem often cannot be univocally solved within a rigid-body framework, because of static indeterminacy [11], [12]. This problem can be solved by considering the hand and contact compliance, as discussed in [17], [18].

C. Mapping

The main contribution of this work is the study of a mapping function between the postural synergies of the human hand and synergistic control actions in robotic devices. This mapping leads to an interesting scenario, where control algorithms are designed considering a paradigmatic hand model, without referring to the kinematics of the specific robotic hand. The proposed approach represents a possible control paradigm for robustly solving manipulation tasks beyond the reach of parallel-jaw grippers, which are simple to program but limited in function. The paradigmatic hand [12], [19] is a model inspired by the human hand that does not closely copy the kinematic and dynamic properties of the human hand, but rather represents a trade-off between the complexity of the human hand model, accounting for the synergistic organization of the sensorimotor system, and the simplicity and accessibility of the models of the available robotic hands.

This paper focuses on the mapping of human synergies onto robotic hands using a virtual object method, as shown in Fig. 2. Different mapping procedures have been developed, mainly for tele-manipulation and learning-by-demonstration tasks. They are summarized in the following section, and can be substantially divided into two approaches: joint space and Cartesian space. Both of them focus on the hand and not on the task. They impose constraints on the choice of the reference joints or points, so their applicability to hands with dissimilar kinematics is not straightforward.

The mapping method proposed in this work consists of replicating, on the robotic hand, the effects, in terms of motions and deformations, that the human reference hand would produce on a virtual object whose geometry is defined by the hand posture itself. This allows us to work directly in the task space, avoiding a specific projection between different kinematics.

The paper is organized as follows. Section II reviews the literature related to the mapping between human and robotic hands. Section III summarizes the main results concerning grasp properties in synergy-actuated hands. Section IV describes the mapping method, while in Section V numerical simulations are shown to confirm the effectiveness of the proposed approach, and the applicability of the method is tested in a manipulation task. Section VI discusses how the proposed mapping procedure can be used to define a two-layer control strategy, in which the high-level controller is substantially independent of the robotic hand and depends only on the specific operation to be performed. Finally, Section VII summarizes the advantages and drawbacks of the proposed method and outlines conclusions and future work.

II. RELATED WORK

In the literature there are several examples where a mapping between a human hand and a robotic hand is required; these examples usually belong to two different categories: telemanipulation and learning by demonstration. In the former case, data gloves are typically used to capture human hand motion to move robotic hands. In [20], a DLR Hand is controlled using a CyberGlove. In the learning-by-demonstration case,


human data are used to improve grasping performance by teaching the robot the correct posture necessary to obtain stable grasps. In [21], the authors evaluated and modelled human grasps during the arm transportation sequence in order to learn and represent grasp strategies for different robotic hands, while in [22] Do et al. proposed a system for vision-based grasp recognition, mapping and execution on a humanoid robot to provide an intuitive and natural communication channel between humans and humanoids.

Since the kinematics and configuration spaces of a human hand and an artificial robotic hand are typically different, especially when the robotic hand is not anthropomorphic, a mapping function is needed. Three main approaches have been used in the past to deal with this problem: joint-to-joint mapping, fingertip mapping and pose mapping. The first method consists of a direct association between joints on the human hand and joints on the robotic hand. Intuitively, this solution is quite efficient for anthropomorphic hands, while some empirical solutions have to be adopted for non-anthropomorphic devices. An example where joint values of a human hand model are directly used to move the joints of a robotic hand can be found in [23]. Although it represents the simplest way to map movements between hands, the joint correspondence is set according to heuristics depending on the kinematics of the hands, and it is difficult to generalize the approach to non-anthropomorphic structures.

Cartesian space mappings focus on the relation between the two different workspaces. This solution is more suitable for representing fingertip positions and is a natural approach when, for example, precision grasps are considered. In [24] a point-to-point mapping algorithm is presented for a multi-fingered telemanipulation system where the fingertip motion of the human hand is reproduced with a three-finger robotic gripper. In [25] the authors use a virtual finger solution to map movements of the human hand onto a four-fingered robotic hand. In [26] the authors propose a mapping approach divided into three steps. In the first step, a virtual finger approach is used to reduce the number of fingers. Then, an adjustable or gross physical mapping is carried out to get an approximate, geometrically feasible grasp of the object. Finally, a fine-tuning or local adjustment of the grasp is performed. Even if this method presents some advantages with respect to joint-to-joint mapping, it is still not general enough to guarantee a correct mapping in terms of the forces and movements exerted by the robotic hand on a grasped object.

Pose mapping can be considered a particular form of indirect joint angle mapping. The basic idea is to establish a correlation between human hand poses and robot hand poses. For example, Pao and Speeter [27] developed an algorithm that translates human hand poses into corresponding robotic hand configurations, without loss of functional information and without the overhead of kinematic calculations, while in [28] neural networks are used to learn the hand grasping posture. However, the proposed solutions can in some conditions produce unpredictable motions of the robot hand, and thus in our opinion are only exploitable in cases where basic grasp postures are required.

Fig. 3. Main parameters of a synergy-actuated hand with compliant joints and compliant contacts, as defined in [11], [12].

Besides the above-mentioned methods, Griffin et al. proposed in [29] a 2D virtual-object-based mapping for telemanipulation operations. The object-based scheme assumes that a virtual sphere is held between the thumb and the index finger of the user. Important parameters of the virtual object (size, position and orientation) are scaled independently and non-linearly to create a transformed virtual object in the robotic hand workspace. This modified virtual object is then used to compute the fingertip locations of the robotic hand, which in this case is a two-finger, four-DoF gripper.

A 3D extension of the latter method is presented in [30]. Even if this extension allows more cases to be analysed, the method is still not general enough for our purposes. In particular, it is constrained by the kinematics of the master and slave hands, the number of contact points (three) and their locations (the fingertips), which have to be the same for both hands. It can thus be used only for a given pair of human and robotic hands and for precision grasp operations.

The approach proposed in this paper is inspired by the last two mentioned methods. The main contributions of our work with respect to the methods proposed in [29], [30] are the generalization to a generic number of contact points, which can differ between the human and robotic hands, and the removal of constraints on the positions of the contact points on the master and slave hands. We will describe these aspects in detail in the following.

III. GRASP MODEL AND SOFT SYNERGIES

This section summarizes the main equations necessary to study hands controlled using synergies. Further details on synergies and grasp models can be found in [11], [31].

Consider a generic hand grasping an object as sketched in Fig. 3. The hand and the object have nl contact points. Let {N} indicate a reference frame on the hand palm, and {B} a reference frame on the object. Let o ∈ ℜ3 denote the position of the origin of {B} with respect to {N}, and let φ ∈ ℜ3 be a vector describing the relative orientation between the frames (e.g.


Euler angles). Let furthermore u = [oT φT]T ∈ ℜ6 collect the information on position and orientation between the above-mentioned frames.

Compliance can be considered at different levels of the system: the contact between the object and the hand is generally not stiff, and the contact induces a local deformation of the surfaces. Furthermore, the hand links and joint actuators often have compliance levels that have to be considered [32]. In [12], compliance was also introduced at the synergy level; however, we do not consider it in this study.

In a quasi-static condition, the force and moment balance for the object can be described by

w = −Gλ (1)

where w ∈ ℜ6 is the external load wrench applied to the object, λ ∈ ℜnc is the contact force vector, and G ∈ ℜ6×nc is the grasp matrix. In this paper, for the sake of simplicity, we consider a point-contact-with-friction model, often referred to as the hard finger contact model [31]. With this type of contact model, at each contact point the force λi has three components and no moments are transmitted. The dimension of the contact force vector λ is then nc = 3nl.

By applying the static-kinematic duality relationship, we can express the velocities ṗo ∈ ℜnc of the contact points on the object as a function of the object twist¹ νo ∈ ℜ6

ṗo = GTνo. (2)

If a hard finger contact model is assumed, the transpose of the grasp matrix can be expressed as [31]

GT = ⎡ I  −[p1 − o]×  ⎤
     ⎢ ⋮       ⋮      ⎥
     ⎢ I  −[pi − o]×  ⎥
     ⎢ ⋮       ⋮      ⎥
     ⎣ I  −[pnl − o]× ⎦   (3)

where [v]× denotes the skew-symmetric matrix used to evaluate the cross product with v.
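As a numerical sketch of eq. (3), the block rows [I −[pi − o]×] can be assembled from the contact positions. The contact coordinates and function names below are our own illustrative choices, not from the paper.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x such that skew(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0.0, -z,  y],
                     [ z, 0.0, -x],
                     [-y,  x, 0.0]])

def grasp_matrix_T(contacts, o):
    """Transpose of the grasp matrix for a hard finger contact model:
    one block row [I, -[pi - o]x] per contact point pi, as in eq. (3)."""
    blocks = [np.hstack([np.eye(3), -skew(p - o)]) for p in contacts]
    return np.vstack(blocks)

# Example: three fingertip contacts around an object centred at the origin.
contacts = [np.array([0.05, 0.0, 0.0]),
            np.array([-0.03, 0.04, 0.0]),
            np.array([-0.03, -0.04, 0.0])]
GT = grasp_matrix_T(contacts, o=np.zeros(3))
print(GT.shape)   # (9, 6): nc = 3*nl rows, 6 twist components
```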

Solving eq. (1) for the contact forces introduces the definition of internal forces, i.e. the contact forces included in the nullspace of matrix G. In particular

λ = −G#w + Aξ

where G# is the pseudoinverse of the grasp matrix, A ∈ ℜnc×h is a matrix whose columns form a basis for the nullspace of G (N(G)), and ξ ∈ ℜh is a vector parametrizing the homogeneous part of the solution to eq. (1). The generic solution of the homogeneous part, λo = Aξ, represents a set of contact forces whose resultant force and moment are zero, referred to as internal forces.
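The decomposition λ = −G#w + Aξ can be sketched numerically: a pseudoinverse gives the particular solution and an SVD gives a basis A of N(G). The grasp matrix below is a random placeholder, not one derived from a real hand.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grasp matrix for nl = 3 hard finger contacts (nc = 9).
G = rng.standard_normal((6, 9))

w = np.array([0.0, 0.0, -9.81, 0.0, 0.0, 0.0])   # external wrench (gravity)

# Particular solution: least-squares contact forces balancing w.
G_pinv = np.linalg.pinv(G)
lam_p = -G_pinv @ w

# Homogeneous part: columns of A span the nullspace N(G) -> internal forces.
_, s, Vt = np.linalg.svd(G)
A = Vt[np.sum(s > 1e-10):].T                     # basis of N(G), shape (9, h)

xi = rng.standard_normal(A.shape[1])
lam = lam_p + A @ xi

# Internal forces change lam but leave the object equilibrium untouched.
print(np.allclose(G @ lam, -w))                   # True
```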

The relationship between the hand joint torques τ ∈ ℜnq, where nq is the number of actuated joints, and the contact forces is:

τ = JTλ (4)

where J ∈ ℜnc×nq is the hand Jacobian matrix [31], relating the contact point velocities ṗh on the hand to the joint velocities q̇:

ṗh = Jq̇. (5)

¹νo = [ȯT ωT]T, where ȯ is the object centre linear velocity, while ω is the object angular velocity.

We suppose that the hand is actuated using a number of inputs whose dimension is lower than the number of hand joints. These inputs are collected in a vector z ∈ ℜnz that parametrizes the hand motions along the synergies.

In this paper soft synergies, previously introduced in [13], are defined as a joint displacement aggregation corresponding to a reduced-dimension representation of hand movements according to a compliant model of joint torques, which can for instance be obtained with tendons [33]. In other terms, the reference value of the joint angular velocities q̇ref ∈ ℜnq is a linear combination of the synergy velocities ż ∈ ℜnz, with nz ≤ nq,

q̇ref = Sż

through the synergy matrix S ∈ ℜnq×nz, whose columns describe the shapes, or directions, of each synergy in the joint space.
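A minimal sketch of the relation q̇ref = Sż, for a hypothetical 20-joint hand driven by two synergies (the synergy matrix below is random, not taken from human data):

```python
import numpy as np

nq, nz = 20, 2            # 20 joints driven by 2 synergies (nz <= nq)

rng = np.random.default_rng(2)
S = rng.standard_normal((nq, nz))   # synergy matrix: columns are synergy shapes

z_dot = np.array([0.5, -0.2])       # synergy velocity inputs
q_ref_dot = S @ z_dot               # joint reference velocities, q_ref_dot = S z_dot

# Every reachable reference velocity lies in the nz-dimensional column
# space of S, however many joints the hand has.
print(q_ref_dot.shape)              # (20,)
```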

As detailed in [11], [17], to solve the force distribution problem we need to introduce further equations, usually referred to as constitutive equations, that model the system compliance. Starting from an initial equilibrium condition, the contact force variation is expressed as

∆λ = Ks(J∆q − GT∆u) (6)

where ∆q and ∆u are the variations of the joint angles and the object motion, respectively, and Ks ∈ ℜnc×nc represents the contact stiffness matrix. If we consider the joint compliance, as discussed for instance in [32], the joint torques can also be expressed as

∆τ = Kq(∆qref − ∆q) (7)

where Kq ∈ ℜnq×nq is the joint stiffness matrix and ∆qref is the variation of the joint reference values.

Starting from an equilibrium configuration and applying a small change of the synergy reference value ∆z, according to a procedure similar to those described in [17] and detailed in [11], it is possible to evaluate the corresponding configuration variation. In particular, the object displacement ∆u is given by

∆u = V∆z (8)

where V = (GKGT)−1GKJS, in which the equivalent stiffness K is evaluated as K = (Ks−1 + JKq−1JT)−1 [32]. From eq. (6) one gets the contact force changes ∆λ as

∆λ = P∆z (9)

where P = (I − G+K G)KJS, with G+K the pseudoinverse of the grasp matrix G weighted by the stiffness matrix K [17].

For the sake of simplicity, the expressions of the matrices P, V and K summarized here are those obtained by neglecting the so-called geometric effects, arising from the linearization of eq. (4) and depending on the variability of the matrix J with respect to the hand and object configuration. A more complete discussion on the role of geometric effects in the linearized grasping equations is presented, for instance, in [34].
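Eqs. (8)-(9) can be sketched numerically by assembling V and P from the formulas above. The model matrices below are random placeholders (a real application would use the G, J and S of an actual hand); we also check that GP = 0, i.e. synergy-induced contact force changes are internal forces.

```python
import numpy as np

rng = np.random.default_rng(3)
nq, nz, nl = 12, 2, 3
nc = 3 * nl

# Hypothetical model matrices (random placeholders for a real hand model).
G  = rng.standard_normal((6, nc))          # grasp matrix
J  = rng.standard_normal((nc, nq))         # hand Jacobian
S  = rng.standard_normal((nq, nz))         # synergy matrix
Ks = np.eye(nc) * 1e3                      # contact stiffness
Kq = np.eye(nq) * 1e2                      # joint stiffness

# Equivalent stiffness and the object/force maps of eqs. (8)-(9).
K  = np.linalg.inv(np.linalg.inv(Ks) + J @ np.linalg.inv(Kq) @ J.T)
V  = np.linalg.inv(G @ K @ G.T) @ G @ K @ J @ S
GK = K @ G.T @ np.linalg.inv(G @ K @ G.T)  # K-weighted pseudoinverse of G
P  = (np.eye(nc) - GK @ G) @ K @ J @ S

dz = np.array([0.01, -0.02])               # small synergy reference change
du, dlam = V @ dz, P @ dz                  # object motion, contact force change

# Force changes along synergies are internal: they leave the object load
# balance unchanged (G @ P = 0 up to numerics).
print(np.allclose(G @ P, 0, atol=1e-6))
```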

Among all the possible motions of the grasped object, rigid-body motions are relevant since they do not involve visco-elastic deformations at the contact points. Considering eq. (6) and imposing ∆λ = 0, the rigid-body motions can be obtained by computing the nullspace N([JS −GT]). Let us then define a matrix


Γ, whose columns form a basis of such a subspace. Under the hypothesis that the object motion is neither indeterminate nor redundant [31], matrix Γ can be expressed as

Γ = N([JS −GT]) = ⎡ Γzcs ⎤
                  ⎣ Γucs ⎦   (10)

where the image spaces of Γzcs and Γucs consist of the coordinated rigid-body motions of the mechanism, for the soft synergy references and for the object position and orientation, respectively. It is possible to show that

R(Γucs) ⊆ R(V)

i.e. the rigid-body motions of the object are not all the possible motions of the object controlled by synergies as in (8). The subspace of all synergy-controlled object motions R(V) also contains motions due to the deformation of the elastic elements in the model. For a complete discussion on rigid-body motions, including kinematic indeterminacy and redundancy, the reader is referred to [11].
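The computation of Γ as a nullspace basis can be sketched as follows. The matrices are random placeholders, except that the first synergy is constructed so that a coordinated rigid-body motion exists (for fully generic random matrices the nullspace would be empty).

```python
import numpy as np

rng = np.random.default_rng(4)
nq, nz, nc = 12, 2, 9

G = rng.standard_normal((6, nc))
J = rng.standard_normal((nc, nq))           # full row rank with prob. 1

# Construct a synergy matrix whose first synergy produces a rigid-body
# motion: J @ S[:, 0] equals G.T @ t for a chosen object twist t.
t = rng.standard_normal(6)
S = rng.standard_normal((nq, nz))
S[:, 0] = np.linalg.pinv(J) @ (G.T @ t)

# Gamma spans N([JS  -GT]): coordinated synergy/object rigid-body motions,
# partitioned into a synergy part (Gamma_zcs) and an object part (Gamma_ucs).
M = np.hstack([J @ S, -G.T])                # shape (nc, nz + 6)
_, s, Vt = np.linalg.svd(M)
rank = int(np.sum(s > 1e-8))
Gamma = Vt[rank:].T                         # columns: stacked [dz; du] pairs

print(M.shape[1] - rank)                    # 1: one coordinated motion
```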

Remark 1: When free-hand motion is considered and consequently the contact forces are zero, according to the compliant model previously introduced in eq. (7), the reference and actual values of the hand joints are equal, i.e. qref = q. This assumption is made in the following Section IV, where the algorithm for mapping synergies of the human hand to robotic hand motions is defined. In Section V we will analyse the performance of the proposed mapping procedure both in free-hand motions and during grasping operations. In the latter case we will use the mapping procedure to command the reference values of the joint variables qref, which will differ from the joint configurations q because of grasping contacts and compliance.

IV. MAPPING SYNERGIES

The target of this study is to define a way to map a set of synergies defined on a reference human hand, the paradigmatic hand, onto a generic robotic hand (Fig. 5). However, the method can also be used to map arbitrary motions of the human hand, including the case where all the joints are independently controlled.

The kinematic analysis of the paradigmatic hand with postural synergies is reported in [12] and is briefly summarized here. Similar analyses can be found in [35], [36]. With respect to [12], here the model has been enriched by adding the distal interphalangeal joints of the index, middle, ring and pinky fingers, and the abduction/adduction DoF of the middle finger metacarpal joint. The resulting kinematic model has 20 DoFs.

The hand’s fingers are modelled as kinematic chains sharing their origin in the hand wrist, as in Fig. 4. The bone lengths have been chosen according to the anatomy of the real hand skeleton [37], [38]. The metacarpophalangeal (MCP) joints of the index, middle, ring and pinky fingers have two DoFs each (one for adduction/abduction and one for flexion/extension). The proximal interphalangeal (PIP) and distal interphalangeal (DIP) joints of these fingers have one DoF each. The thumb has four DoFs: two DoFs in the trapeziometacarpal (TM) joint, one DoF in the metacarpophalangeal (MP) joint, and one DoF in the interphalangeal (IP) joint.
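As a quick consistency check, the joint breakdown above can be tallied; this snippet is only an illustration of the DoF count, not part of any model code.

```python
# Tally of the paradigmatic hand DoFs described in the text (20 in total).
fingers = {"index":  {"MCP": 2, "PIP": 1, "DIP": 1},
           "middle": {"MCP": 2, "PIP": 1, "DIP": 1},
           "ring":   {"MCP": 2, "PIP": 1, "DIP": 1},
           "pinky":  {"MCP": 2, "PIP": 1, "DIP": 1},
           "thumb":  {"TM": 2, "MP": 1, "IP": 1}}

total = sum(dof for joints in fingers.values() for dof in joints.values())
print(total)   # 20
```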

Fig. 4. Scheme of the paradigmatic human hand highlighting the joints and the links; the gray dots represent two-DoF joints, while the black dots represent single-DoF joints. (Joint labels in the figure: TM, MP, IP, PIP, DIP.)

Let the paradigmatic human hand be described by the joint variable vector qh ∈ ℜnqh and assume that the subspace of all configurations can be represented by a lower-dimensional input vector z ∈ ℜnz (with nz ≤ nqh) parametrizing the hand configurations along the synergies, i.e. qh = Shz, where Sh ∈ ℜnqh×nz is the synergy matrix. In terms of velocities one gets

q̇h = Shż. (11)

Regarding the robotic hand, let qr ∈ ℜnqr represent the joint variable vector of the hand. In general, due to the dissimilar kinematics, the number of joints of the paradigmatic hand may differ from that of the robotic hand (nqr ≠ nqh).

The ultimate goal of this work is to control the reference joint velocities q̇r of the robotic hand in a synergistic way using the vector of synergies z of the paradigmatic human hand. This is what we mean by kinematic mapping between the human and robotic hands. We want to define the mapping independently of the kinematic structures of the hands, the hand configurations, and the number of possible contact points in the case of grasping tasks. To the best of our knowledge, none of the previous synergy mapping strategies [7], [39] explicitly takes into account the task to be performed by the robotic hand. In this work, we propose a method of projecting synergies from the paradigmatic hand to robotic hands which explicitly takes the task space into account. To define the mapping we assume that both the paradigmatic and the robotic hands are in two given initial configurations: q0h and q0r for the human and robotic hands, respectively.

Remark 2: The two initial configurations can be independently chosen. This yields a more general mapping procedure with respect to the methods defined in the configuration space. However, the choice of very dissimilar initial configurations may lead to hand trajectories that appear substantially different in the configuration space, although they produce, on the virtual object, the same displacement and the same deformation. Very dissimilar initial configurations may also lead to other types of problems; for example, one of the hands may reach its joint limits or singular configurations while the other could still move.

Page 6: Mapping Synergies from Human to Robotic Hands with ...sirslab.dii.unisi.it › papers › 2013 › Gioioso.TRO.2013.Grasping.Fin.pdf · between a human hand and a robotic hand is

Fig. 5. Mapping synergies from the paradigmatic human hand to the robotic hand: the reference points on the paradigmatic hand p_h (blue dots) allow defining the virtual sphere. Activating the human hand synergies, the sphere is moved and strained; its motion and strain can be evaluated from the velocities of the reference points \dot{p}_h. This motion and strain, scaled by a factor depending on the ratio of the virtual sphere radii, is then imposed on the virtual sphere relative to the robotic hand, defined on the basis of the reference points p_r (red dots).

Besides the initial configuration, a set of reference points p_h = [p_{1h}^T, ..., p_{ih}^T, ...]^T is chosen on the paradigmatic hand.

The number of reference points can be arbitrarily chosen.

Remark 3: The reference points can be located on the fingertips or at other points, for instance on the intermediate phalanges and/or on the hand palm. Reference points on the fingertips allow defining the whole hand joint configuration, thus they are the preferable choice. The role of the reference points is clear when the robotic hand is grasping an object: the reference points correspond to the contact points.

For the paradigmatic hand, the virtual object used to describe the action in the object domain is a virtual sphere, computed as the minimum sphere containing the reference points in p_h, as shown in Fig. 5. Note that in general not all the points lie on the virtual sphere surface; some of them are contained in it. Let us parametrize the virtual sphere by its center o_h and radius r_h. The motion imposed on the hand reference points by activating the synergies z moves the sphere and deforms it, changing its radius.
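The virtual sphere can be obtained with any minimum enclosing sphere routine. The hedged sketch below uses Ritter's two-pass approximation rather than an exact solver, applied to hypothetical fingertip coordinates, just to illustrate how o_h and r_h are computed.

```python
import numpy as np

def ritter_bounding_sphere(points):
    """Approximate minimum enclosing sphere (Ritter's two-pass heuristic).

    The paper uses the exact minimum sphere; this cheap approximation
    (typically within a few percent of optimal) illustrates the idea.
    """
    pts = np.asarray(points, dtype=float)
    # Pass 1: pick a far-apart pair of points to seed center and radius.
    x = pts[0]
    y = pts[np.argmax(np.linalg.norm(pts - x, axis=1))]
    z = pts[np.argmax(np.linalg.norm(pts - y, axis=1))]
    center = (y + z) / 2.0
    radius = np.linalg.norm(z - y) / 2.0
    # Pass 2: grow the sphere to cover any point still outside it.
    for p in pts:
        d = np.linalg.norm(p - center)
        if d > radius:
            radius = (radius + d) / 2.0
            center += (1.0 - radius / d) * (p - center)
    return center, radius

# Hypothetical fingertip reference points (mm), not from the paper.
p_h = np.array([[0., 0., 0.], [40., 10., 5.], [35., -15., 8.], [20., 25., -3.]])
o_h, r_h = ritter_bounding_sphere(p_h)
# Every reference point is contained in (or on) the virtual sphere:
print(all(np.linalg.norm(p - o_h) <= r_h + 1e-9 for p in p_h))   # True
```

As the text notes, in general some reference points end up strictly inside the sphere rather than on its surface.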

In this paper, the motion of the hand due to the synergies is described by modeling the effects of the hand motion on the virtual sphere:

• a rigid-body motion, defined by the linear and angular velocities of the sphere, \dot{o}_h and ω_h, respectively;

• a non-rigid strain represented by the radius variation. Let \dot{r}_h be the radius derivative that the sphere would have if its radius length were one.

Although the virtual sphere does not represent an object grasped by the paradigmatic hand, it can be easily shown that, with a suitable model of joint and contact compliance, the rigid-body motion of the virtual sphere corresponds to the motion of a grasped spherical object and that the non-rigid motion approximately accounts for the normal components of the contact forces for a spherical object grasp.

Representing the motion of the hand through the virtual object, the motion of the generic reference point p_{ih} can be expressed as

\dot{p}_{ih} = \dot{o}_h + ω_h × (p_{ih} − o_h) + \dot{r}_h (p_{ih} − o_h).

Grouping all the reference point motions one gets

\dot{p}_h = A_h [\dot{o}_h^T  ω_h^T  \dot{r}_h]^T,  (12)

where matrix A_h ∈ ℜ^{n_ch × 7} is defined as follows

A_h = [ I   −[p_{1h} − o_h]_×   (p_{1h} − o_h)
        ⋮          ⋮                  ⋮
        I   −[p_{ih} − o_h]_×   (p_{ih} − o_h)
        ⋮          ⋮                  ⋮       ].  (13)

It is worth noting that the first six columns of A_h, i.e. its first two column blocks in eq. (13), correspond to the transpose of the grasp matrix G defined in eq. (2) and evaluated considering a hard finger contact model, as shown in eq. (3). Matrix A_h depends on the type of motion that we decide to replicate on the robotic hand and hence on the task. From (5), (11) and (12) we can evaluate the virtual sphere motion and deformation as a function of the synergy vector velocity \dot{z} of the paradigmatic hand

[\dot{o}_h^T  ω_h^T  \dot{r}_h]^T = A_h^# \dot{p}_h = A_h^# J_h S_h \dot{z}  (14)

where A_h^# denotes the pseudo-inverse of matrix A_h. In this work, we assume that N(A_h) = ∅. If this is not the case, virtual sphere motions and/or deformations exist such that the corresponding reference point displacement is zero. In this case, the problem is indeterminate and a unique solution for the mapping problem cannot be identified. This situation is similar to indeterminate grasps, in which the transpose of the grasp matrix has a non-trivial nullspace, as discussed in [31].
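A minimal sketch of how the matrix A_h of eq. (13) can be assembled, and of the nullspace condition just discussed. The reference points and the sphere center below are illustrative, not taken from the paper's hand model.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def build_A(points, center):
    """Assemble A (n_c x 7) as in eq. (13): one [I, -[p_i - o]x, (p_i - o)]
    row block per reference point."""
    rows = []
    for p in np.asarray(points, dtype=float):
        r = p - center
        rows.append(np.hstack([np.eye(3), -skew(r), r.reshape(3, 1)]))
    return np.vstack(rows)

# Four illustrative, non-coplanar reference points.
p_h = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [-1., 0., 0.]])
o_h = p_h.mean(axis=0)
A_h = build_A(p_h, o_h)
print(A_h.shape)                          # (12, 7)
# N(A_h) trivial: A_h^# then yields a unique sphere twist + strain in eq. (14).
print(np.linalg.matrix_rank(A_h) == 7)    # True
```

With non-coplanar reference points the rank condition holds: no nonzero rigid motion plus uniform radial strain can leave all reference points still.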

Once we have modeled the effects of the human hand motion on the virtual sphere motions and deformations, we need to map them onto the robotic hand, which is assumed to be in a given initial configuration q_{0r} ∈ ℜ^{n_qr} with reference points p_r ∈ ℜ^{n_cr}. Recall that no hypotheses were imposed on the number of reference points on the paradigmatic human and robotic hands (in general we can have n_ch ≠ n_cr), nor on their locations, nor on the initial configurations of the two hands.

A virtual sphere is computed also for the robotic hand: the minimum sphere enclosing the reference points is found, where o_r indicates its center coordinates and r_r its radius. The virtual spheres of the robotic hand and of the human hand can differ in size. Let us define the object scaling factor as the ratio between the two sphere radii

k_sc = r_r / r_h.  (15)

This factor is necessary to scale the velocities from the paradigmatic to the robotic hand workspace. Note that the scaling factor depends both on the size of the hands and on their configurations. Then, the motion and deformation of the virtual sphere generated by the paradigmatic hand are scaled and tracked by the virtual sphere referred to the robotic hand

[\dot{o}_r^T  ω_r^T  \dot{r}_r]^T = K_c [\dot{o}_h^T  ω_h^T  \dot{r}_h]^T  (16)


where the scale matrix K_c ∈ ℜ^{7×7} is defined as

K_c = [ k_sc I_{3,3}   0_{3,3}   0_{3,1}
        0_{3,3}        I_{3,3}   0_{3,1}
        0_{1,3}        0_{1,3}   1      ].

Eq. (16) represents the idea at the base of the proposed mapping: we impose that the virtual sphere defined on the robotic hand changes its velocity and radius according to the sphere defined on the paradigmatic hand, apart from the scaling factor.

Following eqs. (12) and (13), the corresponding reference point velocity on the robotic hand is given by

\dot{p}_r = A_r [\dot{o}_r^T  ω_r^T  \dot{r}_r]^T,

where matrix A_r ∈ ℜ^{n_cr × 7} is defined as

A_r = [ I   −[p_{1r} − o_r]_×   (p_{1r} − o_r)
        ⋮          ⋮                  ⋮
        I   −[p_{ir} − o_r]_×   (p_{ir} − o_r)
        ⋮          ⋮                  ⋮       ].

From eqs. (14) and (16) we can express the robotic hand reference point velocities \dot{p}_r as a function of the synergy velocities of the human hand \dot{z} as

\dot{p}_r = A_r K_c A_h^# J_h S_h \dot{z}.

Finally, considering the robot hand differential kinematics \dot{p}_r = J_r \dot{q}_r, where J_r ∈ ℜ^{n_cr × n_qr} is its Jacobian matrix, and indicating with J_r^# the robot hand Jacobian pseudo-inverse, the following relationship between robot hand joint velocities and synergy velocities can be established

\dot{q}_r = J_r^# A_r K_c A_h^# J_h S_h \dot{z}.  (17)

If the robotic hand in the given grasp configuration is redundant [31], i.e. if N(J_r) ≠ ∅, the solution of the inverse kinematics problem can be generalized as

\dot{q}_r = J_r^# A_r K_c A_h^# J_h S_h \dot{z} + (I − J_r^# J_r) ζ,

where ζ can be arbitrarily set to exploit the redundancy of the robotic hand. How to manage the robotic hand redundancy is not further investigated in this paper.
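The whole chain of eq. (17) reduces to a few pseudo-inverse products. The sketch below composes them under stated assumptions: all matrices are random placeholders with plausible shapes (a 3-synergy, 20-DoF paradigmatic hand and an 8-DoF robot, three reference points each), not the paper's hand models, which would supply the actual Jacobians and A matrices.

```python
import numpy as np

# Placeholder dimensions and matrices (illustrative only).
rng = np.random.default_rng(1)
n_qh, n_z = 20, 3      # paradigmatic hand joints / synergies
n_qr = 8               # robotic hand joints (e.g. a Barrett-like hand)
n_ch, n_cr = 9, 9      # 3 reference points per hand -> 3*3 velocity components

S_h = rng.standard_normal((n_qh, n_z))    # synergy matrix
J_h = rng.standard_normal((n_ch, n_qh))   # paradigmatic hand Jacobian
A_h = rng.standard_normal((n_ch, 7))      # eq. (13), human-side sphere
A_r = rng.standard_normal((n_cr, 7))      # eq. (13), robot-side sphere
J_r = rng.standard_normal((n_cr, n_qr))   # robotic hand Jacobian

k_sc = 1.5                                 # eq. (15): r_r / r_h
K_c = np.eye(7)
K_c[:3, :3] *= k_sc                        # scale only the linear-velocity block

pinv = np.linalg.pinv
M = pinv(J_r) @ A_r @ K_c @ pinv(A_h) @ J_h @ S_h   # eq. (17) mapping matrix

z_dot = np.array([1.0, -0.2, 0.4])
q_r_dot = M @ z_dot                        # robot joint velocity reference
print(q_r_dot.shape)                       # (8,)
```

Note that M depends on the current configurations (through J_h, J_r, A_h, A_r), so in practice it must be re-evaluated as the hands move.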

Remark 4: The vector of synergy activations \dot{z} referred to the human hand is mapped onto the robotic hand joint velocities \dot{q}_r through eq. (17), which is non-linear and depends on:

• the paradigmatic and robotic hand configurations q_{0h} and q_{0r};

• the location of the reference points for the paradigmatic and robotic hands, p_h and p_r.

V. SIMULATIONS AND PERFORMANCE ANALYSIS

The proposed mapping algorithm was validated through numerical simulations considering two different robotic hand models. The simulations were carried out using the SynGrasp Matlab toolbox [40]. The first model was a three-fingered fully-actuated robotic hand with a kinematic structure that resembles the Barrett Hand [41], with two joints in the thumb (which is fixed with respect to the palm, with no abduction/adduction joint) and three joints for each of the other two fingers (two flexion/extension, one abduction/adduction). The second model was the DLR/HIT II Hand [42]. It has an anthropomorphic structure with five fingers and 15 DoFs. The simulations were performed using Matlab R2009a on a 2.4 GHz Intel Core i5 with 4 GB RAM.

The proposed virtual sphere algorithm was compared with the joint to joint mapping and the fingertip mapping methods summarized in Section II. Other mapping methods [29], [30] were not considered since they cannot be extended to kinematic structures that differ from those proposed in the respective papers without introducing heuristic assumptions (e.g. virtual fingers). It is worth noting that, differently from other approaches where the joint to joint mapping only concerns pre-grasps [23], we extended its use to grasp analysis, including in the hand model both contact and joint compliance. This allows the pure kinematic model to be easily extended to the quasi-static analysis of the grasp.

The actual grasp of different objects was then considered. In this analysis we evaluated what happens if we use the mapping strategy in eq. (17) to control the grasp of real objects. We compared the mapping algorithms in terms of internal force variations and object motions. Both these aspects are fundamental to evaluate the performance of a grasping and manipulation task [43]. Finally, a simulation of a possible application of the method to a manipulation task is reported.

A. Internal force evaluation

The proposed mapping method is independent of the number of contact points considered for the paradigmatic and for the robotic hands. A direct comparison of the forces applied on the object by the hands is not possible when different numbers of contact points are selected. We therefore adopted a measure of the whole object deformation produced by the activation of synergies, in a compliant context as described in Section III, to evaluate and compare the performance of the mapping procedure. We considered the energy variation of the hand-object system due to contact forces.

For each mapping method and for each considered robotic hand, a synergy variation ∆z was imposed on the paradigmatic human hand, resulting in a motion of the robotic hand. Note that, in all the simulations, the fingers not involved in the grasp were moved according to the synergy activation on the paradigmatic hand, while they were left at their respective initial positions on the robotic hands. According to eq. (9), the corresponding internal force variation ∆λ was evaluated. Assuming a linear compliant model for the contacts as described in eq. (6), the contact force variation corresponds to a deformation of the contact springs. Indicating with ∆x the vector containing the deformation components of each contact point, evaluated as ∆x = K_s^{−1} ∆λ, the elastic energy variation produced by the activation of synergies can be computed as

∆E_el = (1/2) K_s ‖∆x‖² = (1/2) K_s^{−1} ‖∆λ‖²,

where K_s is the contact stiffness matrix defined in eq. (6). The sign of the energy variations has been taken into account during the simulations.
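The energy comparison above can be sketched as follows. The stiffness value and the internal force variation are illustrative numbers, and the energy is written out as the explicit quadratic form (1/2) ∆x^T K_s ∆x, which the paper's norm notation abbreviates.

```python
import numpy as np

# Illustrative contact stiffness and internal-force variation (not paper data).
n_c = 9                                   # 3 hard-finger contacts x 3 components
K_s = 10.0 * np.eye(n_c)                  # contact stiffness matrix
dlambda = np.array([1., 0., 0., -0.5, 0.5, 0., 0., -0.5, 0.5])  # force variation

dx = np.linalg.solve(K_s, dlambda)        # spring deformation: dx = K_s^{-1} dlambda
E1 = 0.5 * dx @ K_s @ dx                  # energy from the deformations
E2 = 0.5 * dlambda @ np.linalg.solve(K_s, dlambda)  # same energy from dlambda
print(np.isclose(E1, E2))                 # True
```

Either form can be used; the ∆λ form is convenient here because eq. (9) yields the force variation directly.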


Fig. 6. Energy variation error (%) as a function of the scaling factor k = r_r/r_h, obtained with the Barrett Hand by applying the proposed mapping algorithm, compared with those obtained by applying the joint to joint and fingertip mapping methods.

Tables I and II show how far the Barrett Hand is from replicating the energy obtained with the paradigmatic hand moved along the first, second and third synergies, computed through PCA on experimental measures as described in [5]. In these simulations a precision/fingertip grasp is considered. The reported values indicate, in percentage, the energy variation difference. Two cases were considered, with three and four reference points, respectively, on the paradigmatic hand. In both cases three reference points were assumed on the robotic hand. The size of the grasped object (a real sphere in this case) was selected to fit the average position of the fingers of the paradigmatic hand. The sphere grasped by the Barrett Hand was scaled using the scaling factor defined in eq. (15). As can be seen from the tables, the energy values obtained with the robotic hands using the proposed virtual sphere method are closer to those obtained for the paradigmatic hand, in all the evaluated cases. It is worth noting that when the number of contact points is different, a correction factor k_cp = n_ch/n_cr has to be considered to compare the different values. The deformation energy on the robotic hand was then computed as

∆E_el,r = (1/2) k_cp K_s ‖∆x‖².

The result trends obtained with three and four reference points on the paradigmatic hand are quite similar, and show the robustness of the proposed mapping procedure with respect to the number of reference points chosen on the paradigmatic hand.

Tables III and IV present the results of similar simulations performed with the DLR-HIT II Hand model. We considered the same number of reference points (four) on the robotic hand, while two cases, with four and three reference points respectively, were considered for the paradigmatic hand. Also in this case the proposed mapping method performs better than the other methods, since the energy stored in the contact springs of the robotic hand is closer to the value obtained with the paradigmatic one. This means that the deformation imposed on the grasped object by the robotic hand is similar to the deformation that would be impressed by the paradigmatic hand when a certain synergy input variation

Fig. 7. A power grasp performed by the paradigmatic hand (a), the Barrett Hand (b) and the DLR-HIT II Hand (c).

is applied. Note that in Table IV the fingertip method is not considered since, in that example, we considered more fingers in the robotic hand (four) than in the paradigmatic hand (three), and the fingertip method is not directly applicable.

Additional tests were performed to analyse the sensitivity of the obtained results, in terms of elastic energy variation, with respect to variations of the object size. We considered spheres of different sizes, manipulated by the Barrett Hand, driven by the paradigmatic hand always grasping the same sphere. Results obtained by activating only the first synergy are shown in Fig. 6. On the x-axis, the scaling factors computed as the ratio between the two sphere radii are reported. On the y-axis, the elastic energy variation percentage difference between the paradigmatic and the robotic hand is reported. As is clear from the diagram, the virtual sphere algorithm always has a lower difference value and, moreover, its performance is substantially independent of the scaling factor, and thus of the object size. Similar results are obtained when other synergies are activated.

In Table V the results for a power grasp case are presented. Six contact points were considered on the paradigmatic hand, while four and six points were considered on the Barrett and DLR/HIT II Hand, respectively. Fig. 7 shows the considered grasps. The better results obtained with the DLR/HIT II Hand are due to the higher dexterity of the device, which has 15 DoFs instead of the 8 DoFs considered for the Barrett Hand.


TABLE I
ENERGY VARIATION ERROR FOR THE BARRETT HAND WITH THREE CONTACT POINTS ON THE PARADIGMATIC HAND

Synergies   Virtual Sphere   Joint-to-joint   Fingertip
Syn 1       21%              62%              122%
Syn 2       10%              192%             103%
Syn 3       50%              30%              166%

TABLE II
ENERGY VARIATION ERROR FOR THE BARRETT HAND WITH FOUR CONTACT POINTS ON THE PARADIGMATIC HAND

Synergies   Virtual Sphere   Joint-to-joint   Fingertip
Syn 1       7%               62%              123%
Syn 2       45%              219%             100%
Syn 3       41%              68%              125%

TABLE III
ENERGY VARIATION ERROR FOR THE DLR-HIT II HAND WITH FOUR CONTACT POINTS (PARADIGMATIC FOUR CONTACT POINTS)

Synergies   Virtual Sphere   Joint-to-joint   Fingertip
Syn 1       12%              87%              86%
Syn 2       54%              244%             103%
Syn 3       52%              83%              73%

TABLE IV
ENERGY VARIATION ERROR FOR THE DLR-HIT II HAND WITH FOUR CONTACT POINTS (PARADIGMATIC THREE CONTACT POINTS)

Synergies   Virtual Sphere   Joint-to-joint   Fingertip
Syn 1       1%               86%              −
Syn 2       26%              211%             −
Syn 3       10%              290%             −

TABLE V
ENERGY VARIATION IN POWER GRASPS

Synergies   Barrett   DLR-HIT
Syn 1       22.27%    19.98%
Syn 2       35.17%    28.22%
Syn 3       53.35%    43.50%

B. Object motion during manipulation

In order to investigate how the mapping procedure influences the grasped object trajectory, we simulated the motion ∆u of an object grasped by the robotic hands, while one or a combination of synergies is activated on the paradigmatic hand. As summarized above, ∆u is due both to the rigid body motions of the hand-object system, and to the deformation of the compliance as seen from the contact points [13]. The vector ∆u is composed of the object centroid displacement and the object rotation, and was evaluated according to eq. (8). For the sake of simplicity, in the presented results only the translational component of the object motion was considered. We evaluated the difference between the centroid motion directions of three different objects grasped by the paradigmatic and the robotic hands: a sphere, a cube and a cylinder. The angular difference was computed considering the angle between the two motion

TABLE VI
ANGULAR DIFFERENCES FOR A CUBE USING THE VIRTUAL SPHERE MAPPING

Synergies   Barrett
Syn 1       27.83°
Syn 2       4.31°
Syn 3       34.85°
Syn 1+2     9.87°
Syn 1+3     1.37°

TABLE VII
ANGULAR DIFFERENCES FOR A CYLINDER USING THE VIRTUAL SPHERE MAPPING

Synergies   DLR-HIT II
Syn 1       18.58°
Syn 2       7.31°
Syn 3       0.92°
Syn 1+2     16.02°
Syn 1+3     16.27°

direction unit vectors on the common spanned plane.

The results for the motion of a cubic object are reported in Table VI. In this case we considered the motion due to the activation of the first three synergies, separately and combined, using the virtual sphere mapping onto the Barrett Hand. Concerning the cylinder, Table VII reports the differences between motion directions obtained by mapping the first three synergies, separately and combined, onto the DLR-HIT II Hand. Fig. 10 shows the considered grasps.

For the spherical object, we also evaluated the sensitivity of the mapping procedure with respect to the hand initial configuration, changing the starting position of the robotic hands by varying their palm orientation with respect to the palm of the paradigmatic hand (see Fig. 8). The obtained results are reported in Fig. 9(a) and 9(b) for the Barrett and the DLR-HIT II hand, respectively.

In the graphs, the angles on the x-axis represent the angular

Fig. 8. The Barrett Hand in two different initial configurations. The blue arrow represents the motion direction and the red one represents the rotation axis.


Fig. 9. Angular error of the object motion directions obtained with the three different mapping approaches on the Barrett Hand (a) and on the DLR/HIT II Hand (b), with respect to the motion direction of the object manipulated by the paradigmatic hand.

Fig. 10. Manipulation of a cube and a cylinder using the Barrett and DLR-HIT II hands, respectively: (a), (c) paradigmatic hand; (b) Barrett Hand; (d) DLR/HIT II Hand.

difference between the paradigmatic and robotic palm orientations. The value 0° corresponds to a perfect alignment of the two palms. The orientation was then changed by applying to the robotic hand a rotation about a direction taken as normal to the motion direction of the object manipulated by the paradigmatic hand. The obtained results show that the fingertip method and the virtual sphere mapping are substantially independent of the different orientations of the hands. The virtual sphere mapping behaves better in terms of difference between object directions. The joint to joint mapping method, instead, obtains worse results, and its sensitivity with respect to hand orientation cannot be disregarded. In particular, for the DLR-HIT II Hand, we observe that the joint to joint angular error is approximately linearly dependent on the orientation difference between the hands.

C. Manipulation Task

The mapping algorithm presented in eq. (17) was also tested in a manipulation task. In this example, we considered the grasp of a cubic object. In the initial configuration the paradigmatic hand grasped the cube with three contact points placed at the fingertips of the thumb, the index and the middle finger, as sketched in Fig. 11(a). We suppose that the internal forces in the reference configuration are sufficient to guarantee a stable grasp, i.e. that they satisfy force closure criteria [31]. The same cube was held by the DLR-HIT II Hand, as shown in Fig. 11(c). A cube with dimensions increased by a factor of 2.5 was grasped by the Barrett Hand, as shown in Fig. 11(b). The resulting scaling factor, necessary for the virtual sphere mapping algorithm and dependent both on the object size and on the position of the contact points, was k_sc = 2.85 for the Barrett Hand. It is worth recalling that the scaling factor, defined in eq. (15), is determined by the configurations of both the robotic and the paradigmatic hand, once the contact points are fixed.

According to the results presented in [11], when three contact points are considered and a hard finger contact model is assumed, the dimension of the internal force subspace, i.e. the dimension of the grasp matrix G nullspace, is three; then at least a set of four synergies is needed to produce a rigid body motion, as defined in eq. (10). The first four synergies of the paradigmatic hand were considered to obtain a rigid body motion of the object; they were evaluated by calculating matrix Γ in eq. (10). The paradigmatic hand rigid body motion was then mapped onto movements of the two robotic hands through the proposed virtual sphere mapping algorithm. The centroid of the cube was displaced by 6 mm along the direction defined by the rigid-body motion imposed by the paradigmatic hand. For each integration step, matrix Γ = [Γ_zcs^T  Γ_ucs^T]^T was evaluated from eq. (10). In this example, n_z = 4, Γ_zcs ∈ ℜ^4 and Γ_ucs ∈ ℜ^4, so the rigid body motion produced by the paradigmatic hand was generated by activating the synergies ∆z = Γ_zcs ∆T, with ∆T = 0.04 s. The corresponding object displacement was ∆u_h = Γ_ucs ∆T. The trajectory of the object center can then be updated and the grasp evaluated in the new configuration for the paradigmatic hand. For the robotic hands, the object displacements were evaluated according to eq. (8) as ∆u_r = V_r ∆z. The simulation was carried out in 1.07 s. The trajectories of the cube grasped by the paradigmatic hand and of those manipulated by the robotic hands were then evaluated by numerical integration. For the Barrett Hand, the distance between the final points of the trajectories was 3 mm, and the mean value of the difference along the whole path was 1 mm. These values were evaluated taking into account the scaling factor. In the DLR-HIT II Hand case the trajectories were practically the same. The sensibly better performance of the DLR-HIT II Hand is a consequence of its higher dexterity with respect to the Barrett Hand.

Fig. 12 shows the trajectories of the cubes manipulated by the three hand models. In this plot the actual trajectories are displayed, without re-scaling them to take into account the scaling factor; for this reason the Barrett Hand trajectory is

Page 11: Mapping Synergies from Human to Robotic Hands with ...sirslab.dii.unisi.it › papers › 2013 › Gioioso.TRO.2013.Grasping.Fin.pdf · between a human hand and a robotic hand is

Fig. 11. The human-like hand grasping a cube with the thumb, index and middle fingers (a); the modular three-finger robotic hand (b) and the DLR-HIT II Hand (c) grasping a cube.

Fig. 12. Trajectories of the cubes manipulated by the paradigmatic hand and the two robotic hands, projected on the X−Z plane of the frame {N}.

longer than the other two.

VI. DISCUSSION

The proposed mapping strategy, based on mimicking the behaviour of human hand synergies, establishes the foundations of an interface between a higher level control, which defines the synergy reference values z, and the robotic hand. The high level can be thought of as independent of the robotic hand structure. The proposed mapping strategy represents the interface whereby the input synergies are translated into reference joint velocities which actually control the robotic hand. The main advantage of this approach is that the high level control is substantially independent of the robotic hand and depends only on the specific manipulation tasks to be performed. It is possible, for example, to use the same controller with different robotic hands, or simply to substitute a device without changing the controller, thus realizing a sort of abstraction layer for robotic hand control based on synergies. This mapping has been numerically evaluated in grasping and manipulation tasks. Work is in progress to validate the virtual sphere mapping also for the approaching phase of grasps. Simulation results are encouraging in terms of performance, as shown in the previous section.

However, this approach presents some drawbacks. The proposed mapping is based on a heuristic approach: we choose to reproduce a part of the hand motion, which practically corresponds to moving and uniformly squeezing a spherical object. Although uniformly squeezing and moving an object covers a wide range of tasks, many other possibilities exist in manipulating objects which are not modelled by this mapping. In [44], for instance, an ellipsoid was considered as the virtual object, and its three-dimensional radial deformation was included in the mapping. The algorithm can be modified to consider different shapes and different types of movements in the task space. Increasing the number of parameters in the virtual object and virtual displacement definition allows more complex hand movements to be replicated but, at the same time, increases the mapping complexity and decreases its robustness.

In this work we assume that the location of the contact points is known. Different grasp planning algorithms (e.g. [45]) can be used to choose the contact points and the hand configurations to be used as parameters of the virtual sphere mapping. Concerning the number of contact points on the paradigmatic and robotic hands, the numerical simulations showed that they influence the performance of the mapping procedure in terms of object deformation and displacement. For the DLR-HIT II Hand, for instance, the best results are obtained when three contact points are considered on the paradigmatic hand and four points are considered on the robotic one. This is probably due to the fact that the fourth contact point does not add significant information but, on the contrary, adds a sort of noise to the virtual object displacement and deformation estimation that cannot be replicated by the robotic hand. Furthermore, the different synergies lead to different performance in terms of elastic energy variation and object motion; in particular, mapping synergies that, on the paradigmatic hand, do not produce a significant virtual object displacement or radial deformation is more difficult. This is the case, for example for the DLR-HIT II Hand, of the second synergy with three contact points on the paradigmatic hand, in which the fingers approximately move on the virtual object surface, without producing a significant object displacement or radial deformation.

The performance of the mapping algorithm, in terms of replicating the effect produced by the paradigmatic hand on the object, clearly depends on the kinematic structure of the robotic hand, in particular on the number of fingers and joints per finger, and on their arrangement. Theoretically, there is no lower bound to the number of fingers and joints needed to implement the proposed mapping. The minimum level of hand structure complexity can be defined on the basis of the required performance level.

In this paper we focused on replicating, in terms of virtual object motion and change of internal squeezing force, the action produced by controlling the paradigmatic human hand through synergies. The human hand is the best existing grasping device, and for this reason we started from it to develop the proposed mapping. In particular, the synergy organization of the hand joints was observed and mapped in this work, since it represents a simple but versatile way to control a complex kinematic structure, and is thus naturally the basis on which the mapping procedure was initially developed and tested.

Finally, it is worth underlining that the proposed mapping is not limited to the synergistic motion of the human hand, but can be generalized to arbitrary hand motions by simply using, in place of eq. (11), the joint velocities of the paradigmatic hand in the mapping function (17).

VII. CONCLUSIONS AND FUTURE WORK

Designing synergy-based control strategies in the paradigmatic hand domain can dramatically reduce the dimensionality of grasping and manipulation problems for robotic hands. However, an efficient mapping is needed to deal with robotic hands with dissimilar kinematics. A synergy-based mapping approach can also be useful in uncertain conditions. The human hand synergies are able to adapt to a wide range of tasks and to work in uncertain environments, with an almost infinitely wide set of objects. The synergies are defined to synthesize the common components of all these tasks, and thus represent an optimal trade-off between simplicity and versatility. A synergy-based mapping can take advantage of these properties and consequently can prove to be a robust control solution even when the environment conditions and/or the planned tasks are uncertain.

We proposed a method for mapping synergies that, using a virtual object, allows the mapping to be specified directly in the task space, thus avoiding the problem of dissimilar kinematics between the human-like hand and robotic hands. We compared our solution with the most widely used solutions in the literature and showed that the proposed method is more efficient in terms of internal force mapping and direction of motion. We tested the method in simulation with two robotic hand models with dissimilar kinematic structures. Further investigations on different robotic hands are planned. One of the main issues of our approach is that the mapping is non-linear and that its implementation may entail a high computational burden. Ongoing research is evaluating the conditions under which some simplifications can be applied to obtain constant or slowly varying mappings.

ACKNOWLEDGMENTS

This work was partially supported by the European Commission with the Collaborative Project no. 248587, "THE Hand Embodied", within the FP7-ICT-2009-4-2-1 program "Cognitive Systems and Robotics", and by the Collaborative EU-Project "Hands.dvi" in the context of ECHORD (European Clearing House for Open Robotics Development).

REFERENCES

[1] S. Jacobsen, J. Wood, D. Knutti, and K. Biggers, "The Utah/MIT dextrous hand: work in progress," Int. J. Robot. Res., vol. 3, no. 4, p. 21, 1984.

[2] "The shadow dextrous hand," Shadow Robot Company, [Online]: http://www.shadow.org.uk/products/newhand.shtml.

[3] J. Butterfass, M. Grebenstein, H. Liu, and G. Hirzinger, "DLR-Hand II: next generation of a dextrous robot hand," in Proc. IEEE Int. Conf. Robot. Automat., 2001.

[4] A. Bicchi, "Hands for Dexterous Manipulation and Robust Grasping: A Difficult Road Toward Simplicity," IEEE Trans. Robot., vol. 16, no. 6, pp. 652–662, 2000.

[5] M. Santello, M. Flanders, and J. Soechting, "Postural hand synergies for tool use," J. Neurosci., vol. 18, no. 23, pp. 10105–10115, 1998.

[6] M. Santello and J. F. Soechting, "Force synergies for multifingered grasping," Exp. Brain Res., vol. 133, no. 4, pp. 457–467, 2000.

[7] M. Ciocarlie, C. Goldfeder, and P. Allen, "Dimensionality reduction for hand-independent dexterous robotic grasping," in Proc. IEEE/RSJ Int. Conf. Intel. Robots Syst., 2007, pp. 3270–3275.

[8] C. Brown and H. Asada, "Inter-finger coordination and postural synergies in robot hands via mechanical implementation of principal components analysis," in Proc. IEEE/RSJ Int. Conf. Intel. Robots Syst., 2007, pp. 2877–2882.

[9] M. Ciocarlie and P. Allen, "Hand posture subspaces for dexterous robotic grasping," Int. J. Robot. Res., vol. 28, pp. 851–867, 2009.

[10] T. Wimboeck, B. Jan, and G. Hirzinger, "Synergy-level impedance control for a multifingered hand," in Proc. IEEE Int. Conf. Robot. Automat., 2011.

[11] D. Prattichizzo, M. Malvezzi, and A. Bicchi, "On motion and force controllability of grasping hands with postural synergies," in Robotics: Science and Systems VI. The MIT Press, 2010.

[12] M. Gabiccini, A. Bicchi, D. Prattichizzo, and M. Malvezzi, "On the role of hand synergies in the optimal choice of grasping forces," Auton. Robot., vol. 31, pp. 235–252, 2011.

[13] D. Prattichizzo, M. Malvezzi, M. Gabiccini, and A. Bicchi, "On the manipulability ellipsoids of underactuated robotic hands with compliance," Robot. Auton. Syst., vol. 60, no. 3, pp. 337–346, 2012.

[14] J. Salisbury and J. J. Craig, "Articulated hands, force control and kinematic issues," Int. J. Robot. Res., vol. 1, no. 1, pp. 4–17, 1982.

[15] T. Yoshikawa, "Manipulability of robotic mechanisms," Int. J. Robot. Res., vol. 4, no. 2, pp. 3–9, 1985.

[16] A. Bicchi and D. Prattichizzo, "Manipulability of cooperating robots with passive joints," in Proc. IEEE Int. Conf. Robot. Automat., 1998.

[17] A. Bicchi, "Force distribution in multiple whole-limb manipulation," in Proc. IEEE Int. Conf. Robot. Automat., 1993.

[18] D. Prattichizzo and A. Bicchi, "Consistent task specification for manipulation systems with general kinematics," ASME Journal of Dynamic Systems, Measurement, and Control, vol. 119, pp. 760–767, 1997.

[19] G. Baud-Bovy, D. Prattichizzo, and N. Brogi, "Does torque minimization yield a stable human grasp?" in Multi-Point Physical Interaction with Real and Virtual Objects, F. Barbagli, D. Prattichizzo, and K. Salisbury, Eds. Springer, 2005.

[20] M. Fischer, P. van der Smagt, and G. Hirzinger, "Learning techniques in a dataglove based telemanipulation system for the DLR hand," in Proc. IEEE Int. Conf. Robot. Automat., 1998, pp. 1603–1608.

[21] S. Ekvall and D. Kragic, "Interactive grasp learning based on human demonstration," in Proc. IEEE Int. Conf. Robot. Automat., 2004.

[22] M. Do, J. Romero, H. Kjellstrom, P. Azad, T. Asfour, D. Kragic, and R. Dillmann, "Grasp recognition and mapping on humanoid robots," in Proc. IEEE-RAS Int. Conf. Humanoid Robots, 2009.

[23] M. T. Ciocarlie and P. K. Allen, "Hand posture subspaces for dexterous robotic grasping," Int. J. Robot. Res., vol. 28, no. 7, pp. 851–867, 2009.

[24] A. Peer, S. Einenkel, and M. Buss, "Multi-fingered telemanipulation - mapping of a human hand to a three finger gripper," in Proc. IEEE Int. Sym. Rob. and Hum. Inter. Com., 2008.

[25] J. Liu and Y. Zhang, "Mapping human hand motion to dexterous robotic hand," in Proc. IEEE Int. Conf. Robot. Biomime., 2007, pp. 829–834.

[26] S. B. Kang and K. Ikeuchi, "Robot task programming by human demonstration: mapping human grasps to manipulator grasps," in Proc. IEEE/RSJ Int. Conf. Intel. Robots Syst., 1994.

[27] L. Pao and T. Speeter, "Transformation of human hand positions for robotic hand control," in Proc. IEEE Int. Conf. Robot. Automat., 1989, pp. 1758–1763.

[28] P. Gorce and N. Rezzoug, "A method to learn hand grasping posture from noisy sensing information," Robotica, vol. 22, no. 3, pp. 309–318, 2004.

[29] W. Griffin, R. Findley, M. Turner, and M. Cutkosky, "Calibration and mapping of a human hand for dexterous telemanipulation," in Proc. Sym. Haptic Inter. for Virt. Env. and Tele. Sys., 2000.

[30] H. Wang, K. Low, M. Wang, and G. F., "A mapping method for telemanipulation of the non-anthropomorphic robotic hands with initial experimental validation," in Proc. IEEE Int. Conf. Robot. Automat., 2005.

[31] D. Prattichizzo and J. Trinkle, "Grasping," in Handbook on Robotics, B. Siciliano and O. Khatib, Eds. Springer, 2008, pp. 671–700.

[32] S. Chen and I. Kao, "Conservative congruence transformation for joint and Cartesian stiffness matrices of robotic hands and fingers," Int. J. Robot. Res., vol. 19, no. 9, pp. 835–847, 2000.

[33] A. Bicchi and D. Prattichizzo, "Analysis and optimization of tendinous actuation for biomorphically designed robotic systems," Robotica, vol. 18, no. 1, pp. 23–31, 2000.

[34] M. Malvezzi and D. Prattichizzo, "Evaluation of grasp stiffness in underactuated compliant hands," in Proc. IEEE Int. Conf. Robot. Automat., 2013.

[35] S. Cobos, M. Ferre, S. Uran, J. Ortego, and C. Pena, "Efficient human hand kinematics for manipulation tasks," in Proc. IEEE/RSJ Int. Conf. Intel. Robots Syst., 2008, pp. 2246–2251.

[36] I. Albrecht, J. Haber, and H. Seidel, "Construction and animation of anatomically based human hand models," in Proc. ACM SIGGRAPH/Eurographics Sym. Comp. Anim., 2003, pp. 98–109.

[37] S. Mulatto, A. Formaglio, M. Malvezzi, and D. Prattichizzo, "Using postural synergies to animate a low-dimensional hand avatar in haptic simulation," IEEE Trans. Hapt., vol. PP, no. 99, p. 1, 2012.

[38] K. Kim, Y. Youm, and W. Chung, "Human kinematic factor for haptic manipulation: The wrist to thumb," in Proc. Sym. Haptic Inter. for Virt. Env. and Tele. Sys., 2002, pp. 319–326.

[39] T. Geng, M. Lee, and M. Hulse, "Transferring human grasping synergies to a robot," Mechatronics, vol. 21, no. 1, 2011.

[40] M. Malvezzi, G. Gioioso, G. Salvietti, D. Prattichizzo, and A. Bicchi, "SynGrasp: a MATLAB toolbox for grasp analysis of human and robotic hands," in Proc. IEEE Int. Conf. Robot. Automat., 2013.

[41] "The Barrett hand," Barrett Technology Inc., [Online], Available: http://www.barrett.com/robot/products-hand.htm.

[42] J. Butterfaß, M. Grebenstein, H. Liu, and G. Hirzinger, "DLR-Hand II: next generation of a dextrous robot hand," in Proc. IEEE Int. Conf. Robot. Automat., 2001.

[43] O. Brock, J. Kuffner, and J. Xiao, "Motion for manipulation tasks," in Handbook on Robotics, B. Siciliano and O. Khatib, Eds. Springer, 2008, pp. 615–645.

[44] G. Gioioso, G. Salvietti, M. Malvezzi, and D. Prattichizzo, "An object-based approach to map human hand synergies onto robotic hands with dissimilar kinematics," in Robotics: Science and Systems VIII. The MIT Press, 2012.

[45] C. Borst, M. Fischer, and G. Hirzinger, "Grasp planning: How to choose a suitable task wrench space," in Proc. IEEE Int. Conf. Robot. Automat., 2004.

Guido Gioioso received the "Laurea Specialistica" (con Lode) in Information Engineering from the University of Siena in 2011. He is currently a Ph.D. student at the Department of Information Engineering and Mathematics of the University of Siena and at the Department of Advanced Robotics, Istituto Italiano di Tecnologia. His research interests are robotic and human grasping, haptics, teleoperation and mobile robots.

Gionata Salvietti received the Ph.D. in Information Engineering and the "Laurea Specialistica" (con Lode) in Information Engineering, specialization in Robotics and Automation, from the University of Siena in 2012 and 2009, respectively. He was a visiting researcher at the DLR Institute of Robotics and Mechatronics, Germany, in 2012 and a visiting student at the TAMS group, University of Hamburg, Germany, in 2008. He is currently a postdoctoral researcher at the Department of Advanced Robotics, Istituto Italiano di Tecnologia. His research interests are robotic and human grasping, haptics and medical robotics.

Monica Malvezzi is Assistant Professor of Mechanics and Mechanism Theory at the Department of Information Engineering and Mathematics of the University of Siena. She received the Laurea degree in Mechanical Engineering from the University of Florence in 1999 and the Ph.D. degree in Applied Mechanics from the University of Bologna in 2003. Her main research interests are in control of mechanical systems, robotics, vehicle localization, multibody dynamics, haptics, grasping and dexterous manipulation.

Domenico Prattichizzo received the M.S. degree in Electronics Engineering and the Ph.D. degree in Robotics and Automation from the University of Pisa in 1991 and 1995, respectively. Since 2002 he has been Associate Professor of Robotics at the University of Siena, and since 2009 Scientific Consultant at Istituto Italiano di Tecnologia, Genova, Italy. In 1994 he was a Visiting Scientist at the MIT AI Lab. He is co-editor of the books "Control Problems in Robotics" (2003) and "Multi-Point Physical Interaction with Real and Virtual Objects" (2005), both published by Springer, Guest Co-Editor of the Special Issue on "Visual Servoing" of Mechatronics (2012) and of the Special Issue "Robotics and Neuroscience" of the Brain Research Bulletin (2008), and co-author of the "Grasping" chapter of the "Handbook of Robotics" (Springer, 2008). Since 2007 he has been Associate Editor in Chief of the IEEE Trans. on Haptics; from 2003 to 2007 he was Associate Editor of the IEEE Trans. on Robotics and the IEEE Trans. on Control Systems Technology. His research interests are in haptics, grasping, visual servoing, mobile robotics and geometric control. He is the author of more than 200 papers in robotics and automatic control.