Autonomous Object Handover Using Wrist Tactile Information

Chapter in Lecture Notes in Computer Science, July 2017. DOI: 10.1007/978-3-319-64107-2_35. Uploaded by Jelizaveta Konstantinova on 17 October 2017.



Autonomous Object Handover using Wrist Tactile Information

Jelizaveta Konstantinova1, Senka Krivić2, Agostino Stilli3, Justus Piater2, and Kaspar Althoefer1

1 School of Engineering and Material Science, Queen Mary University of London, U.K.
2 Department of Computer Science, University of Innsbruck, Austria
3 Department of Informatics, King's College London, U.K.

[email protected]

Abstract. Grasping in an uncertain environment is a topic of great interest in robotics. In this paper we focus on the challenge of object handover capable of coping with a wide range of different and unspecified objects. Handover is the action of passing an object from one agent to another. In this work, handover is performed from human to robot. We present a robust method that relies only on force information from the wrist and does not use any vision or tactile information from the fingers. By analyzing readings from a wrist force sensor, models of tactile response for receiving and releasing an object were identified and tested in validation experiments.

1 Introduction

Fully autonomous grasping and manipulation of objects is a topic of great importance in robotics. One of the main challenges is stable grasping of an undefined object in uncertain environments [1]. To successfully grasp an object, the robot is required to appropriately time the motion of the fingers. Enveloping the fingers around the object too early or too late might lead to a weak grip or a complete miss [2]. Often, the capability of grasping an object fully autonomously is not required, as the robot is collaborating with a human when performing a task. Robotic systems are becoming safer and are entering the living and work spaces of people [3]. This is important for scenarios such as household assistance, elderly care, hospital nursing, and assistance during rehabilitation or for people with disabilities. In such contexts, close interaction between the human and the robot is essential. Safety is an issue, as the robot might inadvertently endanger or harm the human; hence, the timing of object grasping and releasing actions is an important aspect in this context.

Handover of an object from human to robot is one of the most common tasks where human-robot interaction is required. Handover simplifies grasp planning and implementation [4], as the object is placed into the hand directly. However, it requires an accurate detection mechanism to discriminate stable grasps.

Several studies have addressed the problem of robotic object handover. Generally, handover is performed based on human motion patterns that are studied and implemented on the robot [5], or such patterns are used as input to a learning-by-demonstration system [6]. Whether the human action is learned or used as a reference, the existing approaches are computationally complex, requiring online robotic learning or a large data set of demonstrations.

A problem common to fully autonomous grasping and collaborative grasping is the ability to determine whether an object is safely grasped or not. It is possible to formally discriminate whether an object is successfully gripped. For example, Nagata et al. [7] presented a grasping system based on force and torque feedback that senses when the human has a stable grasp on the object, after which the robot can release it. Such techniques are computationally intensive or require overly detailed information regarding the placement of the fingers on the object. It is also possible to detect a good grasp by relying on a tactile sensor or a multi-modal sensory system. In [8], the authors use the wrist's current to discriminate a stable grasp. This is a simplistic approach, and it does not scale well to objects with different geometries, friction and material properties. The work in [9] presented a robotic grasping controller, inspired by human trials, that grasps an object with minimal normal forces while ensuring the object does not slip. Such an approach, however, requires a good estimate or prior knowledge of the friction of the objects to be successful. Multi-modal sensory systems increase the information available to the robot to make decisions, such as detecting contacts with the object [10]. However, such systems incur additional computational complexity to perform sensor fusion in real time.

We describe a simple controller that is able to cope with objects of different mechanical properties. It was tested on a multifingered hand and is targeted at human-robot interaction. This paper presents work addressing the problem of object handover using an algorithm that relies on wrist force and torque feedback only. Both the action of handing over the object to the human and that of taking it back from the user are studied. This work is performed as part of the EU project SQUIRREL. The control algorithm is implemented on the SQUIRREL robot platform, which is intended to be used in domestic environments such as a nursery for children.

Section 2 describes the design of the methodology of our studies, including the design of the robotic system and the handover problem. In Section 2.3 we present the experimental studies that were conducted for data collection. Then, in Section 3, the modeling of tactile data for object reception and release is presented. Section 4 presents validation studies that were performed to evaluate the performance of the algorithm. Conclusions are drawn in Section 5.

2 Methodology

2.1 SQUIRREL Robot

The work presented here uses a robotic platform developed for the EU-FP7 project SQUIRREL (Figure 1). It consists of a mobile base (FESTO Robotino),

Fig. 1: SQUIRREL robot with the SoftHand as an end effector.

Fig. 2: Wrist sensor and wrist housing structure.

a custom-made 5-degree-of-freedom lightweight robotic arm (FESTO) and the end effector. In this version of the robot system, the Pisa/IIT SoftHand [11] is used. It is an underactuated multifingered robotic hand with a single actuated degree of freedom. The hand pose is not predetermined; it adapts depending on the physical interaction of its body with the environment. This brings additional challenges for the design of the handover detection algorithm, as the pose of the hand cannot be taken into account. The second version of the SQUIRREL robot is equipped with a KCL metamorphic hand [12] (Figure 3). This hand has a reconfigurable palm and is able to adopt a wide range of different grasping postures. Therefore, this paper focuses on methods to detect handover independently of the end effector, using force information obtained at the wrist.

The wrist of the robotic hand is equipped with a 6-axis force/torque sensor (FT17 from IIT). Apart from the detection of contact during handover, the purpose of this sensor is 1) to act as a safety switch in case of unexpected collision, 2) to detect forces applied during kinesthetic teaching, and 3) to detect the weight of an object during grasping.

Fig. 3: KCL metamorphic hand with reconfigurable palm mounted on theSQUIRREL robot.

The structure of the wrist, shown in Figure 2, is designed to fulfill the following requirements. First, it imposes a mechanical limit on the maximum deflection of the wrist to protect the sensor from overloading and to prevent damage to the structure. In addition, it expands the force range within which the sensor can operate safely before saturation, acting as a spring with a large elastic constant (in comparison to the sensor alone) in parallel with the sensor itself. In order to produce spring-like behavior using a rigid material (ABS 3D-printing material), a flexible cantilever structure was integrated directly into the walls of the structure.

2.2 Object Handover

A handover is a structured action that is composed of different intermediate steps. For instance, human-to-human handovers [13] are composed of three main stages: reaching the receiving agent, signaling the intention to do a handover, and passing the object to the receiver. In this work, we study handovers of objects from a human to a robot (object reception) and vice-versa (object release).

The action of handover is part of a more complex robot behavior, where the robot acts as a companion and assistant to a human. In the overall system, handover is triggered by a high-level planner which defines the type of handover. The handover type is determined by the final hand pose and orientation while performing the transfer of the object. The different types depend on the class of the object and the subsequent robot task.

Different types of handovers are discriminated by the hand posture used to hold or receive the object, i.e., which part of the object is grasped, whether the object has a handle or a specific grasping area (as a knife does), etc. Detecting the most suitable type of handover is outside the scope of this paper. Nevertheless, in this work, the robot is required to identify the contact with an object to be

Fig. 4: Handover of an object to a human.

grasped, or a request to release an object, for different postures of the end effector. An example of a handover is shown in Figure 4.

Our definition of handover between a human and a robot is as follows. To receive an object from a human subject, the robot is required to approach the human, to open the end effector, and to signal to the human the intention of receiving the object. Then, the robot is required to ensure that the object is in contact with the end effector in order to perform a successful grasp and to restrain the object in the robotic hand (Figure 5a). Handing over an object to a human requires approaching the subject while carrying the object in the end effector, ensuring that the object is safely restrained by the human, and releasing the object (Figure 5b). Force sensing is used to confirm that the object is in contact with the robotic hand before receiving it, then to confirm that the object is grasped successfully, and, finally, that it can be safely given to a human during the release stage.
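The reception sequence just described can be sketched as a small state machine. The phase names below approximate the stages of Figure 5a; the contact and grasp checks are hypothetical callables standing in for the wrist-force tests, not an interface from the paper.

```python
from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()
    SIGNAL = auto()
    DETECT_CONTACT = auto()
    GRASP = auto()
    CONFIRM = auto()
    DONE = auto()

def reception_sequence(contact_detected, grasp_confirmed):
    """Step through the object-reception phases.

    `contact_detected` and `grasp_confirmed` are callables standing in
    for the wrist-force checks (hypothetical interfaces)."""
    phase = Phase.APPROACH
    trace = []
    while phase is not Phase.DONE:
        trace.append(phase.name)
        if phase is Phase.APPROACH:
            phase = Phase.SIGNAL            # hand opened, approach done
        elif phase is Phase.SIGNAL:
            phase = Phase.DETECT_CONTACT    # intention signaled
        elif phase is Phase.DETECT_CONTACT:
            # wait until the force sensor reports contact
            phase = Phase.GRASP if contact_detected() else Phase.DETECT_CONTACT
        elif phase is Phase.GRASP:
            phase = Phase.CONFIRM
        elif phase is Phase.CONFIRM:
            # re-grasp if the grasp is not confirmed as stable
            phase = Phase.DONE if grasp_confirmed() else Phase.GRASP
    return trace
```

The release sequence of Figure 5b would follow the same pattern, with the grasp and confirm phases replaced by a release phase.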

2.3 Experimental Studies

Experimental studies were conducted, recording tactile data from the wrist for the analysis and design of a handover detection algorithm. The case study presented in this work and the associated project is a kindergarten scenario, where the robot interacts with children and helps them sort and clean up toys. Such a complex scenario contains a variety of objects of different shapes, sizes and textures. The set of objects used for the handover is shown in Figure 6. Six objects, differing in weight, dimension and stiffness, were selected from toy objects typically found in nurseries.

During this stage of experiments, the decision to grasp or release an object was made based on human visual observation. In all experiments for the algorithm design, the hand pose of the robot was fixed, while during the validation studies (Section 4) new handover types (with different hand poses) were tested.

Fig. 5: (a) The handover sequence for receiving an object from a human (handover type, approach, signal, detect, sensing, grasp, confirm, result); (b) the handover sequence for releasing an object to a human (handover type, approach, signal, detect, sensing, release, result).

3 Modeling of Tactile Information

3.1 Object Reception

Algorithm. Giving an object to the robot is the first stage of the handover. The orientation of the robotic hand before receiving an object is not predefined, as it depends on the previous task and can differ for each trial. However, the wrist remains in the same position during handover. The three-dimensional forces exerted on the sensor by the mounted robotic hand alone change according to its orientation. To compensate for the effect of the orientation of the end effector, the magnitude F of the force vector is considered in this analysis.

The forces encountered while pushing the object into the robotic hand depend on the weight of the specific object and the strength of the subject. When the object is pushed into the robotic hand by a person, dynamic changes in the force magnitude appear. In order to understand those changes, the time derivative, or rate of change, of force is analyzed. Assuming constant mass, the derivative of force is proportional to the derivative of acceleration, or jerk. This value indicates how slowly or how quickly the force is changing.
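The two quantities used here, the orientation-independent force magnitude and its time derivative, can be sketched in a few lines of plain Python. This is an illustrative stand-in for the real sensor pipeline; the sampling period `dt` is an assumed parameter, not specified in the paper.

```python
import math

def force_magnitude(fx, fy, fz):
    """Orientation-independent magnitude of the 3-axis wrist force."""
    return math.sqrt(fx**2 + fy**2 + fz**2)

def force_derivative(samples, dt):
    """Finite-difference rate of change of the force magnitude.

    `samples` is a list of (fx, fy, fz) tuples; `dt` is the sampling
    period in seconds (an assumed value)."""
    mags = [force_magnitude(*s) for s in samples]
    return [(b - a) / dt for a, b in zip(mags, mags[1:])]
```

Because the magnitude discards direction, the same detection logic applies regardless of how the hand is oriented before the handover.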

Fig. 6: Set of objects used in the experiments, containing six objects of various properties.

The final step of the algorithm is to assign a threshold or a classification algorithm that triggers a handover action. In order to detect those changes, an empirical approach was used. A threshold is derived from the experimental data set using the standard deviation of the derivative of force. It is described in the next section and is tested during the validation studies described in Section 4.

Analysis of Experimental Data Set. This section presents the analysis of tactile data recorded during the experimental studies. The target subjects of the scenario are children of pre-school age; the push of a child cannot be compared with the same action performed by an adult. For instance, our data show that the standard deviation of the force encountered for different subjects and trials varies from 0.24 N to 3.14 N. As a handover is characterized by a change of force, the effect of mean magnitude should be removed. The resulting non-dimensional representation of standard deviation is the coefficient of variation, or relative standard deviation, which shows the variability of trials irrespective of the mean magnitude. It is expressed as the ratio of the standard deviation over the mean. The mean value of the relative standard deviation for force magnitude is just 0.03, while the same value for the derivative of force (jerk) is 17.6. This means that the use of jerk can provide a better estimate of the time instant when the contact occurs.
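The coefficient of variation is straightforward to compute; a minimal sketch follows, with invented sample values for illustration.

```python
import statistics

def coefficient_of_variation(values):
    """Relative standard deviation: std / |mean|, dimensionless."""
    return statistics.pstdev(values) / abs(statistics.mean(values))
```

A slowly varying force trace yields a small coefficient, while a spiky, near-zero-mean jerk trace yields a much larger one, which is why jerk localizes the contact instant better than raw force magnitude.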

Figure 7 displays the derivative of force for four randomly selected trials. It can be seen that the force exhibits sudden changes that might correspond to the instant when the object is pushed into the robotic hand.

The next step of the empirical approach is to establish a threshold to detect those changes. The grasp threshold is derived from the standard deviation across all trials. If the jerk of the force exceeds the threshold, then initiating a grasp should lead to a safe grip. The threshold that identifies the contact with an object can be calculated in several ways. Figure 8 shows the distribution of standard deviation values across the experimental data set. The minimum value is 0.22 m/s³ and the maximum value is 1.44 m/s³; the spread of the distribution of standard deviation is thus 1.22 m/s³. Further on, in Section 4, ten different thresholds from the minimum to the maximum value, with a step of 0.12 m/s³, were evaluated.
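This thresholding step can be sketched as follows. Note that the stated step of 0.12 m/s³ does not divide the 0.22–1.44 m/s³ range into exactly ten values, so the enumeration below is approximate, and the sliding window of jerk samples is an assumed interface rather than the paper's implementation.

```python
import statistics

def candidate_thresholds(lo=0.22, hi=1.44, step=0.12):
    """Enumerate candidate grasp thresholds from the minimum to the
    maximum of the observed standard-deviation distribution."""
    out, t = [], lo
    while t <= hi + 1e-9:
        out.append(round(t, 2))
        t += step
    return out

def contact_detected(jerk_window, threshold):
    """Flag contact when the standard deviation of recent jerk samples
    exceeds the chosen threshold."""
    return statistics.pstdev(jerk_window) > threshold
```

Each candidate threshold would then be scored on the validation trials, as described in Section 4.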

Fig. 7: Derivative of force (jerk) for four randomly selected trials. Peaks indicate the instant of contact.


Fig. 8: Distribution of the standard deviation of jerk for object reception across the validation data set.

3.2 Object Releasing

Algorithm. Release of the grasped object to the human is the last stage of a handover action. During object release the robot is required to open the hand. A release is triggered when the human securely holds or pulls the object. A release request is identified by observing the force sensor at the wrist only, as is already done for the reception stage. Based on the analysis of force magnitude, it was found that the release action consists of two different force patterns. Therefore, it requires a different approach compared to the action of object reception.

For this reason, a second ad-hoc algorithm was developed and evaluated. Its pseudocode is presented in Algorithm 1. The algorithm requires as input the force magnitude calculated from the forces at the wrist. The force magnitude was detrended by removing the mean. A release action is triggered after N consecutive samples with sign opposite to the reference are detected; in other words, detection of a steady change in the direction of the force magnitude is required. The value of N was empirically set to five. In addition, the empirical algorithm based on the standard deviation of the force derivative, as used for object reception, was tested.

Input: F: force magnitude
Output: true if and only if release is confirmed

m ← mean(F[0], F[1], ..., F[K])
refSign ← sign(F[K] − m)
n ← 0
// When countStarted is true, the algorithm counts the number of samples with opposite sign
countStarted ← false
for i = 1; n < N; i++ do
    detectSign ← sign(F[K + i] − m)
    if detectSign != refSign then
        countStarted ← true
        n++
    else if countStarted then
        n ← 0
        // Swap the sign: positive to negative and vice-versa
        refSign ← invert(refSign)
        countStarted ← false
    end
end
return true  // The loop terminates once N samples in a row with sign opposite to the reference are detected

Algorithm 1: Detection algorithm for releasing an object. The mean is calculated from an arbitrary number K of samples.
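A direct Python transcription of Algorithm 1 may help make it concrete. The guard for running out of samples is an addition not present in the pseudocode, and sign(0) is arbitrarily treated as positive; both are assumptions of this sketch.

```python
def sign(x):
    """Sign of x; zero is arbitrarily treated as positive."""
    return 1 if x >= 0 else -1

def release_detected(F, K, N=5):
    """Return True once N consecutive samples after index K have a sign
    (relative to the mean of the first K+1 samples) opposite to the
    reference sign, following Algorithm 1. Returns False if the sample
    buffer is exhausted first (an added guard for offline data)."""
    m = sum(F[:K + 1]) / (K + 1)       # mean of the first K+1 samples
    ref_sign = sign(F[K] - m)
    n = 0
    count_started = False
    i = 1
    while n < N:
        if K + i >= len(F):
            return False               # ran out of samples
        detect_sign = sign(F[K + i] - m)
        if detect_sign != ref_sign:
            count_started = True
            n += 1
        elif count_started:
            n = 0
            ref_sign = -ref_sign       # swap the reference sign
            count_started = False
        i += 1
    return True
```

With N = 5, a sustained drop of the detrended force below its earlier level, as happens when a person pulls the object, triggers the release.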

Analysis of Experimental Data. The release-request motion patterns can be divided into two main categories, as shown in Figure 9. The first strategy, shown as a red dotted line, corresponds to sharp, sudden peaks, and is similar to the pattern observed for object reception. The second strategy, shown as a blue solid line, represents the scenario in which the object is grasped and then pulled with a steady force. The change of force magnitude corresponds to a ramp in the recorded data.

Similarly to the stage of object reception, the empirical approach uses the distribution of the standard deviation of jerk, shown in Figure 10. The minimum value of the jerk distribution is 0.26 m/s³, and the maximum is 1.20 m/s³. Therefore, the evaluation of the optimal threshold for the empirical approach was performed from the minimum to the maximum value with a step of 0.09 m/s³.

Fig. 9: Two types of strategies detected during the handover action of object release. Trials are randomly selected from the two types of patterns.

4 Validation Studies

This section describes the validation studies that were carried out in order to test and determine the best performance of the proposed handover detection methods. Five new subjects and a set of different objects were used. The objects are shown in Figure 11; each subject arbitrarily selected six of them. The subject stood within reach of the robot.

To assess and compare the performance of the proposed algorithms, experimental studies were carried out on the real robot. The grasping and releasing scenarios were tested separately, and for both actions the proposed approaches were tested and compared. The success rate of each approach was evaluated: in other words, we studied how often a certain algorithm is able to detect a grasp or a release request correctly, as well as the number of false positives.
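The evaluation just described, per-trial correctness plus a false-positive count, can be sketched as follows; the boolean per-trial interface and the example values are illustrative, not the paper's data.

```python
def evaluate_detector(predictions, ground_truth):
    """Success rate (percent) and false-positive count for a detector.

    `predictions` and `ground_truth` are parallel lists of booleans,
    one entry per trial (hypothetical interface)."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    false_pos = sum(p and not g for p, g in zip(predictions, ground_truth))
    return 100.0 * correct / len(ground_truth), false_pos
```

Running this per algorithm and per scenario gives directly comparable success rates for the grasping and releasing detectors.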


Fig. 10: Distribution of the standard deviation of jerk for object releasing across the experimental data set.

Fig. 11: Objects used for validation studies.


Fig. 12: Detection rate in percent for different thresholds derived from the distribution of jerk for object reception.

4.1 Validation of Object Reception

The validation of the empirical approach for object reception was performed as follows. The optimal threshold is calculated based on the performance on the evaluation data set in discriminating the moment of contact. Figure 12 shows the performance of the empirical algorithm for different thresholds. It can be observed that the performance of the algorithm improves until it reaches a peak (0.94 m/s³) and then stays at the same level. Therefore, the threshold that corresponds to the highest detection rate is chosen for this algorithm.
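The selection rule above can be sketched as picking the smallest threshold that attains the peak detection rate. The rates below are invented placeholders shaped like the trend in Figure 12 (rising, then flat), not the measured values.

```python
def best_threshold(rates):
    """Smallest threshold achieving the peak detection rate.

    `rates` maps candidate thresholds to detection rates in percent."""
    peak = max(rates.values())
    return min(t for t, r in rates.items() if r == peak)

# Placeholder rates that rise to a plateau, as in Figure 12.
rates = {0.22: 40, 0.46: 55, 0.70: 70, 0.94: 80, 1.16: 80, 1.40: 80}
```

Taking the smallest threshold on the plateau matches the choice of 0.94 described above, since higher thresholds add no detection benefit.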

4.2 Validation of Object Releasing

Two approaches were tested for the validation of the object-releasing action of handover. The empirical method used for the grasping algorithm shows poor performance, with a maximum detection rate of 46%; its performance is shown in Figure 13. The ad-hoc algorithm developed for release detection, instead, performed well, with 89% correct classification. Figure 14 shows example cases of classification for the ad-hoc algorithm. The black dot indicates the time instant of successful release detection. It can be seen that the algorithm performs well for both release strategies. The ad-hoc algorithm is also simple to implement and fast to execute.

Fig. 13: Detection rate in percent for different thresholds derived from the distribution of jerk for object releasing.

Fig. 14: Performance of the ad-hoc algorithm for both release strategies, shown as different lines; markers indicate the moment of hand release.

5 Conclusions

In this paper we presented an approach to handover detection using only the information from a wrist force/torque sensor. The algorithms for object reception and release were developed based on experimentally collected data and were validated via a separate set of experimental studies. The proposed algorithm was developed for a wide range of toy objects that are typical of objects commonly found in household environments.

In future work, it is planned to develop a learning algorithm for the ad-hoc approach. Additionally, a recurrent neural network could be tested to take into account the influence of previous force readings when deciding whether to take an action or stay idle.

ACKNOWLEDGMENT

The research leading to these results has received funding from the European Community's Seventh Framework Programme under grant agreement no. 610532, SQUIRREL, and from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 287728, FourByThree.

References

1. Dov Katz, Yuri Pyuro, and Oliver Brock. Learning to manipulate articulated objects in unstructured environments using a grounded relational representation. In Proceedings of Robotics: Science and Systems IV, pages 254–261, Zurich, Switzerland, June 2008.

2. G. Cotugno, V. Mohan, K. Althoefer, and T. Nanayakkara. Simplifying grasping complexity through generalization of kinaesthetically learned synergies. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 5345–5351, May 2014.

3. I. Maurtua, N. Pedrocchi, A. Orlandini, J. d. G. Fernández, C. Vogel, A. Geenen, K. Althoefer, and A. Shafti. FourByThree: Imagine humans and robots working hand in hand. In 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), pages 1–8, Sept 2016.

4. Aaron Edsinger and Charles C. Kemp. Human-robot interaction for cooperative manipulation: Handing objects to one another. In 16th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2007), pages 1167–1172. IEEE, 2007.

5. Chien-Ming Huang, Maya Cakmak, and Bilge Mutlu. Adaptive coordination strategies for human-robot handovers. In Robotics: Science and Systems, 2015.

6. Andras Kupcsik, David Hsu, and Wee Sun Lee. Learning dynamic robot-to-human object handover from human feedback. arXiv preprint arXiv:1603.06390, 2016.

7. K. Nagata, Y. Oosaki, M. Kakikura, and H. Tsukune. Delivery by hand between human and robot based on fingertip force-torque information. In Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 2, pages 750–757, Oct 1998.

8. Heinrich Mellmann and Giuseppe Cotugno. Dynamic motion control: Adaptive bimanual grasping for a humanoid robot. Fundamenta Informaticae, 112(1):89–101, 2011.

9. M. J. Sadigh and H. Ahmadi. Robust control algorithm for safe grasping based on force sensing. In 2008 IEEE International Conference on Robotics and Biomimetics, pages 1279–1284, Feb 2009.

10. Javier Felip, Antonio Morales, and Tamim Asfour. Multi-sensor and prediction fusion for contact detection and localization. In 14th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2014), pages 601–607. IEEE, 2014.

11. Manuel G. Catalano, Giorgio Grioli, Alessandro Serio, Edoardo Farnioli, Cristina Piazza, and Antonio Bicchi. Adaptive synergies for a humanoid robot hand. In 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012), Osaka, Japan, November 29 - December 1, 2012, pages 7–14, 2012.

12. Jian S. Dai, Delun Wang, and Lei Cui. Orientation and workspace analysis of the multifingered metamorphic hand (Metahand). IEEE Transactions on Robotics, 25(4):942–947, 2009.

13. Kyle Strabala, Min Kyung Lee, Anca Dragan, Jodi Forlizzi, Siddhartha Srinivasa, Maya Cakmak, and Vincenzo Micelli. Towards seamless human-robot handovers. Journal of Human-Robot Interaction, 2013.
