Natural User Interface for Robot Task Assignment

Steven J. Levine
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139 USA
Email: [email protected]

Shawn Schaffert and Neal Checka
Vecna Labs, Vecna Technologies, Inc.
Cambridge, MA 02140 USA
Email: {shawn.schaffert, nchecka}@vecna.com

Abstract—We present a user interface that utilizes hand-tracking and gesture detection to provide a natural and intuitive interface to a semi-autonomous robotic manipulation system. The developed interface leverages humans and robots for what they each do best. The robotic system performs object detection, and grasp and motion planning. The human operator oversees high-level task assignment and manages unexpected failures. We discuss how this low-cost, task-level interface can reduce operator training time, decrease an operator's cognitive load, and increase situational awareness. There are many potential applications for this system, such as explosive ordnance disposal, logistics package handling, and assembly tasks.

I. INTRODUCTION

There are many domains in which humans must expend significant attention and care in controlling the low-level details of robotic systems. Highly trained experts are required to operate explosive ordnance disposal (EOD) robots whose wheels and manipulators are manually controlled. Deep-sea scientific exploration missions often employ remotely operated vehicles with complex manipulators, each joint of which is often individually controlled by a joystick from the surface. Even telepresence robots designed for untrained operators are often challenging to control due to limited fields of view and network latency.

Although these manual low-level control techniques are often effective, they do have a serious drawback: a huge amount of attention must be invested by human operators, who must carefully consider numerous low-level details of the platform. The cognitive load required even for a simple task such as picking up an object can be quite large. Furthermore, in some domains, it is not even possible to teleoperate a robot at reactive time scales. For example, it would be quite difficult for an operator on Earth to control NASA's Robonaut robot on a space mission, as the round-trip communication time governed by the speed of light (on the order of minutes) would make fast teleoperation challenging [10].

We introduce a novel method for human-robot interaction that addresses these difficulties by elevating the level of abstraction at which a human operator controls robots. Instead of controlling the low-level details of how robots complete their actions, a human operator uses a gestural interface to quickly specify high-level goals for the robot, which are then executed autonomously via motion and grasp planning algorithms. This allows the human operator to save time and focus on the greater picture of the task at hand, rather than on the numerous low-level control details. Furthermore, we surmise that such a system would be of great value to many industries; a single human operator could control many robots in parallel, with each autonomously managing its own low-level details. Such a system is a stepping stone towards introducing greater autonomy in many industrial settings.

We implement this user interface for a simple logistics domain, in which a robot stacks blocks representing palletized shipments. A human, operating at a virtual reality control station, sees the surroundings of a robot, including obstacles and objects capable of being manipulated. The human operator may then “drag and drop” a virtual object using natural hand gestures, in effect specifying its desired goal placement. The robot then autonomously computes an appropriate grasp and motion plan to move the desired object, without further human intervention (unless an unexpected failure requires assistance).

The rest of this paper is organized as follows. We first examine prior related work in the field of teleoperated robotics. We then describe our system, detailing its perception, manipulation, and user interface components. We conclude with experimental results and a discussion of future work.

II. RELATED WORK

Robotics researchers have explored several input modalities for teleoperation interfaces, from joysticks [11] to 3D depth-based systems [3, 8]. Some input systems [3] directly control the robot's joints, while other systems [6, 8] incorporate gesture recognition to send discrete commands to the robot. Specialized input systems that allow the operator to both directly control the robot and send discrete commands have also been developed, such as the system of Yussof et al. [17]. Their system utilizes a 3D mouse and head-mounted display interface for interacting with a humanoid robot. The user can directly specify end-effector points with the 3D mouse or, via gesture recognition, send a specific command to the robot. More recently, touch-based mobile devices have been utilized; Checka et al. [2] demonstrated a multi-modal, handheld interface for controlling a hospital logistics robot.

Several researchers [3, 5, 7] have developed RGB- and depth-based contactless hand-tracking systems for teleoperation. These systems track the full 6D pose of the operator's hand to control the robot's end-effector, often through a virtual robot model.


Fig. 1. System architecture. A human's hand is tracked at a virtual reality workstation using a structured light sensor (SLS) such as a Kinect, and gestures are estimated. These are used by the user interface to manipulate virtual objects in the robot's workspace, as detected by the perception system via a second SLS. Upon moving a virtual object, the grasp engine and ROS motion planning framework generate a collision-free trajectory for a pick-and-place operation, which the robot then executes.

For instance, in the work by Du et al. [3], the user's hand is directly mapped to the end-effector pose of a virtual 6D manipulator. Each assigned pose is converted, via inverse kinematics, to joint goals that are carried out by the actual robot. A live scene of the robot superimposed with the virtual robot model is provided as feedback to the user. In contrast to these approaches, our interface operates entirely at the task level, integrating higher levels of autonomy such as collision-free planning and automatic object detection.

Thompson [16] developed a multiscale teleoperation demo which shares a number of similarities with our work. Their approach also uses an SLS and the 3Gear Systems tracking system, combined with the ROS motion planning pipeline, to control a robotic end-effector. Their approach is novel in that the user may adjust the motion scale factor, enabling increased positioning accuracy. Our approach differs significantly, however, in that users of our interface manipulate objects of interest at a high level of abstraction, rather than the robot's end-effector itself. Additionally, we achieve increased positioning accuracy by “snapping” objects to useful locations, rather than through a scale-knob interface.

III. SYSTEM OVERVIEW

In this section, we describe the various components of our gesture-based user interface. A high-level architecture diagram is shown in Figure 1.

A. 3D Hand Tracking

To track the human's hand pose, we use a point cloud-based approach developed by 3Gear Systems¹. Their software is capable of tracking a human's hand and fingers using a structured light sensor (SLS), such as a Kinect. In addition to tracking both the position and orientation of each hand, the system also recognizes various gestures, such as pinching.

¹ Please see http://www.threegear.com/ for more information.

We use an Asus Xtion Pro structured light sensor mounted above the user's hands, facing downward, enabling a small, desktop-sized user workspace (as illustrated in Figure 1). The output of the hand and gesture tracking is then made available to the user interface via ROS topics.
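For concreteness, the sketch below shows how a user-interface node might consume the tracker's output over ROS. The topic names and message types (PoseStamped for the hand pose, Bool for the pinch state) are illustrative assumptions; the paper does not specify the exact interface exposed by the 3Gear bridge.

```python
# Hedged sketch: subscribing to assumed hand-tracking topics.
import rospy
from geometry_msgs.msg import PoseStamped
from std_msgs.msg import Bool

class HandInput(object):
    def __init__(self):
        self.pose = None        # latest 6D hand pose
        self.pinching = False   # latest pinch-gesture state
        # Topic names below are placeholders, not the actual ones.
        rospy.Subscriber("/hand_tracker/pose", PoseStamped, self.on_pose)
        rospy.Subscriber("/hand_tracker/pinch", Bool, self.on_pinch)

    def on_pose(self, msg):
        self.pose = msg.pose

    def on_pinch(self, msg):
        self.pinching = msg.data

if __name__ == "__main__":
    rospy.init_node("hand_input_listener")
    hand = HandInput()
    rospy.spin()
```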

B. Object Detection and Tracking

Our object detection and tracking system utilizes an SLS directed at the robot's workspace. Given a 3D point cloud from the sensor, our approach to object detection proceeds by first segmenting visual observations into background (static scene) and foreground (objects) and then matching each detected object to known shape models using a model fitting algorithm. Our initial system is limited to box-like objects; however, we are currently developing a model fitting approach based on the iterative closest point (ICP) algorithm [1] to detect general objects given a CAD model.

In our application, we manually define a region of interest (ROI) that corresponds to the extent of the working area. We also assume that within the ROI, the background consists of only the ground plane. Given the 3D point cloud of the scene, the ground plane is estimated as the dominant plane and extracted using a robust plane fitting algorithm. Due to its simplicity and efficacy, we chose the RANSAC algorithm [4] for dominant plane estimation.

RANSAC proceeds by iteratively:
• creating a sample S_i from a small number of points randomly chosen from the initial 3D point cloud,
• estimating a plane equation p_i from each sample S_i, and
• computing the number of inliers I_i corresponding to plane p_i. In our implementation, a point is considered an inlier if its distance to plane p_i is less than a user-defined threshold.

RANSAC estimates the dominant (ground) plane p as the plane p_k corresponding to the sample with the highest inlier count I_k. Optionally, the estimate of plane p can be refined by performing a non-linear optimization using all available inliers. The result of the segmentation step is shown in Figure 2.
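The sketch below illustrates the RANSAC dominant-plane estimate described above using NumPy. It is illustrative only; the iteration count and inlier threshold are placeholder values, not those used in the system.

```python
# Illustrative RANSAC plane fit over an N x 3 point cloud.
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    best_plane, best_inliers = None, 0
    for _ in range(n_iters):
        # Sample S_i: three random points define a candidate plane p_i.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal.dot(p0)
        # Count inliers I_i: points within the distance threshold of p_i.
        dist = np.abs(points @ normal + d)
        inliers = int((dist < inlier_thresh).sum())
        if inliers > best_inliers:
            best_plane, best_inliers = (normal, d), inliers
    return best_plane, best_inliers
```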

Next, points corresponding to the ground plane are removed from the initial 3D point cloud. Then, an agglomerative clustering algorithm groups the remaining points into different candidate objects. Unlike the k-means clustering algorithm, agglomerative clustering does not require a priori knowledge of the number of clusters and only exploits the topological structure of the objects, not their geometry. The result of the clustering step is shown in Figure 3(a).
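This clustering step could be approximated with an off-the-shelf routine; the sketch below uses scikit-learn's AgglomerativeClustering with a distance threshold, so that the number of clusters is not fixed in advance. It is a stand-in for illustration, not the implementation used in the system, and the linkage distance is an assumed parameter.

```python
# Sketch: group non-ground points into candidate object clusters.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_objects(points, link_dist=0.03):
    # n_clusters=None with a distance_threshold lets the number of
    # objects emerge from the data rather than being fixed a priori.
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=link_dist,
        linkage="single").fit_predict(points)
    return [points[labels == k] for k in np.unique(labels)]
```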

To extract box models from a 3D point cloud, we implemented a cuboid fitting algorithm. The input to the algorithm is a 3D point cloud corresponding to an object cluster extracted using the previously described segmentation algorithm. The algorithm proceeds by first extracting face primitives.



Fig. 2. (a) Input color image. (b) Input depth image with the estimated ground plane (in green).

Face primitives are defined by a plane equation and a set of bounds that correspond to the extent of the associated point cloud. These primitives are estimated using the RANSAC plane fitting algorithm, which is applied iteratively to the input 3D point cloud. The final step clusters the face primitives. Each cluster represents either a box or a non-box object. To achieve this, we implemented a randomized hypothesis prediction-and-verification algorithm.

The algorithm iteratively:
• extracts a subset of face primitives and computes a box model from the subset,
• verifies whether the computed box model is consistent with visual observations by computing a fitting error, and
• retains the box model if the fitting error is smaller than a user-defined threshold.

In this framework, we define the fitting error E for a box model B as

E = \sum_i \min_j \lVert B_i - M_j \rVert^2

where B_i are sample points from box model B and M_j are 3D input points (visual observations). The final result of the cuboid fitting step is shown in Figure 3(b).
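The fitting error above translates directly into a few lines of NumPy. The sketch below is illustrative and assumes the box samples and observations are given as K x 3 and N x 3 arrays.

```python
# Sketch: fitting error E for a hypothesized box model.
import numpy as np

def box_fitting_error(box_samples, observed_points):
    # Pairwise squared distances between box samples (K x 3) and
    # observations (N x 3); take the nearest observation for each
    # box sample, then sum the squared distances.
    diff = box_samples[:, None, :] - observed_points[None, :, :]
    sq_dist = np.einsum("ijk,ijk->ij", diff, diff)
    return sq_dist.min(axis=1).sum()
```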

C. Hand Gesture-based User Interface

Now that we have introduced the hand and object tracking subsystems, we describe how they are integrated to form the user interface.

A virtual world model of the robot's surroundings is constructed and displayed to the user. This model incorporates the objects of interest, the robot itself, and any other features in the robot's workspace as detected by the SLS. An example view can be seen in Figure 4.


Fig. 3. (a) Two clusters estimated using agglomerative clustering. (b) Cuboid models fit to the clusters using a randomized hypothesis prediction-and-verification algorithm.


Fig. 4. The robot's world model, as displayed to the human operator. Note the blue robot arm, the objects to be manipulated (blocks in this scene), and the white glove (cursor) representing the human's hand. (a) After pinching a block, the block follows the cursor, allowing the person to place it down at a desired location. (b) The block has “snapped” into the desired position, exactly aligned with the box below, thereby increasing the precision with which the operator may specify place goals.

Fig. 5. The virtual cursor (white glove visible on screen) mimics the user's hand, allowing a person to naturally manipulate virtual objects. The robot then executes the desired pick-and-place operation. For a video, please see http://goo.gl/w3jo4Y


Additionally overlaid in this world model is a 3D cursor displayed as a white glove. The movement of this cursor mirrors the operator's hand movements at the control station, providing a natural and intuitive method of interacting with the robot's virtual surroundings. For ease of use, the color of the cursor changes when the person pinches, or to indicate that the cursor is touching a graspable virtual object. A human operator, as well as the white glove cursor, can be seen in Figure 5.

The operator interacts with the virtual world as follows. When the cursor is touching an object, a pinch will attach that object to the cursor, allowing it to be dragged around. A second pinch drops the object in place, thereby allowing the operator to move objects in the virtual world. This final location specifies a goal location for the object, and the robot will proceed to pick up and move the real object.
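The pinch-to-attach, pinch-to-release behavior can be summarized as a small edge-triggered state machine, sketched below. The world-model helpers (object_touching, move_virtual, set_goal_pose) are assumed names for illustration, not the system's actual API.

```python
# Hedged sketch of the pinch-based drag-and-drop interaction loop.
class DragController(object):
    def __init__(self, world):
        self.world = world        # virtual world model with object poses
        self.held = None          # object currently attached to the cursor
        self.prev_pinch = False

    def update(self, cursor_pose, pinching):
        # Act only on the rising edge of the pinch gesture.
        if pinching and not self.prev_pinch:
            if self.held is None:
                obj = self.world.object_touching(cursor_pose)  # assumed helper
                if obj is not None:
                    self.held = obj                            # first pinch: attach
            else:
                self.world.set_goal_pose(self.held, cursor_pose)  # second pinch: drop
                self.held = None
        if self.held is not None:
            self.world.move_virtual(self.held, cursor_pose)    # dragged object follows cursor
        self.prev_pinch = pinching
```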


Fig. 6. Six potential grasps for autonomously picking up a block.

It is important to note that the robot will not move the object through the same path used by the human operator. Rather, it will plan its own path autonomously, so long as the object's desired pose is achieved. This allows the operator to focus on higher-level goals (figuring out where the objects should go) and tasks the robot with figuring out how to achieve them (finding a specific grasp and collision-free trajectory).

To improve accuracy, we also incorporated a “snapping” feature into the user interface. Dragged blocks snap to key poses often desired by users. For example, blocks can easily be snapped on top of other blocks, or to orientations parallel or perpendicular to other blocks, to ease stacking. This behavior can be seen in Figure 4(b).
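One simple way to realize such snapping is sketched below: if the dragged block is released close enough (in the horizontal plane) to another block, its pose is replaced by an aligned stacking pose. The tolerance, planar pose representation, and 90-degree orientation steps are assumptions for illustration, not the system's actual rules.

```python
# Sketch: snap a dragged block onto the top of a nearby block.
import numpy as np

def snap_on_top(drag_xyz_yaw, other_blocks, xy_tol=0.04, yaw_step=np.pi / 2):
    x, y, z, yaw = drag_xyz_yaw
    for bx, by, bz_top, byaw in other_blocks:   # (center x, y, top height, yaw)
        if abs(x - bx) < xy_tol and abs(y - by) < xy_tol:
            # Snap position onto the supporting block and orientation to the
            # nearest multiple of 90 degrees relative to it.
            snapped_yaw = byaw + yaw_step * round((yaw - byaw) / yaw_step)
            return (bx, by, bz_top, snapped_yaw)
    return (x, y, z, yaw)
```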

Visual feedback to the user is implemented using rviz, the ROS visualization tool [9]. Our software receives the hand pose and gesture estimates from the 3Gear Systems software and the detected and tracked objects from the perception system, and publishes appropriate marker messages to rviz to display the virtual world. Additionally, our software makes calls to the manipulation pipeline and grasp planning engine to implement the autonomous pick-and-place operations.
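As an illustration of this feedback path, the sketch below publishes a detected box to rviz as a visualization_msgs/Marker. The frame, topic, namespace, and color are placeholders rather than the system's actual values.

```python
# Sketch: display a detected box in rviz as a CUBE marker.
import rospy
from visualization_msgs.msg import Marker

def make_box_marker(marker_id, pose, size):
    m = Marker()
    m.header.frame_id = "workspace"      # assumed fixed frame
    m.header.stamp = rospy.Time.now()
    m.ns, m.id = "detected_objects", marker_id
    m.type, m.action = Marker.CUBE, Marker.ADD
    m.pose = pose                        # geometry_msgs/Pose of the box
    m.scale.x, m.scale.y, m.scale.z = size
    m.color.r, m.color.g, m.color.b, m.color.a = 0.8, 0.5, 0.1, 1.0
    return m

# Example use (after rospy.init_node):
# pub = rospy.Publisher("visualization_marker", Marker, queue_size=10)
# pub.publish(make_box_marker(0, detected_pose, (0.1, 0.1, 0.1)))
```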

D. Autonomous Manipulation

Once the user has picked up and placed a virtual object by “pinching and placing,” it is the robot's responsibility to autonomously implement that operation. As noted earlier, the robot is not required to move the object through the same path taken by the human. Rather, it autonomously plans its own collision-free trajectory, so that the operator is not burdened with the exact details of how to move the object.

We leverage many recent advances in robotic motion planning provided by the ROS manipulation pipeline. Implemented in ROS Fuerte with the arm_navigation stack (now superseded by ROS MoveIt!), our system uses various motion planning algorithms provided by the OMPL planning package [14, 15] to compute motion trajectories.

We use a custom grasp engine to compute feasible grasps for the robot around an object of interest. These grasps are currently implemented for simple shape primitives, such as boxes and cylinders, and take into account the object geometry and the gripper's size limitations. Several candidate grasps are shown overlaid in Figure 6. Of the generated candidate grasps, only those that are kinematically feasible (as tested by computing inverse kinematics [12]) and collision-free are considered further for motion planning. Furthermore, motion planning considers the feasibility of both the pick and place operations simultaneously for a given grasp, to increase robustness.
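The filtering stage can be summarized as in the sketch below: a candidate grasp is kept only if both the pick pose and the corresponding place pose admit an inverse kinematics solution and are collision-free. The grasp generator, IK solver, and collision checker are assumed interfaces standing in for the custom grasp engine and KDL-based checks, not the paper's actual API.

```python
# Hedged sketch: filter candidate grasps by pick-and-place feasibility.
def feasible_grasps(box, place_pose, grasp_generator, ik_solver, is_collision_free):
    kept = []
    for grasp in grasp_generator(box):              # e.g. top and side grasps
        pick_pose = grasp.gripper_pose(box.pose)    # gripper pose at the pick
        drop_pose = grasp.gripper_pose(place_pose)  # same grasp applied at the goal
        # A grasp is useful only if both ends of the pick-and-place are reachable.
        if all(ik_solver(p) is not None and is_collision_free(p)
               for p in (pick_pose, drop_pose)):
            kept.append(grasp)
    return kept
```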

IV. EXPERIMENTAL RESULTS

We implemented the above-described interface and applied it to a Kinova Jaco 6-DOF robotic arm. Please see a video of the functioning system at http://goo.gl/w3jo4Y, in which a human operator stacks some virtual blocks, causing the robot to do the same.

Preliminary results of a usability study seem promising. We asked a small number of participants to try our system as well as a more traditional joystick method of controlling the robot's end-effector. We asked the participants to compare the two approaches along the following dimensions: perceived intuitiveness, level of concentration required, trust in the robot, and task completion time once trained. Our initial results show that most participants found our gesture-based interface more intuitive and felt it required less concentration than manually controlling the robot.

However, participants generally trusted the robot less (we surmise due to its greater autonomy), and the single-robot pick-and-place task sometimes took longer to complete with our interface. We believe this to be a result of non-optimal, randomized motion planning producing longer trajectories than the more direct paths taken by the joystick operators. Additionally, we noted that after commanding the robot via our gesture interface, the human operator often sat idle while the robot executed the action. We therefore hypothesize that, should the operator delegate to multiple robots in parallel, the overall task completion time would decrease dramatically due to parallelization.

V. CONCLUSIONS AND FUTURE WORK

We have developed a functioning, hand gesture-based user interface for controlling robots at a higher level of abstraction. Instead of burdening the human operator with the low-level details of completing a task, the operator simply specifies the desired goal through a natural pinch-and-drag operation at a virtual reality workstation. Not only does this appear to relieve the cognitive load on the human operator, but we believe it will also be of great value to industry, both 1.) as a stepping stone to achieving greater autonomy in many industrial applications, and 2.) as a way to increase efficiency by dividing tasks according to what robots and humans each do best.

There are a number of areas for future work, one of which is implementing and testing the system with multiple robots to quantify any increases in efficiency. Our implementation used a single robot, and we hypothesize that adding more robots will yield substantial performance improvements through parallelization.



Another potentially fruitful area is the integration of our approach with generative planning algorithms. Such algorithms could afford additional autonomy to the robots, enabling them to solve more complex problems without human intervention. For example, a generative planner could determine that, in order to move supplies from pallet A to pallet B, any objects already on pallet B must first be moved. In essence, our gestural interface would specify the goal inputs to the generative planner, which would then generate a task plan (a sequence of pick-and-place operations) to achieve that desired goal.

Finally, a third area of future work is to improve the virtual reality interface. One current difficulty is inferring depth when picking up virtual objects from a 2D screen. Our current approach addresses this by changing the cursor's color when it is touching an object. In the future, a 3D display or head-mounted display could provide a more intuitive experience. Additionally, adding tactile feedback to our interface, similar to that demonstrated by Disney [13], could improve the experience of grasping an object.

ACKNOWLEDGMENTS

We thank 3Gear Systems (particularly Robert Wang) for their hard work on the point cloud-based hand tracking algorithms, and for their exceptionally useful and fast customer support that made this work possible. Additionally, we thank Harsha Rajendra Prasad and David Demirdjian for their insightful conversations and hard work implementing various pieces of the system.

REFERENCES

[1] Paul J. Besl and Neil D. McKay. A method for registration of 3-D shapes. In Robotics-DL Tentative, pages 586–606. International Society for Optics and Photonics, 1992.

[2] Neal Checka, Shawn Schaffert, David Demirdjian, Jan Falkowski, and Daniel H. Grollman. Handheld operator control unit. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 137–138. ACM, 2012.

[3] Guanglong Du, Ping Zhang, Jianhua Mai, and Zeling Li. Markerless Kinect-based hand tracking for robot teleoperation. International Journal of Advanced Robotic Systems, 9, 2012.

[4] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.

[5] Pablo Gil, Carlos Mateo, and Fernando Torres. 3D visual sensing of the human hand for the remote operation of a robotic hand. International Journal of Advanced Robotic Systems, 11, 2014.

[6] Chao Hu, Max Qinghu Meng, Peter Xiaoping Liu, and Xiang Wang. Visual gesture recognition for human-machine interface of robot teleoperation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), volume 2, pages 1560–1565. IEEE, 2003.

[7] Jonathan Kofman, Xianghai Wu, Timothy J. Luu, and Siddharth Verma. Teleoperation of a robot manipulator using a vision-based human-robot interface. IEEE Transactions on Industrial Electronics, 52(5):1206–1219, 2005.

[8] Kun Qian, Jie Niu, and Hong Yang. Developing a gesture-based remote human-robot interaction system using Kinect. International Journal of Smart Home, 7(4), 2013.

[9] Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, and Andrew Y. Ng. ROS: An open-source Robot Operating System. In ICRA Workshop on Open Source Software, volume 3, 2009.

[10] Fredrik Rehnmark, William Bluethmann, Joshua Mehling, Robert O. Ambrose, Myron Diftler, Mars Chu, and Ryan Necessary. Robonaut: The 'short list' of technology hurdles. Computer, 38(1):28–37, 2005.

[11] Neo Ee Sian, Kazuhito Yokoi, Shuuji Kajita, Fumio Kanehiro, and Kazuo Tanie. Whole body teleoperation of a humanoid robot: Development of a simple master device using joysticks. In IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 3, pages 2569–2574. IEEE, 2002.

[12] R. Smits. KDL: Kinematics and Dynamics Library. http://www.orocos.org/kdl.

[13] Rajinder Sodhi, Ivan Poupyrev, Matthew Glisson, and Ali Israr. AIREAL: Interactive tactile experiences in free air. ACM Transactions on Graphics (TOG), 32(4):134, 2013.

[14] Ioan A. Sucan and Sachin Chitta. MoveIt! http://moveit.ros.org, 2013.

[15] Ioan A. Sucan, Mark Moll, and Lydia E. Kavraki. The Open Motion Planning Library. IEEE Robotics & Automation Magazine, 19(4):72–82, December 2012. http://ompl.kavrakilab.org.

[16] Jack Thompson. Multiscale teleoperation demo. http://bit.ly/STAi0d, 2013.

[17] Hanafiah Yussof, Genci Capi, Yasuo Nasu, Mitsuhiro Yamano, and Masahiro Ohka. A CORBA-based control architecture for real-time teleoperation tasks in a developmental humanoid robot. International Journal of Advanced Robotic Systems, 8(2), 2011.