

SPQR@Work: Team Description Paper

Marco Imperoli, Harold Agudelo, Daniele Evangelista, Ahmad Irjoob, Valerio Sanelli, Alberto Pretto and Daniele Nardi

Department of Computer, Control, and Management Engineering “Antonio Ruberti”, Sapienza University of Rome, Italy.

Email: [email protected], [email protected],

[email protected], [email protected],

[email protected], {pretto,nardi}@dis.uniroma1.it
Website: http://www.dis.uniroma1.it/~labrococo/SPQRWork

Abstract—The SPQR@Work team is a spin-off of the S.P.Q.R. RoboCup team of the Department of Computer, Control, and Management Engineering “Antonio Ruberti” at Sapienza University of Rome, created to participate in the RoCKIn@Work competition. This document describes the scientific background, the team members' competences, and the robot hardware and software that the SPQR@Work team will employ if accepted to participate in the 2015 RoCKIn competition event.

I. INTRODUCTION

The SPQR@Work team was founded as an offshoot of the S.P.Q.R. RoboCup team, which has been involved in RoboCup competitions since 1998 in different leagues:

• Middle-size 1998-2002;

• Four-legged 2000-2007;

• Real-Rescue robots since 2003;

• Virtual-Rescue robots since 2006;

• Standard Platform League since 2008.

The team involves both PhD students and graduate students, members of the Cognitive Cooperating Robots laboratory, supervised by Dr. Alberto Pretto and Prof. Daniele Nardi.

The goals of SPQR@Work are the following:

• Set up a team to successfully compete in the RoCKIn@Work competition.

• Propose and test new techniques for (1) RGB-D object recognition and localization, (2) object manipulation and grasping, and (3) robot navigation and planning.

• Promote teaching initiatives to foster the participation of students in research on intelligent mobile robotics.

II. TEAM PRESENTATION

A. Team Details

• Team name: SPQR@Work

• Selected track: RoCKIn@Work

Fig. 1. The SPQR@Work robot

• Team members: Harold Agudelo, Daniele Evangelista, Ahmad Irjoob, Valerio Sanelli

• Team leader: Marco Imperoli

• Advisors: Alberto Pretto, Daniele Nardi

• Robot: 1 KUKA youBot

B. Involved Institutions

1) RoCoCo: The RoCoCo laboratory (Cognitive Cooperating Robots) is located at DIAG (Dipartimento di Ingegneria Informatica, Automatica e Gestionale) of Sapienza University of Rome, Italy. The research developed at the RoCoCo laboratory is in the area of Artificial Intelligence and Robotics: Lab RoCoCo has a strong background in knowledge representation and reasoning, cognitive robotics, information fusion, mapping and localization, object recognition and tracking, multi-robot coordination, robot learning, stereo vision, and vision-based people detection. RoCoCo has been involved in RoboCup competitions since 1998 in different leagues.


C. Competence of Team Members

Marco Imperoli: Imperoli is a first-year Ph.D. student at Sapienza University of Rome. In 2015 he received the Master's degree in Artificial Intelligence and Robotics Engineering. His research is mainly focused on active perception problems for object recognition and accurate full 3D object pose estimation.

Harold Agudelo: Agudelo is a second-year student of the Master's degree in Artificial Intelligence and Robotics Engineering at Sapienza University of Rome. In 2013 he received the Electronic Engineering degree from Universidad del Norte, Colombia. His areas of interest are related to Artificial Intelligence in domotics.

Daniele Evangelista: Evangelista is currently attending the second year of the Master's degree in Artificial Intelligence and Robotics at Sapienza University of Rome. In October 2014 he received the Bachelor's degree in Informatics and Telecommunications Engineering from the University of Cassino. He is also the co-founder and CEO of Alibabyte Software House S.r.l.s., an Italian company mainly focused on web, desktop, and mobile software development. His main interests are in Machine Learning and Artificial Intelligence for robotics applications.

Ahmad Irjoob: Irjoob is a Master's student of Artificial Intelligence and Robotics at Sapienza University of Rome. He received the Bachelor's degree in Telecommunication Engineering from Yarmouk University, Jordan. He is mainly focused on the computer networks field applied to autonomous robots and intelligent systems.

Valerio Sanelli: Sanelli is a Master's student of Artificial Intelligence and Robotics at Sapienza University of Rome. He received the Bachelor's degree in Computer Engineering from Sapienza University of Rome in 2013. He is mainly focused on Artificial Intelligence, with particular emphasis on Robot Learning, Automated Reasoning, and Vision systems.

Alberto Pretto: Pretto has been an Assistant Professor at Sapienza University of Rome since October 2013. He received his Ph.D. degree in October 2009 from the University of Padua, where he then worked as a postdoctoral researcher at the Intelligent Autonomous Systems Lab (Department of Information Engineering). In 2005, he was one of the founders of IT+Robotics Srl, a spin-off company of the University of Padua working on robotics and machine vision. Alberto Pretto's main research activities include vision-based 3D reconstruction, visual-inertial navigation, and object recognition.

Daniele Nardi: Nardi is a Full Professor at DIS, where he has been employed since 1986. His current research interests are in the field of artificial intelligence and cognitive robotics. He is a Trustee of RoboCup and currently the president of the RoboCup Federation. From 2005 to 2008, he also served as co-chair of the IEEE Technical Committee of the International Workshop on Safety, Security and Rescue Robotics. He is active in the context of autonomous rescue robots and has successfully coordinated a team participating in several editions of RoboCup Rescue since 2002. Nardi received the “IJCAI-91 Publisher's Prize” and the “Intelligenza Artificiale 1993” prize from the Associazione Italiana per l'Intelligenza Artificiale (AI*IA).

D. RoCKIn Events Participation

Imperoli and Pretto participated in RoCKIn Camp 2014, RoCKIn Competition 2015, and RoCKIn Camp 2015 with the “SPQR Team”. In RoCKIn Camp 2014, “SPQR” won the “Best Practical in Perception” award ex aequo with another team.

III. ROBOT DESCRIPTION

A. Hardware Configuration

The SPQR@Work robot (Fig. 1) is a KUKA youBot with the following sensor suite:

• A frontal Hokuyo laser scanner, to be used for navigation and obstacle avoidance tasks

• A Kinect RGB-D sensor: the area viewed by the Kinect includes the working area of the arm, in order to perform object manipulation tasks without robot motions

• An on-board laptop running Linux Ubuntu 14.04, used to run all the software modules (e.g., navigation, object recognition, etc.). The internal youBot Intel Atom PC is not used.

• A color USB3 high-resolution camera mounted on the 5th joint of the manipulator for accurate object localization

B. Software Configuration

The SPQR@Work software infrastructure is based on the ROS Indigo middleware [1] running on Ubuntu 14.04; nevertheless, most of the developed software tools are stand-alone and middleware-agnostic: packages are integrated within ROS by means of suitable wrappers.

The navigation stack is based on particle-filter-based localization and mapping algorithms, provided by the ROS AMCL and GMapping [2] packages, the latter implemented by a member of the RoCoCo laboratory. The base motion planning task is accomplished using two different subsystems (see Fig. 7):

• A global planner, which provides the global path to follow, based on the standard move_base ROS package;

• A custom-built local planner, which, given the global plan and a costmap, produces velocity commands to send to the mobile base controller.

Our ad hoc local planner is based on artificial potential fields,designed for holonomic mobile robots.
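As a purely illustrative sketch, the following Python node outlines how such a custom local planner could sit next to move_base in ROS, consuming the global plan and the robot odometry and publishing velocity commands. The topic names and the 20 Hz control rate are assumptions, not the team's actual configuration; the potential-field computation itself is sketched in Sec. IV-3.

#!/usr/bin/env python
# Minimal sketch (not the team's actual code) of a custom local planner node
# wired next to move_base: it listens to the global plan and the odometry,
# and publishes velocity commands for the holonomic base. Topic names
# ("/move_base/NavfnROS/plan", "/odom", "/cmd_vel") are assumptions.
import rospy
from nav_msgs.msg import Path, Odometry
from geometry_msgs.msg import Twist


class LocalPlannerNode(object):
    def __init__(self):
        self.plan = None
        self.pose = None
        rospy.Subscriber("/move_base/NavfnROS/plan", Path, self.plan_cb)
        rospy.Subscriber("/odom", Odometry, self.odom_cb)
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

    def plan_cb(self, msg):
        self.plan = msg

    def odom_cb(self, msg):
        self.pose = msg.pose.pose

    def compute_command(self):
        # Placeholder: the actual velocity would come from the potential
        # field computation sketched in Sec. IV-3.
        return Twist()

    def spin(self):
        rate = rospy.Rate(20)  # 20 Hz control loop (assumed)
        while not rospy.is_shutdown():
            if self.plan is not None and self.pose is not None:
                self.cmd_pub.publish(self.compute_command())
            rate.sleep()


if __name__ == "__main__":
    rospy.init_node("spqr_local_planner")
    LocalPlannerNode().spin()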

We also use a custom-built module for the manipulation and grasping tasks: the arm motion planner we propose is able to plan accurate trajectories under the assumption that the best way to grasp an object placed in a crowded environment is to let the gripper follow a straight line in Cartesian space towards the object of interest.

Object recognition and localization are performed using reliable algorithms that exploit both visual features and depth information (Sec. IV-1). This module is organized as a pipeline of processing blocks (Fig. 2) for sub-sampling, clustering, noise reduction, and 2D and 3D feature detection and description. The model matching step is performed by associating the extracted features with features pre-computed from a rigid template (e.g., the CAD model) of the object. The result is a set of object candidates, which is refined using a novel object registration strategy.
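To make the matching step concrete, here is a minimal sketch in Python of one possible implementation of the feature association: scene descriptors are matched against descriptors pre-computed from the object template with a KD-tree nearest-neighbor search and a ratio test. The descriptor dimensionality and the ratio threshold are illustrative assumptions, not the actual pipeline parameters.

# Minimal sketch of the model matching step: scene feature descriptors are
# associated to descriptors pre-computed from the object template via a
# KD-tree nearest-neighbor search with a distance-ratio test. Descriptor
# type, dimensionality, and thresholds are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree


def match_descriptors(template_desc, scene_desc, ratio=0.8):
    """Return index pairs (template_idx, scene_idx) of accepted matches."""
    tree = cKDTree(template_desc)
    # Two nearest template descriptors for each scene descriptor.
    dists, idxs = tree.query(scene_desc, k=2)
    matches = []
    for scene_idx, (d, i) in enumerate(zip(dists, idxs)):
        if d[0] < ratio * d[1]:  # Lowe-style ratio test
            matches.append((int(i[0]), scene_idx))
    return matches


if __name__ == "__main__":
    rng = np.random.RandomState(0)
    template = rng.rand(50, 32)                       # 50 template descriptors (dim 32)
    scene = template[:10] + 0.01 * rng.randn(10, 32)  # noisy observations of 10 of them
    print(match_descriptors(template, scene))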

The decision-making module is based on a finite state machine implemented with the ROS actionlib package.
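A minimal sketch of how such a state machine can be organized around actionlib is shown below; the states, their transitions, and the use of the move_base action are illustrative assumptions rather than the team's actual implementation.

#!/usr/bin/env python
# Minimal sketch of a decision-making finite state machine that dispatches
# actionlib goals. States and the use of move_base are illustrative only.
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from geometry_msgs.msg import PoseStamped


def navigate_to(client, pose):
    goal = MoveBaseGoal()
    goal.target_pose = pose
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state() == GoalStatus.SUCCEEDED


def run_task(target_pose):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    state = "NAVIGATE"
    while not rospy.is_shutdown() and state != "DONE":
        if state == "NAVIGATE":
            state = "DETECT" if navigate_to(client, target_pose) else "DONE"
        elif state == "DETECT":
            # Hypothetical perception step: detect the object, then grasp it.
            state = "GRASP"
        elif state == "GRASP":
            # Hypothetical manipulation step.
            state = "DONE"


if __name__ == "__main__":
    rospy.init_node("spqr_task_fsm")
    pose = PoseStamped()
    pose.header.frame_id = "map"
    pose.pose.orientation.w = 1.0
    run_task(pose)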

Currently, the SPQR@Work robot is able to:

• Localize itself and safely navigate toward a selected target area

• Detect QR code-like markers using vision

• Recognize and localize objects using RGB-D sensors, using the techniques proposed in [3] and [4]

• Perform simple visual-servoing and manipulation tasks (e.g., the pick and place task, the “cutting a string” task, ...)

IV. RESEARCH AREAS AND CONTRIBUTIONS

Several research areas are involved in the development of the SPQR@Work team, and some novel contributions have been proposed: among others, a novel registration method for 3D textureless objects (Sec. IV-1) and an arm motion planner well suited to planning accurate trajectories even in crowded environments (Sec. IV-2).

1) Object Recognition and Localization: A reliable and accurate object recognition system is an essential prerequisite to successfully compete in the RoCKIn competitions. The SPQR@Work team members presented two robust and flexible vision systems for the 3D localization of untextured objects for industrial robots [3], [4]. In particular, in [3] we presented a robust and efficient method for object detection and 3D pose estimation that exploits a novel edge-based registration algorithm we called Direct Directional Chamfer Optimization (D2CO). D2CO refines the object position employing a non-linear optimization procedure, where the cost being minimized is extracted directly from a 3D image tensor composed of a sequence of distance maps. No ICP-like iterative re-association steps are required: the data association is implicitly optimized while inferring the object pose. Our approach is able to handle textureless and partially occluded objects and does not require any off-line object learning step: just a 3D CAD model is needed (see Fig. 3).

The Directional Chamfer Distance (DCD) tensor encodes the minimum distance of an image point to an edge point in a joint direction/location space. Unlike the Chamfer distance, the DCD also takes into account the edge directions (see Fig. 4).
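The following simplified Python sketch illustrates, under our own assumptions, how a DCD-like tensor can be built from an edge map: one distance transform per quantized edge direction, followed by a pass over the orientation channels that adds a penalty proportional to the direction gap. The number of orientation bins and the weight lam are illustrative, not the values used in [3].

# Simplified sketch of a Directional Chamfer Distance (DCD) tensor: one
# distance transform per quantized edge direction, followed by a pass over
# the orientation channels that penalizes orientation changes (weight lam).
# This is an illustrative reconstruction, not the team's implementation.
import numpy as np
from scipy.ndimage import distance_transform_edt


def dcd_tensor(edge_mask, edge_dirs, n_bins=8, lam=10.0):
    """edge_mask: HxW bool; edge_dirs: HxW edge directions in [0, pi)."""
    h, w = edge_mask.shape
    bins = np.minimum((edge_dirs / np.pi * n_bins).astype(int), n_bins - 1)
    # One Euclidean distance transform per orientation channel.
    dt = np.empty((n_bins, h, w))
    for b in range(n_bins):
        channel_edges = edge_mask & (bins == b)
        if channel_edges.any():
            dt[b] = distance_transform_edt(~channel_edges)
        else:
            dt[b] = np.hypot(h, w)  # no edges with this direction
    # Joint location/direction distance: minimum over channels of the spatial
    # distance plus a penalty proportional to the (circular) direction gap.
    tensor = np.empty_like(dt)
    for b in range(n_bins):
        gaps = np.minimum(np.abs(np.arange(n_bins) - b),
                          n_bins - np.abs(np.arange(n_bins) - b))
        tensor[b] = (dt + lam * gaps[:, None, None]).min(axis=0)
    return tensor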

We extract a set of object candidates by pre-computing the (projected) raster templates, along with their image orientations, for a large number of possible 3D locations; for each point, we look up the DCD tensor and compute the average template distance. The templates are then sorted by increasing distance.

Fig. 3. The raster models used in our algorithms are computed from the 3D CAD models of the objects.

Fig. 4. DCD tensor computation pipeline.

Fig. 5. An example of object registration using the D2CO algorithm.
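A minimal sketch of this candidate selection step, with an assumed data layout (each template given as integer row/column/orientation-bin triplets), could look as follows.

# Minimal sketch of the candidate selection step: each pre-computed raster
# template is scored with the average DCD tensor lookup at its points, and
# the templates are sorted by increasing average distance. The data layout
# is an illustrative assumption.
import numpy as np


def rank_templates(dcd_tensor, templates):
    """templates: list of (N_i, 3) int arrays with (row, col, orientation_bin)."""
    scores = []
    for pts in templates:
        rows, cols, obins = pts[:, 0], pts[:, 1], pts[:, 2]
        scores.append(dcd_tensor[obins, rows, cols].mean())
    order = np.argsort(scores)  # best (lowest average distance) candidates first
    return order, np.asarray(scores)[order]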

D2CO finally refines the object position employing a non-linear optimization procedure that minimizes a tensor-based cost function (e.g., see Fig. 5). We use the Levenberg-Marquardt (LM) algorithm with a Huber loss function in order to reduce the influence of outliers. During the optimization, both the positions and the orientations of the (projected) points are constantly updated.
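As an illustration only, the refinement can be reduced to a 2D rigid registration against a single distance map and solved with SciPy's robust least squares with a Huber loss; note that SciPy's default trust-region solver stands in here for the LM optimizer mentioned above, and all names and scales are assumptions.

# Simplified sketch of the refinement step: a robust non-linear least-squares
# fit of a 2D rigid transform (tx, ty, theta) that minimizes distance-map
# lookups at the transformed template points. This reduces the tensor-based
# cost of D2CO to a single distance map, for illustration purposes only;
# the Huber loss limits the influence of outliers.
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import least_squares


def refine_pose(dist_map, template_pts, x0=(0.0, 0.0, 0.0)):
    """template_pts: (N, 2) array of (row, col) model points."""
    def residuals(x):
        tx, ty, theta = x
        c, s = np.cos(theta), np.sin(theta)
        rows = c * template_pts[:, 0] - s * template_pts[:, 1] + ty
        cols = s * template_pts[:, 0] + c * template_pts[:, 1] + tx
        # Bilinear lookup of the distance map at the transformed points.
        return map_coordinates(dist_map, np.vstack([rows, cols]),
                               order=1, mode="nearest")

    res = least_squares(residuals, x0, loss="huber", f_scale=2.0)
    return res.x  # refined (tx, ty, theta)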

2) Object Manipulation and Grasping: Very often, the objects to be manipulated are randomly disposed in cluttered environments: precise, smooth, and safe arm trajectory planning is an essential requirement. The arm motion planner we propose is able to plan accurate trajectories under the assumption that the best way to grasp an object placed in a crowded environment is to let the gripper follow a straight line in Cartesian space towards the object of interest. Thanks to our obstacle avoidance system, each planned path is guaranteed to be free from collisions, and very accurate object grasping and placing are performed. In order to accomplish the manipulation task, we have defined a set of arm control points along with two different obstacle representations: spherical and box obstacles (e.g., see Fig. 6). Using these primitives, our trajectory planner can easily compute collision-free paths.
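The sketch below illustrates the two ingredients under our own assumptions: straight-line interpolation of Cartesian waypoints towards the grasp pose, and a collision test of (here simplified) control points against spherical and box obstacles. Margins, data layouts, and the use of waypoints in place of the full set of arm control points are illustrative.

# Minimal sketch of the straight-line Cartesian approach with the two
# obstacle primitives described above. Geometry and margins are assumptions.
import numpy as np


def straight_line_waypoints(start, goal, n=20):
    """Interpolate n Cartesian waypoints from start to goal (3-vectors)."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    ts = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - ts) * start + ts * goal


def in_sphere(p, center, radius, margin=0.02):
    return np.linalg.norm(p - center) < radius + margin


def in_box(p, box_min, box_max, margin=0.02):
    return np.all(p > box_min - margin) and np.all(p < box_max + margin)


def path_is_collision_free(waypoints, spheres, boxes):
    """spheres: list of (center, radius); boxes: list of (min_corner, max_corner)."""
    for p in waypoints:
        if any(in_sphere(p, np.asarray(c), r) for c, r in spheres):
            return False
        if any(in_box(p, np.asarray(lo), np.asarray(hi)) for lo, hi in boxes):
            return False
    return True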


Fig. 2. The SPQR@Work object recognition and localization pipeline.

Fig. 6. Visualization of three different points of view of the same scene using our OpenGL GUI. The manipulator control points and the obstacles are rendered in white and red, respectively. The yellow path represents the planned end-effector trajectory.

3) Robot Navigation and Planning: In our experience, we found that existing navigation frameworks (e.g., the popular move_base package of the ROS framework [1]) usually depend on a large number of parameters: it is usually very difficult to find a good parameter set that enables reliable and safe navigation in most conditions. To address this problem, we developed an ad hoc local planner based on artificial potential fields, designed for holonomic mobile robots. The idea is to use an existing navigation framework as the primary navigation system, focusing on planning reliability and safety. Our local planner is then used only for refinement purposes, improving the accuracy (see Fig. 7).
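A minimal sketch of the potential-field velocity computation for a holonomic base is given below; the attractive/repulsive gains, the influence range, and the speed limit are illustrative assumptions, not tuned values.

# Minimal sketch of an artificial potential field command for a holonomic
# base: an attractive term pulls towards the current target waypoint, and a
# repulsive term pushes away from nearby obstacle points (e.g. taken from
# the laser scan or the costmap). Gains and ranges are assumptions.
import numpy as np


def potential_field_velocity(robot_xy, target_xy, obstacles_xy,
                             k_att=1.0, k_rep=0.3, d0=0.6, v_max=0.4):
    robot_xy = np.asarray(robot_xy, float)
    # Attractive component: proportional to the vector towards the target.
    v = k_att * (np.asarray(target_xy, float) - robot_xy)
    # Repulsive components: only obstacles closer than the influence range d0.
    for obs in np.asarray(obstacles_xy, float).reshape(-1, 2):
        diff = robot_xy - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:
            v += k_rep * (1.0 / d - 1.0 / d0) / d ** 2 * (diff / d)
    # Saturate to the maximum allowed linear speed of the holonomic base.
    speed = np.linalg.norm(v)
    if speed > v_max:
        v *= v_max / speed
    return v  # (vx, vy) in the world frame; rotate into the base frame to use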

Fig. 7. An example of navigation path (dark gray). The curved portion of the entire path is computed by the primary navigation planner. Our potential-field-based planner is used for approaching the final target (red) and for sliding away from obstacles at the beginning of the navigation task (blue).

Moreover, in order to improve the whole navigation system, we adopt a robot odometry refinement technique which enhances the robot localization accuracy. This method performs a fast matching between the incoming laser scans. The robot odometry is then updated according to the displacement computed by the scan matcher.
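For illustration, the update boils down to composing the previous corrected pose with the relative displacement returned by the scan matcher; the sketch below assumes the matcher outputs a (dx, dy, dtheta) displacement expressed in the robot frame, which is our assumption about the interface rather than the team's actual one.

# Minimal sketch of the odometry refinement idea: the corrected pose is the
# previous corrected pose composed with the relative displacement returned
# by the laser scan matcher. The scan matcher itself is not shown.
import math


def compose_se2(pose, delta):
    """Compose a 2D pose (x, y, theta) with a relative motion (dx, dy, dtheta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + math.cos(th) * dx - math.sin(th) * dy,
            y + math.sin(th) * dx + math.cos(th) * dy,
            math.atan2(math.sin(th + dth), math.cos(th + dth)))  # wrap angle


# Example: the corrected pose is updated with each scan-matcher displacement.
pose = (0.0, 0.0, 0.0)
for delta in [(0.10, 0.0, 0.02), (0.12, -0.01, 0.03)]:  # matcher output (assumed)
    pose = compose_se2(pose, delta)
print(pose)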

V. REUSABILITY AND RELEVANCE TO INDUSTRIAL ROBOTICS

• Some modules of the proposed object recognition and localization system (Sec. IV-1) are currently installed in seven real-world industrial plants¹, with different setups, working with hundreds of different models and successfully guiding the manipulators to pick several hundreds of thousands of pieces per year.

• Our current arm trajectory planning and control system rivals other state-of-the-art planning frameworks (e.g., MoveIt! [5]). We aim to further improve this module and possibly to make it publicly available.

REFERENCES

[1] ROS - Robot Operating System. [Online]. Available: http://wiki.ros.org

[2] G. Grisetti, C. Stachniss, and W. Burgard, “Improved techniques for grid mapping with Rao-Blackwellized particle filters,” IEEE Trans. on Robotics, vol. 23, no. 1, pp. 34–46, Feb. 2007.

[3] M. Imperoli and A. Pretto, “D2CO: Fast and robust registration of 3D textureless objects using the Directional Chamfer Distance,” in Proc. of the 10th International Conference on Computer Vision Systems (ICVS 2015), 2015, pp. 316–328.

[4] A. Pretto, S. Tonello, and E. Menegatti, “Flexible 3D localization of planar objects for industrial bin-picking with monocamera vision system,” in Proc. of the IEEE International Conference on Automation Science and Engineering (CASE), 2013, pp. 168–175.

[5] MoveIt! [Online]. Available: http://moveit.ros.org/

¹ E.g., https://www.youtube.com/watch?v=jqhuEjnDdfI