An Optimization Based Cartesian Controller for Mobile Manipulation of Service Robots
in Domestic Environments
Emilia Anna Brzozowska
Thesis to obtain the Master of Science Degree in
Information Systems and Computer Engineering
Supervisor: Prof. Rodrigo Martins de Matos Ventura
Examination Committee
Chairperson: Prof. João António Madeiras Pereira
Supervisor: Prof. Rodrigo Martins de Matos Ventura
Member of the Committee: Prof. Alexandre José Malheiro Bernardino
July 2018
Dedicated to my beloved grandparents,
Grazyna and Leszek Bacia
Acknowledgments
I would like to express my deep gratitude to Prof. Rodrigo Ventura, my thesis supervisor, for his patient
guidance, enthusiastic encouragement and useful critiques of this work. His willingness to give his time
so generously has been very much appreciated.
I would like to express my very great appreciation to my team leader and close friend, MSc. Oscar
Lima, for his constant support and the knowledge he passed on during the development of this work. It was
an honor to be a member of your team and to grow under your guidance.
I would also like to extend my thanks to the whole SocRob@Home team, for all the advice,
help and care you gave me during this intense development time. Working with you was a pure pleasure
and an unforgettable experience.
Finally, I wish to thank my parents and sisters for their unconditional support and encouragement
throughout my studies.
Thank you!
Resumo
Este trabalho é sobre a execução robusta de tarefas complexas de manipulação para robôs de serviço
que operam em ambientes dinâmicos. O principal objetivo é a manipulação de objetos utilizando a
visão como parte do ciclo, de forma a analisar a resposta do sistema. Foram identificados dois problemas
principais, o primeiro relacionado com a inexistência de solução cinemática inversa para certas
configurações do braço e o segundo relacionado com a calibração precisa do braço do robô em relação
à câmara e à base do robô. Para lidar com esses problemas, é proposto um controlador cartesiano
baseado em optimização, capaz de controlar a base do robô e o braço de forma combinada. Os resultados
mostram que o controlador é capaz de alcançar diferentes configurações aleatórias com alta
probabilidade de sucesso.
As aplicações deste controlador incluem, por exemplo: agarrar e mover objetos, passar objetos entre
humano e robô, servoing visual, etc.
Palavras-chave: controlador cartesiano baseado em optimização, controlo de movimento
de corpo inteiro, controlo combinado da base e do braço do robô, Jacobiana estendida, visual servoing,
manipulação de objetos em tempo real
Abstract
This work is about the robust execution of complex manipulation tasks for service robots operating in
dynamic environments. Our goal is the reliable manipulation of objects with vision in the loop. We identify
two main problems: the first is the lack of an inverse kinematics solution for particular arm configurations,
and the second is accurate robot base-camera-arm calibration. To cope with these problems we
propose an optimization based Cartesian controller that is able to control the robot base and arm in a
combined way. Our results show that the controller is able to reach random arm configurations with a
high probability of success.
Applications of this controller include, for instance: object grasping and placing, human-to-robot object
handover, visual servoing, etc.
Keywords: Optimization based Cartesian controller, whole body motion control, combined
control of robot base and arm, extended Jacobian, visual servoing, real-time object manipulation
Contents

Acknowledgments
Resumo
Abstract
List of Tables
List of Figures

1 Introduction
  1.1 Motivation
  1.2 Problem Statement
  1.3 Contributions
  1.4 Thesis Outline

2 Background
  2.1 Background Introduction
  2.2 Robotic Arm Kinematics
    2.2.1 Forward Kinematics
    2.2.2 Inverse Kinematics
    2.2.3 Jacobian
  2.3 Path Planning
  2.4 Common Path Planners
    2.4.1 Rapidly-exploring Randomized Tree (RRT) Algorithm
    2.4.2 Expansive Space Trees (EST) Algorithm
    2.4.3 Probabilistic Roadmap Method (PRM) Algorithm
  2.5 OctoMap Method
  2.6 Related Work

3 Controller Design
  3.1 Approach
  3.2 Previous Architecture
  3.3 Developed Solution
    3.3.1 Arm Controller Architecture
    3.3.2 Arm Controller Mathematical Model
    3.3.3 Arm and Base Controller Architecture
    3.3.4 Arm and Base Controller Mathematical Model
  3.4 Discussion

4 Implementation
  4.1 Manipulator description
  4.2 Robot Operating System (ROS)
  4.3 Open Motion Planning Library (OMPL)
  4.4 Unified Robot Description Format (URDF)
  4.5 MoveIt!
  4.6 SciPy
  4.7 PyKDL

5 Results
  5.1 Evaluation
  5.2 Arm Controller Evaluation
    5.2.1 Simulation
    5.2.2 Real Robot
    5.2.3 Discussion
  5.3 Arm and Base Controller Evaluation
    5.3.1 Simulation
    5.3.2 Real Robot
    5.3.3 Discussion
  5.4 Summary of Results
  5.5 Discussion

6 Conclusions
  6.1 Summary of Thesis Achievements
  6.2 Future Work

Bibliography
List of Tables

5.1 Reasonable rotation restrictions according to the SocRob@Home technical report.
5.2 Final rotation restrictions in the conducted experiment.
5.3 Evaluation summary for realistic goal poses.
List of Figures

1.1 Robot attempting to grasp dishwasher handle.
2.1 Kinematic chain scheme.
2.2 Attachments of frames for mathematical description.
2.3 Two-link planar manipulator.
2.4 Octree - geometrical representation.
3.1 MBot - arm and camera outline.
3.2 Open loop architectural approach.
3.3 Previous solution system architecture.
3.4 Designed new system architecture.
3.5 Arm repulsion potential function.
3.6 Extended new system architecture.
3.7 Base repulsion potential function for rotation.
4.1 DYNAMIXEL smart actuators.
4.2 Manipulator used in the project.
5.1 Example of placing marker on the object.
5.2 Starting arm configurations.
5.3 Testing algorithm.
5.4 The area in which the goal poses are generated.
5.5 The area in which the front grasp goal poses are generated.
5.6 Different goal positions for arm controller experiment on real robot.
5.7 Area of randomized poses for arm and base controller - overview.
5.8 Area of randomized poses for arm and base controller - front grasp.
5.9 Area of randomized poses for arm and base controller - side grasp.
5.10 Different video data captured while conducting experiment - front grasp.
5.11 Different video data captured while conducting experiment - side grasp.
Chapter 1
Introduction
This document provides a solution for a real-time, compliant manipulation pipeline for mobile service
robots, tested both in simulation and on the real robot. In this chapter we present an overview
of the work, starting with the motivation (Section 1.1) and the problem statement (Section 1.2), followed by the
contributions of this thesis (Section 1.3). To conclude, we provide the reader with the structure of this document
(Section 1.4).
1.1 Motivation
Robust execution of complex manipulation tasks in dynamic environments is essential for domestic
service robots to assist their users in daily tasks.
Such tasks include, for instance, the transportation of objects between locations (e.g. from a table
to a cupboard), which implicitly involves the successful execution of sub-tasks like perceiving, fetching,
carrying and placing the object.
Typically, object grasping problems are approached using separate offline path planning and open-loop
execution methods, which have some disadvantages. For instance, during trajectory execution
the robot is not sensitive to changes in the environment. If the target object pose changes, the robot
should ideally deal successfully with the situation and adjust accordingly. Moreover, such methods
typically control only the arm and do not take the robot base into account. This often leads to
failures in finding an inverse kinematics solution, especially when the robot base is poorly placed.
A second problem when it comes to object grasping is related to robot base-camera-arm calibration.
This work is motivated by the possibility of bypassing the calibration problem, which can be
achieved by fixing a marker to the end effector. Since both the object and the marker (attached to the end
effector) are perceived w.r.t. the camera frame, we can overcome e.g. a badly calibrated camera, arm,
or both.
1.2 Problem Statement
The problem is defined as the design, implementation and testing of a real-time closed-loop Cartesian
controller whose input is a target pose from a perception module. The controller commands both the robot
base and the arm in a combined way, such that reachability is maximized. The developed solution is integrated
and tested on the MBot robot (Fig. 1.1), both in simulation and on the real robot. MBot is a service robot,
ultimately intended for use in hospitals to interact with autistic children.
Figure 1.1: Robot attempting to grasp dishwasher handle.
We apply this method to a mobile service robot (MBot) in domestic environments, where household
objects need to be manipulated; however, the algorithms presented in this work are general and can be
applied to robots acting in other environments as well.
The controller takes as input a target pose computed from perception data, generated using off-the-shelf
solutions. Object detection itself is out of the scope of our work; it is merely a tool for pose generation.
The output of our model consists of velocity commands for the arm servos along with velocities
for the robot base. Our work focuses on controlling the arm to reach a target pose, where the robot end-effector
frame needs to be matched with a target frame (e.g. an object) to perform a grasping task, but
grasp planning itself is out of the scope of this work.
1.3 Contributions
The main objective of this study is to provide a real-time, compliant manipulation pipeline for mobile
service robot object manipulation in home environments. With this work we contributed to the SocRob@Home
manipulation skills. The presented study resulted in the following contributions:
• We have contributed a generic and flexible, real-time, closed-loop base-plus-arm Cartesian velocity
controller that is able, in general, to reach a target pose with its manipulator by moving the robot
base and arm in a combined way.
• Development of a velocity controller hardware interface (driver) for Dynamixel motors, derived from
an open-source implementation of position and trajectory control.
• Experimental evaluation of the developed controller, both in simulation and on the real robot.
• We contribute to the enhancement of the robot's skills, as the controller can potentially be used for
the following applications: object grasping, placing, human-to-robot object handover, visual servoing,
etc.
1.4 Thesis Outline
This work has the following structure: Chapter 2 provides the background information required to understand
the proposed solution, followed by a state-of-the-art review. Chapter 3 presents the new
approach to manipulation control; it focuses on the comparison between the previous open-loop manipulation
architecture and the new closed-loop solution, including the mathematical models used in
the controller development process. Chapter 4 gives information about the hardware used and a short
review of the software adopted in the previous manipulation approach used by the SocRob@Home
team. Chapter 5 shows the results obtained during the experimental evaluation of the controller, followed by a
discussion. The thesis ends with Chapter 6, where we summarize the thesis achievements and discuss
possible future work to enhance the presented controller.
Chapter 2
Background
This chapter provides the fundamental theoretical knowledge required to understand the developed solution
to the previously stated problem (Section 1.2). Firstly, we briefly explain what a manipulator is in
robotics (Section 2.1) and what the kinematics of robotic manipulators is (Section 2.2). Secondly, we explain
the importance of path planning in manipulation tasks (Section 2.3), followed by a short description of a few of the
most common path planners (Section 2.4). The next section (Section 2.5) is devoted to OctoMap.
The chapter ends with a summary of related work (Section 2.6).
2.1 Background Introduction
Robotic servo manipulators are kinematic chains composed of a sequence of bodies, called links,
connected by joints (Fig. 2.1). One end of the manipulator is fixed and the other is free to perform a given
task. In robotic manipulators, the joints are the movable elements that enable relative movement between
neighboring links. Each joint has one degree of freedom and, depending on its motion, can be
classified into one of two groups: linear or rotational [1]. For the manipulation and control of a robotic
arm, it is important to understand the mathematical description of the position and orientation of its links in
space.
Figure 2.1: Kinematic chain scheme.
2.2 Robotic Arm Kinematics
To describe the position and orientation of a robotic manipulator in space, frames are attached to the links
and the end-effector. The position and orientation of these frames, with respect to a reference frame, called the base
frame, constitute the mathematical description of the body's location, as illustrated in Fig. 2.2.
Figure 2.2: Attachments of frames for mathematical description.
The relation between the position and orientation of neighboring frames, with respect to each other,
can be presented by a linear transformation matrix in homogeneous space. The transformation matrix is
called homogeneous because it combines translation and rotation into one convenient representation;
a homogeneous transformation is a linear transformation in three-dimensional space.
The description of a frame is given by a position vector (position description) relative to some other reference
frame and a rotation matrix (orientation description). Mathematically, the transformation describing
frame {B} relative to frame {A} is given by [2]:
{}^A_B T =
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\quad (2.1)
In the transformation matrix we can distinguish the rotation matrix (3x3) and the translation vector (3x1), and use
the following notation:

T =
\begin{bmatrix}
R & t \\
0~0~0 & 1
\end{bmatrix}
\quad (2.2)

where:
R – 3x3 rotation matrix with three degrees of freedom
t – position translation vector
To compute the transformation between frame {0} and the last link frame we can use the kinematic equations.
Knowing the values of the link parameters, the individual link-transformation matrices can be computed.
Then, the link transformations can be multiplied together to find the single transformation that relates
frame {i} to frame {0}:

{}^0_i T = {}^0_1 T \, {}^1_2 T \, {}^2_3 T \cdots {}^{i-1}_i T \quad (2.3)

The transformation is a function of all joint variables and computes the Cartesian position and orientation
of the last link frame [2].
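As a concrete illustration of Eqs. (2.2) and (2.3) (not part of the thesis software; the rotation angles and translations below are arbitrary example values), homogeneous transforms can be assembled and chained with NumPy:

```python
import numpy as np

def homogeneous(R, t):
    """Assemble the 4x4 homogeneous transform of Eq. (2.2) from a 3x3
    rotation matrix R and a translation vector t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Elementary rotation about the z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Chaining link transforms as in Eq. (2.3): 0_2T = 0_1T @ 1_2T.
T01 = homogeneous(rot_z(np.pi / 2), [1.0, 0.0, 0.0])   # frame {1} in frame {0}
T12 = homogeneous(rot_z(-np.pi / 2), [0.5, 0.0, 0.0])  # frame {2} in frame {1}
T02 = T01 @ T12                                        # frame {2} in frame {0}
```

Here the two rotations cancel, so frame {2} keeps the base orientation while its origin is the origin of frame {1} displaced by the second translation rotated into frame {0}.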
2.2.1 Forward Kinematics
Knowing the simple geometric positions of the manipulator's frames with respect to adjacent frames, it is
possible to compute the coordinates of the end effector from the given joint angles. This is referred to as
forward kinematics [3].
For a mathematical description of forward kinematics we consider the simplest case, a two-link planar
manipulator with given joint angles (θ1, θ2) and link lengths (a1, a2), presented below (Fig. 2.3). We
consider the axes (z0, z1, z2) as normal to the page, and the base frame (x0 y0 z0) is given. The final frame
(x2 y2 z2) is fixed at the end of link 2 as shown [4].
Figure 2.3: Two-link planar manipulator. The joint axes (z0, z1, z2) are normal to the page and are not shown in the figure (source: [4]).
To solve the forward kinematics problem, one can use the Denavit-Hartenberg convention, which defines the
transform of frame {i} relative to frame {i-1}, namely {}^{i-1}_i T. Its general form is [2]:
{}^{i-1}_i T =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i & 0 & a_{i-1} \\
\sin\theta_i \cos\alpha_{i-1} & \cos\theta_i \cos\alpha_{i-1} & -\sin\alpha_{i-1} & -d_i \sin\alpha_{i-1} \\
\sin\theta_i \sin\alpha_{i-1} & \cos\theta_i \sin\alpha_{i-1} & \cos\alpha_{i-1} & d_i \cos\alpha_{i-1} \\
0 & 0 & 0 & 1
\end{bmatrix}
\quad (2.4)

where:
a_{i-1}, \alpha_{i-1}, d_i – geometric parameters fixed by the mechanical design of the manipulator
\theta_i – angle of joint i
Solving the equations provided by the matrix above gives the coordinates of the end-effector frame (x2 y2 z2)
relative to the base frame (x0 y0 z0), which solves the forward kinematics problem.
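The forward kinematics of the two-link planar manipulator of Fig. 2.3 can be sketched by building the link matrices of Eq. (2.4) and multiplying them as in Eq. (2.3). This is an illustrative sketch only; the DH parameter assignment for this arm and the numerical angles are assumptions for the example:

```python
import numpy as np

def dh_transform(alpha_prev, a_prev, d, theta):
    """Link transform (i-1)_iT built from the Denavit-Hartenberg parameters,
    following the general form of Eq. (2.4)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    return np.array([
        [ct,      -st,      0.0,  a_prev],
        [st * ca,  ct * ca, -sa,  -sa * d],
        [st * sa,  ct * sa,  ca,   ca * d],
        [0.0, 0.0, 0.0, 1.0],
    ])

def two_link_fk(theta1, theta2, a1, a2):
    """Pose of the end-effector frame {2} in the base frame {0} (Fig. 2.3)."""
    T01 = dh_transform(0.0, 0.0, 0.0, theta1)  # joint 1 at the base origin
    T12 = dh_transform(0.0, a1, 0.0, theta2)   # joint 2 at the end of link 1
    T2e = np.eye(4)
    T2e[0, 3] = a2                             # end effector at the tip of link 2
    return T01 @ T12 @ T2e                     # chain as in Eq. (2.3)

T = two_link_fk(np.pi / 4, np.pi / 4, 1.0, 1.0)
```

The resulting position matches the familiar closed form x = a1 cos θ1 + a2 cos(θ1 + θ2), y = a1 sin θ1 + a2 sin(θ1 + θ2).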
2.2.2 Inverse Kinematics
A problem of common practical interest is the inverse of forward kinematics, called inverse
kinematics. The inverse kinematics method calculates the set of all possible joint angles that can
be used to obtain the desired position and orientation of the end effector. In terms of the Denavit-Hartenberg
convention, we are searching for the joint angles \theta_i, given {}^{i-1}_i T [2]. In case multiple solutions are obtained,
the robot needs to decide which is the most suitable in terms of various factors.
There is also the possibility that no solution exists, in which case the point is not
reachable by the manipulator [3]. These complications make the inverse kinematics problem more complex
than forward kinematics, and finding an inverse kinematics solution can lead to solving complex nonlinear
equations. There are several methods that can be used to solve the problem, such as iterative methods
or closed-form methods.
Iterative methods solve the inverse kinematics problem by using a sequence of attempts leading
to incrementally better configurations of the joint angles. Achieving a better configuration means minimizing
the difference between the current and target positions of the robot's end-effector [5].
Another approach is the closed-form method, in which the solution for the joint angle
configuration can be expressed directly as a set of closed-form equations. The closed-form method
gives a clear, single solution when used for 6-degree-of-freedom systems with a special structure
of the kinematic chain [6].
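An iterative method of this kind can be sketched for the two-link planar arm of Section 2.2.1. This is a minimal illustration, not the controller developed in this thesis: it uses a Newton-style update through the Jacobian pseudoinverse (the Jacobian is introduced formally in Section 2.2.3), and the link lengths, target and initial guess are arbitrary example values:

```python
import numpy as np

def fk(q, a1=1.0, a2=1.0):
    """End-effector (x, y) position of the two-link planar arm (Fig. 2.3)."""
    return np.array([
        a1 * np.cos(q[0]) + a2 * np.cos(q[0] + q[1]),
        a1 * np.sin(q[0]) + a2 * np.sin(q[0] + q[1]),
    ])

def jacobian(q, a1=1.0, a2=1.0):
    """Analytic 2x2 Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-a1 * s1 - a2 * s12, -a2 * s12],
        [ a1 * c1 + a2 * c12,  a2 * c12],
    ])

def iterative_ik(target, q0, tol=1e-8, max_iter=100):
    """Repeatedly improve the joint configuration by mapping the remaining
    Cartesian error through the Jacobian pseudoinverse."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        q = q + np.linalg.pinv(jacobian(q)) @ err
    return q

# Target (1, 1) is reachable; q0 is a rough guess near one of the solutions.
q_sol = iterative_ik(np.array([1.0, 1.0]), q0=[0.2, 1.2])
```

Each step reduces the Cartesian error, illustrating the "incrementally better configuration" idea; a real implementation must additionally handle singular configurations and joint limits.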
2.2.3 Jacobian
In the particular case of this thesis, the fundamental problem is to calculate the end-effector
(robotic hand) velocity, knowing the joint velocities, and its inverse: finding the joint velocities for a given
end-effector velocity. To solve both of these problems we need the Jacobian matrix, which is obtained from
the kinematic arm parameters. The Jacobian is the multidimensional form of the derivative. For the vector of
functions Y = (y_1, y_2, y_3, \dots, y_n), n \in \mathbb{N}:
\begin{cases}
y_1 = f_1(x_1, x_2, x_3, \dots, x_n) \\
y_2 = f_2(x_1, x_2, x_3, \dots, x_n) \\
y_3 = f_3(x_1, x_2, x_3, \dots, x_n) \\
\;\;\vdots \\
y_n = f_n(x_1, x_2, x_3, \dots, x_n)
\end{cases}
\quad (2.5)

we can use the following notation [2]:

Y = F(X) \quad (2.6)
After calculating the differentials of y as a function of the differentials of x, we obtain:

\begin{cases}
\delta y_1 = \frac{\partial f_1}{\partial x_1}\delta x_1 + \frac{\partial f_1}{\partial x_2}\delta x_2 + \frac{\partial f_1}{\partial x_3}\delta x_3 + \cdots + \frac{\partial f_1}{\partial x_n}\delta x_n \\
\delta y_2 = \frac{\partial f_2}{\partial x_1}\delta x_1 + \frac{\partial f_2}{\partial x_2}\delta x_2 + \frac{\partial f_2}{\partial x_3}\delta x_3 + \cdots + \frac{\partial f_2}{\partial x_n}\delta x_n \\
\delta y_3 = \frac{\partial f_3}{\partial x_1}\delta x_1 + \frac{\partial f_3}{\partial x_2}\delta x_2 + \frac{\partial f_3}{\partial x_3}\delta x_3 + \cdots + \frac{\partial f_3}{\partial x_n}\delta x_n \\
\;\;\vdots \\
\delta y_n = \frac{\partial f_n}{\partial x_1}\delta x_1 + \frac{\partial f_n}{\partial x_2}\delta x_2 + \frac{\partial f_n}{\partial x_3}\delta x_3 + \cdots + \frac{\partial f_n}{\partial x_n}\delta x_n
\end{cases}
\quad (2.7)
we can use the following notation:

\delta Y = \frac{\partial F}{\partial X}\,\delta X \quad (2.8)

where the matrix of partial derivatives \partial F / \partial X is called the Jacobian J(X):

\delta Y = J(X)\,\delta X \quad (2.9)

By dividing both sides by the differential time element we get:

\dot{Y} = J(X)\,\dot{X} \quad (2.10)

where the Jacobian J(X) is a time-varying linear transformation that maps velocities in X into velocities in
Y. For the purposes of this work we can use the following notation [2]:

{}^0 v = {}^0 J(\Theta)\,\dot{\Theta} \quad (2.11)

where:
v – vector of Cartesian velocities
\Theta – vector of joint angles
The Jacobian is the partial derivative of the kinematic equations with respect to the joint angles; it
maps joint velocities to the end-effector spatial velocity expressed in the world coordinate frame. Essential
for this work is the inverse problem: for given end-effector velocities we want to calculate the
joint velocities. We can use the Jacobian inverse to compute them, as long as the Jacobian is
non-singular.
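For illustration, the velocity mapping of Eqs. (2.10)-(2.11) and its inverse can be sketched numerically for the two-link planar arm: the Jacobian is approximated as the partial derivative of the forward kinematics by finite differences, then inverted to recover joint velocities. The configuration and desired velocity below are arbitrary example values:

```python
import numpy as np

def fk(q, a1=1.0, a2=1.0):
    """Forward kinematics of the two-link planar arm: joint angles -> (x, y)."""
    return np.array([
        a1 * np.cos(q[0]) + a2 * np.cos(q[0] + q[1]),
        a1 * np.sin(q[0]) + a2 * np.sin(q[0] + q[1]),
    ])

def numeric_jacobian(q, eps=1e-6):
    """Jacobian as the partial derivative of fk with respect to the joint
    angles, approximated with central finite differences."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

q = np.array([0.3, 0.9])               # a non-singular configuration (rad)
J = numeric_jacobian(q)
v_desired = np.array([0.1, -0.05])     # desired end-effector velocity
q_dot = np.linalg.solve(J, v_desired)  # joint velocities realizing v (Eq. 2.11 inverted)
```

Note that for this arm det J = a1 a2 sin θ2, so at θ2 = 0 (arm fully stretched) the Jacobian is singular and some Cartesian velocities cannot be realized by any joint velocities.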
2.3 Path Planning
One of the most important requirements for mobile service robots is the ability to move the end-effector
fluently from one pose to another. To perform this task, the robot needs to be provided with a path
planning algorithm. The general path planning problem is to determine a collision-free trajectory between
two specified positions and orientations (e.g. an initial and a final pose of the end-effector),
given a description of the robot's environment.
In order to plan a collision-free path for a robotic arm, the robot manipulator is usually represented
in its joint space, where each configuration of the manipulator can be represented as a point. The sets
in the joint space that correspond to configurations of the manipulator colliding with obstacles in the
workspace are called joint space obstacles. After finding the joint space obstacles, the path planning
problem can be transformed into the problem of finding a path for a point from the initial configuration
to the final configuration in the joint space, without intersecting the joint space obstacles.
To plan the path for a robot manipulator in the joint space, path planning algorithms for mobile
robots may be applied, as long as the dimension of the joint space is not too high. The two major difficulties
in planning a collision-free path for a robot manipulator are finding the joint space obstacles (i.e. mapping
obstacles in the workspace to the robot's joint space), and planning a collision-free path in a joint space
of high dimension [7].
2.4 Common Path Planners
Path planning is a core element of autonomous mobile robots, and various approaches
and motion planning algorithms have been proposed over the years. Among the most commonly
used sampling-based algorithms for autonomous mobile robots are: Rapidly-exploring Randomized
Trees (RRT), Expansive Space Trees (EST) and the Probabilistic Roadmap Method (PRM).
2.4.1 Rapidly-exploring Randomized Tree (RRT) Algorithm
The RRT planner is one of the first single-query geometric planners. It was introduced in [8] as an
efficient data structure and sampling scheme to quickly search high-dimensional spaces that have both
algebraic constraints (arising from obstacles) and differential constraints (arising from dynamics).
The main idea is to explore the state space by sampling, biasing the exploration toward unexplored
areas by randomly selecting states in the state space. The RRT algorithm solves single-query
path planning problems by incrementally building a Rapidly-exploring Random Tree rooted at the
start configuration [9].
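The RRT loop can be sketched as follows. This is a minimal, illustrative planner in the unit square with a Euclidean metric, a fixed random seed and a trivial collision check; it is not an OMPL planner, and the step size and goal bias are arbitrary example values:

```python
import math
import random

random.seed(0)  # fixed seed so this illustrative run is reproducible

def rrt(start, goal, is_free, step=0.1, goal_bias=0.1, max_iter=5000):
    """Minimal RRT in the unit square: repeatedly sample a state, find the
    nearest tree node and steer it a small step toward the sample."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        # Bias a fraction of samples toward the goal to speed up convergence.
        sample = goal if random.random() < goal_bias else (random.random(), random.random())
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0.0:
            continue
        s = min(step, d) / d  # do not overshoot the sample
        new = (near[0] + s * (sample[0] - near[0]),
               near[1] + s * (sample[1] - near[1]))
        if not is_free(new):  # discard states colliding with obstacles
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:  # close enough: backtrack the path
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Toy query in free space (the collision check always passes here).
path = rrt((0.05, 0.05), (0.9, 0.9), is_free=lambda p: True)
```

In a real planner `is_free` would query the collision checker, and the edge between `near` and `new` would also be validated, not just the new state.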
2.4.2 Expansive Space Trees (EST) Algorithm
The EST is a tree-based, single-query geometric motion planner that attempts to detect the less explored
areas of the space by measuring the density of the explored space, biasing exploration toward
the parts of the space with the lowest density. The EST algorithm iteratively executes two basic steps: expansion
and connection. The roadmap is built via random sampling of the state space until either a path is
found or the maximum number of iterations is reached [10].
2.4.3 Probabilistic Roadmap Method (PRM) Algorithm
The PRM is a sampling-based geometric planner. The algorithm is of the multi-query type, which means that it
builds a roadmap of the entire environment that can be used for multiple queries. The Probabilistic Roadmap
Method proceeds in two phases: a learning phase and a query phase [11]. In the learning phase, a
probabilistic roadmap is built by randomly selecting free configurations of the robot and connecting them
using some simple (but very fast) motion planner. Following the learning phase, multiple queries can
be answered. In the query phase, the algorithm is asked for a path between two configurations of the
robot. PRM tries to connect the start and goal configurations to two nodes of
the previously built roadmap, and then searches the graph for a sequence of edges connecting those nodes.
The learning and query phases can be executed non-sequentially. The last stage of the algorithm is to
transform the found sequence of edges into a feasible path for the robot [12].
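The learning and query phases can be sketched as follows. Again a minimal illustration in the unit square, not the OMPL implementation: the connection check between neighbors is omitted (a full PRM collision-checks every connecting segment), the query is answered with Dijkstra's algorithm, and the sample count and connection radius are arbitrary example values:

```python
import heapq
import math
import random

random.seed(1)  # reproducible sampling for this toy example

def prm(start, goal, is_free, n_samples=200, radius=0.3):
    """Minimal PRM in the unit square. Learning phase: sample free
    configurations and connect nearby pairs. Query phase: graph search."""
    nodes = [start, goal]
    nodes += [p for p in ((random.random(), random.random()) for _ in range(n_samples))
              if is_free(p)]
    # Learning phase: an edge joins every pair of configurations closer than
    # `radius` (a real PRM would also collision-check each segment).
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            if d < radius:
                edges[i].append((j, d))
                edges[j].append((i, d))
    # Query phase: Dijkstra shortest path from node 0 (start) to node 1 (goal).
    dist, prev, heap = {0: 0.0}, {}, [(0.0, 0)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == 1:  # goal reached: backtrack the sequence of roadmap nodes
            path = [nodes[1]]
            while u in prev:
                u = prev[u]
                path.append(nodes[u])
            return path[::-1]
        for v, w in edges[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    return None  # start and goal lie in disconnected roadmap components

path = prm((0.1, 0.1), (0.9, 0.9), is_free=lambda p: True)
```

Once the roadmap is built, further queries only repeat the (cheap) graph search, which is what makes PRM a multi-query planner.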
2.5 OctoMap Method
One of the newest approaches to representing a robot's environment is the OctoMap method. OctoMap is a
probabilistic 3D mapping method based on octrees (Fig. 2.4) that allows representing the robot's environment in
a three-dimensional occupancy grid data structure, especially appropriate for robot manipulators.
Following the description of Armin H. [13]: “An octree is a hierarchical data structure for spatial subdivision in
3D. Each node in an octree represents the space contained in a cubic volume, usually called a voxel.
This volume is recursively subdivided into eight sub-volumes until a given minimum voxel size is reached.
The minimum voxel size determines the resolution of the octree.”
Figure 2.4: Octree - geometrical representation.
While for higher-dimensional systems (e.g. robot manipulators) sampling-based planning algorithms
struggle with the need for computational power, the OctoMap library brings compression,
minimizing memory consumption and enabling efficient data transmission [13]. The OctoMap
method is based on probabilistic occupancy estimation and represents not only occupied space, but
also free and unmapped areas.
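The recursive subdivision quoted above can be sketched with a minimal octree class. This illustrates only the idea of Fig. 2.4 and the voxel-size/resolution relation, not the actual OctoMap implementation (which additionally stores occupancy probabilities per node); the cube size and minimum voxel size are arbitrary example values:

```python
class Octree:
    """Minimal octree over a cube of side `size` centered at `center`: each
    inserted point refines voxels down to `min_voxel`, the tree resolution."""

    def __init__(self, center, size, min_voxel):
        self.center, self.size, self.min_voxel = center, size, min_voxel
        self.children = None   # the eight sub-volumes, once subdivided
        self.occupied = False

    def insert(self, p):
        if self.size <= self.min_voxel:  # leaf at the maximum resolution
            self.occupied = True
            return
        if self.children is None:        # subdivide into eight octants
            q = self.size / 4
            self.children = [
                Octree((self.center[0] + sx * q,
                        self.center[1] + sy * q,
                        self.center[2] + sz * q), self.size / 2, self.min_voxel)
                for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)
            ]
        # Pick the octant containing p and recurse one level deeper.
        idx = (4 * (p[0] > self.center[0]) + 2 * (p[1] > self.center[1])
               + (p[2] > self.center[2]))
        self.children[idx].insert(p)

tree = Octree((0.0, 0.0, 0.0), size=2.0, min_voxel=0.25)
tree.insert((0.6, -0.3, 0.9))  # refines three levels: 2.0 -> 1.0 -> 0.5 -> 0.25
```

Only the branches touched by inserted points are subdivided, which is the source of the memory savings compared with a dense 3D grid at the same resolution.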
2.6 Related Work
Many research groups currently focus on developing systems for mobile manipulation in domestic environments.
A very prominent example is the work presented by the Nimbro@Home team [14], which developed
an arm controller using a differential inverse kinematics method to follow computed trajectories. The
team solved the inverse kinematics redundancy problem by null-space optimization of a previously
implemented cost function, whose optimization criteria are a convenient joint angle configuration and a penalty
for getting close to the joint limits.
Nieuwenhuisen et al. [15] use the Kinodynamic Motion Planning by Interior-Exterior Cell Exploration
algorithm [16] for motion planning. They then filter grasp poses and motion paths before execution
against the height map they use, finding a collision-free solution of the inverse kinematics.
Chitta et al. [17] use randomized motion planners from the OMPL library [18]. Their approach to
trajectory execution includes a state machine concept that involves moving the sensors of the robot to
maintain visibility of the robotic arm executing the planned motion; the controller tracks and
executes the desired trajectory at the same time. For trajectory collision checking, Chitta et al. also use inverse
kinematics.
In the work presented by Stückler et al. [19], the mobile manipulation and motion control problem is approached separately for the robot's wheeled mobile base and for the robotic arm. The individual wheel velocity commands come from analytical inverse kinematics solutions. For the arms, they developed a compliance controller using servo actuators with limited torque. A collision-free path is obtained with differential inverse kinematics.
The research of Ciocarlie et al. [20] is another example of using the Kinodynamic Motion Planning by Interior-Exterior Cell Exploration algorithm from the OMPL library as a motion planner. In this approach, collision awareness is also provided by an inverse kinematics method. The motion planner generates paths that are processed by a trajectory optimization planner: the path generated by OMPL is passed as an initial condition to the optimization problem. The trajectory executed by the controller is continuously tracked and aborted if a possible collision is detected, in which case the controller is responsible for re-planning the motion path.
Willow Garage [21] implemented sampling-based motion planning for reaching with the Personal Robot 2 (PR2). Similar to our approach, the researchers from Willow Garage use a 3D perception pipeline that provides a dynamic obstacle map, which is used for collision checking. In their approach, the goal of the motion can be specified by the position of each joint of the arm or by a desired state.
Even though the presented inverse kinematics approaches have seen great success, there is still room for improvement in the relationship between perception and action. In order to strengthen this relationship, in this thesis we use forward kinematics in a real-time closed-loop fashion.
Chapter 3
Controller Design
The first part of this chapter presents the previous architecture used by the SocRob@Home team (Section 3.2), and the second part is devoted to the newly developed solution (Section 3.3). The second part is divided into subsections that describe in detail the new architectural approaches for the arm controller (Subsection 3.3.1) and the arm and base controller (Subsection 3.3.3). Both architectures are followed by their corresponding mathematical models (Subsections 3.3.2 and 3.3.4).
3.1 Approach
Before an object can be manipulated, the robot needs to perceive the environment and the object's 3D pose using a perceptual module. The perception problem itself, including object detection, 3D pose estimation from (noisy) sensor data and object classification, is out of the scope of this work; we therefore use off-the-shelf solutions for integration purposes. In this work, we use the robot's head camera, one of the three cameras available on the MBot robot (see Fig. 3.1).
Figure 3.1: MBot: 1) head camera used in the project, 2) manipulator used in the project
3.2 Previous Architecture
In the previously used system, perception and trajectory execution work in an open-loop fashion (Fig. 3.2). Once the trajectory is planned, the execution module does not consider the possibility of the goal object moving or of unexpected obstacles. In this approach the robot is "blindfolded", which causes the drawbacks mentioned in the first chapter (Section 1.2).
Figure 3.2: Open loop architectural approach.
In the previous architecture (Fig. 3.3), the perception module uses convolutional neural networks for object recognition and the OctoMap¹ framework to determine the object position and working area. OctoMap reads images from the Kinect camera and computes an octree map, which can be used for trajectory planning. The perception module gives the position of the object with respect to the camera and sends this information to the following modules. The grasp planning module defines the most convenient end-effector grasp pose for the object, based on the information received from the perception module. The object pose, grasp pose and 3D mapped environment are the inputs for the trajectory planning module. In this solution the trajectory is computed with the Open Motion Planning Library (OMPL), which provides a variety of sampling-based motion planning methods. OMPL is described in more detail in Chapter 4.
Figure 3.3: Previous solution system architecture.
The trajectory is fully planned, from the starting pose A to the final pose B, before the manipulator starts its motion. The MoveIt!² framework is used for motion planning. MoveIt! is open-source software for mobile manipulation that connects modules responsible for motion planning, manipulation, 3D perception, control and navigation [22]. The described architecture contains no module responsible for obstacle avoidance or collision checking of the manipulator path. The following module takes the computed trajectory, which is a sequence of joint angles, converts it into drive speeds and sends this information to the servos of the manipulator.
¹https://octomap.github.io/
²http://moveit.ros.org/
3.3 Developed Solution
In order to address the proposed challenges, we developed a solution that closes the loop between perception and arm control. Our visual servoing approach allows the robot to handle problems such as the object goal pose changing in real time. Controller design details are provided in the next section.
As the perception module we use the head camera and a marker detection approach. We assume the grasp planning module is given and remains the same as described in the previous architecture (Section 3.2). For this reason, we do not describe the perception and grasp planning modules again in the following detailed description of the designed architecture.
3.3.1 Arm Controller Architecture
In the designed architecture (Fig. 3.4) we consider a real-time closed loop between the perception module and the pose of the end-effector frame. The arm controller module receives the error (difference) between the given position and orientation of the target end-effector pose with respect to the camera and its current position and orientation. The arm controller is responsible for minimizing the difference between those two poses and computes the corresponding correction as a desired Cartesian velocity of the end effector.
The optimisation module computes the best joint velocity solution under defined constraints and a repulsive potential function. The optimized cost function consists of the desired Cartesian end-effector velocity and the Jacobian matrix based on the current state of the seven joints of the manipulator. The optimisation yields joint velocities and sends the final speed commands to the servos of the robotic arm. Finally, the forward kinematics method computes the updated current end-effector pose and sends it back to the arm controller. The arm controller module calculates the new pose error and sends the updated error value to the optimisation module in a real-time closed-loop fashion.
Figure 3.4: Designed new system architecture.
The essential part of this concept is to develop a correctly working arm controller and optimisation module. The arm controller module is responsible for computing the pose error, and the optimisation module for its minimization. Based on the error value, the optimization module computes a new velocity for the end-effector in the Cartesian workspace. Considering that all arm controller calculations are executed in a real-time closed-loop form, the developed module has to be efficient and adequate to the computational power of the hardware.
In the described architecture (Fig. 3.4) we do not consider motion of the base platform. We assume that the target object is in a position reachable by the manipulator without moving the robotic platform, and our arm controller module focuses on the servos of the manipulator only.
3.3.2 Arm Controller Mathematical Model
The main goal of the arm controller is to minimise the difference between the goal pose $P_g$ and the current pose $P_c$. A pose $P_i$ is an array that gives the position $(x, y, z)$ and orientation $(\alpha, \beta, \theta)$ of a frame $i$: $P_i = (x_i, y_i, z_i, \alpha_i, \beta_i, \theta_i)$.
During the calculation we encounter a redundancy problem resulting from unequal dimensions: the velocity in Cartesian space is 6-dimensional, $\dot{p} = (v_x, v_y, v_z, \omega_x, \omega_y, \omega_z)$, while the arm has 7 degrees of freedom, $\Theta = (\Theta_1, \Theta_2, \Theta_3, \Theta_4, \Theta_5, \Theta_6, \Theta_7)$. An optimisation defined in this way has an infinite number of solutions. For that reason we defined an optimisation problem that minimises the difference between the goal pose $P_g$ and the current pose $P_c$ of the end-effector in the next optimisation step:

\[
\text{Minimize } \|P_g - (P_c + \dot{p}\,\Delta t)\|^2 \quad \text{w.r.t. } \dot{p}
\tag{3.1}
\]

where:
$P_g$ - goal pose
$P_c$ - current pose
$\dot{p}$ - current end-effector velocity in Cartesian space
$\Delta t$ - time step
Since the Jacobian matrix maps joint velocities into the end-effector velocity in Cartesian space, we can represent $\dot{p}$ as:

\[
\dot{p} = J(\Theta_1, \Theta_2, \Theta_3, \Theta_4, \Theta_5, \Theta_6, \Theta_7)
\begin{bmatrix} \dot{\Theta}_1 \\ \dot{\Theta}_2 \\ \dot{\Theta}_3 \\ \dot{\Theta}_4 \\ \dot{\Theta}_5 \\ \dot{\Theta}_6 \\ \dot{\Theta}_7 \end{bmatrix}
\tag{3.2}
\]

where:
$J$ - Jacobian matrix
$\Theta_n$ - current state of joint $n$
Considering $\Theta$ as a seven-element vector ($7 \times 1$), we can write a shorter version of the same equation:

\[
\dot{p} = J(\Theta)\,\dot{\Theta}
\tag{3.3}
\]

We can modify the previously stated cost function by substituting $\dot{p}$ with $J(\Theta)\dot{\Theta}$, which gives:

\[
\begin{aligned}
\text{Minimize } \quad & \|P_g - (P_c + J(\Theta)\dot{\Theta}\,\Delta t)\|^2 \\
\text{w.r.t. } \quad & \dot{\Theta} \\
\text{subject to } \quad & \Theta^{Min} \le \Theta + \dot{\Theta}\Delta t \le \Theta^{Max} \\
& \dot{\Theta}^{Min} \le \dot{\Theta} \le \dot{\Theta}^{Max}
\end{aligned}
\tag{3.4}
\]

where:
$\Theta^{Min}$ - array of joint state low limits
$\Theta^{Max}$ - array of joint state high limits
$\dot{\Theta}^{Min}$ - array of joint minimum velocity limits
$\dot{\Theta}^{Max}$ - array of joint maximum velocity limits
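The mapping of equation (3.3) can be sketched numerically. The following is an illustrative example, not the thesis implementation: a hypothetical 2-link planar arm whose Jacobian is obtained by finite differences of its forward kinematics. On the real robot the full 6×7 Jacobian of the Cyton Gamma 1500 is computed from the URDF kinematic chain (see Chapter 4); the link lengths below are made-up values.

```python
import math

# Hypothetical 2-link planar arm used only to illustrate p_dot = J(theta) * theta_dot.
L1, L2 = 0.3, 0.25  # illustrative link lengths [m]

def fk(theta):
    """End-effector (x, y) for joint angles theta = [t1, t2]."""
    t1, t2 = theta
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return [x, y]

def jacobian(theta, eps=1e-6):
    """Numerical Jacobian of fk via central differences."""
    J = [[0.0] * len(theta) for _ in range(2)]
    for j in range(len(theta)):
        plus, minus = list(theta), list(theta)
        plus[j] += eps
        minus[j] -= eps
        fp, fm = fk(plus), fk(minus)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

def cartesian_velocity(theta, theta_dot):
    """p_dot = J(theta) * theta_dot (position part of eq. 3.3)."""
    J = jacobian(theta)
    return [sum(J[i][j] * theta_dot[j] for j in range(len(theta_dot)))
            for i in range(2)]
```

With both joints at zero and only the first joint moving at 1 rad/s, the end-effector velocity is purely tangential, with magnitude $L_1 + L_2$.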
In order to choose the most convenient arm configuration out of the infinitely many possible joint angle dispositions, instead of using inequality constraints we applied a repulsive potential function $P(\Theta_i)$ that penalizes getting closer to the joint limits and favors configurations "convenient" for the manipulator (Fig. 3.5).
Figure 3.5: Arm repulsion potential function.
The repulsive potential function presented in Fig. 3.5 is described by the following equation:

\[
P(\Theta_i) =
\begin{cases}
-\beta \ln(\Theta_i + \dot{\Theta}_i\Delta t - \Theta_i^{Min}) + \gamma & \text{for } \Theta_i < \Theta_i^{Low} \\
0 & \text{for } \Theta_i^{Low} \le \Theta_i \le \Theta_i^{High} \\
-\beta \ln(-(\Theta_i + \dot{\Theta}_i\Delta t) + \Theta_i^{Max}) + \gamma & \text{for } \Theta_i > \Theta_i^{High}
\end{cases}
\tag{3.5}
\]

where:
$\beta$ - the slope of the logarithmic function
$\gamma$ - shift of the function along the y axis
$\Theta_i^{Min}$ - joint $i$ low limit
$\Theta_i^{Max}$ - joint $i$ high limit
$\Theta_i^{Low}$ - joint $i$ low repulsion starting point
$\Theta_i^{High}$ - joint $i$ high repulsion starting point

The coefficients $\beta$ and $\gamma$ were defined experimentally and were estimated for the Cyton Gamma 1500 manipulator used in this work. Simulation experiments resulted in the following values:

\[ \beta = 0.25 \tag{3.6} \]
\[ \gamma = 0.1 \tag{3.7} \]
To simplify the representation of the repulsive potential logarithmic function we assume that:

\[ \Theta_c = \Theta_i + \dot{\Theta}_i\Delta t \tag{3.8} \]

which results in the following equation:

\[
P(\Theta_i) =
\begin{cases}
-0.25 \ln(\Theta_c - \Theta_i^{Min}) + 0.1 & \text{for } \Theta_i < \Theta_i^{Low} \\
0 & \text{for } \Theta_i^{Low} \le \Theta_i \le \Theta_i^{High} \\
-0.25 \ln(-\Theta_c + \Theta_i^{Max}) + 0.1 & \text{for } \Theta_i > \Theta_i^{High}
\end{cases}
\tag{3.9}
\]

The value of the presented repulsive potential function goes to infinity close to $\Theta_i^{Min}$ and $\Theta_i^{Max}$, which prevents the controller from reaching the joint limits and covers the optimization constraints. After adding the repulsive potential function to the cost function of the optimisation problem, we obtain:

\[
\begin{aligned}
\text{Minimize } \quad & \|P_g - (P_c + J(\Theta)\dot{\Theta}\,\Delta t)\|^2 + P(\Theta) \\
\text{w.r.t. } \quad & \dot{\Theta} \\
\text{subject to } \quad & \dot{\Theta}^{Min} \le \dot{\Theta} \le \dot{\Theta}^{Max}
\end{aligned}
\tag{3.10}
\]
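The per-joint potential of equation (3.9) can be sketched as a small function. This is a sketch with the thesis's experimentally obtained coefficients ($\beta = 0.25$, $\gamma = 0.1$); the joint limits and repulsion starting points used in the example are illustrative values, not the Cyton Gamma 1500 parameters.

```python
import math

BETA, GAMMA = 0.25, 0.1  # coefficients from eqs. (3.6)-(3.7)

def repulsive_potential(theta, theta_dot, dt,
                        th_min, th_low, th_high, th_max):
    """P(theta_i) for a single joint (eq. 3.9): zero in the comfortable
    range, growing to infinity as the predicted next joint position
    theta_c approaches a joint limit."""
    theta_c = theta + theta_dot * dt  # predicted next joint position (eq. 3.8)
    if theta < th_low:
        return -BETA * math.log(theta_c - th_min) + GAMMA
    if theta > th_high:
        return -BETA * math.log(-theta_c + th_max) + GAMMA
    return 0.0
```

With illustrative limits of $\pm 1.5$ rad and repulsion starting points of $\pm 1.0$ rad, the potential is zero at the center of the range and increases sharply as the joint approaches either limit.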
In order to make the robotic arm motion more similar to natural human grasping behaviour, we increased the weight of the orientation error. When humans grasp objects, they first adjust the hand (end-effector) to the orientation they want to grasp with, and then decrease the distance by moving the hand closer to the object. The same rule applies to the robotic arm: we increased the weight of orientation relative to position, so that the robot first tries to obtain the correct orientation of the end-effector and only later decreases the distance between the end-effector position and the target position. The coefficient responsible for increasing the importance of the orientation was chosen experimentally and is intended for the Cyton Gamma 1500 manipulator in particular.
To solve the given convex optimization problem we used the Sequential Least SQuares Programming (SLSQP) method of the scipy.optimize.minimize function from the open-source scipy.optimize³ package. A short description of the SciPy package is presented in the next chapter.
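A toy instance of problem (3.10) can be sketched with SciPy's SLSQP solver, in the spirit of how the thesis uses scipy.optimize.minimize. The Jacobian, poses, time step and velocity limit below are made-up constants for illustration (a real controller cycle recomputes $J$ from the current joint state and adds the joint-limit potential $P(\Theta)$ to the cost).

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative constants, not the real robot's values.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))                      # hypothetical 6x7 Jacobian
P_c = np.zeros(6)                                    # current end-effector pose
P_g = np.array([0.05, -0.02, 0.03, 0.0, 0.1, 0.0])   # goal pose
dt = 0.1                                             # control time step [s]

def cost(theta_dot):
    # ||P_g - (P_c + J * theta_dot * dt)||^2 ; the repulsive potential
    # P(theta) of eq. (3.10) would be added here in the full controller.
    residual = P_g - (P_c + J @ theta_dot * dt)
    return residual @ residual

v_max = 0.5  # hypothetical joint velocity limit [rad/s]
result = minimize(cost, x0=np.zeros(7), method="SLSQP",
                  bounds=[(-v_max, v_max)] * 7)

theta_dot = result.x  # joint velocities that would be sent to the servos
```

The bounds play the role of the velocity constraints $\dot{\Theta}^{Min} \le \dot{\Theta} \le \dot{\Theta}^{Max}$ in equation (3.10).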
3.3.3 Arm and Base Controller Architecture
In the second part of the development process we extended the architectural design to an arm and base controller. The main difference between the extended version and the previously described architecture is the motion of the robotic base. In the arm controller we assumed that the target pose is reachable by the robot's manipulator without moving the base.
In the architecture presented below (Fig. 3.6) the distance between the arm and the object is unrestricted. The arm and base controller module receives as input an error value, which is the difference between the target end-effector pose and the current end-effector pose. As in the previous architecture, the controller computes the Cartesian velocity of the end-effector, but this time together with the base. The calculated velocity is the input for the optimisation module, which computes the velocities for the manipulator servos and the base platform.
Figure 3.6: Extended new system architecture.
When considering arm and base motion, the developed controller may compute multiple possible motion solutions, for example: not moving the base at all and stretching the arm almost to its limits, or moving the base to the closest possible point and holding the end-effector at a convenient distance from the robot.
³https://docs.scipy.org/doc/
In this work we assigned importance weights to each of the optimized variables, such that the base moves only when the target pose is hardly reachable or not reachable at all for the robotic arm. The array of weights was chosen experimentally. Moreover, to avoid the situation in which the robot tries to grasp an object behind its back without turning towards it, we used an additional repulsive potential logarithmic function which penalizes rotating away from the object and favors the base platform facing the object.
3.3.4 Arm and Base Controller Mathematical Model
To extend the cost function with base platform velocities we added 3 new variables, $(\dot{b}_x, \dot{b}_y, \dot{b}_\theta)$, which can be represented as a 6-dimensional vector as follows:

\[
\dot{p}_b = \begin{bmatrix} \dot{b}_x & \dot{b}_y & 0 & 0 & 0 & \dot{b}_\theta \end{bmatrix}^T
\tag{3.11}
\]

where the first three values $(\dot{b}_x, \dot{b}_y, 0)$ stand for the linear velocities and the last three values $(0, 0, \dot{b}_\theta)$ represent the angular velocities of the robotic platform:

\[
\dot{p}_b = \begin{bmatrix} v_b \\ \omega_b \end{bmatrix}
\tag{3.12}
\]
The robotic platform motion is considered on a flat surface, therefore the velocity in the Z dimension always remains 0. For the same reason the only possible rotation is in the yaw direction; roll and pitch rotations remain zero. To extend the controller with base motion we need to define the end-effector position and orientation as follows:

\[ P_{ee} = P_b + R_b P^b_e \tag{3.13} \]
\[ R_{ee} = R_b R^b_e \tag{3.14} \]

where:
$P_{ee}$ - position of the end-effector in the world frame
$P_b$ - position of the base platform in the world frame
$R_b$ - rotation of the base platform in the world frame
$P^b_e$ - position of the end-effector in the robot frame
$R_{ee}$ - rotation of the end-effector in the world frame
$R^b_e$ - rotation of the end-effector in the robot frame
In terms of position, the base contribution to equation (3.13) can be represented, in velocity form, as:

\[
\dot{P}_{ee} =
\begin{bmatrix} \dot{b}_x \\ \dot{b}_y \\ 0 \end{bmatrix}
+
\begin{bmatrix} 0 & -\dot{b}_\theta & 0 \\ \dot{b}_\theta & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
\tag{3.15}
\]

where:
$x, y, z$ - current position of the end-effector with respect to the robot frame
The representation of the $\dot{R}_b$ term comes from the skew-symmetric matrix (3.16), which, multiplied by the rotation matrix, gives its time derivative:

\[
\begin{bmatrix}
0 & -\omega_z & \omega_y \\
\omega_z & 0 & -\omega_x \\
-\omega_y & \omega_x & 0
\end{bmatrix}
\tag{3.16}
\]

Since the robotic platform can rotate only in one dimension, we obtain the previously used matrix:

\[
\begin{bmatrix}
0 & -\dot{b}_\theta & 0 \\
\dot{b}_\theta & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\tag{3.17}
\]

where:
$\dot{b}_\theta$ - base platform angular velocity
The designed controller is based on velocities, so we differentiate the given equation and obtain the following derivative:

\[ \dot{P}_{ee} = \dot{P}_b + \dot{R}_b P^b_e + R_b \dot{P}^b_e \tag{3.18} \]
As a result of extending the Jacobian with base motion, for the linear part of the controller we obtain an end-effector velocity equal to:

\[
\begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix}
=
\begin{bmatrix} \dot{b}_x \\ \dot{b}_y \\ 0 \end{bmatrix}
+
\begin{bmatrix} 0 & -\dot{b}_\theta & 0 \\ \dot{b}_\theta & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
+ J_{1:3}(\Theta)\dot{\Theta}
\tag{3.19}
\]

where:
$J_{1:3}$ - first three rows of the Jacobian matrix
$x, y, z$ - current position of the end-effector with respect to the robot frame
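The composition of equation (3.19) can be sketched as follows. This is an illustrative sketch with made-up inputs, assuming the first three rows of the arm Jacobian are supplied by the caller (in the thesis they come from PyKDL).

```python
import numpy as np

def linear_ee_velocity(b_xdot, b_ydot, b_thetadot, p_be, J_lin, theta_dot):
    """Linear end-effector velocity of eq. (3.19): base translation,
    base rotation acting on the arm's offset, and the arm contribution.
    p_be: end-effector position (x, y, z) in the robot frame;
    J_lin: first three rows of the arm Jacobian (3x7)."""
    base_lin = np.array([b_xdot, b_ydot, 0.0])
    skew = np.array([[0.0, -b_thetadot, 0.0],
                     [b_thetadot, 0.0, 0.0],
                     [0.0, 0.0, 0.0]])
    return base_lin + skew @ p_be + J_lin @ theta_dot
```

With the arm still ($\dot{\Theta} = 0$) and the base only rotating, the end-effector velocity is purely tangential, given by the skew term alone.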
In terms of orientation, differentiating equation (3.14):

\[ R_{ee} = R_b R^b_e \tag{3.20} \]
\[ \dot{R}_{ee} = \dot{R}_b R^b_e + R_b \dot{R}^b_e \tag{3.21} \]

Since the product of the rotation velocity matrix in the world frame and the rotation matrix is equal to the product of the rotation matrix and the rotation velocity matrix in the robot frame:

\[ \dot{R} = \Omega_w R = R\,\Omega_r \tag{3.22} \]

where:
$\dot{R}$ - time derivative of the rotation matrix
$\Omega_w$ - rotation velocity matrix in the world frame
$\Omega_r$ - rotation velocity matrix in the robot frame

we can represent equation (3.21) as:

\[ \Omega_{ee} R_{ee} = \Omega_b R_b R^b_e + R_b R^b_e \Omega^b_e \tag{3.23} \]

which is equal to:

\[ \Omega_{ee} R_b R^b_e = \Omega_b R_b R^b_e + R_b R^b_e \Omega^b_e \tag{3.24} \]

where:

\[ R_b R^b_e = I \tag{3.25} \]

which gives:

\[ \Omega_{ee} = \Omega_b + \Omega^b_e \tag{3.26} \]
Accordingly, as a result of extending the Jacobian with base rotation, for the angular part of the controller we obtain an end-effector angular velocity equal to:

\[
\begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ \dot{b}_\theta \end{bmatrix}
+ J_{4:6}(\Theta)\dot{\Theta}
\tag{3.27}
\]

where:
$J_{4:6}$ - last three rows of the Jacobian matrix
The final equation of the Jacobian extended with the linear and angular parts of the motion gives:

\[
\begin{bmatrix} v_x \\ v_y \\ v_z \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}
=
\begin{bmatrix} -\dot{b}_\theta y \\ \dot{b}_\theta x \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
+ J(\Theta)\dot{\Theta}
+
\begin{bmatrix} \dot{b}_x \\ \dot{b}_y \\ 0 \\ 0 \\ 0 \\ \dot{b}_\theta \end{bmatrix}
\tag{3.28}
\]
In order to prevent the robot from turning its back to the object and attempting "backside grasping", we defined an additional repulsive potential logarithmic function $B(\phi)$ that favors the robot facing the object and penalizes the robot turning away from its target. All base rotation calculations are transformed from radians to degrees.

\[ B(\phi) = -\beta \ln(-|\phi| + 180) + \gamma \tag{3.29} \]

where $\phi$ is equal to the angle (in degrees) between the goal position and the front side of the robotic platform, which can be calculated as follows:

\[
\phi = \operatorname{atan2}\bigl(y_g - (b_y + \dot{b}_y\Delta t),\; x_g - (b_x + \dot{b}_x\Delta t)\bigr) - (b_\theta + \dot{b}_\theta\Delta t)
\tag{3.30}
\]

where:
$x_g$ - goal x position
$y_g$ - goal y position

The coefficients $\beta$ and $\gamma$ were chosen experimentally, resulting in the following values:

\[ \beta = 25 \tag{3.31} \]
\[ \gamma = 128 \tag{3.32} \]

which gives the following repulsive potential function $B(\phi)$, presented in Fig. 3.7:

\[ B(\phi) = -25 \ln(-|\phi| + 180) + 128 \tag{3.33} \]
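The facing angle and its potential can be sketched directly from equations (3.30) and (3.33). This is a sketch with the experimentally obtained coefficients; the base state values used in the example are illustrative, and angles are handled in degrees, as in the thesis.

```python
import math

BETA, GAMMA = 25.0, 128.0  # coefficients from eqs. (3.31)-(3.32)

def facing_angle_deg(goal_xy, base_xy, base_vel_xy, base_theta_deg,
                     base_thetadot_deg, dt):
    """phi of eq. (3.30): angle between the goal direction and the
    robot's predicted front heading, in degrees."""
    xg, yg = goal_xy
    bx = base_xy[0] + base_vel_xy[0] * dt  # predicted base x
    by = base_xy[1] + base_vel_xy[1] * dt  # predicted base y
    heading = base_theta_deg + base_thetadot_deg * dt
    return math.degrees(math.atan2(yg - by, xg - bx)) - heading

def base_potential(phi_deg):
    """B(phi) of eq. (3.33): zero-cost region near phi = 0, growing to
    infinity as the goal moves behind the robot (|phi| -> 180 deg)."""
    return -BETA * math.log(-abs(phi_deg) + 180.0) + GAMMA
```

The penalty grows monotonically with $|\phi|$, so the optimizer prefers base rotations that keep the platform facing the object.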
Figure 3.7: Base repulsion potential function for rotation.
The last phase in the arm and base controller development process was to define weights for the optimized variables, such that base motion applies only when the target pose is not reachable or hardly reachable by the manipulator.
We created an array of weight coefficients $w$:

\[
w = \begin{bmatrix} w_1 & w_2 & w_3 & w_4 & w_5 & w_6 & w_7 & w_8 & w_9 & w_{10} \end{bmatrix}
\tag{3.34}
\]

Assuming that the optimized velocities are equal to the array $v$:

\[
v = \begin{bmatrix} \dot{\Theta}_1 & \dot{\Theta}_2 & \dot{\Theta}_3 & \dot{\Theta}_4 & \dot{\Theta}_5 & \dot{\Theta}_6 & \dot{\Theta}_7 & \dot{b}_x & \dot{b}_y & \dot{b}_\theta \end{bmatrix}
\tag{3.35}
\]

we can add the following function $W(v)$ to the cost function for each optimized value:

\[
W(v) = \sum_{i=1}^{10} w_i |v_i|
\tag{3.36}
\]
The updated cost function is equal to:

\[
\begin{aligned}
\text{Minimize } \quad & \Bigg\| P_g - \Bigg(P_c + \Bigg(
\begin{bmatrix} -\dot{b}_\theta y \\ \dot{b}_\theta x \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
+ J(\Theta)\dot{\Theta}
+ \begin{bmatrix} \dot{b}_x \\ \dot{b}_y \\ 0 \\ 0 \\ 0 \\ \dot{b}_\theta \end{bmatrix}
\Bigg)\Delta t\Bigg)\Bigg\|^2 + P(\Theta) + B(\phi) + W(v) \\
\text{w.r.t. } \quad & \dot{\Theta}_1, \ldots, \dot{\Theta}_7, \dot{b}_x, \dot{b}_y, \dot{b}_\theta \\
\text{subject to } \quad & \dot{\Theta}^{Min} \le \dot{\Theta} \le \dot{\Theta}^{Max} \\
& \dot{b}^{Min} \le \dot{p}_b \le \dot{b}^{Max}
\end{aligned}
\tag{3.37}
\]

where:
$\dot{b}^{Min}$ - array of base minimum velocity limits
$\dot{b}^{Max}$ - array of base maximum velocity limits

The weight coefficients were obtained experimentally and apply to the Cyton Gamma 1500 and the Monarch Robot (MBot) in particular:

\[
w = \begin{bmatrix} 0.03 & 0.025 & 0.025 & 0.02 & 0.015 & 0.01 & 0.001 & 4 & 4 & 8 \end{bmatrix}
\tag{3.38}
\]
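The effect of the weight array (3.38) on the penalty $W(v)$ of equation (3.36) can be sketched as follows: the seven arm joints get small weights, while the base velocities $(\dot{b}_x, \dot{b}_y, \dot{b}_\theta)$ get large ones, so the optimizer moves the base only when the arm alone cannot reach the target.

```python
# Weight array from eq. (3.38): seven joint weights, then b_x, b_y, b_theta.
W_COEFFS = [0.03, 0.025, 0.025, 0.02, 0.015, 0.01, 0.001, 4, 4, 8]

def motion_penalty(v):
    """W(v) = sum_i w_i * |v_i| over the 10 optimized velocities (eq. 3.36)."""
    assert len(v) == len(W_COEFFS)
    return sum(w * abs(vi) for w, vi in zip(W_COEFFS, v))
```

For example, moving only the first joint at 0.5 rad/s costs 0.015, while translating the base at 0.5 m/s costs 2.0, so arm-only motion is strongly preferred.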
The method for solving the given convex optimization problem remains the same as in the arm controller mathematical model (Subsection 3.3.2). All coefficients used in the presented mathematical models ($\beta$, $\gamma$, $w$) can be adjusted for different arms, robots and manipulation requirements (e.g. backside grasping).
3.4 Discussion
The presented mathematical model can be adjusted to various types of robots and manipulators. The implemented solution can compute the Jacobian matrix for any manipulation chain by reading information from the robot description file. All coefficients can be tuned to the needs of the project, such as the minimum and maximum joint limits or the slope of the repulsive potential function. One of the most essential parameters to set is the array of solution weights. Depending on the task, the user can discourage or completely block the robot from turning or from moving in the X or Y direction. The weight of the base theta velocity can be essential and differ depending on the type of grasping: front or side. In fact, the weights can transform the arm and base controller into an arm-only controller, by assigning relatively large importance weights to all robotic platform velocities.
Chapter 4
Implementation
This chapter provides a brief description of the hardware (specifically the manipulator) and software used in this work as well as in the previous manipulation control approach of the SocRob@Home team. We first describe the manipulator used in the project, then explain the operating system on which our developed software is based. The following part provides basic information about the libraries, packages and frameworks used in the old architecture, and ends with a description of the optimizer chosen to solve the stated optimization problem.
4.1 Manipulator description
For our project we are using the Cyton Gamma 1500 arm (Fig. 4.2) produced by the Robai company. It is a small manipulator with seven degrees of freedom and a 3D-printed gripper end effector. The joints of the manipulator are made of two types of DYNAMIXEL smart actuators: MX-28 and MX-64¹ (Fig. 4.1).
Figure 4.1: DYNAMIXEL smart actuators: MX-28 and MX-64.
¹http://www.robotis.us/dynamixel/
As written in the product documentation, humanoid robot arms with many degrees of freedom can reach around obstacles and through gaps, reconfigure for strength, and manipulate objects with dexterous fluid motion. This manipulator has kinematic redundancy, which enables placement of a hand or tool at a given position and orientation in an unlimited number of ways [23]. The presented manipulator model is no longer commercially available.
Figure 4.2: Manipulator used in the project.
4.2 Robot Operating System (ROS)
The Robot Operating System (ROS) is a flexible framework that helps developers create software for robots. It provides device drivers, a collection of tools and libraries, visualizers, message-passing, package management, and more, to simplify the task of creating complex applications across various robotics platforms [24]. ROS is licensed under an open-source license. The developed controller is implemented in the Robot Operating System environment so that the module can be integrated with the team code base.
4.3 Open Motion Planning Library (OMPL)
The Open Motion Planning Library (OMPL) is an open-source motion planning library that consists of many sampling-based motion planning algorithms. OMPL is designed to be easily integrated into various systems; therefore it does not contain any visualization methods or collision checking [18]. Thanks to that, OMPL is a universal library ready to be used with multiple systems and components.
OMPL contains various path planners, such as the RRT, EST and PRM algorithms. In this thesis the Open Motion Planning Library is described as a tool used in the previous architectural approach for generating the motion path of the manipulator.
4.4 Unified Robot Description Format (URDF)
The Unified Robot Description Format² is an XML format used in the Robot Operating System (ROS) to describe all elements of the robot. The XML files contain specifications of sensors, robot models, scenes and more. Each XML file has a corresponding parser in one or more programming languages.
In our project the URDF files contain the information about the manipulator most essential for this work (number of joints, link lengths, joint limits, etc.) as well as information about the robotic base.
4.5 MoveIt!
MoveIt!³ is the most widely used open-source motion planning framework for mobile manipulation, including the latest technological solutions in manipulation, navigation, perception and control. It provides an easy-to-use platform that helps to develop complex robotics software.
MoveIt! integrates directly with OMPL and uses the motion planners from that library. The planners in OMPL have no concept of a robot, so MoveIt! integrates the URDF data about the manipulator, the OMPL libraries and the robot scene together, and plans the motion of the robot. MoveIt! was used as a part of the previous architectural approach.
4.6 SciPy
SciPy is open-source software that provides several ready-to-use packages to solve mathematical, scientific or engineering optimization problems. In this work we use the scipy.optimize package to minimize the predefined cost function. This package provides universal and widely known optimization algorithms in different variations, e.g. unconstrained and constrained minimization.
In this work we use least-squares minimization, in particular minimizing a cost function using Sequential Least SQuares Programming (SLSQP). This method allows us to define equality and inequality constraints, as well as to specify lower and upper bounds for each independent variable.
4.7 PyKDL
PyKDL⁴ is a fully ROS-integrated Python Application Programming Interface for the Kinematics and Dynamics Library (KDL). The Kinematics and Dynamics Library offers different types of functions, such as kinematic chain extraction and kinematic solvers. In this work we use PyKDL to extract the kinematic chain and, based on it, compute the Jacobian matrix.
²http://wiki.ros.org/urdf
³http://moveit.ros.org/
⁴http://docs.ros.org/diamondback/api/kdl/html/python/
Chapter 5
Results
The goal of this chapter is to evaluate the developed controllers and analyze the results obtained in the conducted experiments. Both controllers were evaluated in the simulation environment as well as on the real robot. The first part of the chapter focuses on the arm-only controller (Section 5.2). The second part presents the experimental results of the arm and base controller evaluation (Section 5.3), followed by a summary of the obtained results (Section 5.4) and a discussion (Section 5.5).
5.1 Evaluation
In order to evaluate the developed controller we divided the experiments into two parts: simulation and real robot testing. For simulation testing, we used the Gazebo 7¹ simulator and the Rviz² visualization environment. For real robot testing we used the MBot robot and the Cyton Gamma 1500 arm.
For real robot testing, it was necessary to develop a velocity controller for the robotic manipulator. This additional software was integrated with the SocRob@Home team code base and deployed on the real robot. To close the loop between perception and manipulation we integrated our module with a perceptual module based on marker detection, using the head camera of the MBot. During the real experiments a marker was placed on top of the goal object (Fig. 5.1).
(a) Sponge with marker. (b) Can of coke with marker. (c) Chewing gums with marker.
Figure 5.1: Example of placing marker on the object.
¹http://gazebosim.org/
²http://wiki.ros.org/rviz
We conducted several variants of the experimental setup. The experiments vary in terms of the starting arm configuration (Fig. 5.2) and the target pose generation: realistic goal pose orientation vs. random orientation, type of goal pose (static or dynamic), area of randomized pose generation (front grasping or side grasping), and distance of the generated poses (base movement required or not required).
(a) Candle (b) Pregrasp
Figure 5.2: Starting arm configurations.
5.2 Arm Controller Evaluation
To evaluate the implemented arm controller we developed an automated test script (Fig. 5.3) in order to achieve the most objective results possible. The automated test script was adapted to each test case scenario and used both in the simulation environment and on the real robot.
Figure 5.3: Core version of testing algorithm.
The testing algorithm first sets the success and failure counters to zero. In the next step it generates randomized poses in terms of position and orientation. In order to obtain a valid success rate, the generated random goal pose needs to meet a feasibility criterion. To check whether the generated goal pose is reachable, the testing algorithm calls the forward kinematics module, which uses the URDF arm model and returns True/False. If False is returned, the testing script goes back to the random pose generation step. After generating a feasible pose, the algorithm sends the arm to the starting configuration (candle or pregrasp) and afterwards runs the arm controller. If the pose is not reached within a previously defined time of 120 seconds, the algorithm increases the failure counter. Accordingly, if the goal pose is reached before the timeout, the success counter is increased. The testing algorithm ends when the number of attempts reaches 100.
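The steps above can be sketched as a loop. This is a sketch only: the feasibility check and the controller run are stubbed out with hypothetical placeholders, whereas on the robot they call the forward kinematics module (with the URDF model) and the arm controller with the 120-second timeout.

```python
import random

def generate_random_pose(rng):
    """Random pose (x, y, z, roll, pitch, yaw); ranges are illustrative."""
    return [rng.uniform(-1, 1) for _ in range(6)]

def is_reachable(pose):
    """Stand-in for the forward-kinematics/URDF feasibility check."""
    return abs(pose[0]) < 0.8

def run_arm_controller(pose, timeout=120.0):
    """Stand-in for running the controller: True if the pose is reached
    before the timeout."""
    return True

def run_test(attempts=100, seed=42):
    rng = random.Random(seed)
    success = failure = 0
    while success + failure < attempts:
        pose = generate_random_pose(rng)
        if not is_reachable(pose):
            continue  # infeasible pose: go back and regenerate
        # (send arm to the starting configuration, candle or pregrasp, here)
        if run_arm_controller(pose):
            success += 1
        else:
            failure += 1
    return success, failure
```

The success rate reported in the experiments below is `success / attempts` after the loop finishes.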
5.2.1 Simulation
The testing algorithm remains in the random pose generation step until it finds a feasible goal pose, and then moves to the next step of running the arm controller. To increase the probability of the generated pose being reachable by the manipulator, we predefined the area in which the poses are generated (Fig. 5.4).
(a) Front view (b) Top view (c) Side view
Figure 5.4: The area in which the goal poses are generated.
Front grasp area (green):
• Dimensions: 0.25 m × 0.8 m × 0.35 m
• Distance from the ground (Z-axis): 0.3 m
• Distance from the center of the robot (X-axis): 0.35 m

Side grasp area (blue):
• Dimensions: 0.8 m × 0.35 m × 0.4 m
• Distance from the ground (Z-axis): 0.35 m
• Distance from the center of the robot (Y-axis): 0.45 m
The decision about the dimensions of the random pose generation areas was based on the robot use case scenarios. The realistic use case for this particular robot is placing or fetching objects from a small table or chair, therefore our experiments were adjusted to these requirements.
Random orientation
The first experiment carried out in the simulation environment was based on random poses within the predefined area (Fig. 5.4). Taking into consideration that the developed controller does not aim to reach the final pose before closing the gripper, but targets the pregrasp pose, which is an input for the grasp planning module, we set the following error³ tolerances:

\[ \text{Position error tolerance: } \sqrt{e_x^2 + e_y^2 + e_z^2} = 0.01\,\text{m} \tag{5.1} \]
\[ \text{Orientation error tolerance: } \sqrt{e_r^2 + e_p^2 + e_y^2} = 0.05\,\text{rad} \tag{5.2} \]
where:
$e_x, e_y, e_z$ - error between the current position and the goal position along the X, Y, Z axes
$e_r, e_p, e_y$ - error between the current orientation and the goal orientation in the roll, pitch, yaw Euler angles
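The success criterion of equations (5.1) and (5.2) amounts to checking the Euclidean norms of the two error vectors against the tolerances; a minimal sketch, with the function name being our own:

```python
import math

POS_TOL, ORI_TOL = 0.01, 0.05  # tolerances from eqs. (5.1)-(5.2)

def pose_reached(pos_err, ori_err):
    """pos_err = (ex, ey, ez) in metres, ori_err = (er, ep, ey) in rad."""
    pos_norm = math.sqrt(sum(e * e for e in pos_err))
    ori_norm = math.sqrt(sum(e * e for e in ori_err))
    return pos_norm <= POS_TOL and ori_norm <= ORI_TOL
```

Both norms must be within tolerance at the same time for an attempt to count as a success.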
Experiment summary:
• Goal position: random (within the predefined area)
• Goal orientation: random
• Grasp type: front grasp (50/100) and side grasp (50/100) combined
• Position error tolerance: 0.01 m
• Orientation error tolerance: 0.05 rad
• Starting arm configuration: candle
• Total number of attempts: 100

SUCCESS RATE: 78%
Realistic orientation
In this scenario we focus only on front grasping. Moreover, the starting arm configuration is changed to pregrasp. The key difference between this experiment and the previous evaluation is the additional restrictions on the random pose orientation. According to the SocRob@Home technical report "SocRob Technical Report: Grasping Objects in Simulation with Perception in the Loop and a Simple Pregrasp Planner approach", the realistic pose orientations for the MBot to perform front grasping vary between experimentally obtained reasonable limits for each of the rotations (Table 5.1).
roll axis | pitch axis | yaw axis
between 0° and 0° | between -3° and 3° | between -25° and 50°
Table 5.1: Reasonable rotation limits according to the SocRob@Home technical report.
In the following experiment we restricted the roll and pitch rotation limits according to the technical report, while the yaw rotation limits were widened relative to the suggested values in order to extend the grasping functionality in a way more natural for humans (Table 5.2).
³ The implementation uses quaternions and a quaternion-based error metric; due to lack of time, the text has not been updated to reflect this.
roll axis | pitch axis | yaw axis
between 0° and 0° | between -3° and 3° | between -90° and 90°
Table 5.2: Final rotation limits used in the conducted experiment.
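For illustration, such a randomized realistic goal pose can be sketched as uniform sampling inside a position box combined with the rotation limits of Table 5.2. This is a sketch, not the thesis test script; the box corners passed in the example are hypothetical:

```python
import math
import random

# Rotation limits from Table 5.2, converted to radians
ROLL_LIM = (0.0, 0.0)
PITCH_LIM = (math.radians(-3), math.radians(3))
YAW_LIM = (math.radians(-90), math.radians(90))

def sample_realistic_pose(box_min, box_max):
    """Sample a goal pose: uniform position inside an axis-aligned box,
    uniform roll/pitch/yaw within the realistic rotation limits."""
    position = [random.uniform(lo, hi) for lo, hi in zip(box_min, box_max)]
    orientation = (random.uniform(*ROLL_LIM),
                   random.uniform(*PITCH_LIM),
                   random.uniform(*YAW_LIM))
    return position, orientation

# Hypothetical box corners (meters, in the robot frame)
pos, rpy = sample_realistic_pose((0.35, -0.25, 0.30), (0.85, 0.25, 0.55))
```

Each test iteration would draw one such pose, send it to the controller, and record success or failure against the error tolerances.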
The pose error tolerance remains unchanged:
Position error tolerance: √(e_x^2 + e_y^2 + e_z^2) = 0.01 m (5.3)
Orientation error tolerance: √(e_r^2 + e_p^2 + e_y^2) = 0.05 rad (5.4)
The pose position randomization area was reduced to meet the front grasp requirements (Fig. 5.5):
(a) Front view (b) Side view
Figure 5.5: The area in which the front grasp goal poses are generated.
Experiment summary:
- Goal position: random (within the predefined area (Fig. 5.5))
- Goal orientation: randomized realistic (Table 5.2)
- Grasp type: front grasp
- Position error tolerance: 0.01 m
- Orientation error tolerance: 0.05 rad
- Starting arm configuration: pregrasp
- Total number of attempts: 50
SUCCESS RATE: 98%
5.2.2 Real Robot
In order to conduct the experiment on the real robot, the testing algorithm required several modifications. The goal pose is no longer generated by randomization but comes from the robot's perceptual module; the rest of the automated testing algorithm remains unmodified. In this experiment we close the loop between perception and manipulation by using the Mbot's head camera to detect markers and obtain the goal pose. Markers are placed on top of the objects as presented in Section 5.1 (Fig. 5.1). Since the pregrasp planner module was still under development and is out of the scope of this thesis, we used a fixed realistic front grasp orientation⁴ for this test. The position of the goal pose is random (Fig. 5.6), and the starting arm configuration remains pregrasp.
(a) Goal pose above the table (b) Goal pose on the table (c) Goal pose on the floor
Figure 5.6: Different goal positions for arm controller experiment on real robot.
For the real robot evaluation we increased the error tolerance for the following two reasons:
1) In simulation we have an ideal, synthetic, static pose that is published at a constant rate. On the real robot we used an ALVAR marker detection algorithm with an unstable position estimate and publication rate. Real-world marker detection is subject to image noise, blur, and network delay; because of that, we observed an unstable (shaky) target pose in our experiments.
2) The real robot's manipulator joints suffer from backlash and noisy encoder angle feedback, which translates into end-effector frame uncertainty once the forward kinematics are computed. For that reason the end-effector frame also exhibited an unstable (shaky) behavior.
The increased error⁵ tolerances for the real robot are as follows:
Position error tolerance: √(e_x^2 + e_y^2 + e_z^2) = 0.05 m (5.5)
Orientation error tolerance: √(e_r^2 + e_p^2 + e_y^2) = 0.1 rad (5.6)
⁴ The goal orientation is given with respect to the baselink frame, not the end-effector frame.
⁵ The implementation uses quaternions and a quaternion-based error metric; due to lack of time, the text has not been updated to reflect this.
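As footnote 5 notes, the implementation measures orientation error with quaternions rather than Euler angles. One common quaternion error metric is the angle of the relative rotation between the current and goal orientations; the sketch below assumes this metric, which may differ from the one actually implemented:

```python
import math

def quat_angle_error(q_cur, q_goal):
    """Angle (rad) of the rotation taking q_cur to q_goal.

    Quaternions are given as (x, y, z, w) and assumed unit-norm.
    Taking the absolute value of the dot product handles the double
    cover: q and -q represent the same rotation.
    """
    dot = sum(a * b for a, b in zip(q_cur, q_goal))
    dot = min(1.0, abs(dot))  # clamp against floating-point drift
    return 2.0 * math.acos(dot)

# Identical orientations yield zero error
print(quat_angle_error((0, 0, 0, 1), (0, 0, 0, 1)))  # 0.0
```

A scalar angle like this avoids the wrap-around ambiguities of comparing roll/pitch/yaw triples directly.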
Experiment summary:
- Goal position: random
- Goal orientation: fixed to (-79°, 0°, -79°)
- Grasp type: front grasp
- Position error tolerance: 0.05 m
- Orientation error tolerance: 0.1 rad
- Starting arm configuration: pregrasp
- Total number of attempts: 20
SUCCESS RATE: 80%
5.2.3 Discussion
The conducted experiments exposed several issues in the implemented software. Regarding the simulation testing results, it was expected that in some cases the controller would get trapped in a local minimum. Another point to discuss is the timeout, set to 120 seconds. In the first test scenario, poses are randomized in both position and orientation; for some extremely inconvenient and unrealistic goal poses, the controller needs more than 120 s to achieve the goal, which resulted in a higher failure rate than in the second experiment, where the poses were restricted to reasonable rotational variations. On the other hand, when a human orders the robot to grasp an object from the table, we assume they expect the execution time to be no more than 2 minutes.
Considering the real robot testing, we encountered issues that do not apply in the simulation environment, such as arm calibration error or an unreachable goal pose. Since the goal pose is not generated but perceived through vision, the modified testing algorithm no longer validates whether the perceived goal pose is reachable. The last problem, which caused 50% of the failures on the real robot, is the imperfection of the perceptual module and marker detection: as the robot performs the task and gets closer to the goal pose, the end-effector hardware occludes the marker and the pose is no longer visible to the controller.
As the second part of the real robot experiment we tested the arm following moving objects in real time; the results can be seen in the published video⁶.
5.3 Arm and Base Controller Evaluation
To evaluate the arm and base controller it was necessary to modify the automated test script. Since the main goal of the combined arm and base controller is to reach poses that are not reachable by the arm-only controller, we removed the part that validates whether the pose is reachable; the rest of the test script is unmodified. Based on the experience from the arm controller testing, the experiments conducted for the arm and base controller considered only realistic orientations of the goal pose.
⁶ https://youtu.be/CqsQnreHD2A
5.3.1 Simulation
As mentioned before, for the simulation evaluation we randomized the orientation of the generated pose within the reasonable rotation limits for roll, pitch and yaw (Table 5.2).
(a) Front grasp (b) Side grasp
Figure 5.7: Area of randomized poses for arm and base controller - overview.
In terms of position, we defined the pose generation areas to ensure that the randomized pose is not reachable without moving the base platform (Fig. 5.7). The predefined areas have the following dimensions:
Front grasp area (green): dimensions 0.5 m x 1 m x 0.25 m; distance from the ground (Z-axis): 0.35 m; distance from the center of the robot (X-axis): 0.8 m
Side grasp area (blue): dimensions 2 m x 0.5 m x 0.35 m; distance from the ground (Z-axis): 0.35 m; distance from the center of the robot (Y-axis): 1 m
Front grasp
The first experiment conducted for the arm and base controller was dedicated to the front grasping task only. Although the distance between the goal pose and the robot is significantly larger than in the arm controller tests, we decided not to increase the timeout, leaving it at 120 s. Fig. 5.8 shows the pose randomization area. Due to the predefined optimization weights (Section 3.3.4), base motion is limited to the cases where it is necessary to reach the goal pose.
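The effect of such optimization weights can be illustrated with a weighted damped least-squares velocity step, in which the base degrees of freedom carry a larger penalty so they move only when the arm alone cannot reduce the task error. This is an illustrative sketch under that assumption, not the thesis implementation; the matrix sizes and weight values are invented:

```python
import numpy as np

def weighted_dls_step(J, err, weights, damping=0.05):
    """One velocity step minimizing ||J qdot - err||^2 + qdot^T W qdot.

    J: task Jacobian (m x n); err: task-space error (m,);
    weights: per-DOF penalties (n,), larger means the DOF moves less.
    """
    W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
    JW = J @ W_inv
    A = JW @ J.T + (damping ** 2) * np.eye(J.shape[0])
    return W_inv @ J.T @ np.linalg.solve(A, err)

# Two redundant DOFs driving one task coordinate: the cheap "arm" DOF
# (weight 1) absorbs most of the motion, while the expensive "base" DOF
# (weight 100) barely moves.
J = np.array([[1.0, 1.0]])
qdot = weighted_dls_step(J, np.array([0.1]), weights=[1.0, 100.0])
```

With equal weights the motion would be split evenly between the two DOFs; the asymmetric penalty is what keeps the base still whenever the arm alone can close the error.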
(a) Y and Z dimensions (b) X dimension
Figure 5.8: Area of randomized poses for arm and base controller - front grasp.
Experiment summary:
- Goal position: random
- Goal orientation: randomized realistic (Table 5.2)
- Grasp type: front grasp
- Position error tolerance: 0.01 m
- Orientation error tolerance: 0.05 rad
- Starting arm configuration: pregrasp
- Total number of attempts: 50
SUCCESS RATE: 100%
Side grasp
The second experiment conducted for the arm and base controller, in the simulation environment, was dedicated to the side grasping task only. The random pose generation area was predefined with the same criterion of enforcing base movement by defining otherwise unreachable positions (Fig. 5.9). The timeout setting remained unmodified, and the orientation randomization followed the previous front grasp arm and base experiment, according to Table 5.2. The starting arm configuration was changed to candle.
(a) Y and Z dimensions (b) X dimension
Figure 5.9: Area of randomized poses for arm and base controller - side grasp.
Experiment summary:
- Goal position: random
- Goal orientation: randomized realistic (Table 5.2)
- Grasp type: side grasp
- Position error tolerance: 0.01 m
- Orientation error tolerance: 0.05 rad
- Starting arm configuration: candle
- Total number of attempts: 50
SUCCESS RATE: 100%
5.3.2 Real Robot
The final stage of the evaluation was implementing and testing the arm and base controller on the real robot. The goal pose orientation, for both front grasp and side grasp, was fixed during the whole experiment. As in the arm controller test, we increased the error tolerance for the real robot evaluation due to the same hardware defects:
Position error tolerance: √(e_x^2 + e_y^2 + e_z^2) = 0.05 m (5.7)
Orientation error tolerance: √(e_r^2 + e_p^2 + e_y^2) = 0.1 rad (5.8)
Front grasp
The first experiment on the real robot was dedicated to the front grasping task only. During the evaluation we recorded video from three different perspectives: the Rviz visualization of the real world, the image from the head camera used for marker detection, and the university live camera placed in the laboratory where the experiment was conducted (Fig. 5.10).
(a) Live university camera image (b) Mbot head camera image (c) Rviz real world image
Figure 5.10: Different video data captured while conducting experiment - front grasp.
Experiment summary:
- Goal position: random
- Goal orientation: fixed to (-79°, 0°, -79°)
- Grasp type: front grasp
- Position error tolerance: 0.05 m
- Orientation error tolerance: 0.1 rad
- Starting arm configuration: pregrasp
- Total number of attempts for static goal: 20
SUCCESS RATE: 90%
Side grasp
The second part of the real robot experiment was dedicated to the side grasping task only. During the evaluation we collected the same video footage as in the front grasp testing (Fig. 5.11). This is the only side grasp experiment conducted on the real robot, so we fixed the goal pose orientation to a new orientation suitable for this type of grasping. The fixed orientation is given with respect to the baselink frame, not the end-effector frame, and the position of the goal pose is not limited in any way.
(a) Live university camera image (b) Mbot head camera image (c) Rviz real world image
Figure 5.11: Different video data captured while conducting experiment - side grasp.
Experiment summary:
- Goal position: random
- Goal orientation: fixed to (-90°, 0°, 0°)
- Grasp type: side grasp
- Position error tolerance: 0.05 m
- Orientation error tolerance: 0.1 rad
- Starting arm configuration: candle
- Total number of attempts for static goal: 20
SUCCESS RATE: 75%
5.3.3 Discussion
The tests conducted in the simulation environment not only had a 100% success ratio, but also a lower execution time than the arm controller simulation tests. One could wrongly expect that, because of the significantly increased distance between the goal pose and the end-effector, the total time to reach the goal pose should also increase. Instead, thanks to the base motion, the controller moves the whole robotic platform into a position convenient for the arm configuration, which clearly smoothed the arm motion and shortened the execution time.
Considering the real robot testing, we encountered the same issues as in the previous real robot tests, such as arm calibration error or the robot's end-effector occluding the marker, which leads to losing the target pose. The side grasping success rate is significantly lower than the front grasping success rate. The reason is the starting arm configuration and the different fixed goal pose orientation, which caused the marker to be occluded by the robot's own mechanical parts more often than in the front grasping case. Real robot side grasping also has a limitation that does not occur in the front grasp task: to detect the goal pose, the robot needs the marker to be within the camera's field of view. During the side grasp, the robot's starting head position is already close to its limits, which restricts marker detection. Due to this issue, the marker was outside the camera cone more often than in the front grasping scenario.
As in the previous real robot experiment, we also tested the arm and base controller following moving objects in real time, but the success and failure criteria were not well defined enough to present it as a quantitative test.
5.4 Summary of Results
We conducted 7 different experiments, totaling 310 attempts to reach a goal pose, of which 89% were successful and 11% failed. We tested the developed software in the simulation environment as well as on the real robot, facing all hardware-related issues. We tested entirely random goal poses and realistic poses, with a division into front grasping and side grasping tasks. To summarize the reliability of the developed controller, we gathered all results in Table 5.3. The following summaries compare success rates only for experiments with the goal pose orientation restricted to reasonable roll, pitch and yaw rotations.
Success ratio: arm | arm and base | arm + arm and base
Simulation, front grasp: 98% | 100% | 99%
Simulation, side grasp: - | 100% | -
Real robot, front grasp: 80% | 90% | 85%
Real robot, side grasp: - | 75% | -
Table 5.3: Evaluation summary for realistic goal poses.
Additional summary⁷
Total number of attempts in the simulation environment: 150
Total number of attempts on the real robot: 60
Total success rate in the simulation environment: 99.3%
Total success rate on the real robot: 81.6%
Total success rate for the arm and base controller on the real robot: 82.5%
⁷ These summaries compare success rates only for experiments with the goal pose orientation restricted to reasonable roll, pitch and yaw rotations.
5.5 Discussion
Considering the arm controller (without base motion) in the simulation environment, the developed solution shows a 22% failure rate. Since the velocity controller is based on real-time feedback in a closed loop between the perceptual module and the arm motion, there is no motion planning stage. A controller without motion planning runs a high risk of getting trapped in a local minimum of the optimization and failing to reach the goal pose. On the other hand, not using the MoveIt! library, which is responsible for trajectory planning, allows the manipulator motion to start faster. Another drawback of separate planning and execution is a "blindfolded" robot; in the presented solution the manipulator is able to follow a moving object in real time. The first experiment, conducted on entirely randomized goal poses, was predictably less successful than the following experiment with reasonable orientations of the generated goal poses. The arm controller experiment on the real robot was less successful than expected based on the simulation success ratio: using the real robot entails hardware and integration issues, unknown in the simulation experiments, where we assume that each velocity command is executed perfectly.
Regarding the experimental evaluation of the arm and base controller in the simulation environment, the success rate was 100%. Since the robot is able to move the base into a configuration convenient for grasping, unreachable poses no longer exist. Being able to control the base along with the arm not only shortens the execution time, but also significantly decreases the possibility of the goal pose being infeasible.
In terms of real robot testing, as before, the success rate decreased because of hardware and integration issues. All failures encountered in the real robot testing were caused by incorrect arm calibration, losing the marker detection, or an unreachable goal pose. Failures caused by inaccurate calibration were revealed by running the Rviz real-world representation alongside the real arm execution: in the visualization the robot reached the goal pose, while in the real world the end-effector was slightly shifted with respect to the marker. From this example we can conclude that the robot "thinks" the arm is in a different position than it actually is. Another issue was the robot's hardware occluding the marker during execution. While running the Rviz real-world representation we could easily observe the goal pose disappearing from the vision, which leaves the controller with no goal pose. From this analysis we conclude that the perception module, based on marker detection, should be improved to prevent "losing" the goal pose when the manipulator occludes the marker with its physical parts. A possible solution is to hold the previous value of the target pose in case it disappears while the controller is running. This marker tracking loss resulted in the increased failure rate in the side grasping real robot experiment: because of the starting arm configuration, the probability of the manipulator occluding the marker was higher than in the front grasp task. Another issue is the limited camera field of view. In most cases we were able to turn the robotic head to keep the marker in view, but during the side grasp tests the starting head position is already turned towards the goal pose and close to its left turn limit. The last observed issue that contributed to the higher failure rate in real robot testing was the missing grasp planning module. To bypass this problem we hardcoded a reasonable realistic orientation for the end-effector, at the same time increasing the possibility of the goal pose being infeasible (for the arm-only controller).
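The pose-holding idea suggested above can be sketched as a small latch that keeps the last marker detection alive for a limited time before declaring the target lost. This is an illustrative sketch; the timeout value is arbitrary:

```python
import time

class TargetPoseLatch:
    """Hold the last detected marker pose while detection drops out.

    If the marker has not been seen for longer than `timeout` seconds,
    report no target so the controller can stop safely instead of
    chasing a stale pose.
    """
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self._pose = None
        self._stamp = 0.0

    def update(self, pose):
        # Called on every successful marker detection.
        self._pose = pose
        self._stamp = time.monotonic()

    def get(self):
        # Return the held pose, or None once it has gone stale.
        if self._pose is None:
            return None
        if time.monotonic() - self._stamp > self.timeout:
            return None
        return self._pose

latch = TargetPoseLatch(timeout=1.0)
latch.update((0.5, 0.0, 0.4))  # marker seen
pose = latch.get()             # still valid shortly after detection
```

Bridging short occlusions this way would let the controller finish an approach even when the end-effector briefly covers the marker.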
The obtained results show that simulation evaluation is usually more successful than real robot testing. We can also conclude that the combined arm and base controller offers a higher probability of reaching the goal pose than the arm-only controller. Another conclusion concerns the type of grasp: when integrating the controller with a perception module based on marker detection, front grasping is more reliable than side grasping.
Chapter 6
Conclusions
This chapter concludes the presented work. First, we summarize its achievements in comparison to the predefined goals of this thesis (Section 6.1), and then we briefly discuss potential future work to advance the developed controller (Section 6.2).
6.1 Summary of Thesis Achievements
In this document we presented a solution for manipulation control for mobile service robots in domestic environments. We developed a real-time, closed-loop Cartesian controller based on optimization, which takes a target pose for the end-effector as input. The implemented solution controls the motion of both the manipulator and the robotic platform together.
During our work we contributed to the team's control capabilities by developing a velocity controller for the manipulator. Both pieces of software were fully integrated with the SocRob@Home team's base software and deployed on the real robot. We also delivered automated testing software that can be used for future parameter tuning, depending on the needs of the use case. In addition, the developed solution is based on local coordinates and does not require any navigation or mapping data.
With this work we enhanced the team's manipulation skills, not only by integrating arm and base motion, but also by creating the possibility of using feedback from perceptual object detection and thereby grasping moving objects in real time.
Having conducted various test case scenarios, we pointed out some issues that need to be addressed in order to increase the controller's reliability, such as minimizing the arm calibration error or improving the detection capabilities. The implemented controller can be fully integrated with any other robot and manipulator by setting the various coefficients and uploading the relevant unified robot description model.
6.2 Future Work
To enhance the presented solution we propose the following improvements as future work. Since the developed controller does not include obstacle avoidance, we suggest integrating it with the collision path module that is already used in the team.
Another issue is obstacles in the manipulator's path. To solve this problem we suggest adding an Octomap representation of the Cartesian space in which the manipulator motion is executed. Combining the Octomap method with the real-time control approach, we could develop software able to handle grasping in a dynamic environment. Taking into account that the robot's general purpose is interaction with children in a hospital, being able to react in a dynamic environment is an advantage.
While evaluating the controller, none of the goal poses was behind the robot. If we were to integrate the robot with external cameras and a goal pose were published behind the robot, there is a high probability that the manipulator would hit the robot's body (its "stomach"): the controller chooses the shortest way to the goal pose and does not take the robot's self-collision model into consideration. Adding this model to the optimization would improve the reliability of the presented solution.
The last improvement concerns the perceptual module. Instead of detecting markers placed on objects, our controller could be integrated with a 3D representation of 2D object detections (a bounding box approach), which would greatly enhance the freedom in object grasping: objects would no longer need to be labeled with a marker to be detectable by the robot.