8/12/2019 A Semi-Autonomous Tracked Robot System for Rescue Missions
A semi-autonomous tracked robot system for rescue missions
Daniele Calisi1, Daniele Nardi1, Kazunori Ohno2 and Satoshi Tadokoro2
1 Department of Computer and Systems Science, Sapienza University of Rome, Italy
(E-mail: [email protected])
2 Graduate School of Information Science, Tohoku University, Sendai, Japan
(E-mail: [email protected])
Abstract: In this paper we describe our work on integrating the results of two previous research efforts, carried
out on two different robots. Both robots are designed to help human operators in rescue missions. In the first case,
a wheeled robot has been used to develop a set of software modules. These modules provide the capability to
autonomously explore an unknown environment and build a metric map while navigating. The second system has been
designed to face more challenging scenarios: the robot has tracks and flippers, and features a semi-autonomous
behavior to overcome small obstacles and avoid rollovers while moving on debris and unstructured ground. We show
how the set of software modules of the first system has been installed on the tracked robot and merged with its
semi-autonomous behavior algorithm, and we discuss the results of this integration through a set of experiments.
Keywords: tracked robot, rescue mission, autonomy
1. INTRODUCTION
In recent years, increasing attention has been devoted
to rescue robotics, both from the research community and
from the rescue operators. Robots can consistently help
humans in dangerous tasks during rescue operations in
several ways.
A rescue scenario is usually unstructured and unstable,
thus requiring the use of complex mechanical and loco-
motive design of the robots involved. On the other hand,
communication unreliability and the difficulty of direct
control of complex robots require some degree of autonomy.
The robotic system described in this paper is the inte-
gration of two previous works developed in our laborato-
ries:
- a wheeled robot system that is able to autonomously
explore a flat environment;
- a tracked robot that features a semi-autonomous behav-
ior to overcome small obstacles, debris and stairs.
The software modules of the first system have been in-
stalled on the tracked robot, and merged with its semi-
autonomous behavior. In this way, the robot is able to
autonomously explore scenarios that are more challenging
than those the wheeled robot can handle. Experiments have
been conducted on the real tracked robot and on a simulator,
and show that the resulting system is able to explore an
unknown environment and to deal with debris and small
obstacles with very limited operator support.
2. SYSTEM DESCRIPTION
The whole system can be divided into two conceptual
parts: the hardware part, which comprises the mechanical
design and realization of the robot and the sensors that
have been used, and the software part, which includes the
software implementation of the algorithms. The overall
goal of the system is to show effective autonomous and
semi-autonomous behaviors in a rescue environment. In
particular, we want to provide the robot with:
- 2D mapping capability;
- autonomous and semi-autonomous navigation and ex-
ploration capabilities;
- detection of motion hazards and the ability to address
them (we will call this ability the semi-autonomous obstacle
overcoming behavior);
- a communication infrastructure and a human-robot inter-
face that make it possible for a human operator to super-
vise the robot and take control of it in critical situations.
2.1 Hardware
The unstructured and unstable environment in a rescue scenario makes the mechanical and locomotive design of
the robot critical. Our robot is a 6-DOF crawler robot,
currently under development, called Aladdin (see Fig-
ure 1). The robot has four flippers connected with the
main body and a system of tracks that covers both the flip-
pers and the main body, allowing for forward/backward
movements and turning. The high mobility of this kind
of crawler robot in complex environments is mainly due
to the moving flippers that allow for multi degree-of-
freedom maneuvers.
A 2D laser scanner (Hokuyo URG-04LX) is mounted
on top of the robot and provides a 2D scan of the surroundings. This allows for autonomous mapping and lo-
calization in the environment. The sensors used for the
semi-autonomous obstacle overcoming behavior include:
32 touch sensors mounted inside the main body, a set of
current sensors (URD Corp. HPS-10-AS, used to mea-
sure the torque on the front and rear flippers), and a grav-
ity sensor (Crossbow Corp. CXL04LP3).
Other details of this robot system can be found in [7].
2.2 Software
A set of software modules provides the robot with
the requested autonomous capabilities. Among them,
the most important are: two 2D mapping and localization modules, a ground contact detection mechanism, a
three-level navigation algorithm, and an autonomous
SICE Annual Conference 2008, August 20-22, 2008, The University of Electro-Communications, Japan
PR0001/08/0000-2066 ¥400 © 2008 SICE
Fig. 1 The Aladdin crawler robot
exploration method based on frontiers. A previous version
of this system, which includes the same exploration
method, the 2D localization and mapping capabilities and
an autonomous navigation module for flat ground, can
be found in [1]. These modules will be described in de-
tail in the following subsections.
The communication between the robot and the rescue
operators can be extremely unreliable in a rescue sce-
nario. This requires some degree of autonomy for the
robots to accomplish their mission. The exact level of au-
tonomy is decided by many factors, such as the current
communication reliability, the number of robots in the
team, and the need for operator intervention in critical
situations that the robot is not able to autonomously
deal with. As we will describe in detail in Section 2.3,
the different levels of autonomy are obtained by activating
or deactivating some software modules, and by giving the
operator the ability to take more or less control over robot
actions.
2.2.1 Mapping and localization
For mapping and localization purposes, we use the 2D
laser range finder. Some laser beams have been disabled
because they are directed towards the flippers when these
are raised. The localization method is scan-matching
based and does not need odometry data (that, indeed, is
very noisy in tracked robots). The mapping module inte-
grates the current laser scan readings with the map, using
an occupancy grid approach. The localization has
proven to be robust enough to slight changes in the pitch
or roll of the robot pose, and to be able to realign
the robot to the right position in the map when it returns
to horizontal ground. However, during such an event, the
resulting map can be damaged. For this reason, when an
excessive roll or pitch is detected (thanks to the gravity
sensor), the mapping module is immediately disabled to
avoid artifacts on the map, while the localization module
is kept running.
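This tilt-gating logic can be sketched as follows. The threshold value, the grid representation and the function names are our own illustrative assumptions, not the paper's implementation:

```python
import math

# A minimal sketch of tilt-gated occupancy-grid mapping. The 15-degree
# limit and the toy grid are illustrative assumptions only.
MAX_TILT = math.radians(15)  # hypothetical roll/pitch limit

class OccupancyGrid:
    """Toy occupancy grid: counts laser hits per integer cell."""
    def __init__(self):
        self.cells = {}

    def integrate(self, scan, pose):
        # scan: list of (x, y) endpoints in the robot frame;
        # pose: (x, y, theta) from the scan-matching localizer.
        px, py, th = pose
        for sx, sy in scan:
            wx = px + sx * math.cos(th) - sy * math.sin(th)
            wy = py + sx * math.sin(th) + sy * math.cos(th)
            cell = (int(wx), int(wy))
            self.cells[cell] = self.cells.get(cell, 0) + 1

def update_mapping(grid, scan, pose, roll, pitch):
    """Integrate the scan only when the robot is near-horizontal;
    on excessive tilt, mapping is skipped (localization keeps running),
    protecting the map from tilt-induced artifacts."""
    if abs(roll) > MAX_TILT or abs(pitch) > MAX_TILT:
        return False  # mapping disabled: map left untouched
    grid.integrate(scan, pose)
    return True
```

The key design point, as in the paper, is that only the map update is gated: the localizer continues to run, so the pose estimate can realign once the robot returns to horizontal ground.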
2.2.2 Exploration
When enabled, this module sends target positions to
the autonomous navigation algorithm. These target posi-
tions lie on the nearest frontier computed on the current
map, i.e., the border between free and unknown space.
This method has been proven to be able to guide the robot
to a full exploration of an unknown environment (see [8]).
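The frontier computation can be sketched on a 2D occupancy grid as follows: a frontier cell is a free cell adjacent to at least one unknown cell. The grid encoding and the nearest-frontier selection below are illustrative assumptions, not the paper's code:

```python
# Toy cell labels for a 2D occupancy grid (illustrative encoding).
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return all free cells bordering unknown space (the frontier)."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neigh = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols
                   and grid[nr][nc] == UNKNOWN for nr, nc in neigh):
                out.append((r, c))
    return out

def nearest_frontier(grid, robot):
    """Pick the frontier cell closest (squared Euclidean) to the robot;
    None means the map has no frontier left, i.e. exploration is done."""
    cells = frontier_cells(grid)
    if not cells:
        return None
    rr, rc = robot
    return min(cells, key=lambda p: (p[0] - rr) ** 2 + (p[1] - rc) ** 2)
```

When `nearest_frontier` returns `None`, every reachable free cell borders only known space, which is exactly the termination condition of frontier-based exploration.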
2.2.3 Navigation
The navigation subsystem has three levels. In the first
level, a topological path is computed on the global 2D
map, in order to guide the next levels and to avoid lo-
cal minima. In the next level, an RRT-based (Rapidly-
exploring Random Trees, see [6]) algorithm builds a tree
of plans in order to steer the robot toward the target posi-
tion; each step of each plan takes into account the robot
shape, collisions with obstacles, and the kinematic (and,
possibly, also the dynamic) constraints of the robot move-
ments. Moreover, the plan-building process is interleaved
with plan execution and correction, the latter being
necessary in order to allow for uncertainty in motion
command effects and for new obstacles. Details of these two
levels can be found in [3].
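A minimal illustration of the RRT idea used at this level might look like the following. This is not the paper's planner: it omits the robot shape, collision checks, and kinematic constraints, and simply grows a tree of fixed-length steps toward a goal in an empty 10 x 10 workspace; all names and parameters are our own assumptions:

```python
import math
import random

def rrt_plan(start, goal, step=0.5, iters=500, goal_every=5, seed=0):
    """Grow a tree of straight-line steps from start; return a path
    once a node lands within one step of the goal, else None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for i in range(iters):
        # Sample a point; every few iterations, sample the goal itself
        # (a simple deterministic goal bias).
        if i % goal_every == 0:
            sample = goal
        else:
            sample = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        # Extend the nearest tree node one fixed step toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        t = min(1.0, step / d)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if new in parent:
            continue  # avoid duplicate nodes
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:  # close enough: extract the plan
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None  # no plan found within the iteration budget
```

In the actual planner each tree extension is a feasible motion step checked against the map, and planning is interleaved with execution, so the tree is continually rebuilt as the robot moves.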
In the third level of the navigation system, a reactive
behavior to overcome obstacles is provided. While the
previous two levels control the speeds of the tracks (i.e.,
steer the robot in 2D), this behavior makes use of a set
of rules and thresholds to decide the position of the flippers in order to avoid rollovers and to climb over obsta-
cles. Sensors used for these decisions are the touch sen-
sors (that detect the robot body contact with the ground),
the current sensors (that detect the flipper contact with
some obstacles) and the gravity sensor (that detects roll
and pitch changes). The rules for the front flippers are
shown in Figure 2; an analogous set of rules has been de-
veloped for the rear flippers.
2.3 Autonomy levels and HRI
The system can operate in four different degrees of au-
tonomy and the human operator can switch among them
during the mission:
- Full autonomy. In this mode, the exploration module
periodically sends target positions (which lie on a frontier)
to the navigation subsystem. If the communication with
the operator is working, the system sends a warning to the
operator, but keeps on behaving autonomously.
- Partial autonomy. In this case, the exploration module
is disabled. The operator, through a graphical interface,
can send target positions to the robot. The navigation
subsystem autonomously guides the robot to the targets
and builds the map.
- Partial tele-operated mode. The exploration is disabled,
as well as the first two modules of the navigation
subsystem. The semi-autonomous obstacle overcoming

                          FlipperContact:  Contact  Contact   NotContact  NotContact
BodyContact               RobotPitch:      Upward   Downward  Upward      Downward
Contact                                    DOWN     UP        UP          UP
NotContact                                 DOWN     UP        DOWN        DOWN

IF (FlipperContact == Contact AND RobotPitch == Upward)
   OR (BodyContact == NotContact AND FlipperContact == NotContact)
THEN FrontFlipper = DOWN
ELSE FrontFlipper = UP

Fig. 2 Control rules for the front flipper of the Aladdin robot
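The Figure 2 rule translates directly into executable form. The boolean predicates below are a simplification of the thresholded sensor readings (touch sensors for body contact, current sensors for flipper contact, the gravity sensor for pitch):

```python
def front_flipper_command(flipper_contact, body_contact, pitch_upward):
    """Return 'DOWN' or 'UP' for the front flippers, per the Fig. 2 rule.

    flipper_contact: current sensors detect flipper contact with an obstacle
    body_contact:    touch sensors detect body contact with the ground
    pitch_upward:    gravity sensor reports an upward pitch
    """
    if (flipper_contact and pitch_upward) or \
       (not body_contact and not flipper_contact):
        return "DOWN"
    return "UP"
```

For example, a flipper touching an obstacle while the robot pitches upward commands DOWN (pushing the robot up onto the obstacle), while no contact anywhere also commands DOWN (reaching for the ground to avoid a rollover).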
behavior is kept running; in this way, the operator can
decide the speed of the tracks (i.e., the linear and turning
speed in 2D), while the module chooses the flipper
positions.
- Full tele-operated mode. In this case, the operator
also has full control over the flipper positions. This is
the most difficult and time-consuming mode, and in our
experiments it has been used only to recover from a stuck
flipper (see below), an operation that usually lasts a
very short time (less than 5 seconds).
If the communication channel goes down, the robot auto-
matically switches to Full autonomy from any other op-
erational mode.
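The fallback rule can be made concrete with a small sketch. The mode names and controller interface here are our own; only the fall-back-to-full-autonomy rule on link loss comes from the text above:

```python
from enum import Enum

class Mode(Enum):
    """The four autonomy levels of the system."""
    FULL_AUTONOMY = 1
    PARTIAL_AUTONOMY = 2
    PARTIAL_TELEOP = 3
    FULL_TELEOP = 4

def next_mode(current, link_up, operator_request=None):
    """While the link is up, the operator may switch modes at will;
    on communication loss the robot falls back to full autonomy
    from any other operational mode."""
    if not link_up:
        return Mode.FULL_AUTONOMY
    return operator_request if operator_request is not None else current
```

This makes full autonomy the safe default: the robot keeps exploring and mapping on its own until the operator can reconnect and take back control.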
3. EXPERIMENTS
Experiments have been performed both in a real sce-
nario, using the Aladdin robot described before, and us-
ing the USARSim simulator1 [2]. In all experiments, the
robot's task is to autonomously explore the whole arena; if
the robot is in a critical situation, the operator can intervene
and resume the mission.
Experiments in the real environment have been con-
ducted in a small rescue arena [5] and show the effective
exploration behavior of the robotic system even in the
presence of small obstacles and debris. The robot au-
tonomously explores the environment, detects loss of
contact with the ground and other kinds of mobility
hazards, and is able to address them. In Figure 3 we show
a map built during a mission: in the position marked with
1, the robot starts its exploration; the small purple box
labeled with 2 is the mark put by the hazard detection
module (it is the place where contact with the ground is
detected; in that place, a 20 cm wooden bar, not visible
to the laser scanner, lies on the ground); finally, the
robot's last position is marked with 3.
Another set of experiments has been performed using
the USARSim simulator and a simulated model of the
robot that is similar to Aladdin (i.e., a tracked robot with
four independent flippers). A rescue arena has been built
inside the simulated world and contains obstacles and
debris that the laser scanner cannot detect (see Figure 5).
The arena is 7 × 5 m.
Figure 4 shows the results of the experiments con-
ducted in the simulated arena. In only one case did a robot
flipper get blocked in one of the wooden frames in such a
way that the operator's intervention could not restore the
robot's functionality and allow it to continue the mission.
Figure 6 shows the resulting map of one of the missions.
On the left side of
1 http://usarsim.sf.net
Fig. 3 The map built during the real experiment: 1)
robot start position, 2) the obstacle detected using
touch/torque sensors, 3) robot final position
Fig. 5 The simulated scenario used in the experiments.
the figure, a partial map is shown: purple bars show de-
tected obstacles that lie outside of the 2D laser scanner
detection plane, a green cross shows the current target of
the robot navigation subsystem (the target is on a fron-
tier), and a tree of plans is shown starting from the robot
position. On the right side of the figure, the full map of
the environment is shown; the places marked with "op"
are where the operator's intervention has been necessary
for the robot to continue the mission.
4. CONCLUSION AND FUTURE WORK
The robot system presented in this paper has
proven to be effective in scenarios where small obstacles
and mobility hazards are present. The set of experiments
shows that the operator intervention is very limited and
is mainly due to the flippers, which can get blocked within
Completed missions                   90%
Mission duration                     7.8 min
Operator intervention duration       0.6 min
Operator interventions per mission   1.8
Tele-operated time                   11.4 s

Fig. 4 Results of the simulated missions (mean over 10 missions)
Fig. 6 On the left: the tree of motion plans, built while the robot is moving. On the right: the final map built during the
autonomous exploration of the simulated scenario.
obstacles. For this reason, the obstacle overcoming be-
havior has recently been improved in order to prevent
the flippers from getting stuck [9]. Indeed, this happened
often during the experiments and is the main reason for
operator interventions. As future work, we want to
integrate this improvement into the presented system.
The major drawback of the presented approach is the
use of 2D-based localization and mapping methods.
These are stable enough to be used in simple situations,
but eventually fail in more unstructured environments. In
order to overcome these limitations, two improvements
are proposed: on the one hand, a hardware mechanism
able to keep the laser range finder horizontal will
improve localization (this method is currently being
investigated in other research works by the scientific
community); on the other hand, the use of an additional
tilted (i.e., pointing toward the ground) laser can make
it possible to measure elevation maps (the so-called 2.5D
maps) and detect possible mobility hazards before the
robot reaches them. The latter method has already been
developed and applied on a simulated robot, and was
successfully demonstrated during the RoboCupRescue
Virtual Robots competition in 2007 [4].
REFERENCES
[1] S. Bahadori, D. Calisi, A. Censi, A. Farinelli, G. Grisetti, L. Iocchi,
and D. Nardi. Intelligent systems for search and rescue. In Proc. of the
IROS Workshop "Urban Search and Rescue: from RoboCup to Real World
Applications", Sendai, Japan, 2004.
[2] S. Balakirsky, C. Scrapper, S. Carpin, and M. Lewis. USARSim:
providing a framework for multi-robot performance evaluation. In
Proceedings of the International Workshop on Performance Metrics for
Intelligent Systems (PerMIS), 2006.
[3] D. Calisi, A. Farinelli, L. Iocchi, and D. Nardi. Autonomous
navigation and exploration in a rescue environment. In Proceedings of
the IEEE International Workshop on Safety, Security and Rescue Robotics
(SSRR), pages 54-59, Kobe, Japan, June 2005. ISBN: 0-7803-8946-8.
[4] D. Calisi, A. Farinelli, and S. Pellegrini. RoboCupRescue - Virtual
Robots League, Team SPQR-Virtual, Italy, 2007.
[5] A. Jacoff and E. Messina. DHS/NIST response robot evaluation
exercises. In IEEE International Workshop on Safety, Security and
Rescue Robotics, Gaithersburg, MD, USA, 2006.
[6] J. Kuffner and S. LaValle. RRT-Connect: An efficient approach to
single-query path planning. In Proc. of the IEEE Int. Conf. on Robotics
and Automation (ICRA), 2000.
[7] K. Ohno, S. Morimura, S. Tadokoro, E. Koyanagi, and T. Yoshida.
Semi-autonomous control of a 6-DOF crawler robot having flippers for
getting over unknown steps. In Proc. of the IEEE/RSJ Int. Conf. on
Intelligent Robots and Systems (IROS), pages 2559-2560, 2007.
[8] B. Yamauchi. A frontier-based approach for autonomous exploration.
In IEEE International Symposium on Computational Intelligence in
Robotics and Automation, 1997.
[9] T. Yuzawa, K. Ohno, S. Tadokoro, and E. Koyanagi. Development of a
sensor-reflexive flipper control rule for getting over unknown 3D
terrain. Submitted.