Nomadic Surveillance Camera
Nuno Manuel Martins Ribeiro
Thesis to obtain the Master of Science Degree in
Electrical and Computer Engineering
Examination Committee
Chairperson: Professor João Fernando Cardoso Silva Sequeira
Supervisor: Professor José António da Cruz Pinto Gaspar
Member: Professor Artur Miguel do Amaral Arsénio
October 2013
This dissertation was performed under the supervision of:
Professor José António da Cruz Pinto Gaspar
Doctor Ricardo Jorge Santos Ferreira
Agradecimentos
Over these five years of study at Instituto Superior Técnico, I faced many challenges and difficulties, which I overcame through dedication and effort, always counting on the excellent help of people I cannot fail to mention here. I thank the indispensable and valuable help of all the colleagues and professors who accompanied me along this path. I also thank my whole family and friends, who supported me unconditionally in many ways. In particular, I want to thank my father, José Eduardo Ribeiro, my mother, Maria Clara Martins Fidalgo, and my godfather, Manuel Francisco Martins Fidalgo, who were always present and attentive to my needs, always contributing with friendship and affection, and to whom I dedicate this thesis. A special thanks also to Professor José Gaspar, who tirelessly helped me carry out this work, never refusing to lend a hand, and who contributed greatly to my education. Many thanks to Researcher Ricardo Nunes for the technical support provided during this work, and to Doctor Ricardo Ferreira for co-supervising it.
Without all of them, the obstacles to achieving my goals during the academic phase of my life would have been much greater, and it is hard to find words to express my gratitude. Many thanks to everyone.
Resumo
The use of personal mobile robots, teleoperated and autonomous, that are easy to use is starting to become a reality. Within a few years, personal robots are expected to have a ubiquitous presence in human society. Currently, fostering the development of personal robots still means exploring and testing multiple essential technologies, both hardware and software. In this project we propose developing the communications with a robot through a wireless IP camera. The fundamental idea is that the bandwidth required for motion commands is much smaller than that required for video streaming. In other words, commands for the robot can be embedded in the video streaming without significant interruptions. With the communications established, we then present a methodology to control the robot over a network and make it return autonomously to its initial position. In this thesis, we detail the components forming the robot in terms of hardware and software, show navigation experiments based on Monocular SLAM, and propose a methodology to regulate and decrease the accumulated odometry error associated with the absence of a map in the Monocular SLAM algorithm used. We also present strategies to make the robot return to its initial position (homing), corresponding to the point from where it started being teleoperated.
Keywords: Teleoperation, MonoSLAM, Homing, Camera, Arduino, Robot.
Abstract
Personal, inexpensive, easy-to-use, teleoperated, autonomous mobile robots are starting to become a reality and are expected to be widespread within a few years. Fostering the development of these robots implies testing various hardware and software technologies. In this work we propose developing the communications with the robot through a wireless IP camera. The rationale is that the bandwidth required for motion commanding and reporting is much smaller than that required for video streaming. Therefore, robot commands can be embedded non-disruptively within the video streaming. We present a methodology to control the robot over a network and make it return to its initial position. In this thesis we detail the base components forming the robot in terms of hardware and software, show navigation experiments based on Monocular SLAM, and propose a methodology to regulate the accumulated odometry error associated with map-less Monocular SLAM. We also present strategies for homing the robot to the point from where it started being teleoperated.
Keywords: Teleoperation, MonoSLAM, Homing, Wireless IP Camera, Arduino, Car-like Robot.
Contents
Agradecimentos
Resumo
Abstract
1 Introduction
   1.1 Related Work
   1.2 Problem Formulation
   1.3 Thesis Structure
   1.4 Publications and Acknowledgments
2 Hardware and Software Setup
   2.1 Hardware components
   2.2 Design
      2.2.1 Hardware
      2.2.2 Communications and Software
3 Camera and Car Modeling
   3.1 Camera Model
   3.2 Camera Calibration
   3.3 Kinematic model
      3.3.1 Continuous model
      3.3.2 Discrete model
4 Navigation and Control
   4.1 MonoSLAM
      4.1.1 Algorithm
      4.1.2 State vector
   4.2 Pose estimation from camera matrix
   4.3 Autonomous Homing
      4.3.1 Open Loop Homing
      4.3.2 Visual Odometry Based Homing
      4.3.3 Map Based Homing
5 Experiments
   5.1 MonoSLAM, self-localization error in a straight line path
   5.2 Open Loop Homing experiment
   5.3 Map Based Homing simulation
6 Conclusion and Future Work
A Compiling and executing a program on Axis 207w
   A.1 Building the cross-compiler
   A.2 Compiling code
   A.3 Transferring the program to the camera
   A.4 Executing the program on the camera
B Arduino code
C Corner extraction and camera calibration using Bouguet's Toolbox
List of Figures
2.1 Conceptual design of our system.
2.2 (a) Axis 207w camera. (b) 8× 1.2 V AA battery pack. (c) Lithium batteries and USB box. (d) Arduino microcontroller. (e) Arduino Motor Shield. (f) Arduino and Motor Shield assembled.
2.3 Design of the project encompassing a teleoperation stage.
2.4 PWM signal.
2.5 (a) Electric scheme of the robot using 1.2 V AA batteries. (b) Electric scheme of the robot using lithium batteries.
2.6 Robot setup. Final version after assembling all hardware components.
2.7 Arduino program to receive one command, where x represents the digital input value received from the camera. State 1 corresponds to wait-for-transition from HIGH to LOW. State 2 corresponds to wait-for-transition from LOW to HIGH. State 3 occurs when a complete command has been received. State 4 is reached when a crash is detected due to a non-detected transition from HIGH to LOW within a predefined period of time.
3.1 Thin lens model.
3.2 (a) Pinhole camera model. (b) Alternative representation of the pinhole camera model.
3.3 Parameters describing the car model and motion.
4.1 Design of the project at the homing stage.
5.1 Straight line experiment.
5.2 (a) First image of MonoSLAM running in the straight line experiment. (b) MonoSLAM catching a feature on the white pen. (c) Feature on the white pen getting closer as the camera moves on. (d) Last image of MonoSLAM running in the straight line experiment.
5.3 Ruler vs MonoSLAM measurement to get the scale factor.
5.4 (a) Ruler vs MonoSLAM measurement in the same metric scale. (b) Difference between MonoSLAM scaled measurements and ruler measurements.
5.5 (a) Start of experiment. (b) Start of homing procedure. (c) After reversing, the car goes to its initial position. (d) End position and orientation measurement.
5.6 (a) Beginning of Map Based Homing simulation. (b) Beginning of homing procedure. (c) Homing procedure at middle. (d) Final car pose.
C.1 (a) One of the images used in calibration. (b) First corner to be clicked. (c) Second corner to be clicked. (d) Third corner to be clicked. (e) Fourth corner to be clicked and guess of square corner positions made by the program. (f) Final image with corrected square corner positions.
List of Tables
2.1 Pins of the shield, divided by channel.
5.1 Real and MonoSLAM measurements comparison.
Chapter 1
Introduction
Teleoperated robots are already a reality in a vast number of sectors, including space, marine applications and telemedicine. Teleoperation allows reaching places that are harmful to humans or hard for them to access. Examples of such places are a sunken ship at the bottom of the ocean, the core of a nuclear reactor, unexplored planets in space or even a fragile region of the human body affected by a tumor. One of the areas where robots are becoming more interesting is surveillance and security. The idea of a robot capable of being controlled remotely, or of patrolling autonomously while monitoring for certain human behavior patterns, is appealing, especially in environments like prisons, where correctional officers have poor working conditions.
Current personal robots are starting to have affordable prices. These robots can be vacuum cleaners such as the iRobot Roomba, telepresence robots such as Anybots' QA, or simply mobile webcams such as WowWee's Rovio. Most of these robots combine mobile robotics, video cameras and wireless communications. Batteries and communications are still major bottlenecks, since they severely restrict the autonomy of the robot in terms of operation time and space. Wireless surveillance cameras are a promising tool for building personal robots because they already have efficient radio communications and minimized energy consumption.
For the reasons already mentioned, in this work we propose using wireless surveillance cameras as a basis for building networked mobile robots. The application we envisage for the mobile robots is home surveillance. More precisely, we explore the possibility of having a teleoperated robot working indoors, allowing the owner of the robot to check on their home remotely.
In order for a teleoperated robot to minimize the effort of the teleoperator, and actually recover from faults such as communication failures, the robot is supposed to have some autonomous features, e.g. self-guidance to a charging station. Self-guidance implies having some means of self-localization.
We approach self-localization using Monocular SLAM. Monocular SLAM allows creating a 3D feature-based representation of the world using only one camera. Visual odometry based on SLAM has the potential to determine the pose of the robot, despite the expected errors. Decreasing or resetting this error is essential when no other sensory information is available. In the next chapters we present strategies, not only to navigate the robot, but also to decrease the uncertainty about the estimated localization.
1.1 Related Work
Camera calibration can nowadays be done with freely available software such as J. Y. Bouguet's toolbox [7]. The resulting intrinsic parameters fulfill the requirement of having an accurate projection model for our camera. If the extrinsic parameters of a camera in a natural scenario must also be estimated, one has to use other methodologies. From "Calibrating a Network of Cameras based on Visual Odometry" [17] we extracted information about decomposing a non-singular matrix using Gram-Schmidt, since we required a strategy to decompose a projection matrix in order to determine the camera localization. To understand the kinematics of our robot we resorted to "Car Automatic Parking" [12], which presents the kinematic model of a car-like robot.
In "Effective Vehicle Teleoperation on the World Wide Web" [21] the authors propose a strategy for web-based vehicle teleoperation and present a graphical user interface. The design of a teleoperated miniature aerial vehicle is described in "Teleoperation Assistance for an Indoor Quadrotor Helicopter" [15]. "Histogramic in-motion mapping for mobile robot obstacle avoidance" [6] presents a method for real-time map building with a mobile robot in motion. "Tele-autonomous Guidance for Mobile Robots" [5] presents a strategy for fast teleoperated mobile robots with obstacle avoidance. "A Mobile Platform for Nursing Robots" [4] describes a computer-controlled vehicle which is part of a nursing robot system under development. "STRIPE: Remote Driving Using Limited Image Data" [16] describes a strategy for teleoperation over a very low-bandwidth communication link.
"Control an RC car from a computer over a wireless network" [10] presents software and documentation for teleoperation, and suggests homing as an important topic for future work, without developing it in their project. We took advantage of some of their software, more precisely, the communications software used to acquire a video feed from the camera and to send instructions that can be interpreted inside the camera.
"Insect Strategies of Visual Homing in Mobile Robots" [18] proposes a strategy for homing: if a robot takes snapshots of the surroundings of the target location, it can compare the current image with the stored ones and derive a home direction from the comparison. In this work we present a similar strategy. Another homing strategy is presented in "Robot Homing based on Panoramic Vision" [2], which uses corner extraction and feature matching by means of a panoramic camera.
In "Learning Maps for Indoor Mobile Robot Navigation" [23] the grid-based map approach and the topological map approach are integrated. "Mobile Robot Localization using Active Vision" [11] introduces an automatic system able to detect, store and track suitable landmark features during goal-directed navigation. "Visual Navigation using a single camera" [8] assesses the usefulness of monocular recursive motion techniques for vehicle navigation in the absence of an environmental model. "3D Positional Integration from Image Sequences" [14] describes a three-dimensional representation constructed from feature points extracted from a sequence of images. The article "Active Visual Navigation using Non-Metric Structure" [3] introduces a method of using non-metric visual information from an uncalibrated camera to navigate a robot.
The article "1-Point RANSAC for EKF Filtering, Application to Real-Time Structure from Motion and Visual Odometry" [9] shows that it is conceptually possible to perform visual odometry with monocular vision alone, for example with a robot like ours. The proposed algorithm also allows self-localization and robust mapping.
1.2 Problem Formulation
In this work we propose assembling a car-like robot using a wireless camera and onboard processing. The robot must be controllable over a network and able to localize itself without any previous knowledge of the environment. The robot must also be able to return to its initial position autonomously.
The objectives of this work are therefore threefold: (i) assemble a test platform consisting of a wireless camera and a microcontroller mounted on a car; (ii) develop the required communication software between the different processing modules (PC, wireless IP camera, onboard microcontroller); (iii) design a homing strategy which enables the robot to return autonomously to its initial position, using a Monocular SLAM solution.
1.3 Thesis Structure
Chapter 1 introduces the objectives of this work and, in particular, presents a short discussion of the state of the art and our expectations for the project at hand. In Chapter 2, we present our experimental setup and describe the hardware and software components involved. Chapter 3 presents the theory behind our camera and the kinematic model of the vehicle. Chapter 4 describes the possible strategies for homing and self-localization. Experimental results are shown and explained in Chapter 5. Chapter 6 summarizes the work done and highlights the main achievements and what we learned from this work. Moreover, this chapter proposes further work to extend and improve the strategies and experiments described in this document.
1.4 Publications and Acknowledgments
The work described in this thesis has been partially published in RecPad 2013 [19]. This work has been partially supported by the FCT project PEst-OE/EEI/LA0009/2013, and by the FCT project PTDC/EEACRO/105413/2008 DCCAL.
Chapter 2
Hardware and Software Setup
The first step of our work consists of creating a compact, robust and cost-controlled setup. A number of off-the-shelf hardware components and software elements can be used. They allow us to acquire sensory information from the world, process data, control our autonomous system and run communication protocols.
We want to assemble a camera and processing power on a mobile platform. In order to keep the costs low, we decided to use the body and motors of a commercial RC car. Using such a vehicle as our mobile platform imposes restrictions on weight and space. For example, we cannot use a laptop for local image processing. This constraint led us to consider an off-board PC, the so-called user PC, and to use wireless communications between our robot and the user PC, for teleoperation and autonomous tasks.
The camera and the wireless communications can be combined in a single off-the-shelf component, a wireless IP camera. The rationale is that the bandwidth required for motion commanding and reporting is much smaller than that required for video streaming. Therefore, robot commands can be embedded non-disruptively within the video streaming. The chosen camera, a surveillance camera, is not designed to send commands to the motors directly. Hence we added a microcontroller to receive the commands from the camera and translate them into signals sent to the motors.
We needed to power our setup, guaranteeing a constant input voltage on the hardware (especially on the camera, as will be explained later) without increasing the weight of the robot too much. The overall system has also been specified to be cost-controlled. Hence we took the decision to use normal AA batteries. Later on we switched to a better solution based on lithium batteries.
Another important part of the complete system is the software involved. This includes the program on the user PC, which acquires and processes data from the camera and sends commands to the robot, but also the software within the camera and the Arduino, which are responsible for establishing communication and controlling the motors. The conceptual design of our system is shown in fig. 2.1.
Figure 2.1: Conceptual design of our system.
2.1 Hardware components
To build the proposed setup, the hardware was selected to be off-the-shelf as much as possible. All of these components have strict specifications, which are described in the design section. Every element is important for the correct behavior of the complete system. The body and DC motors of the car are from a commercial R/C car. The wireless IP camera used is an Axis 207w, a camera built for the surveillance market. The microcontroller is an Arduino Uno. The Arduino cannot output sufficient current to power the motors, so the Arduino Motor Shield is mounted on the Arduino Uno. The hardware components used are shown in fig. 2.2.
Figure 2.2: (a) Axis 207w camera. (b) 8× 1.2 V AA battery pack. (c) Lithium batteries and USB box. (d) Arduino microcontroller. (e) Arduino Motor Shield. (f) Arduino and Motor Shield assembled.
2.2 Design
An important part of this project was the integration of all hardware elements, making the setup reliable. A good configuration allowed us to control the car in real time and get the images from the camera to the computer for image processing. One operational step consists of establishing the connection between the camera mounted on the RC car and the user's computer, and creating a way to send commands from the user's computer to the camera. We also need a way to send commands from the camera to the microcontroller, and a way to send commands from the microcontroller to the car motors. No less important is getting a real-time video feed from the camera to the user's computer (to allow real-time driving) and creating a friendly graphical user interface for teleoperation. A more specific design of our project at this stage is shown in fig. 2.3.
Figure 2.3: Design of the project encompassing a teleoperation stage.
2.2.1 Hardware
The hardware functionality is a key aspect of this project. The hardware design is divided into
the elements detailed in the following subsections.
Wireless IP camera
The camera used in this project is an Axis 207w. It has an ARTPEC processor running an ARMv4tl architecture, allows wired and wireless connections, and has one I/O terminal connector. The processor within the camera can execute a Linux OS. The I/O terminal connector is composed of four pins: two pins are for power sourcing and the other two are a digital input and a digital output. The digital output has a minimum output interval of 10 ms. The camera has two main functions. The first is making available a video feed that can be sent wirelessly to the user's computer, where each frame can then be opened and processed. The second is running a TCP/IP server, so that a user can connect and send commands to control the car. This is possible because the camera runs a Linux operating system, which allows us to run a TCP server and to activate/deactivate the digital output of the camera; that output is the door through which we send signals to the microcontroller for speed and steering control. The camera requires a constant voltage within the range of 4.9 V to 5.1 V.
Arduino microcontroller
In order to control the motors, we need a programmable microcontroller. More precisely, the microcontroller controls the speed and direction of rotation of each motor. We can control the speed of rotation with a Pulse Width Modulated (PWM) signal supplied by the microcontroller. The camera, with its minimum output period of 10 ms, is not adequate for supplying a PWM signal directly. Instead, the Arduino receives a message from the IP camera and translates it into the appropriate signals for the steering and speed control. The frequency of the PWM signal used is 32 kHz. The PWM configuration also allows changing the signal duty cycle, which translates into intermediate voltage reference values for the motors (fig. 2.4). This way, one can set the speed or the turn of the robot to levels between its maximum and minimum.
Arduino Motor Shield
As described in the Arduino Motor Shield guide [1], the Arduino Motor Shield is based on the L298, a dual full-bridge driver designed to drive inductive loads such as relays, solenoids, DC motors and stepper motors. It lets us drive two DC motors from the Arduino board, controlling the speed and direction of each one independently. We can also measure the current absorption of each motor, among other features. Its operating voltage is 5 V to 12 V. The shield has two separate channels, and each one uses 4 of the Arduino pins to drive or sense a motor. Each channel supplies a maximum current of 2 A. We connect the two wires of each DC motor to the (+) and (-) screw terminals of a channel. Direction is then controlled by setting the direction pins HIGH or LOW, and speed by varying the PWM duty cycle.
Figure 2.4: PWM signal.
Function          Ch. A   Ch. B
Direction         D12     D13
PWM               D3      D11
Brake             D9      D8
Current Sensing   A0      A1
Table 2.1: Pins of the shield, divided by channel.
DC motors
The two electric motors in the car are DC motors, whose principle of operation we do not detail here (see [20]). The DC motors are controlled by the Arduino (microcontroller) plus Motor Shield. The current sent to the motors is a high-frequency PWM signal. In that way, we avoid forcing the motors to their maximum, which is especially relevant for the front (steering) motor, because it should not saturate at the maximum turn angle. If that happens, i.e. if the motor constantly forces the steering end stop, the car will not turn any further and the motor can be damaged. As already stated, a PWM signal allows reaching intermediate values for the direction of the vehicle.
Battery
The original RC car, from which we used the chassis and the motors, had a holder for a pack of 4 AA batteries. Initially, we replaced it with one that holds 8 AA batteries. The car can work, for some time, with a pack of 8× 1.2 V (9.6 V) AA batteries at 1000 mAh. With this power source, the camera is powered by the Arduino, which means that all the power required to feed the robot passes through the Arduino. This leads to substantial heating of some sections of the Arduino. This strategy is shown in fig. 2.5(a), where it is visible that the camera is powered by the 5 V output port of the Arduino. This electrical scheme works; however, intense use of our robot shortens the batteries' run-time, and the voltage drop quickly becomes critical, decreasing the motors' power and making the car slower.
Figure 2.5: (a) Electric scheme of the robot using 1.2 V AA batteries. (b) Electric scheme of the robot using lithium batteries.
To improve the reliability and operation time of our robot, we decided to use another power source. Using a box with 2 USB output ports, with four Panasonic NCR18650B lithium-ion batteries inside (each one delivers a current of 2.5 A and has a capacity of 3400 mAh), it was possible to separate the power source of the Arduino, Motor Shield and motors from the power source of the camera. This way, the camera is no longer powered by the electric system on the Arduino, meaning the noise introduced by the Arduino Motor Shield and motors no longer reaches the camera. However, this design imposes the use of a DC-DC converter to boost the voltage fed to the Motor Shield, since the 5 V output of the box is not enough to meet the motors' requirements. With this strategy, we can use our robot for a number of hours without having to recharge. This strategy is shown in fig. 2.5(b). A picture of our robot is in fig. 2.6.
Figure 2.6: Robot setup. Final version after assembling all hardware components.
2.2.2 Communications and Software
The hardware set is the foundation of our work, but software is required to fulfill complex tasks. These involve data processing and communication protocols, which will be explained in the next chapters. The architecture is centered on a server running on the camera, which receives digital commands from the user's PC program and translates them into a signal that is interpreted by the Arduino program. The user also requests a video feed for image processing, which is sent by the camera.
User PC
To fulfill the need for a user program that allows us to communicate with the server and get the video feed from the camera, we decided to use a Java program made by [10], adapted to our hardware and protocols. Once started, this program opens a GUI window, where we can see the camera image feed, retrieved with HTTP requests. This window has two tabs, one called View and the other called Setup. In the View tab, in addition to the camera view, we can also control the car using the keyboard. In the Setup tab we can set the network location (IP address) where the camera server is located and its corresponding port number.
To adapt the program to our needs, we made some improvements and created new methods. To prevent the case where the user is controlling the car and decides to close the application without stopping the car, we make the car stop when the close button is pressed, by sending the appropriate commands to the robot before the application closes. We created a method that allows an outside program to request the current frame (useful from the point where we start to use MATLAB for real-time experiments). We also created a method to be called when we want to send commands to the robot from MATLAB instead of from the keyboard, and a method to save the video feed to the hard disk, in order to create datasets which can be processed later in MATLAB without the real-time constraints. For efficiency reasons, this last method is never executed in real-time experiments.
Connection between user’s PC and camera (video feed)
As already mentioned, we use the Java software from [10], adapted to our case. If the request for an MJPEG video within the program is successful, the server in the camera returns a continuous flow of JPEG files. Note that one can see the video feed of the camera at any time by entering the IP address associated with the camera in a web browser.
Connection between user’s PC and camera (commands)
To send commands to the camera, in order to control the car, we applied a Client/Server architecture using a TCP/IP socket connection. The Axis 207w camera has an embedded Linux operating system. Since it runs on an ARMv4tl processor, the server program had to be written in C and compiled using the x86/x64-to-ARM cross-compiler. In the Java program running on the user's PC, when the connect button is pressed, a request to start a TCP connection is made to the server running in the camera. After the camera accepts this connection, the user is able to start sending commands. A key listener interface listens to the keyboard. Once a key is pressed, an event is triggered and an integer, corresponding to the required command, is sent to the camera server over the TCP/IP connection. Basically, the server program functionality boils down to opening a socket, listening on it, accepting a client that tries to establish a connection, and interpreting the received commands.
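The camera-side server itself is written in C, but its accept-and-parse loop can be sketched in a few lines. The Python below is an illustrative model only; the port number and the one-byte-per-command encoding are assumptions for the sketch, not the real protocol.

```python
import socket

def run_command_server(handle_command, host="0.0.0.0", port=5000,
                       max_commands=None):
    """Minimal accept-and-parse loop mirroring the camera-side server:
    open a socket, listen on it, accept one client, and interpret each
    received byte as an integer command."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            n = 0
            while max_commands is None or n < max_commands:
                data = conn.recv(1)
                if not data:
                    break  # client closed the connection
                handle_command(data[0])  # e.g. map the integer to a motor action
                n += 1
```

The `handle_command` callback stands in for the part of the real server that translates a command into digital-output pulses for the Arduino.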
Connection between camera and microcontroller
When a connection with a user is accepted, the server in the camera receives message commands, from which it produces signals to send to the microcontroller, corresponding to the right instruction according to the Arduino protocol we implemented. The signal comes out of the digital output of the camera and is received in the Arduino by one of the digital I/O ports. These signals are created by activating and deactivating the digital output present in the camera. The digital output can be activated/deactivated using the Linux function ioctl. Note that, unlike the TCP/IP connection between the user and the camera server, where the protocol requires that every byte stream sent is answered with acknowledgments by the other side, the wired connection between the camera and the Arduino is much simpler. The Arduino only receives bits, never sends them. It detects the right control command to apply to the motors by counting the number of transitions from LOW to HIGH sent by the camera output. After an end-of-message timeout is reached, the program prepares itself to receive a new instruction. If the camera crashes while its digital output is HIGH, the Arduino program will recognize the crash and return to the initial state. After the Arduino receives a complete instruction, it updates the signals which are constantly controlling the motors. The Arduino program is in appendix B and its design is shown in fig.2.7.
Figure 2.7: Arduino program to receive one command, where x represents the digital input value received from the camera. State 1 corresponds to waiting for a transition from HIGH to LOW. State 2 corresponds to waiting for a transition from LOW to HIGH. State 3 occurs when a completed command was received. State 4 is reached when a crash was detected due to a non-detected transition from HIGH to LOW within a predefined period of time.
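The pulse-counting idea of fig.2.7 can also be modelled in software. The sketch below counts LOW-to-HIGH transitions on a sampled input signal; the timeout thresholds are expressed in samples and chosen arbitrarily for illustration (the real Arduino sketch in appendix B measures elapsed time rather than sample counts).

```python
def decode_command(samples, eom_timeout=5, crash_timeout=20):
    """Decode one command from a list of sampled digital-input values
    (0/1) by counting LOW->HIGH transitions.

    Returns the transition count once the line has stayed LOW for
    `eom_timeout` consecutive samples after at least one pulse (end of
    message), or None if the line stays HIGH for `crash_timeout`
    samples (camera crash: return to the idle state)."""
    count = 0
    prev = 0
    low_run = 0
    high_run = 0
    for x in samples:
        if x == 1 and prev == 0:
            count += 1              # one LOW->HIGH transition counted
        if x == 0:
            low_run += 1
            high_run = 0
            if count > 0 and low_run >= eom_timeout:
                return count        # end-of-message timeout reached
        else:
            high_run += 1
            low_run = 0
            if high_run >= crash_timeout:
                return None         # crash detected: back to idle
        prev = x
    return None                     # signal ended without a full message
```

For example, the sampled signal 0, 1, 0, 1, 0, 0, 0, 0, 0 decodes to command 2 with the defaults above.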
Chapter 3
Camera and Car Modeling
The camera is the sensory device of our setup, from which we get information about the world; from that we can compute the localization of the camera and choose the next action for the robot to take. Since the motors of the car are the actuators, it is important to know the kinematic equations of the car to predict its movement and, therefore, improve the estimation of the localization measurement and the knowledge of what kind of trajectories the car is able to do. In this chapter, we detail how 3D points in a generic reference frame translate into 2D points in an image plane and how the camera parameters can be estimated. We also describe the continuous kinematic model of the car and its discrete version, necessary for processing in a computer.
3.1 Camera Model
Figure 3.1: Thin lens model.
In this section, we detail the camera model, i.e. the model that allows computing the projection of a 3D point on a 2D plane, the image plane. We start by presenting the thin lens model, see fig.3.1, where $f$ is the focal length and $d$ is the thickness of the lens, with $d \ll f$.
An approximation to this model is the pinhole camera, a model without any lens, presented in fig.3.2.
Figure 3.2: (a) Pinhole camera model. (b) Alternative representation of the pinhole camera model.
Using the similarity of triangles we can conclude that

$$\frac{X}{Z} = \frac{x}{f}, \qquad \frac{Y}{Z} = \frac{y}{f} \tag{3.1}$$

where $X, Y, Z$ are the 3D coordinates of a point in camera coordinates and $x, y$ are the corresponding coordinates of the projection on the image plane. Solving for $x$ and $y$,

$$x = f\frac{X}{Z}, \qquad y = f\frac{Y}{Z} \tag{3.2}$$

and defining

$$\lambda = Z \tag{3.3}$$

one has that $\lambda$ denotes a scale factor representing depth. After this, we can write the system

$$\begin{bmatrix} \lambda x \\ \lambda y \\ \lambda \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \tag{3.4}$$
which can be rewritten for future convenience as

$$\begin{bmatrix} \lambda x \\ \lambda y \\ \lambda \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}. \tag{3.5}$$
Considering a generic reference frame which may not be the camera reference frame, we get

$$\begin{bmatrix} \lambda x \\ \lambda y \\ \lambda \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} {}^{C}R_{W} & {}^{C}t_{W} \\ 0 & 1 \end{bmatrix} {}^{W}\!\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}. \tag{3.6}$$
To determine the pixel in the image plane corresponding to the 2D image point one writes

$$u = \alpha_u x + u_0, \qquad v = \alpha_v y + v_0 \tag{3.7}$$

where $u$ and $v$ are in pixels, $\alpha_u$ and $\alpha_v$ are scale factors and $u_0$ and $v_0$ are the image centre in pixels. Therefore, we have

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_u & 0 & u_0 \\ 0 & \alpha_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}. \tag{3.8}$$
Using eq.3.6 and eq.3.8, we can write

$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_u & 0 & u_0 \\ 0 & \alpha_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} {}^{C}R_{W} & {}^{C}t_{W} \\ 0 & 1 \end{bmatrix} {}^{W}\!\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}. \tag{3.9}$$
In eq.3.9, we can collect the transformations into a single projective matrix $P$:

$$P = \begin{bmatrix} \alpha_u & 0 & u_0 \\ 0 & \alpha_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} [R\,|\,t] \tag{3.10}$$
which, for future convenience, we write as

$$P = \begin{bmatrix} k_u & 0 & u_0 \\ 0 & k_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} [R\,|\,t] \tag{3.11}$$

where $k_u = \alpha_u f$ and $k_v = \alpha_v f$. From this, we have our intrinsic parameters $k_u$, $k_v$, $u_0$ and $v_0$. The parameters $k_u$ and $k_v$ correspond to the focal distance, expressed in units of horizontal and vertical pixels, and allow us to convert from meters to pixels. These two are usually very similar. The "aspect ratio", $k_v/k_u$, is different from 1 iff the pixels are not square. The parameters $u_0$ and $v_0$ correspond to the principal point (camera centre in pixels). We can write $P$ in its common form

$$P = K [R\,|\,t]. \tag{3.12}$$
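As a numerical sketch of eq.3.12, the snippet below projects a 3D world point through $P = K[R|t]$ and divides by the scale factor $\lambda$; the intrinsic values are made up for illustration and are not those of the Axis camera.

```python
import numpy as np

def project(K, R, t, Xw):
    """Project a 3D world point into pixel coordinates with P = K[R|t].
    Returns (u, v) after dividing by the scale factor lambda (depth)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 projection matrix
    m = P @ np.append(Xw, 1.0)                # homogeneous image point
    return m[:2] / m[2]                       # divide by lambda

# Illustrative intrinsics: ku = kv = 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)      # camera aligned with the world frame
t = np.zeros(3)    # camera at the world origin
uv = project(K, R, t, np.array([0.1, 0.2, 2.0]))
# u = 500*(0.1/2) + 320 = 345.0, v = 500*(0.2/2) + 240 = 290.0
```

Note how doubling $Z$ halves the normalized coordinates before the intrinsics are applied, which is exactly the role of $\lambda$ in eqs.3.3 and 3.9.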
3.2 Camera Calibration
In order to determine the pixel corresponding to a 3D point, one needs to determine the extrinsic and intrinsic parameters of the camera. The extrinsic parameters consist of a rotation matrix and a translation vector, which allow changing from the object reference frame to the camera reference frame. In the case of non-varying optics, the intrinsic parameters are fixed, and to determine them one can use Bouguet's calibration toolbox [7]. The calibration procedure using this toolbox is illustrated in appendix C.
In our case, we want to determine both the intrinsic and extrinsic parameters. In other words, we want to estimate the camera projection matrix $P$. In order to do that, we need to know the 2D coordinates ($m$) of feature points and their corresponding 3D coordinates ($M$). We start by writing down the relationship

$$\lambda m = PM \tag{3.13}$$

where

$$M = \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{3.14}$$
and $\lambda$ is a scale factor that equals depth in 3D ($\lambda = Z$). So, from eq.3.13 we have

$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} P_{1:}M \\ P_{2:}M \\ P_{3:}M \end{bmatrix}. \tag{3.15}$$

From here, we can observe $\lambda = P_{3:}M$. Dividing by $\lambda$ on both sides of the equation,

$$u = \frac{P_{1:}M}{P_{3:}M}, \qquad v = \frac{P_{2:}M}{P_{3:}M} \tag{3.16}$$

multiplying by $P_{3:}M$ on both sides and setting the system equal to zero,

$$P_{1:}M - uP_{3:}M = 0, \qquad P_{2:}M - vP_{3:}M = 0 \tag{3.17}$$
putting eq.3.17 in matrix form,

$$\begin{bmatrix} M_1^T & \vec{0} & -u_1 M_1^T \\ \vec{0} & M_1^T & -v_1 M_1^T \end{bmatrix}_{2\times 12} \begin{bmatrix} P_{1:}^T \\ P_{2:}^T \\ P_{3:}^T \end{bmatrix}_{12\times 1} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}_{2\times 1} \tag{3.18}$$
for $N$ points, eq.3.17 becomes

$$\begin{bmatrix} M_1^T & \vec{0} & -u_1 M_1^T \\ \vec{0} & M_1^T & -v_1 M_1^T \\ M_2^T & \vec{0} & -u_2 M_2^T \\ \vec{0} & M_2^T & -v_2 M_2^T \\ \vdots & \vdots & \vdots \\ M_N^T & \vec{0} & -u_N M_N^T \\ \vec{0} & M_N^T & -v_N M_N^T \end{bmatrix}_{2N\times 12} \begin{bmatrix} P_{1:}^T \\ P_{2:}^T \\ P_{3:}^T \end{bmatrix}_{12\times 1} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}_{2N\times 1}. \tag{3.19}$$
We call the first matrix $A$ and the second one $\pi$. We want to minimize the norm of the residual $r = A\pi$. To solve this, $\pi$ must be the normalized eigenvector associated with the smallest eigenvalue of $A^TA$. To compute the required eigenvector, one can use the SVD decomposition $[U, S, V] = \mathrm{svd}(A)$, where $S$ is a diagonal matrix of the same dimension as $A$ with non-negative diagonal elements, and $U$ and $V$ are unitary matrices, so that $A = USV^T$. In fact, $V$ contains the eigenvectors of $A^TA$, ordered according to the singular values in $S$. The projection matrix elements we are looking for are in the eigenvector corresponding to the smallest singular value, i.e. the last column of $V$. Then we must reshape it into a $3 \times 4$ matrix.

Figure 3.3: Parameters to describe the car model and motion.
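A minimal sketch of this estimation, assuming noise-free 2D-3D correspondences and using NumPy's SVD in place of MATLAB's svd:

```python
import numpy as np

def estimate_projection_matrix(pts3d, pts2d):
    """Estimate P (3x4, up to scale) from N >= 6 correspondences
    between 3D points (N x 3) and their pixel projections (N x 2),
    by stacking eq.3.19 and taking the right singular vector of A
    associated with the smallest singular value."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        M = [X, Y, Z, 1.0]
        zero = [0.0] * 4
        rows.append(M + zero + [-u * m for m in M])   # first row of eq.3.18
        rows.append(zero + M + [-v * m for m in M])   # second row of eq.3.18
    A = np.array(rows)                    # 2N x 12
    _, _, Vt = np.linalg.svd(A)
    # last row of V^T = last column of V = smallest-singular-value vector
    return Vt[-1].reshape(3, 4)
```

Since the solution is only defined up to scale, a recovered $P$ should be compared with a ground-truth matrix after normalizing by one of its entries.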
3.3 Kinematic model
The vehicle we use in this project has rear-wheel drive and is able to change direction by turning its front wheels. So, in a way, it is similar to the cars we see on the road every day. The points that can be reached by the vehicle are path dependent. One needs a nonholonomic kinematic model to describe the system and predict its motion. We based ours on the model presented in [12].
3.3.1 Continuous model
Knowing that the orientation of the vehicle can be described as $(\cos(\theta), \sin(\theta))$ and the velocity of the rear of the vehicle by $(\dot{x}_r, \dot{y}_r)$, we have the nonholonomic constraint

$$\dot{y}_r \cos(\theta) - \dot{x}_r \sin(\theta) = 0 \tag{3.20}$$

knowing that the rear of the vehicle is rigidly connected to the front:

$$x_r = x_f - L\cos(\theta), \qquad y_r = y_f - L\sin(\theta) \tag{3.21}$$
differentiating both sides of eq.3.21 with respect to time:

$$\dot{x}_r = \dot{x}_f + \dot{\theta}L\sin(\theta), \qquad \dot{y}_r = \dot{y}_f - \dot{\theta}L\cos(\theta) \tag{3.22}$$

replacing eq.3.22 in eq.3.20:

$$\begin{aligned} &(\dot{y}_f - \dot{\theta}L\cos(\theta))\cos(\theta) - (\dot{x}_f + \dot{\theta}L\sin(\theta))\sin(\theta) = 0 \\ \Leftrightarrow\;& \dot{y}_f\cos(\theta) - \dot{\theta}L\cos^2(\theta) - \dot{x}_f\sin(\theta) - \dot{\theta}L\sin^2(\theta) = 0 \\ \Leftrightarrow\;& \dot{y}_f\cos(\theta) - \dot{x}_f\sin(\theta) - \dot{\theta}L(\cos^2(\theta) + \sin^2(\theta)) = 0 \\ \Leftrightarrow\;& \dot{y}_f\cos(\theta) - \dot{x}_f\sin(\theta) - \dot{\theta}L = 0 \end{aligned} \tag{3.23}$$
from fig.3.3, we know that

$$\dot{x}_f = v\cos(\theta + \phi), \qquad \dot{y}_f = v\sin(\theta + \phi) \tag{3.24}$$

where $v$ is the linear velocity input to the car. Replacing eq.3.24 in eq.3.23, we get

$$v\sin(\theta + \phi)\cos(\theta) - v\cos(\theta + \phi)\sin(\theta) - \dot{\theta}L = 0 \tag{3.25}$$

knowing that $\sin(x - y) = \sin(x)\cos(y) - \sin(y)\cos(x)$, eq.3.25 can be simplified to

$$v\sin(\phi) - \dot{\theta}L = 0 \;\Leftrightarrow\; \dot{\theta} = \frac{v\sin(\phi)}{L} \tag{3.26}$$

rewriting eq.3.24 and eq.3.26 we obtain the kinematic equations for the central point of the front axle:

$$\dot{x}_f = v\cos(\theta + \phi), \qquad \dot{y}_f = v\sin(\theta + \phi), \qquad \dot{\theta} = \frac{v\sin(\phi)}{L}. \tag{3.27}$$
3.3.2 Discrete model
In order to simulate the kinematic model on a computer, we have to translate its equations to discrete time. To achieve that, we discretize the first-order derivatives using Leapfrog integration [24], which gives us a good approximation.

Considering $N$ to be the timestep and $n$ the current discrete time instant, for the velocities we get:

$$\dot\theta[n] = \frac{v[n]\sin(\phi[n])}{L}, \qquad \dot{x}_f[n] = v[n]\cos\!\left(\theta[n-1] + \phi[n] + \tfrac{N}{2}\dot\theta[n]\right), \qquad \dot{y}_f[n] = v[n]\sin\!\left(\theta[n-1] + \phi[n] + \tfrac{N}{2}\dot\theta[n]\right). \tag{3.28}$$

For the positions, we get

$$\theta[n] = \theta[n-1] + N\dot\theta[n], \qquad x_f[n] = x_f[n-1] + N\dot{x}_f[n], \qquad y_f[n] = y_f[n-1] + N\dot{y}_f[n]. \tag{3.29}$$

We can expand eq.3.29, resulting in

$$\theta[n] = \theta[n-1] + N\frac{v[n]\sin(\phi[n])}{L}, \qquad x_f[n] = x_f[n-1] + Nv[n]\cos\!\left(\theta[n-1] + \phi[n] + \tfrac{N}{2}\dot\theta[n]\right), \qquad y_f[n] = y_f[n-1] + Nv[n]\sin\!\left(\theta[n-1] + \phi[n] + \tfrac{N}{2}\dot\theta[n]\right). \tag{3.30}$$
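The discrete model of eq.3.30 translates directly into code; in the sketch below the wheelbase L and timestep N are arbitrary illustrative values, not the measured parameters of our car.

```python
import math

def step(x, y, theta, v, phi, L=0.25, N=0.1):
    """One leapfrog step of the discrete car model (eq.3.30).
    (x, y, theta): pose of the front-axle centre at instant n-1;
    v: linear velocity input; phi: steering angle; L: wheelbase (m);
    N: timestep (s). Returns the pose at instant n."""
    theta_dot = v * math.sin(phi) / L
    heading = theta + phi + 0.5 * N * theta_dot   # midpoint heading
    x_new = x + N * v * math.cos(heading)
    y_new = y + N * v * math.sin(heading)
    theta_new = theta + N * theta_dot
    return x_new, y_new, theta_new

# driving straight (phi = 0) at 1 m/s for 10 steps of 0.1 s covers 1 m in x
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = step(*pose, v=1.0, phi=0.0)
```

With a non-zero steering angle the same loop traces an arc, which is how the simulated trajectories of chapter 5 are generated in principle.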
Chapter 4
Navigation and Control
The teleoperation functionality described in the preceding chapters is very useful by itself. However, in our surveillance context, autonomous functionality is an additional tool that we would also like to have in our system. There are a considerable number of cases where self-capable control is useful. For example, the robot should be able to return to a known position if the user decides to close the connection without continuing to drive the car, or if, for some reason, the connection is lost because of a weak signal. Homing is also important for the case where the robot has to recharge its batteries1. We present in this chapter some strategies to execute the homing procedure.
4.1 MonoSLAM
In order to use the camera as a sensor for homing a robot, more precisely to compute the pose of the robot from video data, one can use MonoSLAM [9]. By using MonoSLAM we obtain the vehicle position and orientation in a general 3D frame and the location of important features, from which the robot knows where it is located.
4.1.1 Algorithm
The MonoSLAM proposed in [9] is an algorithm which consists of an Extended Kalman Filter (EKF) with a special Random Sample Consensus (RANSAC) scheme embedded. It assumes previous knowledge about the model of the camera used, so, before starting, the camera intrinsic parameters are inputted to the program. Initially, the algorithm assumes that an a priori probability distribution over the model parameters is known, which allows reducing the computational cost of calculating the model. The EKF makes a pose prediction of the vehicle. The update step uses the most voted hypothesis in RANSAC, which generates hypotheses from candidate feature matches. To detect local features in the images, a corner extraction procedure is used.

1This is already implemented in some real systems like the Santander Interactive Guest Assistants (SIGAs).
The MonoSLAM algorithm can be divided into a sequence of steps. Initially a standard EKF prediction is made:

$$\hat{x}_{k|k-1} = f\left(\hat{x}_{k-1|k-1}, u_k\right) \tag{4.1}$$

$$P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + G_k Q_k G_k^T. \tag{4.2}$$

The estimate for the state vector $\hat{x}_{k|k-1}$ at step $k$ is computed by propagating the state vector at step $k-1$, modeled as a multidimensional Gaussian $x_{k-1|k-1} \sim N(\hat{x}_{k-1|k-1}, P_{k-1|k-1})$, through the known dynamic model $f_k$. The term $u_k$ is the control input of our system at step $k$, $F_k$ stands for the Jacobian of $f_k$ with respect to the state vector $\hat{x}_{k|k-1}$, $Q_k$ is the covariance matrix of the zero-mean Gaussian noise assumed for the dynamic model and $G_k$ stands for the Jacobian of this noise with respect to the state vector $\hat{x}_{k|k-1}$. A Gaussian prediction for each measurement can be computed by propagating the predicted state through the measurement model $h$:

$$\hat{h}_i = h_i(\hat{x}_{k|k-1}) \tag{4.3}$$

$$S_i = H_i P_{k|k-1} H_i^T + R_i \tag{4.4}$$

where $H_i$ is the Jacobian of the measurement $h_i$ with respect to the state vector $\hat{x}_{k|k-1}$ and $R_i$ stands for the covariance of the Gaussian noise assumed for each individual measurement. By comparison of the chosen local feature descriptor, the actual measurement $z_i$ should be searched inside the 99% probability region defined by its predicted Gaussian $N(\hat{h}_i, S_i)$.
The inter-frame point matchings are constrained to be individually compatible with the predicted state $\hat{x}_{k|k-1}$. However, this compatibility has to be confirmed by confronting it with a global model for the set of individually compatible matches $z_{IC} = (z_1 \ldots z_i \ldots z_n)^T$. Then, random state hypotheses $\hat{x}_i$ are generated using RANSAC and, by counting measurements inside a threshold, data support is computed. After that, a partial update is computed using the data points voting for the most supported hypothesis, $z_{li\_inliers}$:

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_{li\_inliers} - h(\hat{x}_{k|k-1})\right) \tag{4.5}$$

$$P_{k|k} = (I - K_k H_k) P_{k|k-1} \tag{4.6}$$

$$K_k = P_{k|k-1} H_k^T \left(H_k P_{k|k-1} H_k^T + R_k\right)^{-1} \tag{4.7}$$

where $H_k = (H_1 \ldots H_i \ldots H_n)^T$ stands for the Jacobian of the measurement equation $h(\hat{x}_{k|k-1})$ that projects the low-innovation inliers into the camera space.

After the partial update described in the previous paragraphs, most of the correlated error in the EKF prediction is corrected and the covariance is considerably reduced. Thereafter, consensus for the set no longer needs to be computed, and individual compatibility is sufficient to separate inliers from outliers.

For every match $z_j$, a Gaussian prediction $\hat{h}_j \sim N(\hat{h}_j, S_j)$ is computed by propagating the first partial update $\hat{x}_{k|k}$ through the estimated model. If the match lies within the 99% probability region of the predicted Gaussian, the match is accepted as an inlier. Next, with all points classified as inliers, $z_{hi\_inliers}$, a second partial update is performed.
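The correction of eqs.4.5-4.7 is the standard EKF update; the sketch below shows the algebra generically on small arrays. It is only an illustration of the equations, not the actual MonoSLAM state or measurement model.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, h, H, R):
    """Generic EKF update step.
    x_pred, P_pred: predicted state and covariance;
    z: measurement vector; h: predicted measurement h(x_pred);
    H: measurement Jacobian; R: measurement noise covariance."""
    S = H @ P_pred @ H.T + R                        # innovation covariance (eq. 4.4)
    K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain (eq. 4.7)
    x_upd = x_pred + K @ (z - h)                    # state correction (eq. 4.5)
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred  # covariance reduction (eq. 4.6)
    return x_upd, P_upd
```

In the 1-point-RANSAC scheme this routine would be called twice per frame: once with the low-innovation inliers and once with the full inlier set.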
4.1.2 State vector
The state vector $^{W}\hat{x}_k$ is composed of a set of camera parameters $^{W}\hat{x}_{C_k}$ and map parameters $^{W}\hat{y}_{C_k}$. All these parameters are referred to a static frame:

$$^{W}\hat{x}_k = \begin{pmatrix} ^{W}\hat{x}_{C_k} \\ ^{W}\hat{y}_{C_k} \end{pmatrix}. \tag{4.8}$$

The estimated map $^{W}y$ is composed of $n$ point features $^{W}y_i$, with $^{W}y = \left( {}^{W}y_1^T \ldots {}^{W}y_n^T \right)^T$. In our case, we use MonoSLAM as a visual odometer and we are interested in getting the localization of the camera and the 3D feature information. We get this by accessing the Extended Kalman Filter state vector $^{W}\hat{x}_k$.
4.2 Pose estimation from camera matrix
In this section, we describe a methodology to determine the camera position and orientation when one knows 2D images of 3D points, in world coordinates, obtained e.g. with SLAM. The methodology starts by computing the camera projection matrix for each of the robot positions while moving.

Given a projection matrix, we compute the camera pose by factorizing the matrix into intrinsic and extrinsic parameters. More precisely, we decompose $P$ into $K[R|t]$. We start by selecting the first three columns of $P$:

$$P_{1:3,1:3} = KR \tag{4.9}$$

where $K$ is an upper triangular matrix and $R$ is a rotation matrix. Then we use Gram-Schmidt orthonormalization to decompose $P_{1:3,1:3}$, a non-singular matrix, into an upper triangular matrix and a rotation. With some more simple operations, in the end we obtain the required form $KR$.
In order to use a nomenclature closer to what is common in the literature, let $Q$ denote the rotation matrix ($Q \leftarrow R$) and $R$ denote an upper triangular matrix ($R \leftarrow K$). We first detail the $QR$ factorization, commonly found in computational tools (e.g. MATLAB), instead of $RQ$ (the actually desired format, which will be obtained later):

$$P_{1:3,1:3} = QR = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ 0 & r_{22} & r_{23} \\ 0 & 0 & r_{33} \end{bmatrix} \tag{4.10}$$

where the $q_i$ are orthonormalized vectors of $P_{1:3,1:3}$, for example $q_1 = P_{1:3,1}/\lVert P_{1:3,1}\rVert$, and the $r_{ij}$ are weighting factors found in the orthonormalization process.

What we have until now is the product of a rotation matrix and an upper-triangular matrix, where the upper-triangular matrix has all main diagonal entries positive. However, we want the reverse, i.e. an upper-triangular matrix times a rotation. So, we transform the $QR$ factorization into $RQ$. In order to do that, we define a matrix $S$ as

$$S = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}. \tag{4.11}$$
This matrix has some important features. For example, $S$ left-multiplying a $3 \times 3$ matrix swaps its lines and right-multiplying a $3 \times 3$ matrix swaps its columns. Also, $S = S^T$ and $S\,S = I$ (i.e. $S = S^{-1}$). Applying the $QR$ factorization to $P_{1:3,1:3}^T S$,

$$P_{1:3,1:3}^T S = Q\,U \tag{4.12}$$

and doing some algebraic operations, starting with a transpose and then using the properties of $S$, one has $P_{1:3,1:3} = S U^T Q^T = S U^T S\,S\,Q^T = (S U^T S)(S Q^T) = K R$. Finally, $K = S U^T S$ has the necessary line and column swapping to change the lower-left-triangular $U^T$ to upper-triangular, and $R = S Q^T$ is still a rotation, as the transpose and the operation of $S$ do not affect that property. The $QR$ and $RQ$ factorizations can leave a sign ambiguity, which is removed by requiring that $K$ has positive diagonal entries. Having the signs of the main diagonal of $K$ defined in a diagonal matrix $D$,

$$D = \mathrm{diag}\{\mathrm{sign}(K_{1,1}), \mathrm{sign}(K_{2,2}), \mathrm{sign}(K_{3,3})\} \tag{4.13}$$

one can correct the signs of $K$ and consequently update all the terms of the factorization with $K \leftarrow K D$, $R \leftarrow D R$, $t = K^{-1} P_{1:3,4}$.
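This flip-based RQ decomposition can be sketched as follows, using NumPy's qr in place of MATLAB's and including the sign correction of eq.4.13:

```python
import numpy as np

def rq(P3):
    """Decompose a non-singular 3x3 matrix into K (upper triangular,
    positive diagonal) and R (orthonormal) with P3 = K @ R, by applying
    QR to the flipped transpose P3^T S and swapping back with S."""
    S = np.array([[0., 0., 1.],
                  [0., 1., 0.],
                  [1., 0., 0.]])
    Q, U = np.linalg.qr(P3.T @ S)    # standard QR of the flipped transpose
    K = S @ U.T @ S                  # S U^T S: lower-left becomes upper triangular
    R = S @ Q.T
    D = np.diag(np.sign(np.diag(K))) # eq. 4.13: fix the sign ambiguity
    return K @ D, D @ R              # K with positive diagonal, R orthonormal
```

Because the factorization of an invertible matrix into a positive-diagonal upper-triangular factor and an orthonormal factor is unique, a synthetic $K_0 R_0$ is recovered exactly.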
4.3 Autonomous Homing
In this section, we describe the homing strategies and discuss the advantages and drawbacks of each one. In order to have a program that runs MonoSLAM while being capable of receiving commands to control the car, we decided to call the Java GUI from MATLAB. In the end, we get a MATLAB program integrating three important blocks: Teleoperation, MonoSLAM and Controller. The complete project design is shown in fig.4.1.

4.3.1 Open Loop Homing

Following the objectives of this master's thesis, we want to teleoperate the vehicle for some time, then give the order to initiate the homing routine, which leads the vehicle to reverse and come back by a similar way to the one it came in. The path is stored by stacking the robot position at every iteration. This is possible since we can access the state vector of the EKF embedded in MonoSLAM, from which we determine the translation and rotation from the camera to the world reference frame.
Figure 4.1: Design of the project at the Homing stage.
One extracts from the vector state the position of the car,

$$^{W}t_C = \mathrm{state}(1{:}3) = [x\;\, y\;\, z]^T \tag{4.14}$$

and computes the rotation matrix in the world frame,

$$^{W}R_C = \mathrm{q2r}(\mathrm{state}(4{:}7)) \tag{4.15}$$

where q2r denotes the conversion from a quaternion to a rotation matrix. With $\theta_1$ the rotation angle about the x-axis, $\theta_2$ the rotation angle about the y-axis and $\theta_3$ the rotation angle about the z-axis, one can write [22]

$$^{W}R_C = R_x(\theta_1) R_y(\theta_2) R_z(\theta_3) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_1 & s_1 \\ 0 & -s_1 & c_1 \end{bmatrix} \begin{bmatrix} c_2 & 0 & -s_2 \\ 0 & 1 & 0 \\ s_2 & 0 & c_2 \end{bmatrix} \begin{bmatrix} c_3 & s_3 & 0 \\ -s_3 & c_3 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} c_2 c_3 & c_2 s_3 & -s_2 \\ s_1 s_2 c_3 - c_1 s_3 & s_1 s_2 s_3 + c_1 c_3 & s_1 c_2 \\ c_1 s_2 c_3 + s_1 s_3 & c_1 s_2 s_3 - s_1 c_3 & c_1 c_2 \end{bmatrix} \tag{4.16}$$

where $c_1 = \cos(\theta_1)$, $c_2 = \cos(\theta_2)$, and so on. One can compute $c_2$ by $c_2 = \sqrt{{}^{W}R_C(1,1)^2 + {}^{W}R_C(1,2)^2}$, and determine $\theta_2$ with

$$\theta_2 = \mathrm{atan2}\!\left(-{}^{W}R_C(1,3),\; c_2\right). \tag{4.17}$$
Notice that we are using 3D rotations for what is a 2D problem. The reason is that the MonoSLAM program used was made for a 3D context and we tried to reuse as much as possible of what was already developed. After the user activates the homing procedure, the car reverses and the controller actions take place. The reference orientation is calculated as

$$\theta_{ref} = \mathrm{atan2}\!\left(y_{ref} - y_{actual},\; x_{ref} - x_{actual}\right). \tag{4.18}$$

To calculate the vehicle pose at each iteration of homing, in order to choose the right command to send to the car, a routine using the vehicle kinematics presented in section 3.3 is used.
This strategy is used in the real-time experiments, since the feature matching in MonoSLAM running in MATLAB fails with abrupt movements.
4.3.2 Visual Odometry Based Homing
Like before, in a first stage we have MonoSLAM running, stacking the camera localization at every iteration. When homing is activated, the vehicle reverses and, using the controller, it follows the stored path. Its pose is retrieved from MonoSLAM at every iteration. This strategy is theoretically better than the first one, since it uses a sensor at every stage of homing, despite the consecutive error accumulation caused by visual odometry.
4.3.3 Map Based Homing
With this strategy, we also store the 3D feature coordinates and their corresponding feature descriptors while the vehicle is being teleoperated, building in that way a map. In the homing procedure, we compute the actual vehicle pose at every iteration by finding the camera matrix $P$ and decomposing it into the form $K[{}^{C}R_W | {}^{C}t_W]$, using the process described in section 4.2. We can compute $P$ because $m = PM$, where $M$ are the previously stored 3D feature points and $m$ are the 2D projection points in the camera image corresponding to the 3D features. We know the correspondence between 3D points and 2D points by means of feature descriptor comparison. In order to determine the vehicle pose from $P$, we start by applying the QR factorization to $P$, then transform from the QR to the RQ factorization and then correct the sign of $K$ (as detailed in Sec.4.2). From this we obtain the matrices $K$, $^{C}R_W$ and $^{C}t_W$. Knowing that

$$^{W}R_C = {}^{C}R_W^T \tag{4.19}$$

and

$$^{W}t_C = -{}^{W}R_C\, {}^{C}t_W \tag{4.20}$$

we extract the position from $^{W}t_C$ and the orientation angle from $^{W}R_C$.

To use this strategy, at least six features have to be detected in each iteration (see eq.3.19), since we have six unknown parameters. The great advantage of this strategy, compared to the ones above, is that it allows resetting the visual odometry error during the homing procedure. Since MonoSLAM uses an EKF, there is always a probability that the filter loses its consistency, especially with abrupt rotations, meaning that the estimated state and its predicted margin of error diverge from the measurements. Finding known locations through our feature map opens the possibility to reset the error in the EKF prediction. A simulation using this strategy is shown in the next chapter (fig.5.6).
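Eqs.4.19-4.20 simply invert the world-to-camera rigid transform; a small sketch with an illustrative rotation and translation:

```python
import numpy as np

def camera_pose_in_world(R_cw, t_cw):
    """Invert the extrinsics (world->camera) obtained from the
    decomposition of P to get the camera pose in the world frame:
    W_R_C = (C_R_W)^T (eq. 4.19) and W_t_C = -W_R_C @ C_t_W (eq. 4.20)."""
    R_wc = R_cw.T            # eq. 4.19
    t_wc = -R_wc @ t_cw      # eq. 4.20: camera centre in world coordinates
    return R_wc, t_wc
```

From the returned rotation, the heading angle used by the controller would then be extracted as in eq.4.17.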
Chapter 5
Experiments
This chapter describes the experiments performed to validate the methodologies introduced in previous chapters. In order to test the functionality of our proposed solution, we wanted to make an experiment where a user remotely controls the car for a time while MonoSLAM is running, then performs an action to make the car reach its initial position by reversing its orientation and falling back along a similar way to the one it came in. When trying to run MonoSLAM while acquiring images from the camera in real time, we noticed that the feature matching between images fails too soon. The reason was the time difference between acquired images, since each iteration of the program's main cycle was just too long. In order to keep using the algorithm proposed by [9], we decided to use a different user PC with faster hardware. So, instead of using the laptop, which has a P6100 processor running at 2.00 GHz, we tried using a desktop with an i5 processor running at 3.8 GHz. With this, instead of 3 s per MonoSLAM iteration, we now have approximately 1 s per iteration. By doing this, we were able to make some experiments, but still very limited ones. If the car moves too fast or turns abruptly, there is a serious risk that the feature matching algorithm fails. This led us to the conclusion that MonoSLAM could not be used as it is while the vehicle is homing.
5.1 MonoSLAM, self-localization error in a straight line path
After being able to run MonoSLAM in real time, we wanted to have an idea of the pose errors. Since MonoSLAM is very sensitive to changes in direction, we tested it while the car was moving along a straight line.

Figure 5.1: Straight line experiment.

MonoSLAM shows that the car is moving straight (fig.5.2(d)). The final orientation is similar to the real one. When a considerable number of features move their position on the image plane from one frame to the next, the algorithm understands the movement and realizes whether the camera is sliding right, left, up, or down, or whether it is actually moving forward or backward. That can be verified in figures 5.2(b) and 5.2(c), from which we can see a feature located on the cap of the white pen which moves between iterations of MonoSLAM, as can be seen in the graphs. In the graph of figure 5.2(c) the feature is closer to the estimated position of the camera than in the graph of figure 5.2(b).
After measuring the car position every 50 cm with a ruler and comparing with the position given by MonoSLAM, we plotted the results (fig.5.3) to get an idea of the scale factor between the MonoSLAM direct measurements and the ruler. After computing a linear regression over the data points, we got the line slope. By dividing the MonoSLAM measurements by the slope value, we got the measurements in the same metric scale as the ruler measurements. After plotting the scaled MonoSLAM measurements versus the ruler measurements and drawing a line corresponding to the ideal case, we found how close the MonoSLAM measurements are to the ideal case (fig.5.4(a)). The difference can be seen in fig.5.4(b).
5.2 Open Loop Homing experiment
In this experiment, we tested the Open Loop Homing strategy with our setup. We moved the robot for some time along a straight line (worst case scenario), then the homing procedure was triggered (fig.5.5(b)) and the robot returned to its initial position (fig.5.5(c) and (d)).

The difference between the MonoSLAM estimated positions and the real ones is shown in table 5.1. The results are considerably different. The final pose of the vehicle is shown in
Figure 5.2: (a) First image of MonoSLAM running in the straight line experiment. (b) MonoSLAM catching a feature on the white pen. (c) Feature on the white pen getting closer as the camera moves on. (d) Last image of MonoSLAM running in the straight line experiment.
           real   estimated
X (cm)      22        5
Y (cm)     -16       -8
θ (°)      -55      -41

Table 5.1: Real and MonoSLAM measurements comparison.
fig.5.5(d). From the experiments made, we can see that MonoSLAM easily detects changes in the direction of motion, but it is not very accurate at determining the magnitude of the forward motion of the camera, especially if there is no significant movement in other directions.

The high computational cost of running MonoSLAM in MATLAB makes the job at hand difficult. No abrupt movements can be made, or otherwise the difference between two consecutive images will lead to no feature matches.

Using the kinematic model as the only prediction of pose while homing is possible, but also leads to a rapid increase of the pose prediction errors.
5.3 Map Based Homing simulation
This experiment was made to test the Map Based Homing strategy in simulation. A wide number of 3D features were spread according to a normal distribution with zero mean and unit variance. A set of commands was sent to the simulated robot, which ended up performing a trajectory. In the meantime, the car was saving its consecutive positions in order to store a path, and a map of 3D features was being built by storing the 3D features that were in the field of view of the camera (red points in fig.5.6). When the robot stops receiving commands, the homing procedure is triggered and the controller takes the car back through the inverted stored path (fig.5.6(b) and (c)). When the robot gets close enough to a checkpoint of the path, the reference destination becomes the next checkpoint, until the car reaches its destination. If the car cannot reach a point due to restrictions in its degrees of freedom, it makes a reverse maneuver in order to get closer to the desired checkpoint. The position of the robot during this procedure is computed by decomposing the camera matrix into translation and rotation. This camera matrix is computed at every iteration using eq.3.19, where the 2D points (u, v) correspond to the 2D coordinates of the 3D features in the image plane, captured by the field of view of the camera.
Figure 5.5: (a) Start of experiment. (b) Start of homing procedure. (c) After reversing, the car goes to its initial position. (d) End position and orientation measurement.
Figure 5.6: (a) Beginning of Map Based Homing simulation. (b) Beginning of homing procedure. (c) Homing procedure at middle. (d) Final car pose.
Chapter 6
Conclusion and Future Work
This work started with the integration of the various hardware components into a mobile robot. Several positive conclusions can be extracted. The first is that wireless IP cameras are practical and functional tools to help build teleoperated robots. Also, we can say that it is already possible to build a teleoperated vehicle with autonomous features at a reasonable cost. As expected, batteries were the main hardware problem. A power source capable of guaranteeing stable functioning of the robot is important, since some of the hardware, like the camera used, is sensitive to voltage-drop issues.

MonoSLAM was tested in a variety of experiments and we concluded that it is possible and plausible to acquire odometry measurements using MonoSLAM but, as expected, with considerable error accumulation. One possible way being considered for future use in a real-time experiment is to decrease this error accumulation by resetting it, comparing the actual image with the ones stored in an image map, as was proposed in Vision Based Navigation and Environmental Representations with an Omni-directional Camera [13]. An alternative is modifying the scenario, either by adding landmarks or fixed cameras informing the robot of its location.

Changing the feature detector mechanism of MonoSLAM from corner extraction to a better one, like SIFT, would improve the quality of the detected features and make localization more reliable. This is an important step in order to make a Map Based Homing experiment with our setup, since it allows having more and better features. The Map Based Homing strategy is also important to reset the error associated with our EKF prediction, in case it loses its convergence.
Appendix A
Compiling and executing a program on Axis 207w
In order to compile a program for the Axis 207w one needs the right cross-compiler. More precisely, one needs the x86 to ARM cross-compiler, as the 207w camera has an ARTPEC processor running the armv4tl architecture. The cross-compiler can be obtained by downloading comptools-arm-x.y.z.tar.gz, where x.y.z corresponds to the cross-compiler version. The tar.gz file contains all the source code and scripts to build the compiler. The last version that we found is 1.20050730.1 and you can download it from
ftp://ftp.axis.com/pub_soft/cam_srv/arm_tools/.
We tried to build the cross-compiler on Ubuntu 12.04, but in every attempt the building process crashed with a buffer overflow. After installing Debian 6.0.7 and all the packages needed, we succeeded in building the cross-compiler. One needs bison, flex and gettext, which can be easily obtained through the Software Center.
A.1 Building the cross-compiler
1. Open a terminal.
2. Go to the directory where comptools-arm-x.y.z.tar.gz is.
3. Unpack the files:
%:~> tar -zxvf comptools-arm-x.y.z.tar.gz
4. Go to the recently created directory; by default it will be /comptools-arm-x.y.z:
%:~> cd comptools-arm-x.y.z
5. To be able to install the cross-compiler, you need root access:
%:~/comptools-arm-x.y.z> su
%:~/comptools-arm-x.y.z> <root password>
6. Install the compiler to a directory of your choice, for example: /usr/local/arm. In order
to do this type:
%:~/comptools-arm-x.y.z> ./install-arm-tools /usr/local/arm
7. Exit from root access:
%:~> exit
The build can take several minutes. After a successful build and installation, the message
“Build succeeded: ARM GCC compiler tools ver 1.20050730.1 installed at /usr/local/arm”
appears on the terminal. Once the cross-compiler is installed on your disk, you should
locate its /bin directory, which contains the gcc compiler with the ARM target set. To run a
program on the camera, you need to go through the following steps: compiling the code,
transferring the executable to the camera, and executing the program on the camera.
A.2 Compiling code
1. Open a terminal.
2. Go to the /bin directory in your cross-compiler folder. For example:
%:~> cd ~/arm/bin
3. Compile your C code with:
%:~> ./arm-linux-gcc -o myprog my_script.c
A.3 Transferring the program to the camera
1. Open a terminal.
2. Open an FTP connection with the camera by typing:
%:~> ftp camera_ip_address
3. If you type:
%:~> ls
Something like this should appear:
-rw-r--r--  1 root root 30720 Dec 14  2009 .var.tar
drwxr-xr-x  1 root root  2464 Dec 14  2009 bin
drwxr-xr-x  1 root root  1596 Dec 14  2009 dev
lrwxrwxrwx  1 root root    13 Dec 14  2009 etc -> mnt/flash/etc/
drwxr-xr-x  1 root root  1224 Dec 14  2009 lib
-rwxr-xr-x  1 root root  1539 Dec 14  2009 linuxrc
drwxr-xr-x  1 root root   104 Dec 14  2009 mnt
dr-xr-xr-x 75 root root     0 Jan  1 00:00 proc
drwx------  1 root root     0 Dec 14  2009 root
drwxr-xr-x  1 root root   660 Dec 14  2009 sbin
drwxr-xr-x  1 root root    16 Dec 14  2009 share
drwxr-xr-x 10 root root     0 Jan  1 00:00 sys
lrwxrwxrwx  1 root root     7 Dec 14  2009 tmp -> var/tmp/
drwxr-xr-x  1 root root   120 Dec 14  2009 usr
drwxr-xr-x 10 root root   200 Jan  1 00:00 var
4. Go to a directory corresponding to non-volatile memory, like mnt/flash/. In order to do
this, type:
%:~> cd mnt/flash/
5. Put in this directory the executable and any libraries required to execute the program. To
add the current directory to the library path, type:
%:~> export LD_LIBRARY_PATH=.
or, if you have the libraries in another path, type:
%:~> export LD_LIBRARY_PATH=/libraries_path
To transfer libraries and executable programs to the camera, you need to use the put
command:
%:~> put filename
A.4 Executing the program on the camera
1. Open a terminal.
2. Use the Telnet protocol. To do this, type:
%:~> telnet camera_ip_address
3. Go to the directory where you previously placed the executable. If you used the
directory suggested above, type:
%:~> cd ../mnt/flash
4. Give permissions for program execution, for example:
%:~> chmod 750 myprog
where myprog is your executable. Type man chmod in a Linux terminal for more details
on how to use it.
5. Execute the program by typing:
%:~> ./myprog
Note: If you want to access the camera over NAT (Network Address Translation), you need
to configure your router to port forward the FTP and Telnet protocols.
Appendix B
Arduino code
int dirB = 13;
int pwmB = 11;
int brakeB = 8;
int cur_sensorB = A1;
int sen = 0;

int dirA = 12;
int pwmA = 3;
int brakeA = 9;
int cur_sensorA = A0;

int sensor = 2;
unsigned long timeout_time = 500000;
unsigned long EOM_time = 2500;
int cnt = 0;
int gear = 0;

// Interpret instruction from camera
int get_word() {
  static unsigned long t1f;
  static unsigned long t_max;
  int x;
  int ret = 0;
  static int state = 0;
  x = digitalRead(sensor);
  switch (state) {
    case 0: // idle state
      if (x == 1) {
        state = 1;
        t1f = micros() + timeout_time;
        cnt = 1;
        break;
      }
      break;
    case 1: // wait-for-transition from HIGH to LOW
      if (t1f <= micros()) {
        state = 4; // crash
        break;
      }
      if (x == 0) {
        state = 2;
        t_max = micros() + EOM_time;
        break;
      }
      break;
    case 2: // wait-for-transition from LOW to HIGH
      if (micros() > t_max) {
        state = 3;
      }
      if (x == 1) {
        state = 1;
        cnt++;
      }
      break;
    case 3: // EOM state
      ret = cnt;
      cnt = 0;
      if (x == 1) {
        t1f = micros() + timeout_time;
        state = 1;
        cnt = 1;
      }
      break;
    case 4: // crash state
      state = 0;
  }
  return ret;
}

// Send motors commands
void parse_word(int w) {
  if (w == 1) {
    Serial.println("Init");
  }
  if (w == 2) {
    Serial.println("esq");
    digitalWrite(dirB, HIGH);
    analogWrite(pwmB, 180);
  }
  if (w == 3) {
    Serial.println("dir");
    digitalWrite(dirB, LOW);
    analogWrite(pwmB, 180);
  }
  if (w == 4) {
    Serial.println("up");
    gear++;
    analogWrite(pwmB, 0);
  }
  if (w == 5) {
    Serial.println("down");
    gear--;
    analogWrite(pwmB, 0);
  }
  if (w == 6) {
    Serial.println("stop speed");
    gear = 0;
    analogWrite(pwmB, 0);
  }
  if (gear == 1) {
    digitalWrite(dirA, LOW);
    analogWrite(pwmA, 200);
  }
  if (gear == -1) {
    digitalWrite(dirA, HIGH);
    analogWrite(pwmA, 200);
  }
  if (gear == 0) {
    digitalWrite(pwmA, LOW);
  }
}

// Configuring PWM frequency signal
void pwm_config(int freqId) {
  // PWM pins 6 & 5, 9 & 10 or 11 & 3 are controlled by TCCR0B, TCCR1B or TCCR2B
  // freqId 1..5 <=> 32kHz, 4kHz, 500Hz, 125Hz, 31.25Hz
  if (freqId < 1 || freqId > 5) {
    Serial.println("Error in PWM configuration");
    return;
  }
  int prescalerVal = 0x07;  // clear the last 3 bits mask
  TCCR2B &= ~prescalerVal;  // configuring pins 11 and 3
  prescalerVal = freqId;
  TCCR2B |= prescalerVal;
}

void setup() {
  Serial.begin(9600);
  pinMode(sensor, INPUT);
  pinMode(dirA, OUTPUT);
  pinMode(dirB, OUTPUT);
  // Configuring PWM frequency signal
  pwm_config(1);
}

void loop() {
  int w = 0;
  if ((w = get_word())) {
    parse_word(w);
  }
}
Appendix C
Corner extraction and camera calibration
using Bouguet’s Toolbox
First, we need to acquire a sequence of images with the camera, showing a chessboard in a
variety of positions. Then, we execute the toolbox, and the first thing we need to do is load
the images. After that, we start the corner extraction feature. The program will ask for the
number of squares in both directions and for clicks on the 4 outer corners of the chessboard
as presented in the first image. Then, we are asked to enter the size of the chessboard
rectangles along each axis, which are equal if the rectangles are squares. With this information,
the program can infer the positions of all the corners inside the chessboard, assuming no lens
distortion. We have to repeat the process of clicking on the 4 outer corners of the chessboard
for every image, as shown in fig. C.1(a) to C.1(e) for one of the images. If the program fails
to estimate the corner positions, we have to guess the value of the lens distortion (typically
between -1 and 1) until the inferred corners overlap the true ones on the image plane, as
shown in fig. C.1(f). In the end, we execute the calibration feature, obtaining values for the
focal length, principal point and skew.
Figure C.1: (a) One of the images used in calibration. (b) First corner to be clicked. (c) Second corner to be clicked. (d) Third corner to be clicked. (e) Fourth corner to be clicked and guess of square corner positions made by the program. (f) Final image with corrected square corner positions.
Bibliography
[1] Arduino. Arduino motor shield.
http://arduino.cc/en/Main/ArduinoMotorShieldR3.
[2] Antonis Argyros, Kostas Bekris, and Stelios Orphanoudakis. Robot homing based on
panoramic vision. FORTH-ICS, 2001.
[3] P. A. Beardsley, I. D. Reid, A. Zisserman, and D. W. Murray. Active visual navigation
using non-metric structure. IEEE Computer Society Press, pages 58–65, 1995.
[4] J. Borenstein and Y. Koren. A mobile platform for nursing robots. IEEE Transactions on
Industrial Electronics, 32:158–165, 1985.
[5] J. Borenstein and Y. Koren. Tele-autonomous guidance for mobile robots. IEEE Transactions
on Systems, Man, and Cybernetics, Special Issue on Unmanned Systems and Vehicles,
20:1437–1443, 1990.
[6] J. Borenstein and Y. Koren. Histogramic in-motion mapping for mobile robot obstacle
avoidance. IEEE Journal of Robotics and Automation, 7:535–539, 1991.
[7] Jean-Yves Bouguet. Camera calibration toolbox for Matlab.
http://www.vision.caltech.edu/bouguetj.
[8] Jean-Yves Bouguet and Pietro Perona. Visual navigation using a single camera. ICCV,
1995.
[9] J. Civera, O. Grasa, A. Davison, and J. Montiel. 1-Point RANSAC for EKF filtering:
Application to real-time structure from motion and visual odometry. J. Field Robot.,
27(5):609–631, 2010.
[10] James Crosetto, Jeremy Ellison, and Seth Schwiethale. Control an RC car from a computer
over a wireless network. https://code.google.com/p/networkrccar.
[11] Andrew J. Davison and David W. Murray. Mobile robot localisation using active vision.
Proc 5th European Conference on Computer Vision, pages 809–825, 1998.
[12] Mario Filipe Florêncio and Pedro Emanuel Agostinho. Parqueamento Automóvel
Automático. Instituto Superior Técnico, 2005.
[13] J. Gaspar, N. Winters, and J. Santos-Victor. Vision based-navigation and environmental
representations with an omni-directional camera. IEEE Transactions on Robotics and
Automation, 16:890–898, 2000.
[14] C. G. Harris and J. M. Pike. 3D positional integration from image sequences. Proc 3rd
Alvey Vision Conf, Cambridge UK, pages 233–236, 1987.
[15] Christoph Hürzeler, Jean-Claude Metzger, Andreas Nussberger, Florian Hänni, Adrian
Murbach, Christian Bermes, Samir Bouabdallah, Dario Schafroth, and Roland Siegwart.
Teleoperation assistance for an indoor quadrotor helicopter. Swiss Federal Institute of
Technology, pages 464–471, 2008.
[16] Jennifer S. Kay. STRIPE: Remote driving using limited image data. Carnegie Mellon
University, Pittsburgh, 1997.
[17] N. Leite, A. Del Bue, and J. Gaspar. Calibrating a network of cameras based on visual
odometry. In Proc. of IV Jornadas de Engenharia Electrónica e Telecomunicações e de
Computadores, Portugal, pages 174–179, 2008.
[18] Ralf Möller, Dimitrios Lambrinos, Rolf Pfeifer, and Rüdiger Wehner. Insect strategies of
visual homing in mobile robots. AI Lab, Dept. of Computer Science, 1998.
[19] Nuno Ribeiro, José Gaspar, and Ricardo Ferreira. Homing a teleoperated car using
monocular SLAM. RecPad, 2013.
[20] José Pedro Sucena Paiva.Redes de Energia Eléctrica. IST Press, 2007.
[21] Sébastien Grange, Terrence Fong, and Charles Baur. Effective vehicle teleoperation on
the World Wide Web. IEEE International Conference on Robotics and Automation, 2000.
[22] Bruno Siciliano, Lorenzo Sciavicco, Luigi Villani, and Giuseppe Oriolo. Robotics:
Modelling, Planning and Control. Springer, 2008.
[23] Sebastian Thrun and Arno Bücken. Learning maps for indoor mobile robot navigation.
CMU-CS, 1996.
[24] Peter Young. The leapfrog method and other symplectic algorithms for integrating
Newton's laws of motion. http://young.physics.ucsc.edu/115/leapfrog.pdf, 2013.