

[IEEE 2013 International Conference on Individual and Collective Behaviors in Robotics (ICBR) - Sousse, Tunisia, 15-17 December 2013]

An FPGA Based Platform For Real Time Robot Localization

Agnès Ghorbel, Mohamed Jallouli and Nader Ben Amor
Computer and Embedded Systems Laboratory, Ecole Nationale d'Ingénieurs de Sfax
Université de Sfax
[email protected]

Lobna Amouri
Control and Energy Management Laboratory, Ecole Nationale d'Ingénieurs de Sfax
Université de Sfax
[email protected]

Abstract—This paper presents a hardware architecture implementing a mobile robot localization approach that combines encoder measurements with absolute localization from camera tracking images. The technique has been developed and implemented for moving the robot from an initial position to a desired position, taking into account the tradeoff between complexity (reducing computation time) and performance (reducing traveling time). The proposed method is implemented on an FPGA (Field Programmable Gate Array) board through a mixed hardware/software implementation in order to ensure fluid robot navigation. Experimental tests on a Xilinx Virtex 5 ML507 proved the effectiveness of the proposed architecture: we obtained an 85% reduction in execution time compared to the initial MATLAB version developed on a PC platform.

Index Terms– Robot localization, HW/SW implementation, autonomous navigation, Xilinx FPGA, camera tracking images.

I. INTRODUCTION

In recent years, the most advanced digital technologies have been introduced into robot control applications: FPGAs and DSPs. The emergence of the FPGA provides control system designers with a flexible and realistic approach to delivering high-quality designs in a shorter time frame and at lower cost than ever before. An FPGA makes it possible to define a user-programmable hardware subsystem that can easily be reloaded and modified on-line. The integration of an FPGA can thus handle many computing-intensive and time-critical tasks such as information acquisition and data processing [1]; moreover, its internal structure favors the development of parallel architectures well suited to robot control applications.

Several FPGA-based solutions have been reported in the area of robotics. A fuzzy logic controller for robotics was implemented on an FPGA in [2]. A neural network controller on FPGA for a humanoid robot arm was implemented in [3]. An embedded robust adaptive controller using an FPGA for mobile robotics was proposed in [4] and [5], and a fuzzy logic and neural network based controller was applied in [6]. Another application, in [7], interfaces all the modules used by the robot to detect obstacles and control its speed: actuators, sensors and wireless transmissions.

The undeniable interest of mobile robotics is that it considerably boosts our knowledge of navigation and localization for autonomous systems. It is becoming increasingly important to be able to accurately determine the position of a robot in its environment. However, we cannot expect to precisely locate a robot using only on-board dead-reckoning sensor data (odometers, accelerometers, gyroscopes, ...) because of drift, limited sensor accuracy and modeling uncertainties. Each position estimate builds on all previous estimates, which means that errors are also accumulated and grow over time.

To compensate for these drawbacks, substantial improvement is obtained by applying a multi-sensor data fusion approach. In [8], Konolige proposed a data fusion method that corrects visual odometry measurements with an inertial measurement unit using an Extended Kalman Filter (EKF). Another multi-sensor data fusion approach is presented in [9], which uses an indirect information filter to fuse inertial measurements with relative translation and rotation measurements from 3D visual odometry and 3D leg odometry. Another method, presented in [10], uses a hybrid sensing system for mobile robot localization in large-scale indoor environments by switching between omni-directional vision and laser scanning. In [11], a self-adaptive Monte Carlo localization algorithm using range finders is presented. The shortcomings of these approaches are either the short range of the sensors used or the need to know the starting position of the robot. Because such algorithms are computationally intensive, they are commonly implemented on a Personal Computer (PC) platform. To enable small robotic platforms with real-time tracking and rapid application development, a specific architecture is required. Thus, in our approach, we present a complete platform for robot navigation, with a robot tracking method on a Xilinx FPGA embedded system [12]. This complete navigation-localization platform comprises a camera fixed on the ceiling of the environment for external tracking and a camera mounted on the robot itself.

The robot localization approach developed in this paper uses two different sensors. First, encoders ensure relative positioning through the kinematic model, and a fuzzy controller drives the robot towards a target. Second, a camera placed on a plane parallel to the robot's navigation plane provides absolute localization, reducing the encoder position errors through visual retiming. An advantage of this absolute system is that it automatically generates the initial position of the robot.

In this paper, we focus on implementing the absolute localization technique (based on camera data) on the FPGA board. Section 2 presents the system used for absolute robot localization. Section 3 details the steps required by this robotic application and discusses the obtained results. Finally, Section 4 presents the techniques used to reduce processing time.

International Conference on Individual and Collective Behaviors in Robotics
978-1-4799-2813-2/13/$31.00 ©2013 IEEE

II. SYSTEM OVERVIEW

We propose to implement the absolute localization technique (provided by the camera) on an embedded board through a mixed hardware/software implementation to ensure fluid robot navigation. The proposed platform is shown in figure 1.

Fig. 1. The proposed platform for robot navigation

The board used is a Xilinx Virtex 5 ML507 FPGA, which is equipped with a hard-core ”PowerPC 440” processor running at up to 550 MHz. A second, soft-core CPU such as the ”MicroBlaze” can also be instantiated from the board development tools, at up to 125 MHz. The project aims to design a real-time video capture and image processing system for robot localization, with an average processing time of 30 ms per image for a robot speed of 40 mm/s.

III. THE ABSOLUTE LOCALIZATION’S STEPS

A. Camera acquisition and display module

In this section, we detail the architecture used for video capture and its display on an LCD screen. This design acquires video through the VGA input of the ML507. The effective resolution of the video input is 640 × 480 pixels. The input can be provided either by a webcam or by a video transmitter/converter if the source is a camcorder or a camera with a video output.

The camera acquisition module runs on the MicroBlaze processor and uses two IPs, as shown in the architecture described in figure 2.

These two IPs are:

• The VGA input block: this block consists of an AD9980 analog interface, which converts the received signal into YUV signals, and a VFBC (Video Frame Buffer Controller), which allows the user to read and write data regardless of the size or organization of the memory.

Fig. 2. The block diagram architecture for camera acquisition

• The XPS TFT block, which displays the result it receives on a DVI or VGA interface.

After generating the bitstream, Xilinx Platform Studio (XPS) lets us summarize the FPGA resources used. Table I below reports them.

TABLE I. SYNTHESIS RESULTS

Device Utilization Summary
Slice Logic Utilization      Used    Available  Utilization
Number of Slice Registers    9,301   44,800     20%
Number of Slice LUTs         8,658   44,800     19%
Number used as logic         7,930   44,800     17%
Number of occupied Slices    4,803   11,200     42%
Total Memory used (KB)       1,782   5,328      33%

We can notice that, depending on the metric, between 17% and 42% of the available FPGA resources are consumed, which leaves room to host other applications in our architecture.

Figure 3 shows the validation of the video acquisition and display module, in which a rate of 25 frames per second is achieved.

Fig. 3. Test of video acquisition

B. The proposed absolute localization’s method

1) Hardware specification: The main objective is to ensure continuous real-time robot navigation. We therefore decided to execute our processing (the absolute localization algorithm) on the PowerPC processor in order to save execution time, since the PowerPC runs at a higher frequency than the MicroBlaze. Hence, we define a dual-core architecture as described in figure 4.

Fig. 4. The block diagram architecture of the dual core.

This figure shows that the two processors share common resources. Both have access to the external DDR memory via the MPMC module interface: one writes the video stream pixels into it, while the other processes them.

After adding all the components needed to achieve the localization purposes and obtain an adequate multiprocessor architecture, we can generate our bitstream. XPS summarizes the FPGA resources used, as presented in table II.

TABLE II. SYNTHESIS RESULTS

Device Utilization Summary
Slice Logic Utilization      Used     Available  Utilization
Number of Slice Registers    10,871   44,800     24%
Number of Slice LUTs         9,929    44,800     22%
Number used as logic         9,081    44,800     20%
Number of occupied Slices    5,523    11,200     49%
Total Memory used (KB)       2,610    5,328      48%

2) The adopted mobile robot: The proposed method has been implemented and tested on the Khepera II mobile robot (see figure 5). This robot was designed at the Swiss Federal Institute of Technology in Lausanne. It is widely used for research and teaching because it allows real-world testing of algorithms developed in simulation. The Khepera II is circular, 55 mm in diameter and 30 mm in height. It weighs only 80 g, and its small size allows experiments to be performed in a small work area.

Fig. 5. Basic module of the robot Khepera II

Communication between the FPGA and the robot takes place over a serial connection. The FPGA is connected to the interface module using a standard RS-232 connection, and the interface module converts this signal into an output serial signal S to communicate with the robot.

3) The absolute localization algorithm: The camera provides a 640 × 480 color image, which is processed by an image processing program, written in C, to determine the position of the robot.

The approach used for robot localization is based on three steps, as shown in figure 6:

Fig. 6. The adopted approach

• Step 1: acquisition of a reference image containing four landmarks, as shown in figure 7(a).

• Step 2: acquisition of images during the robot's motion to determine the robot position in the image (in pixels), as shown in figure 7(b).

• Step 3: estimation of the robot position on the platform (in centimeters), as shown in figure 8.

(a) The reference image (b) The robot position image

Fig. 7. The original input images

Identifying the four markers allows us to define a Cartesian coordinate system (figure 8) in which the real robot position is calculated. This position is obtained as a simple difference between the pixel coordinates of the robot and those of the reference system.


Fig. 8. The positioning architecture adopted

The algorithms used to achieve this are, in order: grayscaling, the Sobel filter, thresholding, erosion, and the Hough Circle Transform. The results of these processing steps are shown in figure 9.

(a) The color image (b) The grayscale image

(c) The edged image (d) The thresholded image

(e) The eroded image (f) The robot position image

(g) The reference output image

Fig. 9. The obtained results on applying image processing algorithms

After applying this processing, we have succeeded in locating, on the one hand, the four landmarks defining the robot's reference system (see figure 9(g)) and, on the other hand, the position of the robot (see figure 9(f)). The first step converts the color image to a grayscale image (figure 9(b)) in which the red, green and blue components contribute with equal intensity in RGB space. The second step transforms the grayscale image into a black image, except at the points where a contour is detected, which are marked in white (figure 9(c)). After that, thresholding reduces the large quantity of information (the white lines) in figure 9(c) while conserving in figure 9(d) nearly all the pertinent information (essential to its comprehension) needed to separate objects from the background. To isolate the robot from the four markers (figure 9(e)), we apply erosion to the binary image. Finally, the last step locates the four landmarks and the robot by applying the Hough Circle Transform algorithm.

4) Experimental result: robot position estimation: The aim of this experiment is to localize the robot simultaneously by both methods (encoders and camera). In this experiment, the robot must move from an initial point (xo = 0 mm, yo = 0 mm) to a target point defined by (xT = 250 mm, yT = 250 mm). Note that the theoretical trajectory is the trajectory the robot would cover if the encoders were error-free, while the real trajectory is the trajectory tracked by the camera, which determines the robot's real position.

Fig. 10. Robot localization using both encoders and camera measurement.

Figure 10 shows that the robot was not able to reach the desired target, and that the positions provided by the camera are much closer to the real positions than those obtained with the encoders. This is due to the incremental nature of the encoder measurements and to wheel slippage. We thus have two localization methods: one is easy to implement but imprecise, while the other is more complex but precise. To further improve the localization task, a combination of these two methods is suggested, using retiming points to correct the encoder measurements with the visual system measurements. Since the absolute localization algorithm is computationally intensive and complex, and the navigation of the robot must be continuous, we decided to reduce the algorithm's processing time.


IV. TECHNIQUES USED TO ACCELERATE PROCESSING

A. Purely Software Implementation

To measure the performance of our application, we inserted the XPS timer module attached to the PLB bus and measured a processing time of 800 milliseconds. This is far too long to ensure real-time navigation (the robot has to be stopped), because the processing time for fluid robot navigation must be less than 0.48 seconds. We therefore decided to enhance the system performance and reduce its execution time using several techniques.

B. Tools for accelerating time processing

To improve the performance of our application, we use several techniques:

1) The use of the APU floating-point unit: The Virtex-5 Auxiliary Processor Unit (APU) floating-point unit is an enhanced design for the PowerPC 440 CPU in the Virtex-5 FXT FPGA family. It supports floating-point arithmetic operations in single or double precision. The FPU is tightly coupled to the PowerPC processor core through the APU interface. Using these native PowerPC floating-point instructions lets us achieve typical speedups of 8x over software emulation. Adding this accelerator brings the execution time down to 97 milliseconds.

2) The creation of a hardware accelerator: To evaluate the performance of our technique and determine which portions of code to optimize, we measured the execution time of each function. The breakdown of execution times is summarized in table III.

TABLE III. PROFILING RESULTS

Function                 Time processing (ms)
Edge detection           67.16
Thresholding             3.35
Erosion                  12.56
Circle Hough Transform   13.86

We notice that the edge detection function monopolizes 70% of the processing time, since it contains complex operations such as equation (1):

|Mag| = √((Eh)² + (Ev)²)    (1)

where:
Mag: the gradient magnitude,
Eh: the intensity gradient along the horizontal direction,
Ev: the intensity gradient along the vertical direction.

We therefore designed a hardware accelerator, the ”SQUARE” block, which moves the computation of this function into hardware in order to accelerate execution. There are different ways to connect this hardware block to the PowerPC; they are discussed in the following part.

• Attaching the SQUARE block via the PLB bus

Designing a custom processor peripheral using Xilinx EDK showed that it is quite simple to create custom processor peripherals attached to the PLB bus (figure 11), but a bus connection is not the best option for this application. In fact, each bus transaction between the PowerPC processor, the hardware block and the register storing the data requires five cycles. This method would be a functional solution, but hardly an efficient one.

Fig. 11. Attaching the ”SQUARE” block to the PLB bus

• Attaching the SQUARE block via the APU interface

The Auxiliary Processor Unit (APU) controller allows us to extend the native PowerPC instruction set with custom instructions, as shown in figure 12. These instructions are executed by an FPGA Fabric Co-processor Module (FCM).

Fig. 12. Attaching the ”SQUARE” block to the APU interface

The main reason for using the APU rather than connecting hardware blocks to the CPU through the PLB bus is the superior bandwidth and lower latency between the PowerPC and the APU/FCM. An additional advantage is that the APU is independent of the CPU-to-peripheral interface and therefore does not add an extra load to the PLB bus, which the system needs for fast peripheral access. With the addition of this hardware accelerator, the execution time drops from 97 milliseconds to 30 milliseconds.


C. Processing time comparison

Experimental tests on the Xilinx Virtex 5 ML507 proved the effectiveness of the proposed architecture: we obtained an 85% reduction in execution time (see table IV) compared to the initial MATLAB version developed on a PC platform.

TABLE IV. COMPARISON RESULTS

                       PC platform (Matlab)   FPGA platform (C)
Time processing (ms)   480                    30

V. CONCLUSION

In this work, we presented a complete platform for the study of robot localization and navigation. This platform allows the rapid design of various efficient robot navigation applications. We showed an improvement of an absolute localization technique based on hardware acceleration using FPGA resources: we deployed our application on an architecture based on MicroBlaze and PowerPC processors and optimized system performance by integrating a hardware accelerator and a floating-point co-processor, in order to reduce execution time and provide fluid robot navigation. In ongoing work, we plan to focus on a multi-sensor fusion approach to improve environment perception. A real-life scenario would involve a much larger area: the use of the camera is restricted to indoor environments, where it can be fixed, whereas fixing it in an outdoor environment would be much harder.

REFERENCES

[1] P. He, M. H. Jin, L. Yang, R. Wei, Y. W. Liu, H. G. Cai, H. Liu, N. Seitz, J. Butterfass, G. Hirzinger, ”High Performance DSP/FPGA Controller for Implementation of HIT/DLR Dexterous Robot Hand”, in Proc. IEEE International Conference on Robotics and Automation, New Orleans, April 2004, pp. 3397-3402.

[2] S. Sanchez-Solano, AJ. Cabrera, I. Baturone, FJ. Moreno-Velo, M.Brox, ”FPGA Implementation of Embedded Fuzzy Controllers forRobotic Applications”, IEEE Transactions on Industrial Electronics, vol.54, no. 4, 2007, pp. 1937-1945.

[3] JS. Kim, S. Jung, ”Hardware implementation of a neural network con-troller on FPGA for a humanoid robot arm”. in Proc. IEEE InternationalConference on Advanced Intelligent Mechatronics, Xian, 2008, pp.1164-1169.

[4] HC. Huang, CC. Tsai, ”FPGA implementation of an embedded robustadaptive controller for autonomous omnidirectional mobile platform”,IEEE Transactions on Industrial Electronics, vol. 56, no. 5, 2009, pp.1604-1616.

[5] L. Vachhani, AD. Mahindrakar, K. Sridharan, ”Mobile robotic naviga-tion through a hardware-efficient implementation for control-law-basedconstruction of generalized voronoi diagram”, IEEE/ASME Transac-tions on Mechatronics, vol. 16, no. 6, 2011, pp. 1083-1095.

[6] W. Tsui, F. Karray, I. Song, M. Masmoudi, ”Soft-computing-based embedded design of an intelligent wall/lane-following vehicle”,IEEE/ASME Transc. on Mechatronics, vol. 13, no. 1, 2008, pp. 125-135.

[7] S. Kale, S. S. Shriramwar, ”FPGA-based Controller for a mobile robot”,proceeding in (IJCSIS) International Journal of Computer Science andInformation Security, USA, July 2009, Vol. 3, No. 1, p. 5.

[8] K. Konolige, M. Agrawal, J. Sola, ”Large scale visual odometry for rough terrain”, in Proc. International Symposium on Robotics Research, November 2007.

[9] A. Chilian, H. Hirschmuller, M. Gorner, ”Multisensor data fusion forrobust pose estimation of a six-Legged walking robot”. Published in theIEEE/RSJ International Conference on Intelligent Robots and Systems.San Francisco, September 2011, pp. 25-30.

[10] Y. Zhuang, K. Wang, W. Wang, H. Huosheng, ”A hybrid sensingapproach to mobile robot localization in complex indoor environments”.International Journal of Robotics and Automation, vol. 27, no. 2, 2012.

[11] L. Zhang, R. Zapata and P. Lépinay, ”Self-adaptive Monte Carlo Localization for Mobile Robots Using Range Finders”, Robotica, vol. 30, no. 2, 2012, pp. 229-244.

[12] http://www.xilinx.com/products/boards-and-kits/HW-V5-ML507-UNI-G.htm
