

ARTICLE

International Journal of Advanced Robotic Systems

A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics

Regular Paper

Sobers Lourdu Xavier Francis 1*, Sreenatha G. Anavatti 1, Matthew Garratt 1 and Hyungbo Shim 2

1 University of New South Wales, Canberra, Australia
2 Seoul National University, Seoul, Korea
*Corresponding author(s) E-mail: [email protected]

Received 03 February 2015; Accepted 25 August 2015

DOI: 10.5772/61348

© 2015 Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, avoiding obstacles without human intervention. Applying a ToF camera to an AGV is a suitable approach for autonomous robotics because the camera provides three-dimensional (3D) information at a low computational cost. After calibration and ground testing, the camera is mounted on and integrated with the Pioneer mobile robot and is used to extract information about obstacles. The workspace is a two-dimensional (2D) world map divided into grid cells, and the collision-free path found by the graph search algorithm is a sequence of cells that the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of suitable cell size; the camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more suitable camera mounting angle is determined by analysing the camera's performance, including pixel detection, the detection rate, the maximum perceived distance and infrared (IR) scattering with respect to the ground surface; the recommended mounting angle is half the vertical field-of-view (FoV) of the PMD camera. A series of static and moving tests conducted on the AGV to verify correct sensor operation shows that applying the ToF camera to the AGV is not straightforward. Finally, to stabilize the moving PMD camera and to detect obstacles, a feature-tracking algorithm and the scene flow technique are implemented in a real-time experiment.

Keywords 3D ToF Camera, PMD Camera, Pioneer Mobile Robots, Path-planning

1. Introduction

As an AGV must be able to adequately sense its surroundings in order to operate in unknown environments and execute autonomous tasks, vision sensors provide the information it needs to perceive and avoid obstacles and to accomplish autonomous path-planning. Hence, the perception sensor becomes the key sensory device for perceiving the environment in intelligent mobile robots, and the perception objective depends on three basic system qualities, namely rapidity, compactness and robustness [1].


Over the last few decades, many different types of sensors [2] have been developed in the context of AGV path-planning to avoid obstacles [3], such as infrared sensors [4], ultrasonic sensors, sonar [5], LADAR [6], laser rangefinders [7], camera data fused with radar [8] and stereo cameras with a projector [9]. These sensors' data, together with data-processing techniques, are used to update the positions and directions of the vertices of obstacles. However, such sensor systems cannot readily provide the necessary information about the surroundings.

As the world's attention increasingly focuses on automation in every field, extracting 3D information about an obstacle has become a topical and challenging task. As such, an appropriate sensor is required to obtain 3D information with small dimensions, low power consumption and real-time performance. The main limitation of a 2D camera is that, as it is the projection of 3D information onto a 2D image plane, it cannot provide complete information about the entire scene. Thus, the processing of these images depends upon the viewpoint rather than the actual information about the object. In order to overcome this drawback, the use of 3D information has emerged. In general, researchers use a setup consisting of a charge-coupled device (CCD) camera and a light projector in order to obtain a 3D image, such as that of the 3D visualization of rock [9].

A 3D sensor based on photonic mixer device (PMD) technology is selected for our work; it delivers range and intensity data at a low computational cost in a compact and economical design with a high frame rate. This camera system delivers the absolute geometrical dimensions of obstacles without depending upon the object surface, distance, rotation or illumination; hence, it is rotation-, translation- and illumination-invariant [10]. Nowadays, RGBD cameras (e.g., Kinect, Asus Xtion, Carmine) are widely used in object recognition and mobile robotics applications. However, these RGBD cameras cannot operate in outdoor environments [11]. A PMD camera with a working range of 0.2 - 7 metres provides better depth precision than the Kinect (0.7 - 3.5 metres) and Carmine (0.35 - 1.4 metres).

However, the PMD camera is constrained by its limited FoV, and the camera mounting must be tilted downwards to obtain a greater view of the ground. The specific mounting angle is explained in this paper; nonetheless, it was unexpected that light incident at different angles to the ground would result in significant receiver loss and distortion of the distance measurements due to scattering. The camera is mounted on the front of the robot through brackets that allow variable mounting angles at a static angle of tilt; this configuration is intended to give the best compromise between perceiving the ground and perceiving the scene straight ahead as a function of the camera tilt angle ψ. Because the camera is mounted above the robot, closer obstacles must be flagged so as to reduce the blind spot in front of the robot. Because of these considerations, a more optimal camera angle is adopted. To ensure that the top-most pixels are observed directly ahead of the robot, thereby giving adequate perception of obstacles while maximizing the ground plane perception, various analyses of the camera's performance are carried out in this paper.

Later, the parametric calibration of the PMD cameras is performed by obtaining the camera parameters needed to derive the true information of the imaging scene. The imaging technology of the PMD camera is such that the camera pixels provide a complete image of the scene, with each pixel detecting range data stored in a 2D array; these data are utilized and interpreted in this paper to extract information about the surroundings. A few experiments are carried out to measure the camera's parameters and the distance errors with respect to each pixel.

To determine how the camera's behaviour differs from that claimed in the manufacturer's data sheet, white-surface and grass-surface tests are conducted with the PMD camera. This also provides a means to compare the camera's performance on a flat white surface with that on a flat grassy surface. Later, the camera data are synchronized with the instantaneous orientation and position of the platform (and thus the camera), and the resulting Cartesian coordinates are translated into grid squares. The ground region (the extremities that the camera can see) is thus reconstructed into a grid of suitable cell size, which is input to the path-planning algorithm.

During real-time experimentation, the grid-based Efficient D* Lite path-planning algorithm and the scene flow technique were programmed on the Pioneer onboard computers using the OpenCV and OpenGL libraries. In order to compensate for the ego-motion of the PMD camera, which is aligned with the AGV coordinates, a feature detection algorithm using goodFeaturesToTrack from the OpenCV library is adopted, as sketched below.
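As an illustration of this step, the following is a minimal sketch, not the authors' exact implementation, of detecting corners with OpenCV's goodFeaturesToTrack and tracking them between consecutive frames with pyramidal Lucas-Kanade optical flow; the function name, the parameter values and the assumption of 8-bit greyscale input (e.g., rescaled PMD intensity frames) are ours.

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, max_corners=200):
    """Detect corners in the previous frame and track them into the current
    frame with pyramidal Lucas-Kanade optical flow. The matched point pairs
    can then be used to estimate the camera's ego-motion."""
    # Both inputs are assumed to be 8-bit greyscale images.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return np.empty((0, 2)), np.empty((0, 2))
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, corners, None,
        winSize=(15, 15), maxLevel=2)
    good = status.ravel() == 1
    return corners[good].reshape(-1, 2), next_pts[good].reshape(-1, 2)
```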

The paper is organized as follows: a brief comparison of the 3D sensors and their fundamental principles is presented in Section II. In Section III, the calibration of the PMD camera is performed by parametric calibration. Section IV presents the manipulation of the PMD camera data. In Section V, several standardized tests are devised to characterize the PMD camera, and Section VI describes the experimental results.

2. ToF-based 3D Cameras - state-of-the-art

Nowadays, 3D data are required in the automation industries for analysing the visible space/environment, and the rapid acquisition of 3D data by a robotic system is required for navigation and control applications. New, affordable 3D cameras have been successfully developed using the ToF principle to resemble LIDAR scanners. In a ToF camera unit, a modulated light pulse is transmitted by the illumination source and the target distance is measured from the time taken by the pulse to


reflect from the target back to the receiving unit. PMD Technologies has developed 3D sensors using the ToF principle, which provide for a wide range of field applications with high integration and cost-effective production [12].

ToF cameras do not suffer from missing texture in the scene or bad lighting conditions, are computationally less expensive than stereo vision systems and - compared with laser scanners - have higher frame rates and more compact sensors, advantages which make them ideally suited for 3D perception and motion reconstruction [13]. The following advantages of ToF cameras are found in the literature:

• Simplicity: compared with 3D vision systems, a ToF-based system is very simple and compact, consisting of no moving parts and having built-in illumination adjacent to its lens.

• Efficiency: only a small amount of processing power is required to extract distance information using a ToF camera.

• Speed: in contrast to laser scanners that move and measure point-by-point, ToF cameras measure a complete scene with one shot at up to 100 frames-per-second (fps), much faster than their laser alternatives.

In addition, ToF cameras have been applied in robotic applications for obstacle avoidance [14, 15]. There are many other applications in various fields [16] that have gained substantial research interest following the advent of the range sensor, such as robotic and machine vision in the field of mobile robotics search and rescue [17, 18], path-planning for manipulators [19], the acquisition of 3D scene geometry [20], 3D sensing for automated vehicle guidance and safety systems, wheelchair assistance [21], outdoor surveillance [22], simultaneous localization and mapping (SLAM) [23], map building [24], medical respiratory motion detection [25], robot navigation [26], semantic scene analysis [27], mixed/augmented reality [28], gesture recognition [29], markerless human motion tracking [30], human body tracking and activity recognition [31], 3D reconstruction [32], domestic cleaning tasks [33] and human-machine interaction [34, 35].

The authors in [36] used stereo cameras to identify the 3D orientation of the object, and ToF cameras have been used for 3D object scanning, employing different approaches such as passive image-based and super-resolution methods with probabilistic scan alignment [37]. The 3D range camera obtains range data and locates the positions of objects in its FoV [38], while active sensing technologies have been used as an active safety system for construction applications and accident avoidance [39].

The ToF camera has been used to detect obstacles of standard size, and it can be used for obstacle and travel-path detection applications for the blind and visually impaired by combining 3D range data with stereo audio feedback [21], using an algorithm that segments obstacles according to intensity and the 3D structure of the range data. Gesture recognition has been performed based on shape contexts and simple binary matching [40], whereby motion information is extracted by matching the difference in the range data of different image frames. The measurement of shapes and deformations of metal objects and structures using ToF range information and heterodyne imaging is discussed in [41].

2.1 The Basic ToF Principle

A single pixel consists of a photo-sensitive element (e.g., a photodiode) which converts incoming light into current. The distance between the camera and an object is determined by the ToF principle, and the time taken by the light to travel from the illumination unit to the receiver is directly proportional to the distance travelled by the light [42], with the delay time tD given by

t_D = \frac{2 D_{obj}}{c_o}

where Dobj is the object distance in metres and co is the velocity of light in m/s.

The pulse width of the illumination determines the maximum range that the camera can handle, which can be determined by

D_{max} = \frac{1}{2} \times c_o \times t

The distance between the sensor and the object is half the total distance travelled by the radiation. The two different distance measurement methods described by T. Kahlmann et al. [42] are the pulse run time and the phase shift determination.

In the ToF camera, the distance between the camera and the obstacle is calculated by the autocorrelation function of the electrical and optical signals, which is analysed by a phase-shift algorithm (Figure 1). Using four samples, A1, A2, A3 and A4 (each shifted by 90 degrees), the phases of the received signals - which are proportional to the distance - can be calculated using the following equation [43].

The phase shift, φ, is

\varphi = \arctan\left( \frac{A_1 - A_3}{A_2 - A_4} \right)    (1)

In addition to the phase shift of the signal, the amplitude ar of the received signal and the offset br, which represents the greyscale value of each pixel, can be determined from the samples by


a_r = \frac{\sqrt{(A_1 - A_3)^2 + (A_2 - A_4)^2}}{2}    (2)

b_r = \frac{A_1 + A_2 + A_3 + A_4}{4}    (3)

The depth data are calculated from the phase information φ as

d_r = \frac{c_o}{2 f_{mod}} \times \frac{\varphi}{2\pi}    (4)

where fmod is the modulation frequency and co is the speed of light.
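The following is a minimal numerical sketch of Equations (1)-(4), assuming that the four phase samples A1-A4 are available per pixel as NumPy arrays; the 23 MHz modulation frequency is taken from Table 1.

```python
import numpy as np

C_O = 3.0e8      # speed of light [m/s]
F_MOD = 23e6     # modulation frequency from Table 1 [Hz]

def samples_to_depth(a1, a2, a3, a4, f_mod=F_MOD):
    """Convert the four 90-degree phase samples of each pixel into
    phase, amplitude, offset and depth, following Equations (1)-(4)."""
    phase = np.mod(np.arctan2(a1 - a3, a2 - a4), 2 * np.pi)   # Eq. (1)
    amplitude = np.sqrt((a1 - a3) ** 2 + (a2 - a4) ** 2) / 2  # Eq. (2)
    offset = (a1 + a2 + a3 + a4) / 4                          # Eq. (3), greyscale value
    depth = C_O / (2 * f_mod) * phase / (2 * np.pi)           # Eq. (4)
    return phase, amplitude, offset, depth
```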

The PMD CamCube 2.0 and PMD S3 cameras are shown in Figures 2 and 3 and their specifications are listed in Tables 1 and 2.


Figure 1. Phase shift distance measurement principle: optical sinusoidally modulated input signal, sampled with four sampling points per modulation period [11]


Figure 2. PMD[vision] CamCube 2.0 camera


Figure 3. PMD[vision] S3 Camera

Table 1. PMD CamCube 2.0 specification

Integration time: 200 ms
Modulation frequency: 23 MHz
Wavelength: 13 m
Pixel resolution: 200 [H] x 200 [V]
FoV V/H [°]: 40/40
Internal illumination [nm]: 870

Table 2. PMD[vision] S3 specification

Integration time: 200 ms
Modulation frequency: 23 MHz
Pixel resolution: 48 [H] x 64 [V]
FoV H/V [°]: 40/30
Internal illumination [nm]: 850

3. Parametric Calibration

The PMD CamCube 2.0 and PMD S3 cameras, developed by PMD Technologies Inc. and based on the ToF principle, are used. The PMD CamCube 2.0 camera's receiver optics has 200 by 200 pixels with an FoV of 40(H)/40(V) degrees. The PMD S3 has an FoV of 40(H)/30(V) degrees with 64 horizontal and 48 vertical pixels, and thus in total is able to return 3,072 distance measurements in a rectangular array.

In this section, the calibration of the PMD cameras is performed by a parametric calibration procedure that provides the necessary camera parameters. These parameters can be used to derive the true information of the imaging scene. The procedure also provides the translational vector and rotational matrix in 3D space for the camera, which can be used to find the true information about the coordinate positions of the camera and the imaging scenario.

Basically, the intrinsic parameters of the camera provide the transformation between the image coordinates and the pixel coordinates of the camera [44, 45]. The intrinsic matrix is given by


\begin{bmatrix} f_{cam}/S_x & 0 & C_x \\ 0 & f_{cam}/S_y & C_y \\ 0 & 0 & 1 \end{bmatrix}    (5)

where fcam is the focal length of the camera, Sx and Sy are the pixel sizes of the camera in the x and y axes respectively, and Cx and Cy are the principal points of the sensor array. To obtain the camera parameters, an experimental setup is developed in which the calibration of the PMD CamCube 2.0 and PMD S3 cameras is performed by capturing intensity and depth images of a chequerboard at different orientations, as shown in Figures 4 and 5. The chequerboard has black and white squares of known dimensions, printed on 21 cm x 29.7 cm A4 paper. The OpenCV library 1 is used to estimate the intrinsic parameters as well as the radial distortion on a laptop running Debian 6.0.
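As a rough illustration of this procedure (not the exact code used by the authors), the sketch below runs OpenCV's chequerboard calibration on a set of intensity images; the file pattern, the (9 x 6) corner count and the 27 mm spacing are assumptions matching Figure 4.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)    # inner corners (x, y), as in Figure 4
SQUARE = 0.027      # corner spacing in metres (27 mm)

# 3D positions of the chequerboard corners in the board's own frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, size = [], [], None
for fname in glob.glob("pmd_intensity_*.png"):      # assumed file names
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, PATTERN)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)
        size = img.shape[::-1]

# calibrateCamera returns the intrinsic matrix of Equation (5), the
# distortion coefficients and the per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("reprojection error:", rms, "\nintrinsics:\n", K, "\ndistortion:", dist.ravel())
```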


Figure 4. Parametric calibration (corners: x=9, y=6; spacing x=y=27 mm)

The calibration test is also conducted with three different chequerboards with (9 x 6), (5 x 7) and (6 x 4) corners and corner spacings of 27 mm, 40 mm and 100 mm, respectively. The manufacturer's values for the PMD CamCube 2.0 camera are:

• pixel size, Sx = Sy = 45 μm
• aspect ratio = 1, skew = 0
• focal length, fcam = 12.8 mm

The manufacturer's values for the PMD S3 camera are:

• pixel size, Sx = Sy = 100 μm
• aspect ratio = 1.333, skew = 0
• focal length, fcam = 8.4 mm


Figure 5. Parametric calibration (corners: x=5, y=7; spacing x=y=40 mm)

The main parameters of the intrinsic matrix are the focal length, the principal point, the skew coefficient and the distortions. The intrinsic matrix defines the optical, geometric and digital characteristics of the camera; the calibrated matrices are given in Equations 6 and 7.

I_{cube2.0} = \begin{bmatrix} 2.8951e2 & 0 & 1.0297e2 \\ 0 & 2.89268e2 & 1.016920e2 \\ 0 & 0 & 1 \end{bmatrix}    (6)

I_{S3} = \begin{bmatrix} 83.88370514 & 0 & 31.98480415 \\ 0 & 84.15943909 & 29.34420967 \\ 0 & 0 & 1 \end{bmatrix}    (7)

The distortion is caused mainly by the camera optics and is directly proportional to its focal length. Radial distortion and tangential distortion are the lens distortion effects introduced by real lenses. Generally, there are four distortion parameters, i.e., two radial and two tangential distortion coefficients, which are calculated as:

Distortion_{cube2.0} = \begin{bmatrix} -0.4256 & 0.8573e{-01} & -1.3939e{-03} & 2.0328e{-03} \end{bmatrix}    (8)

Distortion_{S3} = \begin{bmatrix} -0.4394 & 1.0112 & 4.4002e{-03} & -7.8725e{-05} \end{bmatrix}    (9)

1 OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision, developed by Intel [46].


Table 3. Average calibration results of a 200 x 200 PMD CamCube 2.0 camera

Principal point (Cx, Cy): 102.97, 101.69
Radial distortion: -0.4256, 0.0857
Tangential distortion: -0.001394, 0.002033

Table 4. Average calibration results of a PMD S3 camera

Radial distortion: -0.4394, 1.0112
Tangential distortion: -0.0044002, -7.8725e-05

By calibrating the camera, the intrinsic and extrinsic camera parameters are determined, which are then used to eliminate the geometric distortion of the images.
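As an example of how these calibrated values can be applied, the hedged sketch below undistorts a CamCube 2.0 image using the intrinsic matrix of Equation (6) and the distortion coefficients of Equation (8); the input image variable is assumed.

```python
import cv2
import numpy as np

# Intrinsic matrix of the CamCube 2.0 from Equation (6).
K_CUBE = np.array([[289.51, 0.0, 102.97],
                   [0.0, 289.268, 101.692],
                   [0.0, 0.0, 1.0]])

# Distortion coefficients [k1, k2, p1, p2] from Equation (8).
DIST_CUBE = np.array([-0.4256, 0.08573, -1.3939e-3, 2.0328e-3])

def undistort_camcube(image):
    """Remove radial and tangential lens distortion from a 200 x 200
    CamCube 2.0 intensity (or per-pixel range) image."""
    return cv2.undistort(image, K_CUBE, DIST_CUBE)
```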

Thus, the transformation between a point Pwo in the scene/world coordinate system and the image plane coordinate system Ocam is provided through a rotation matrix Rext and a translation vector Text. The joint rotation-translation matrix [Rext | Text] determines the extrinsic parameters of the camera.

Rotation matrix:

R_{ext} = \begin{bmatrix} -0.437390661 & 1.0 & 0.0 & 0.0 \\ -4.42766333 & 0.0 & 1.0 & 0.0 \\ 0.35243243 & 0.0 & 0.0 & 1.0 \end{bmatrix}    (10)

Translation vector:

T_{ext} = \begin{bmatrix} 1.0 & 0.0 & 0.0 & 7.7065e{-03} \\ 0.0 & 1.0 & 0.0 & -0.04777 \\ 0.0 & 0.0 & 1.0 & 0.0074 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{bmatrix}    (11)

\begin{bmatrix} x_{cam} \\ y_{cam} \\ z_{cam} \end{bmatrix} = \left[ R_{ext} | T_{ext} \right] \begin{bmatrix} x_{wo} \\ y_{wo} \\ z_{wo} \\ 1 \end{bmatrix}    (12)

The relationship between the scene/world point Pwo and its projection pcam can be written as

P_{cam} = I_{cam} \left[ R_{ext} | T_{ext} \right] P_{wo}    (13)

where (xwo, ywo, zwo) are the coordinates of a point Pwo in the scene/world coordinate system and (xcam, ycam, zcam) are the coordinates in the camera coordinate system.
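The following short sketch applies Equations (12) and (13) to map a world point into camera coordinates and then into pixel coordinates; the identity rotation and small translation used in the example are placeholders, not the calibrated values of Equations (10) and (11).

```python
import numpy as np

def project_point(p_world, r_ext, t_ext, intrinsics):
    """World point -> camera frame (Eq. (12)) -> pixel coordinates (Eq. (13)).
    r_ext: 3x3 rotation, t_ext: 3-vector translation, intrinsics: 3x3 matrix."""
    p_cam = r_ext @ np.asarray(p_world, dtype=float) + np.asarray(t_ext, dtype=float)
    uvw = intrinsics @ p_cam
    return uvw[:2] / uvw[2]           # (u, v) in pixels

# Example with placeholder extrinsics and the CamCube 2.0 intrinsics of Eq. (6).
K = np.array([[289.51, 0, 102.97], [0, 289.268, 101.692], [0, 0, 1.0]])
u, v = project_point([0.5, 0.2, 2.0], np.eye(3), [0.0, 0.0, 0.1], K)
```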

4. Manipulation of PMD Camera Data

The camera's pixels provide a complete image of the scene, with each pixel detecting the intensity or projection of the optical image, while the range data obtained from each pixel are stored in a 2D array. The camera also provides the signal strength of the illumination and intensity information, which can be used to determine the quality of the distance value (a low amplitude indicates low accuracy of the measured distance in a pixel). The coordinates of the object with respect to the PMD camera are obtained as a 2D matrix, with each element corresponding to a pixel. As the dimensions of the image frame depend upon its distance from the camera and the camera's field of view, an object's height and width can be calculated using its pixel elements [47].
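Since a low amplitude flags an unreliable distance, one simple way to exploit this quality measure, shown as a hedged sketch below, is to invalidate range pixels whose amplitude falls below a threshold; the threshold value is an assumption that would need tuning for the scene.

```python
import numpy as np

def filter_low_amplitude(depth, amplitude, threshold=100.0):
    """Mark range pixels with a weak returned signal as invalid (NaN),
    since low amplitude implies low accuracy of the measured distance."""
    filtered = np.asarray(depth, dtype=float).copy()
    filtered[np.asarray(amplitude) < threshold] = np.nan
    return filtered
```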

Two different range frames of a rectangular object, i.e., Xd-near and Xd-far from the camera, are represented pictorially in Figure 6. The pixel size projected on the object is different for the two frames and decreases as the distance between the camera and the image frame increases, since the object's area projected on the near frame (ixn × iyn) is different from that (ixf × iyf) on the far frame. As can be seen, except for the projected pixel size of each pixel, all the other parameters (Equations (14)-(19)) increase with an increase in the distance of the object frame from the camera.


[Figure 6 legend: Xd - distance between the camera and the image frame; ln, lf - total length of the near and far image frames; hn, hf - total height of the near and far image frames; ixn, ixf - length of the object in the near and far frames; iyn, iyf - height of the object in the near and far frames.]


Figure 6. PMD camera’s FoV and image frames

Using geometry:

l = \tan(FoV(H)) \times X_d    (14)

h = \tan(FoV(V)) \times X_d    (15)

P_h = \frac{l}{\text{No. of pixels}(H)}    (16)

P_v = \frac{h}{\text{No. of pixels}(V)}    (17)


i_y = P_h \times O_v    (18)

i_x = P_v \times O_h    (19)

where l is the length of the image frame, h is the height of the image frame, Xd is the distance between the image frame and the camera, Ph is the projected pixel size of each pixel in the object along the horizontal axis, Pv is the projected pixel size of each pixel in the object along the vertical axis, Oh is the number of object-occupied pixels along the horizontal axis in the image frame, Ov is the number of object-occupied pixels along the vertical axis in the image frame, ix is the total length of the object in the image frame, and iy is the total height of the object in the image frame.
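A small numerical sketch of Equations (14)-(19) follows, using the CamCube 2.0 values from Table 1 (40° x 40° FoV, 200 x 200 pixels) and reproducing the equations exactly as written above; the occupied pixel counts in the example are assumptions.

```python
import math

FOV_H = math.radians(40)    # horizontal FoV of the CamCube 2.0
FOV_V = math.radians(40)    # vertical FoV of the CamCube 2.0
PIX_H, PIX_V = 200, 200     # pixel resolution from Table 1

def object_size(x_d, occupied_h, occupied_v):
    """Estimate an object's length and height from the pixels it occupies
    at range x_d, following Equations (14)-(19) as written in the text."""
    l = math.tan(FOV_H) * x_d        # Eq. (14): frame length at range x_d
    h = math.tan(FOV_V) * x_d        # Eq. (15): frame height at range x_d
    p_h = l / PIX_H                  # Eq. (16): projected pixel size (horizontal)
    p_v = h / PIX_V                  # Eq. (17): projected pixel size (vertical)
    i_y = p_h * occupied_v           # Eq. (18): object height in the frame
    i_x = p_v * occupied_h           # Eq. (19): object length in the frame
    return i_x, i_y

# e.g., the 157 mm x 80 mm test box at 0.85 m, assuming it spans 44 x 22 pixels
length, height = object_size(0.85, occupied_h=44, occupied_v=22)
```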

To obtain accurate readings, an experimental setup is made as shown in Figure 7(a). The camera and test object are placed on a horizontal rail, as shown in Figure 7, and the test object is moved to different distances without altering the position of the camera. A rectangular box of known dimensions (157 mm long x 80 mm high) is used as the reference object, and its length and height at different distances are calculated from the camera readings, as previously discussed and illustrated in Figure 8; these are compared with the actual dimensions and the relative errors are calculated. Relative accuracies with respect to distance are shown in Figure 9, with the calculated lengths and heights of the test object plotted as blue and green lines, respectively. It can be seen that the relative error is less than ±3%.


Figure 7. Experimental setup for calibration


Figure 8. Experiment 1: rectangular box placed at different distances


Figure 9. Plots of distance vs relative error

4.1 Camera Mounting Angle

The 3D model of the Pioneer 3DX with the camerasmounted at a tilt angle of Ψ =0 is created in Google Sketch‐Up, and the dimensions of the AGV and the camera’s FoVextended to 7,000 mm are drawn to closely resemble theactual situation in order to assist the perception of the work(Figure 10).

Using geometry;

l = tan(FoV(H))× Xd (14)

h = tan(FoV(V))× Xd (15)

Ph =l

No.of pixels(H)(16)

Pv =h

No.of pixels(V)(17)

iy = Ph × Ov (18)

ix = Pv × Oh (19)

where l is the length of the image frame, h is the heightof the image frame, Xd is the distance between the imageframe and the camera, Ph is the projected pixel size ofeach pixel in the object along the horizontal axis, Pv is theprojected pixel size of each pixel in the object along thevertical axis, Oh is the number of object-occupied pixelsalong the horizontal axis in the image frame, Ov is thenumber of object-occupied pixels along the vertical axis inthe image frame, ix is the total length of the object in theimage frame, and iy is the total height of the object in theimage frame.To obtain accurate readings, an experimental setup is madeas shown in Figure 7(a). The camera and test object areplaced on a horizontal rail, as shown in Figure 7, and thetest object is moved to different distances without alteringthe position of the camera. A rectangular box of knowndimensions (157 mm (length) × 80 mm (height)) is used asthe reference object and its lengths and heights at differentdistances calculated using the readings from the camera,as previously discussed and illustrated in Figure 8, whichare compared with the actual dimensions and the relativeerror, are calculated. Relative accuracies with respect todistance are shown in Figure 9, with the calculated lengthsand heights of the test object plotted as blue and greenlines, respectively. It can be seen that the relative error isless than ± 3%.

(a) (b)

Figure 7. Experimental setup for calibration.

4.1. Camera mounting angle

The 3D model of the Pioneer 3DX with the camerasmounted at a tilt angle of Ψ=0 is created in GoogleSketchUp, and the dimensions of the AGV and thecamera’s FoV extended to 7,000 mm are drawn to closelyresemble the actual situation in order to assist theperception of the work (Figure 10).

As can be seen, for a camera orientation of ψ = 0, a largenumber of captured data would not be necessary to flagobstacles (due to being above the robot), and the blind spot

(a) 350 mm (b) 450 mm

(c) 600 mm (d) 850 mm

Figure 8. Experiment 1: rectangular box placed at differentdistances.

300 400 500 600 700 800 900 1000 1100−10

−8

−6

−4

−2

0

2

4

6

8

10Distance Vs Error

Distance in mm

Rel

ativ

e E

rror

in %

Length (157 mm)

Height (80 mm)

Figure 9. Plots of distance vs relative error.

Field of view (40 H x 40 V)

Camera range

Figure 10. 3D model depicting P3DX AGV.

in front of the robot could ideally be reduced. Becauseof these considerations, a more optimal camera angle wasadopted - this angle was determined as half the verticalFoV of the PMD.

ψ = 0.5 ∗ FoV (20)

6 2015, Vol. No, No:2015 www.intechopen.com

Figure 10. 3D model depicting P3DX AGV

As can be seen, for a camera orientation of ψ = 0, a large number of the captured data points would not be necessary for flagging obstacles (since they lie above the robot), and the blind spot in front of the robot could ideally be reduced. Because of these considerations, a more optimal camera angle was adopted - this angle was determined as half the vertical FoV of the PMD.

ψ = 0.5 × FoV(V) (20)

Setting ψ in this way ensured that the top-most pixels were observed directly ahead of the robot, thereby providing adequate detection of obstacles while maximizing the view of the ground plane. The sketch shown below illustrates the mounting concept, the (localized) grid and the cameras' FoV projected as understood.
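As a worked example (assuming the nominal 40-degree vertical FoV indicated for the CamCube in Figure 10), Equation (20) gives

ψ = 0.5 × 40° = 20° of downward tilt,

so the top row of pixels looks along the horizontal, directly ahead of the robot; the angles finally adopted experimentally in Section 5 (-15 degrees for the S3 and -20 degrees for the CamCube 2.0) are close to this value.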


Figure 11. Sketch of the configuration with FoV

Figure 12. Interpretation of PMD camera data into the occupancy grid

Following the creation of the 3D model to depict the developmental robot and its FoV, the MATLAB trigonometry functions for spherical coordinates were used to determine the specific placement of the ToF cameras' data points. The distance measurement returned by each pixel can be thought of as an individual measurement at a known elevation and azimuth, and can therefore be projected to intersect the ground plane if it lies within the maximum range.

The ToF camera data are converted into Cartesian coordinates for the grid mapping using the following transformation,

x = r × cos(θ − ψ) × cos(γ) (21)

y = r × cos(θ − ψ) × sin(γ) (22)

z = r × sin(θ − ψ) + h (23)

where r is the range measured by the pixel [mm], x is the distance in front of the AGV [mm], y is the distance to the left of the AGV's midpoint [mm], z is the height above the AGV's ground level [mm], θ is the elevation from the camera midpoint [degrees], γ is the azimuth from the camera midpoint [degrees], ψ is the downwards camera tilt [degrees] and h is the camera height above the ground measured from the centre of the receiver [mm].
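The transformation of Equations (21)-(23) can be applied to a whole range image at once, as in the MATLAB sketch below. The 64 x 48 resolution, the square 40-degree FoV and the evenly spaced per-pixel angles are illustrative assumptions rather than the calibrated per-pixel angles of the actual cameras, and the constant range image is a placeholder.

% Sketch of Eqs. (21)-(23): project every pixel's range onto the AGV frame.
psi = 15;                         % downward camera tilt [deg] (the -15-degree mounting)
hgt = 347.9;                      % camera height above the ground [mm] (S3 on P3AT)
FoV = 40;  nV = 48;  nH = 64;     % assumed field of view and resolution
theta = linspace( FoV/2, -FoV/2, nV)';   % per-row elevation from the optical axis [deg]
gamma = linspace(-FoV/2,  FoV/2, nH);    % per-column azimuth [deg]
[G, T] = meshgrid(gamma, theta);         % nV x nH angle grids

r = 2000 * ones(nV, nH);          % placeholder range image [mm]

x = r .* cosd(T - psi) .* cosd(G);       % Eq. (21): distance in front of the AGV [mm]
y = r .* cosd(T - psi) .* sind(G);       % Eq. (22): distance left of the midpoint [mm]
z = r .* sind(T - psi) + hgt;            % Eq. (23): height above the ground [mm]

onGround = z <= 0;                % projections that reach or pass the ground plane
                                  % (green in Figure 15; the remaining points are blue)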

Figure 13. Pictorial representation of the co-ordinates

The following figures, produced in MATLAB, illustrate the projection of the individual pixels for a mounting angle of -15 degrees. The points on the plots show the projected positions of the pixels in 3D space before the ground plane is considered and after it has been used as a boundary. The blue points are pixels that remain above the ground plane and the green points are pixels that fall below or on the ground plane.

Figure 14. Non-truncated by the ground plane (-15 degrees)

Figure 15. Truncated by the ground plane (-15 degrees)


5. Experimentation and Analysis

A PMD CamCube 2.0 camera is mounted on the Pioneer 3DX mobile robot as depicted in Figure 16, whereas the PMD S3 camera is hinged on the Pioneer AT robot. These PMD cameras provide Cartesian coordinates expressed in metres, with a correction which compensates for the radial distortions of the optics [48]. The coordinate system is right-handed, with the Zcam coordinate increasing along the optical axis away from the camera, the Ycam coordinate increasing vertically upwards and the Xcam coordinate increasing horizontally to the left - all from the point of view of the camera (or someone standing behind it). The origin of the coordinate system (0, 0, 0) is at the intersection of the optical axis and the front of the camera. The cameras are statically fixed on top of the AGVs at a fixed tilt angle Ψ, so their coordinates are aligned with those of the AGVs, which provide the +12 V power to run the cameras.


Figure 16. PMD cameras mounted on the Pioneer robots: (a) Pioneer 3DX; (b) Pioneer 3AT

For these two mobile robots, the camera heights h were measured from the centre of the camera receiver to a level ground plane as h(S3 on P3AT) = 347.9 mm and h(CamCube on P3DX) = 286.50 mm.

In this section, several standardized tests are devised to characterize the PMD camera. The code that was written comprehensively analyses the camera mounting-angle sweep tests when the AGV is stationary. Code has not been written to handle the reading of the moving tests with and without objects; however, results for these scenarios were obtained by applying the raw functionality of the program. This section explains in depth how the still angle sweep tests were conducted, as well as how the moving tests and object recognition tests were performed.

To obtain a greater view of the ground, the camera mounting angle was necessarily adjusted to point downwards. The specific mounting angle has been explained; nonetheless, it was not anticipated that the IR light striking the ground plane at such shallow angles would result in significant receiver loss and distortion of the distance measurements.

5.1 Stationary Angle Sweeps

To perform this type of testing, the PMD camera was mounted at a constant height such that the angle subtended by the camera's FoV to the ground could be varied. The ground plane is also flat and uniform (a flat grass field fits this criterion). This concept is illustrated in the figures below: the camera is mounted on the AGV and a hinge-type bracket enables the mounting angle to be changed.

In addition, a white surface was recreated using plain white paper and a sweep of the camera angles was conducted. Testing the camera on a white surface recreated the conditions outlined by the PMD S3 data sheet, whereby the white, grey and black surface error versus distance data are defined.

Figure 17. Sweep of mounting angles.

As the testing involved capturing a 60-second exposure, the timing was synchronized via a counting routine implemented on the Pioneer. Ten capture sets were taken for a sweep of camera angles from 0 to -45 degrees. From this point, the spherical-coordinate data were converted to Cartesian coordinates for plotting in 3D space. The captured frames were averaged over the 60-second duration and the expected and actual data were plotted on the same axes. It was found that the averaged distance values obtained from the 60-second exposures differed significantly from expectation, both in the number of data points detected on the ground and in the detected distance of those points. The plots below show the difference between the simulated distance measurements and the actual distance measurements for ψ = 0 and ψ = -15 degrees. In the following plots, the black and red points are the actual data while the green and blue points are simulated.
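The comparison behind Figures 18-21 can be sketched as follows, reusing T, psi, hgt and r from the previous sketch. Here 'frames' stands in for the captured nV x nH x N range stack; a synthetic placeholder is built from the simulated range image purely for illustration, with dropped returns assumed to be reported as zero, and the 70% detection threshold follows the colour coding of the plots.

% Sketch: average the 60 s capture stack per pixel and compare with the
% simulated ground-plane intersection for the same mounting angle.
frames = r + 25*randn([size(r) 120]);            % placeholder 60 s stack [mm]
frames(rand(size(frames)) < 0.3) = 0;            % simulate undetected returns

valid   = frames > 0;                            % samples returning a measurement
detRate = mean(valid, 3);                        % per-pixel detection rate (0..1)
rMean   = sum(frames .* valid, 3) ./ max(sum(valid, 3), 1);   % mean of valid returns

% Expected (simulated) range where each downward-looking ray meets the ground:
rSim       = inf(size(T));
down       = (T - psi) < 0;                      % rays with a downward component
rSim(down) = hgt ./ sind(psi - T(down));         % ray/ground-plane intersection [mm]

highConf = detRate >= 0.70;                      % red points in Figures 18-21
lowConf  = detRate > 0 & detRate < 0.70;         % black points (low confidence)
rangeErr = rMean - rSim;                         % actual minus expected range [mm]
rangeErr(~isfinite(rSim)) = NaN;                 % ignore rays that never reach the ground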


Figure 18. Isometric view comparing the simulated and actual PMD S3 data for a 0-degree mounting angle.

As can be seen, only a fraction of the data points (black pixels) was detected for the 0-degree mounting angle. The fact that predominately black pixels were detected also indicates that the measurements occurred in less than 70% of the capture frames.

Building upon what was noticed for zero degrees, low-confidence pixels appear to flare above the ground plane. High-confidence pixels, conversely, can be seen to curve below the ground plane.

The trend revealed by these plots is consistent throughout the angles, and for the same survey taken of a grass surface. From close inspection of the data returned by the ToF camera, the discrepancy in performance appears to follow a trend: the difference in the measurements conforms to a curve down along the x and y axes. Ordinarily, this would not be significant, but the data points vary across 100 mm brackets. This is significant for the AGV because it cannot traverse obstacles greater than 150 mm. If the AGV's normal conception of the ground plane consisted of such consistent curves (as shown), the accumulated error would compound and lead to improper functionality. Hence, from here, an in-depth process of analysis, characterization and devising corrections was initiated.


Figure 19. Side view comparing the simulated and actual PMD S3 data for a 0-degree mounting angle.


Figure 20. Isometric view comparing the simulated and actual PMD S3 data for a -15-degree mounting angle.


Figure 21. Side view comparing the simulated and actual PMD S3 data for a -15-degree mounting angle.


5.2 Investigating the ToF Performance Discrepancy

The function of a ToF camera was researched in the early stages of this work; however, no risk was perceived from the point of view of the physics via which the camera functions. The error seen in the plots was attributed to a well-known phenomenon, namely ‘light scattering’.

The key difference between indoor and outdoor ToF applications is that gaining a conception of the ground requires the IR light to be incident at angles much closer to 0 degrees than to 90 degrees. This meant that the susceptibility to IR scattering needed to be considered.

Essentially, the IR emitted from the ToF camera returned a lower volume of light to the receiver, thus tricking the device into thinking that the distance measurements were further away. Fortunately, the difference in performance was uniform and predictable. Because of this, it was possible to develop a correction, thereby making the camera useful for obstacle avoidance.
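The paper's correction is not specified in detail here, so the sketch below shows only one plausible implementation under that observation of uniformity: a per-pixel range offset is learned from flat-ground calibration captures at the chosen mounting angle (reusing rMean, rSim and highConf from the sketch above) and subtracted from each new frame at runtime. The variable names and the offset-table approach are assumptions, not the authors' published method.

% One plausible correction (an assumption, not the published method):
% learn a per-pixel range offset from flat-ground calibration captures.
calibOffset = rMean - rSim;                 % measured minus expected range [mm]
calibOffset(~highConf) = 0;                 % only trust high-confidence pixels
calibOffset(~isfinite(calibOffset)) = 0;    % ignore rays that never meet the ground

rNew = frames(:, :, 1);                     % stand-in for a newly captured frame
rCorrected = rNew - calibOffset;            % scattering-compensated range image [mm]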

5.3 Quantifying the ToF Performance Discrepancy

Several methods were used in the analysis of the camera's performance. Histograms were produced for sample pixels in the HP × VP grid to verify that the camera sampling exhibited normally distributed errors. Histograms were also used to analyse the pixel detection, which was grouped into brackets. For most angles, the majority of the pixels fell within the >90% detection bracket and a second large group was located in the <50% bracket. Unfortunately, more than 70% of the detected pixels (high confidence) were seen to contribute to the curve up along the extremities of the detected distance in front of the camera - this consideration was important in developing the correction.
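A sketch of this per-pixel analysis is given below, reusing 'frames' and 'detRate' from the earlier sketch. The sampled pixel coordinates mirror those shown in Figure 22, and the >90% and <50% brackets follow the grouping described above; both are illustrative rather than an exact reproduction of the analysis code.

% Sketch: per-pixel distance histograms (normality check) and detection-rate
% brackets, in the spirit of Figure 22.
sampleR = [1 16 32 48];                     % sampled rows
sampleC = [1 16 32 48 64];                  % sampled columns
figure; k = 0;
for rr = sampleR
    for cc = sampleC
        k = k + 1;
        d = squeeze(frames(rr, cc, :));     % all 60 s samples for this pixel
        d = d(d > 0);                       % valid returns only
        subplot(numel(sampleR), numel(sampleC), k);
        if ~isempty(d), histogram(d); end
        title(sprintf('Pixel at (%d,%d)', cc, rr));
    end
end

pctHigh = 100 * nnz(detRate > 0.90) / numel(detRate);   % majority bracket
pctLow  = 100 * nnz(detRate < 0.50) / numel(detRate);   % second large group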


Figure 22. Sample of the distance measurement histograms for various pixels (grass surface, ψ = -15 degrees).

Another method of holistic analysis was to plot the detection rate and the maximum perceived distance against the camera mounting angle. This also enabled a direct comparison of the camera's performance on a grass surface with that on a white surface. These plots are shown in Figures 23 and 24.
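These two metrics can be computed per swept angle as sketched below. 'framesByAngle' is an assumed container holding one capture stack per mounting angle; here it simply reuses the placeholder stack from the earlier sketch, and the 5-degree step is assumed from the ten capture sets between 0 and -45 degrees.

% Sketch: detection rate and maximum perceived distance vs. mounting angle
% (cf. Figures 23-24).
angles = 0:-5:-45;                                   % ten swept angles [deg] (assumed step)
framesByAngle = repmat({frames}, 1, numel(angles));  % placeholder data per angle
pctDetected = zeros(size(angles));
maxRange    = zeros(size(angles));
for k = 1:numel(angles)
    F = framesByAngle{k};
    v = F > 0;
    pctDetected(k) = 100 * mean(v(:));               % % of samples returning a distance
    maxRange(k)    = max([F(v); 0]);                 % furthest detected return [mm]
end
figure; plot(angles, pctDetected);
xlabel('Camera angle (degrees)'); ylabel('Pixels returning a distance (%)');
figure; plot(angles, maxRange);
xlabel('Camera angle (degrees)'); ylabel('Maximum detected range (mm)');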


Figure 23. Pixel detection (percentage of pixels returning a distance measurement): simulation and actual case.

In both instances, it can be seen that the detection of the pixels is considerably lower in reality. The white surface shows approximately 35% less pixel detection for any given ψ below 20 degrees; grass shows about 15% less. It is interesting to note that the grass performance is better by this measure. This is because grass is far from a smooth surface, so the incident IR light has a much greater chance of bouncing back off a blade oriented closer to perpendicular than the flat ground does.


Figure 24. Maximum detected range on grass versus mounting angle: simulation and actual case.

Similar to the previous plots, both surfaces drastically underperform compared with the simulation's expectations. In the case of the better-performing grass, the maximum detected range can be seen to plateau at slightly less than 3,000 mm. These two methods of analysis were useful in quantifying the difference in terms of sheer data measurements.

Following this quantifying analysis, the ideal camera mounting angles ψ for the PMD S3 on the P3AT and the CamCube 2.0 on the P3DX were postulated as -15 and -20 degrees, respectively. Adopting these angles ensured that the cameras could provide an adequate conception of obstacles while maximizing the ground plane conception.

5.4 Grid-based Ground Surface Reconstruction

The ground surface along the robots' traversed paths was reconstructed in MATLAB using the captured depth frames. The Cartesian conversion of the data points scattered around the area traversed by the robot is obtained by synchronizing the orientation and position of the robot platform - and thus the camera - with the data captured from the PMD cameras. The grid produced for a mounting angle of -20 degrees and a grid square size of 100 mm is shown in Figure 25. The routine was coded to run for 60 seconds while capturing frames, to synchronize each frame to a position and orientation, and to distribute the Cartesian coordinates into the squares of the grid.
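A minimal sketch of this reconstruction is given below, reusing 'frames', T, G, psi and hgt from the earlier sketches. The pose arrays stand in for the encoder-derived pose log (a straight drive is used purely as a placeholder), and taking the mean z of the points in each 100 mm cell is an illustrative choice rather than the exact binning rule used for Figure 25.

% Sketch: bin pose-registered Cartesian points into a 100 mm height grid.
N = size(frames, 3);
poseX = linspace(0, 3000, N); poseY = zeros(1, N); poseTh = zeros(1, N);  % [mm], [deg]
cell_mm = 100;                                   % grid square size [mm]
XW = []; YW = []; ZW = [];                       % accumulated world-frame points
for k = 1:N
    rk = frames(:, :, k);                        % range image for this frame [mm]
    ok = rk > 0;                                 % valid returns only
    xk = rk .* cosd(T - psi) .* cosd(G);         % Eqs. (21)-(23): camera/AGV frame
    yk = rk .* cosd(T - psi) .* sind(G);
    zk = rk .* sind(T - psi) + hgt;
    c = cosd(poseTh(k)); s = sind(poseTh(k));    % rotate/translate by the robot pose
    XW = [XW; poseX(k) + c*xk(ok) - s*yk(ok)];
    YW = [YW; poseY(k) + s*xk(ok) + c*yk(ok)];
    ZW = [ZW; zk(ok)];
end
gx = floor((XW - min(XW)) / cell_mm) + 1;        % grid indices (shifted to start at 1)
gy = floor((YW - min(YW)) / cell_mm) + 1;
gridH = accumarray([gx gy], ZW, [], @mean, NaN); % mean height per 100 mm cell [mm]
surf(gridH)                                      % cf. Figure 25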


Figure 25. Surface reconstruction: grid heights perceived in the area encountered by the PMD CamCube camera at -20 degrees.

6. Real-time Motion Testing

In the motion experiment, the AGV relies on the PMD camera - as its single vision sensor - to obtain information about its surroundings and to guide it in achieving its task, and it is equipped with two shaft encoders to track its position (Xrob and Yrob) and orientation θveh. The experiments were performed without prior knowledge of the workspace, such as the location, velocity, orientation and number of obstacles. The range data from the PMD camera were utilised to detect obstacles and to estimate the relative distances between the vehicle and the obstacles. As the camera senses the 3D coordinates of an obstacle in different frame sequences, this information is used to detect the obstacles using a scene flow. The ego-motion of the PMD camera mounted on the AGV can be estimated by tracking features between subsequent images [49]. The ‘good features to track’ feature detection algorithm [50] is used to stabilize the moving AGV by comparing two consecutive frames.
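The ego-motion estimation from tracked features can be sketched as a least-squares rigid alignment between matched feature locations in two consecutive frames. The 2-D rotation-plus-translation (Kabsch) fit below is an illustrative choice and not necessarily the specific transformation model used here; prevPts and currPts stand in for matched ‘good features to track’ outputs, with synthetic values used so the sketch runs on its own.

% Sketch: estimate the camera ego-motion between consecutive frames from
% matched feature pairs (previous -> current), then map the previous
% feature locations into the current image to cancel the ego-motion.
theta_true = 3; t_true = [4 -2];                       % synthetic camera motion
prevPts = 100 * rand(40, 2);                           % N x 2 feature locations (placeholder)
Rt = [cosd(theta_true) -sind(theta_true); sind(theta_true) cosd(theta_true)];
currPts = (Rt * prevPts')' + t_true;                   % where those features moved to

mp = mean(prevPts, 1); mc = mean(currPts, 1);
P = prevPts - mp;  C = currPts - mc;                   % centre both point sets
[U, ~, V] = svd(P' * C);                               % 2 x 2 cross-covariance
R = V * diag([1, sign(det(V * U'))]) * U';             % best-fit rotation
t = mc' - R * mp';                                     % best-fit translation
compensated = (R * prevPts' + t)';                     % previous features mapped into
                                                       % the current image; any residual
                                                       % motion indicates moving obstacles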

The ego-motion compensation is a transformation from the image coordinates of a previous image to those of the current image, so that the effect of the camera's ego-motion can be eliminated [257]. Given the feature pairs (f_i^(t-1), f_i^t), where (t_frame - 1) denotes the previous image and t_frame the current image, the ego-motion of the camera can be estimated by using a transformation model. As such, we

10 2015, Vol. No, No:2015 www.intechopen.com

Figure 23. Pixel detection simulation and actual case

In both instances, it can be seen that the detection of thepixels is considerably less in reality. The white surface canbe seen to have approximately 35% less pixel detection forany given ψ below 20 degrees; grass can be seen to haveabout 15% less. It is interesting to note that the grassperformance is better in this measure. This is due to theintuitive fact that grass is very smooth and thus incident IRlight has a much greater chance of bouncing back off anobject closer to perpendicular than flat ground.

Similar to the previous plots, both drastically underperform when it comes to comparing with the simulation’sexpectations. In the case of the better performing grass, amaximum detected range plateau can be seen to be slightlyless than 3,000 m. These two methods of analysis wereuseful in quantifying the difference in terms of sheer datameasurements.

Following this type of quantifying analysis, the idealcamera mounting angles ψ for the PMD S3 on P3AT andCamCube2.0 cameras on P3DX were postulated as -15 and-20 degrees respectively. Adopting this ensured that thecameras could provide an adequate conception of obsta‐cles, maximizing the ground plane conception.

5.4 Grid-based Ground Surface Reconstruction

The ground surface from the robots’ traversed paths wasdevised in MATLAB and was reconstructed using captureddepth frames. The Cartesian conversion of data pointsscattered around the area which the robot traversed isobtained by synchronizing the orientation and position ofthe robot platform and thus the camera, and the datacaptured from the PMD cameras. The grid produced for amounting angle of -20 degrees and a grid size of 100 mmsquare is shown in Figure 25. The routine was coded to gofor 60 seconds until the capture frames, to synchronize eachto a position and orientation, and to distribute the Cartesiancoordinates into squares of the grid.−45 −40 −35 −30 −25 −20 −15 −10 −5 0

20

30

40

50

60

70

80

90

100

Camera Angles (Degrees)

Per

cent

age

of P

ixel

s re

turn

ing

a di

stan

ce m

easu

rem

ent (

%)

Comparison of Expected and Actual Percentage of pixels returning distance measurents

SimulationActual

Figure 23. Pixel detection simulation and actual case.

In both instances, it can be seen that the detection of thepixels is considerably less in reality. The white surface canbe seen to have approximately 35% less pixel detectionfor any given ψ below 20 degrees; grass can be seen tohave about 15% less. It is interesting to note that the grassperformance is better in this measure. This is due to theintuitive fact that grass is very smooth and thus incidentIR light has a much greater chance of bouncing back off anobject closer to perpendicular than flat ground.

−45 −40 −35 −30 −25 −20 −15 −10 −5 00

1000

2000

3000

4000

5000

6000

Camera Angles (Degrees)

Dis

tanc

e (m

m)

Comparison of Expected and Actual Maximum detected range

SimulationActual

Figure 24. Grass maximum detected range at angle.

Similar to the previous plots, both drastically underperform when it comes to comparing with the simulation’sexpectations. In the case of the better performing grass, amaximum detected range plateau can be seen to be slightlyless than 3,000 m. These two methods of analysis wereuseful in quantifying the difference in terms of sheer datameasurements.

Following this type of quantifying analysis, the idealcamera mounting angles ψ for the PMD S3 on P3ATand CamCube2.0 cameras on P3DX were postulated as-15 and -20 degrees respectively. Adopting this ensuredthat the cameras could provide an adequate conception ofobstacles, maximizing the ground plane conception.

5.4. Grid-based ground surface reconstruction

The ground surface from the robots’ traversed pathswas devised in MATLAB and was reconstructed usingcaptured depth frames. The Cartesian conversion of datapoints scattered around the area which the robot traversedis obtained by synchronizing the orientation and positionof the robot platform - and thus the camera - and the datacaptured from the PMD cameras. The grid produced for amounting angle of -20 degrees and a grid size of 100 mmsquare is shown in Figure 25. The routine was coded togo for 60 seconds until the capture frames, to synchronizeeach to a position and orientation, and to distribute theCartesian coordinates into squares of the grid.

Figure 25. Surface reconstruction: grid heights perceived in the area encountered by the PMD CamCube 2.0 camera at -20 degrees (axes: distance left/right of the AGV in mm, and distance from the start location in mm).

6. Real-time Motion Testing

In the motion experiment, the AGV relies on the PMD camera - as its single vision sensor - to obtain information about its surroundings and to guide it to its task, and it is equipped with two shaft encoders to track its position (Xrob, Yrob) and orientation θveh. The experiments were performed without prior knowledge of the workspace, such as the location, velocity, orientation and number of obstacles. The range data from the PMD camera were utilised to detect and estimate the relative distances between the vehicle and the obstacles. As the camera senses the 3D coordinates of an obstacle in different frame sequences, this information is used to detect the obstacles via scene flow. The ego-motion of the PMD camera mounted on the AGV is estimated by tracking features between subsequent images [49]. The 'good features to track' detection algorithm [50] is used to stabilize the moving AGV by comparing two consecutive frames.
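
For context, the pose (Xrob, Yrob, θveh) maintained from the two shaft encoders can be propagated with a standard differential-drive odometry update such as the sketch below; the wheel radius, tick resolution and track width are illustrative placeholders rather than the P3DX's actual parameters.

```python
import math

def odometry_update(x, y, theta, ticks_left, ticks_right,
                    ticks_per_rev=500, wheel_radius_mm=97.5, track_mm=330.0):
    """Advance the pose estimate from incremental wheel-encoder ticks.

    All geometric parameters here are illustrative placeholders.
    Returns the updated (x, y, theta) with x, y in mm and theta in radians.
    """
    mm_per_tick = 2.0 * math.pi * wheel_radius_mm / ticks_per_rev
    d_left = ticks_left * mm_per_tick
    d_right = ticks_right * mm_per_tick
    d_centre = (d_left + d_right) / 2.0          # distance moved by the robot centre
    d_theta = (d_right - d_left) / track_mm      # change in heading
    # Integrate along the arc, using the mid-point heading for the translation.
    x += d_centre * math.cos(theta + d_theta / 2.0)
    y += d_centre * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return x, y, theta
```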

The ego-motion compensation is a transformation from the image coordinates of the previous image to those of the current image, so that the effect of the camera's ego-motion can be eliminated [257]. Feature pairs f_i^(t-1) and f_i^t, where frame (t-1) is the previous image and frame t is the current image, are matched, and the ego-motion of the camera is estimated from them using a transformation model. We simply apply linear regression to train the model's constants. The next step is to eliminate the bad features and refine the transformation model.
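
A possible realization of this step, using the OpenCV implementations of the 'good features to track' detector [50] and pyramidal Lucas-Kanade tracking, is sketched below. It fits a similarity transform between consecutive frames and warps the current frame to cancel the camera's ego-motion; note that the RANSAC-based estimateAffinePartial2D call stands in here for the paper's linear-regression fit and bad-feature rejection, and the frame source and parameter values are assumptions.

```python
import cv2
import numpy as np

def compensate_ego_motion(prev_gray, curr_gray):
    """Estimate inter-frame camera motion from tracked features and undo it.

    prev_gray, curr_gray : consecutive 8-bit grayscale frames (e.g. the PMD
    amplitude images). Returns the current frame warped into the previous
    frame's coordinates plus the 2x3 transform, or (curr_gray, None) if
    tracking fails.
    """
    # 'Good features to track' corners in the previous frame [50].
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return curr_gray, None
    # Track them into the current frame with pyramidal Lucas-Kanade flow.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good = status.ravel() == 1
    if good.sum() < 6:
        return curr_gray, None
    # Robustly fit a similarity transform; RANSAC discards bad feature pairs.
    M, _ = cv2.estimateAffinePartial2D(curr_pts[good], prev_pts[good],
                                       method=cv2.RANSAC)
    if M is None:
        return curr_gray, None
    h, w = curr_gray.shape[:2]
    stabilized = cv2.warpAffine(curr_gray, M, (w, h))
    return stabilized, M
```

Applying the returned transform to the whole image realigns consecutive frames so that the residual motion corresponds to moving obstacles rather than to the camera itself.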

The transformation model obtained is then used to transform the whole image, eliminating the effect of the camera's ego-motion. The initial minimal path is calculated by the grid-based efficient D* Lite path-planning algorithm, while the PMD camera provides information on obstacles in real time. When an obstacle is perceived within the camera's FoV, the AGV processes the sensor information and, if required, continually re-plans its path to avoid any collision until it reaches its final goal.

The goal is to plan a collision-free path for the AGV to reach its desired position by implementing the efficient D* Lite algorithm on the P3DX's onboard computer. The PMD camera is used as an exteroceptive sensor, with a frame rate for the scene flow of 10 fps. All the experiments were carried out without any modifications to the P3DX controller's parameters.
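
The sense-plan-act cycle implied by the last two paragraphs could be organized along the lines of the outline below. Every interface in it (camera, robot, planner and their methods) is a placeholder invented for illustration; in the actual system the planner is D* Lite, which incrementally repairs its previous solution instead of replanning from scratch.

```python
import time

def navigation_loop(camera, robot, planner, goal_cell, cell_mm=100.0, rate_hz=10):
    """Sense-plan-act loop: update the occupancy grid from PMD depth data and
    re-plan with an incremental grid planner whenever the map changes.

    camera, robot and planner are placeholder interfaces, not a real API.
    """
    path = planner.plan(robot.current_cell(), goal_cell)
    while robot.current_cell() != goal_cell:
        points = camera.read_cartesian_points()          # depth frame -> 3D points (mm)
        changed = planner.grid.update_obstacles(points, robot.pose(), cell_mm)
        if changed:
            # D* Lite would repair the existing plan here rather than replan fully.
            path = planner.replan(robot.current_cell(), goal_cell, changed)
        robot.drive_towards(path[1] if len(path) > 1 else goal_cell)
        time.sleep(1.0 / rate_hz)                        # ~10 fps scene-flow rate
    robot.stop()
```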

In this experiment, the AGV is set to travel from an initial position (Xrob, Yrob) = (0, 0) to a goal position of (9.0 m, 0), with Xrob and Yrob in metres. To perceive its surroundings, it relies on the sensor rather than on a priori information, and it successfully avoids the three static obstacles in its path, as shown in Figures 26 and 27.


Figure 26. Experiment (office): plot of the AGV's x-coordinates vs. y-coordinates (mm).

Figure 27. Experiment (office): indoor test with three static obstacles: (a) sensing the first obstacle; (b) re-planning its path; (c) avoiding the second; (d) overcoming the third.

7. Conclusion

The optimal deployment of a ToF-based PMD camera on an AGV is presented in this paper, the overall mission being to traverse from one point to another in hazardous and hostile environments without human intervention. The ToF camera is used as the key sensory device for perceiving the operating environment, and its depth data are populated into a workspace grid map. An optimal camera mounting angle is adopted following an analysis of the cameras' performance discrepancies. A series of still and moving tests were carried out to verify correct sensor operation. Finally, in the real-time autonomous path-planning experiment, the AGV relied completely on its perception system to sense the operating environment and avoid static obstacles while traversing towards its goal. In future, real-time experiments will be conducted in dynamic environments.


8. Acknowledgements

The author would like to sincerely thank Mr. Daniel Salier for the MATLAB code.

9. References

[1] S. Patnaik. Robot Cognition and Navigation: An Experiment with Mobile Robots. Springer Berlin Heidelberg New York, 2007.

[2] J. Martinez Gomez, A. Fernandez Caballero, I. Garcia Varea, L. Rodriguez Ruiz, and C. Romero Gonzalez. A taxonomy of vision systems for ground mobile robots. International Journal of Advanced Robotic Systems, (1729-8806), July 2014.

[3] H. R. Everett. Sensors for Mobile Robots: Theory and Application. A. K. Peters, Ltd., Natick, MA, USA, 1995.

[4] B. Fardi, U. Schuenert, and G. Wanielik. Shape and motion-based pedestrian detection in infrared images: a multi-sensor approach. In IEEE Intelligent Vehicles Symposium IV, pages 18–23, 2005.

[5] G. Sgouros, Papakonstantinous, and P. Tsanakas. Localized qualitative navigation for indoor environments. In IEEE International Conference on Robotics and Automation, pages 921–926, 1996.

[6] Bertozzi, Broggi, Cellario, Fascioli, Lombardi, and Porta. Artificial vision in road vehicles. In 28th IEEE Industrial Electronics Society Annual Conference, pages 1258–1271, 2002.

[7] K. Rebai, A. Benabderrahmane, O. Azouaoui, and N. Ouadah. Moving obstacles detection and tracking with laser range finder. In International Conference on Advanced Robotics (ICAR 2009), pages 1–6, 2009.

[8] B. Alefs, D. Schreiber, and M. Clabian. Hypothesis based vehicle detection for increased simplicity in multi sensor. In IEEE Intelligent Vehicles Symposium, pages 261–266, 2005.

[9] M. Ahlskog. 3D vision. Master's thesis, Department of Computer Science and Electronics, Mälardalen University, 2007.

[10] S. Hussmann, T. Ringbeck, and B. Hagebeuker. A performance review of 3D ToF vision systems in comparison to stereo vision systems. In Stereo Vision (online book publication), pages 103–120, Vienna, Austria, 2008. I-Tech Education and Publishing.

[11] S. May. 3D Time-of-Flight Ranging for Robotic Perception in Dynamic Environments. PhD thesis, Institute for Computer Science, University of Osnabrück, Germany, 2009.

[12] T. Ringbeck and B. Hagebeuker. A 3D time-of-flight camera for object detection. PMD Technologies GmbH, Am Eichenhang 50, 57076 Siegen, Germany, 2007.

[13] D. Droeschel et al. Robust ego-motion estimation with ToF cameras. In Proceedings of the 4th European Conference on Mobile Robots, Mlini/Dubrovnik, Croatia, September 2009.

[14] J. W. Weingarten, G. Gruener, and R. Siegwart. A state-of-the-art 3D sensor for robot navigation. In IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 3, pages 2155–2160, 2004.

[15] F. Yuan, A. Swadzba, R. Philippsen, O. Engin, M. Hanheide, and S. Wachsmuth. Laser-based navigation enhanced with 3D time-of-flight data. In IEEE International Conference on Robotics and Automation (ICRA '09), pages 2844–2850, 2009.

[16] M. Hansard, S. Lee, O. Choi, and R. P. Horaud. Time of Flight Cameras: Principles, Methods, and Applications. SpringerBriefs in Computer Science. Springer, November 2012.

[17] M. Wiedemann, M. Sauer, F. Driewer, and K. Schilling. Analysis and characterization of the PMD camera for application in mobile robotics. In The 17th IFAC World Congress, Seoul, Korea, pages 13689–13694, 2008.

[18] R. Bostelman, T. Hong, R. Madhavan, and B. Weiss. 3D range imaging for urban search and rescue robotics research. In IEEE International Workshop on Safety, Security and Rescue Robotics, pages 164–169, 2005.

[19] M. Weyrich, P. Klein, M. Laurowski, and Y. Wang. Vision based defect detection on 3D objects and path planning for processing. In Proceedings of the 11th WSEAS International Conference on Robotics, Control and Manufacturing Technology, and 11th WSEAS International Conference on Multimedia Systems & Signal Processing (ROCOM'11/MUSP'11), pages 19–24, Stevens Point, Wisconsin, USA, 2011. World Scientific and Engineering Academy and Society (WSEAS).

[20] B. Huhle, P. Jenke, and W. Strasser. On-the-fly scene acquisition with a handy multi-sensor system. International Journal of Intelligent Systems Technologies and Applications, 5(3/4):255–263, 2008.

[21] R. Bostelman, P. Russo, J. Albus, T. Hong, and R. Madhavan. Applications of a 3D range camera towards healthcare mobility aids. In IEEE International Conference on Networking, Sensing and Control (ICNSC '06), pages 416–421, August 2006.

[22] D. Falie and V. Buzuloiu. Wide range time of flight camera for outdoor surveillance. In Microwaves, Radar and Remote Sensing Symposium (MRRS 2008), pages 79–82, 2008.

[23] V. Castaneda, D. Mateus, and N. Navab. SLAM combining ToF and high-resolution cameras. In IEEE Workshop on Applications of Computer Vision (WACV), pages 672–678, 2011.


[24] S. Almansa-Valverde, J. C. Castillo, and A. Fernández-Caballero. Mobile robot map building from time-of-flight camera. Expert Systems with Applications, 39(10):8835–8843, 2012.

[25] J. Penne, C. Schaller, J. Hornegger, and T. Kuwert. Robust real-time 3D respiratory motion detection using time-of-flight cameras. International Journal of Computer Assisted Radiology and Surgery, 3(5):427–431, 2008.

[26] S. May, B. Werner, H. Surmann, and K. Pervolz. 3D time-of-flight cameras for mobile robotics. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 790–795, 2006.

[27] D. Holz, R. Schnabel, D. Droeschel, J. Stückler, and S. Behnke. Towards semantic scene analysis with time-of-flight cameras. In J. Ruiz-del Solar, E. Chown, and P. G. Plöger, editors, RoboCup 2010: Robot Soccer World Cup XIV, volume 6556, pages 121–132, Berlin, Heidelberg, 2011. Springer-Verlag.

[28] R. Koch et al. MixIn3D: 3D mixed reality with ToF-camera. In A. Kolb and R. Koch, editors, Dynamic 3D Imaging, volume 5742 of Lecture Notes in Computer Science, pages 126–141. Springer Berlin Heidelberg, 2009.

[29] M. B. Holte, T. B. Moeslund, and P. Fihl. View-invariant gesture recognition using 3D optical flow and harmonic motion context. Computer Vision and Image Understanding, 114(12):1353–1361, December 2010.

[30] L. A. Schwarz, A. Mkhitaryan, D. Mateus, and N. Navab. Estimating human 3D pose from time-of-flight images based on geodesic distances and optical flow. In IEEE International Conference on Automatic Face and Gesture Recognition and Workshops (FG 2011), pages 700–706, 2011.

[31] L. Schwarz, D. Mateus, V. Castaneda, and N. Navab. Manifold learning for ToF-based human body tracking and activity recognition. In Proceedings of the British Machine Vision Conference, pages 80.1–80.11. BMVA Press, 2010. doi:10.5244/C.24.80.

[32] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox. RGB-D mapping: using Kinect-style depth cameras for dense 3D modeling of indoor environments. 2012.

[33] M. Attamimi, T. Araki, T. Nakamura, and T. Nagai. Visual recognition system for cleaning tasks by humanoid robots. International Journal of Advanced Robotic Systems, 10(384), 2013.

[34] H. Du, T. Oggier, F. Lustenberger, and E. Charbon. A virtual keyboard based on true-3D optical ranging. In BMVC '05, 2005.

[35] S. Soutschek, J. Penne, J. Hornegger, and J. Kornhuber. 3D gesture-based scene navigation in medical imaging applications using time-of-flight cameras. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1–6, 2008.

[36] T. Darrell, L. P. Morency, and A. Rahimi. Fast 3D model acquisition from stereo images. In First International Symposium on 3D Data Processing Visualization and Transmission, pages 172–176, November 2002.

[37] Y. Cui, S. Schuon, D. Chan, S. Thrun, and C. Theobalt. 3D shape scanning with a time-of-flight camera. In IEEE CVPR '10, pages 1173–1180, 2010.

[38] C. Distante, G. Diraco, and A. Leone. Active range imaging dataset for indoor surveillance. Annals of the BMVA, London, 3:1–16, December 2010.

[39] J. Teizer. 3D range imaging camera sensing for active safety in construction. In Sensors in Construction and Infrastructure Management, pages 103–117. ITcon, 2008.

[40] T. B. Moeslund and M. B. Holte. Gesture recognition using a range camera. Technical report, CVMT-07-01, February 2007.

[41] R. Conro et al. Shape and deformation measurement using heterodyne range imaging technology. In Proceedings of the 12th Asia-Pacific Conference on NDT, Auckland, New Zealand, November 2006.

[42] T. Kahlmann, F. Remondino, and H. Ingensand. Calibration for increased accuracy of the range imaging camera SwissRanger. International Archives of the Photogrammetry, Remote Sensing, and Geoinformation Sciences, XXXVI(5):136–141, 2006.

[43] T. Möller, H. Kraft, J. Frey, M. Albrecht, and R. Lange. Robust 3D measurement with PMD sensors. PMD Technologies GmbH, Am Eichenhang 50, D-57076, Germany, 2005.

[44] M. Lindner and A. Kolb. Lateral and depth calibration of PMD-distance sensors. In Advances in Visual Computing, volume 4292 of Lecture Notes in Computer Science, pages 524–533. Springer Berlin Heidelberg, 2006.

[45] S. K. Ramanandan. 3D ToF camera calibration and image pre-processing. Master's thesis, Department of Electrical Engineering and Computer Science, University of Applied Sciences Bremen, Bremen, Germany, August 2011.

[46] http://en.wikipedia.org/wiki/OpenCV. Accessed in Aug 2011.

[47] S. L. X. Francis, S. G. Anavatti, and M. Garratt. Reconstructing the geometry of an object using 3D ToF camera. In IEEE Workshop on Merging Fields of Computational Intelligence and Sensor Technology (CompSens), pages 13–17, April 2011.

[48] MESA Imaging. SR4000 Manual.


[49] B. Jung and G. S. Sukhatme. Detecting moving objects using a single camera on a mobile robot in an outdoor environment. In International Conference on Intelligent Autonomous Systems, pages 980–987, 2004.

[50] J. Shi and C. Tomasi. Good features to track. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '94), pages 593–600, 1994.
