Research Article
Hybrid Motion Planning Method for Autonomous Robots Using Kinect Based Sensor Fusion and Virtual Plane Approach in Dynamic Environments
Doopalam Tuvshinjargal, Byambaa Dorj, and Deok Jin Lee
School of Mechanical and Automotive Engineering, Kunsan National University, Gunsan, Jeollabuk 573-701, Republic of Korea
Correspondence should be addressed to Deok Jin Lee; deokjlee@kunsan.ac.kr
Received 5 February 2015; Accepted 27 March 2015
Academic Editor: Guangming Song
Copyright © 2015 Doopalam Tuvshinjargal et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A new reactive motion planning method for an autonomous vehicle in dynamic environments is proposed. The new dynamic motion planning method combines a virtual plane based reactive motion planning technique with a sensor fusion based obstacle detection approach, which results in improved robustness and autonomy of vehicle navigation within unpredictable dynamic environments. The key feature of the new reactive motion planning method is a local observer in the virtual plane, which allows the effective transformation of complex dynamic planning problems into simple stationary ones in the virtual plane. In addition, a sensor fusion based obstacle detection technique provides the pose estimation of moving obstacles by using a Kinect sensor and a sonar sensor, which helps to improve the accuracy and robustness of the reactive motion planning approach in uncertain dynamic environments. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles, even in hostile environments where conventional methods failed.
1 Introduction
The capability of mobile robots to autonomously navigate and safely avoid obstacles plays a key role in many successful real-world applications [1]. To date, a major body of research has been devoted to analyzing and solving the motion planning problem in a completely known environment with largely static or, to some extent, moving obstacles. Motion planning in dynamic environments is still among the most difficult and important problems in mobile robotics. The autonomous motion planning approaches for robots can be classified into three different paradigms: the hierarchical, reactive, and hybrid approaches [2]. These paradigms in the robot navigation community point to a major dichotomy classified into two categories: planned based approaches and behavior based techniques. The hierarchical (or planned based) navigation approaches have a serial control architecture with which robots sense the known world, plan their operations, and act to follow a path expressed in global coordinates based on this sensed model. For instance, deterministic and probabilistic roadmap methods are widely used in [2-4], and potential field
based methods are suggested in [5, 6]. A novel optimization method considering both robot posture and path smoothness is presented in [7], and a collision-free path planning approach based on Bezier curves is suggested in [8]. Since there is no direct connection between the sensing and the acting, the robot is limited to operating only in a static environment. In [9], a path planning based robot navigation approach was proposed to cope with unexpectedly changing environments using the D* approach, and an automatic docking system for recharging a home surveillance robot system is proposed in [10], but the performance is limited when obstacles are allowed to move in the workspace. This feature of the planned based approaches makes it difficult for the robot to interact with a constantly changing dynamic environment while performing complex tasks at slow speed.
On the other hand, unlike the preceding methods, the behavior based approaches [11-18], also called reactive methods, utilize local control laws relative to local features and rely on accurate local feature detection to cope with unexpected changes in a reactive way. Reactive navigation differs from the planned navigation approach in the sense
that when a mission is assigned or a goal location is given, the robot does not plan its path but rather navigates by reacting to its immediate environment in real time. The main idea of the reactive paradigm is to separate the control system into small units of sensor-action pairs with a layered modular architecture, resulting in fast execution of the control algorithm [12]. There are other developments in local reactive path planning approaches such as Vector Field Force (VFF) [13] and Vector Field Histogram (VFH) [14]. The VFF and VFH methods generate histograms from sensor data in order to generate control commands for the vehicle, but they do not take into account the dynamic and kinematic constraints of the vehicle.
However, there have been a few reactive works that utilize the kinematic or dynamic information of the environment to compute the motion commands for avoiding unexpected changes in the environment. When the velocity information of the objects obtained from available sensors is utilized, the robot navigation system can compute trajectories that improve the motion performance relative to other obstacle avoidance methods [15-19]. The curvature velocity method (CVM) [15] and the dynamic window approach (DWA) [16] search for an appropriate control command in velocity space by maximizing an objective function with criteria such as speed, distance between obstacles, and remaining distance to the final destination. The CVM and DWA methods, however, can increase the order of complexity resulting from the optimization of the cost function. In [17-19], a velocity information based approach for navigation and collision detection based on the kinematic equation is introduced by using the notion of collision cones in the velocity space. In a similar way, the concept of velocity obstacles [20, 21] takes the velocity of the moving obstacles into account, which results in a shift of the collision cones. This method is restricted to obstacles with linear motion, and thus the nonlinear velocity obstacle approach was introduced to cope with obstacles moving along arbitrary trajectories [22]. The key concept of velocity obstacles is to transform the dynamic problem into several static problems in order to increase the capability of avoiding dynamic obstacles within unexpected environment changes [23]. Meanwhile, sensor based motion planning techniques are also widely used for robot navigation applications in dynamic environments, where the pose estimates of the moving obstacles are obtained by using sensory systems [24-26]. These sensor based navigation approaches also require knowledge of the obstacles' velocities for an accurate navigation solution.
In this work, a new sensor fusion based hybrid reactive navigation approach for autonomous robots in dynamic environments is proposed. The contribution of the new motion planning method lies in the fact that it integrates a local observer in a virtual plane as a kinematic reactive path planner [23] with a sensor fusion based obstacle detection approach, which can provide relative information of moving obstacles and environments, resulting in improved robustness and accuracy of the dynamic navigation capability. The key feature of the reactive motion planning method is a local observer in the virtual plane approach, which makes possible the effective transformation of complex dynamic planning
problems into simple stationary ones, along with a collision cone in the virtual plane approach [23]. On the other hand, a sensor fusion based planning technique provides the pose estimation of moving obstacles by using sensory systems, and thus it can improve the accuracy, reliability, and robustness of the reactive motion planning approach in uncertain dynamic environments. The hybrid reactive planning method allows an autonomous vehicle to reactively change heading and velocity to cope with surrounding obstacles at each planning time. As a sensory system, the Microsoft Kinect device [27], which can obtain the distance between the camera and target objects, is utilized. The advantage of using the Kinect is its capability of calculating the distance between two objects in the world coordinate frame. In case the two objects are placed closer together, a sonar sensor mounted on the robot can detect them and make a precise distance calculation in combination with the Kinect sensor data. The integrated hybrid motion planning, combining the virtual plane approach and a sensor based estimation method, allows the robot to find the appropriate windows for speed and orientation to move along a collision-free path in dynamic environments, making its usage very attractive and suitable for real-time embedded applications. In order to verify the performance of the suggested method, real experiments are carried out for the autonomous navigation of a mobile robot in dynamic environments using multiple moving obstacles. Here, two mobile robots act as the moving obstacles, and a third one has to avoid collision with the other robots.
The rest of the work is organized as follows. In Section 2, we introduce the kinematic equations and the geometry of the dynamic motion planning problem. In Section 3, the concept of the hybrid reactive navigation using the virtual plane approach is given, together with the configuration and system architecture of the Kinect device. Simulation and experimental tests are shown and discussed in Section 4, and Section 5 concludes the paper.
2 Definition of Dynamic Motion Planning
In this section, the relative velocity obstacle based motion planning algorithms for collision detection and the control laws are defined [23]. Figure 1 shows the geometric parameters for navigation in a dynamic environment for the mobile robot. The world is attached to a global fixed reference frame of coordinates $W$, whose origin is the point $O$. It is possible to attach local reference frames to every moving object in the working space. The suggested method is a reactive navigation method with which the robot needs to change its path to avoid either moving or static obstacles within a given radius, that is, the coverage area (CA).
The line of sight of the robot $l_r$ is the imaginary straight line that starts from the origin and is directed toward the reference center point of the robot $R$. The line-of-sight angle $\theta_r$ is the angle made by the line of sight $l_r$. The distance $l_{gr}$ between the robot $R$ and the goal $G$ is calculated by
$$l_{gr} = \sqrt{(x_g - x_r)^2 + (y_g - y_r)^2}, \quad (1)$$
Figure 1: Geometry of the navigation problem, illustrating the kinematic and geometric variables: world frame $W$ with origin $O$, robot $R(x_r, y_r, \theta_r, v_r)$ with coverage area (CA), goal $G$, obstacles $D_i(x_i, y_i, \theta_i, v_i)$, lines of sight $l_r$, $l_{gr}$, $l_{ir}$, $l_i$, and angles $\theta_r$, $\varphi_r$, $\varphi_{gr}$, $\varphi_{ir}$, $\varphi_i$.
where $(x_g, y_g)$ are the coordinates of the final goal point and $(x_r, y_r)$ is the state of the robot in $W$. The mobile robot has a differential driving mechanism using two wheels, and the kinematic equations of the wheeled mobile robot are given by
$$\dot{x}_r = v_r \cos\theta_r, \qquad \dot{y}_r = v_r \sin\theta_r, \qquad \dot{v}_r = a_r, \qquad \dot{\theta}_r = w_r, \quad (2)$$
where $a_r$ is the robot's linear acceleration and $v_r$ and $w_r$ are the linear and angular velocities; $(\theta_r, v_r)$ are the control inputs of the mobile robot. The line-of-sight angle $\varphi_{gr}$, obtained from the angle made by the line of sight $l_{gr}$, is given by the following equations:
$$\cos\varphi_{gr} = \frac{x_g - x_r}{\sqrt{(x_g - x_r)^2 + (y_g - y_r)^2}}, \qquad \tan\varphi_{gr} = \frac{y_g - y_r}{x_g - x_r}. \quad (3)$$
Now the kinematic equations of the $i$th obstacle $D_i$ are expressed by
$$\dot{x}_i = v_i \cos\theta_i, \qquad \dot{y}_i = v_i \sin\theta_i, \qquad \dot{\theta}_i = w_i, \quad (4)$$
where the obstacle has the linear velocity $v_i$ and the angular velocity $w_i$, and $\theta_i$ is the orientation angle. The Euclidean distance of the line of sight $l_{ir}$ between the robot and the $i$th obstacle is calculated by
$$l_{ir} = \sqrt{(x_i - x_r)^2 + (y_i - y_r)^2}, \quad (5)$$
and the line-of-sight angle $\varphi_{ir}$ is expressed by
$$\tan\varphi_{ir} = \frac{y_i - y_r}{x_i - x_r}. \quad (6)$$
The evolution of the range and turning angle between the robot and an obstacle for dynamic collision avoidance is computed by using the tangential and normal components of the relative velocity in polar coordinates as follows:
$$\dot{l}_{ir} = v_i \cos(\theta_i - \varphi_{ir}) - v_r \cos(\theta_r - \varphi_{ir}),$$
$$l_{ir}\dot{\varphi}_{ir} = v_i \sin(\theta_i - \varphi_{ir}) - v_r \sin(\theta_r - \varphi_{ir}). \quad (7)$$
From these equations, it can be seen that a negative sign of $\dot{l}_{ir}$ indicates that the robot is approaching obstacle $D_i$, and if the rate is zero, the distance between the robot and the obstacle remains constant. Meanwhile, a zero rate for the line-of-sight angle indicates that the motion of $D_i$ is along a straight line. The relative polar system presents a simple but very effective model that allows real-time representation of the relative motion between the robot and a moving obstacle [23].
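As a concrete illustration of (5)-(7), the following Python sketch (not from the paper; all function names and state values are hypothetical) computes the range, the line-of-sight angle, and their rates for one robot-obstacle pair, and flags an approaching obstacle by the sign of the range rate:

import math

def relative_motion_rates(xr, yr, vr, thr, xi, yi, vi, thi):
    # Range and line-of-sight angle, per (5) and (6)
    l_ir = math.hypot(xi - xr, yi - yr)
    phi_ir = math.atan2(yi - yr, xi - xr)
    # Tangential and normal components of the relative velocity, per (7)
    l_dot = vi * math.cos(thi - phi_ir) - vr * math.cos(thr - phi_ir)
    phi_dot = (vi * math.sin(thi - phi_ir) - vr * math.sin(thr - phi_ir)) / l_ir
    return l_ir, phi_ir, l_dot, phi_dot

# Hypothetical states: robot at the origin heading east at 0.3 m/s,
# obstacle 2 m ahead moving west at 0.2 m/s (head-on course).
l_ir, phi_ir, l_dot, phi_dot = relative_motion_rates(0.0, 0.0, 0.3, 0.0, 2.0, 0.0, 0.2, math.pi)
print(f"range {l_ir:.2f} m, range rate {l_dot:.2f} m/s, approaching: {l_dot < 0}")

The negative range rate in this example (-0.50 m/s) reproduces the approach condition stated above.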
3 Hybrid Reactive Motion Planning Approach
3.1. Virtual Plane Based Reactive Motion Planning. In this section, the virtual plane method, which allows transforming a moving object of interest into a stationary object, is briefly reviewed [23]. The transformation used in the virtual plane is achieved by introducing a local observer that allows the robot to find the appropriate windows for the speed and orientation to move along a collision-free path. Through this transformation, the collision course between the robot $R$ and the $i$th obstacle $D_i$ is reduced to a collision course between the virtual robot $R_v$ and the initial position $D_i(t_0)$ of the real obstacle. The components of the relative velocity between $R_v$ and $D_i(t_0)$ along and across $l_{ir}$ are given by
$$\dot{l}_{ir} = -v_{vr} \cos(\theta_{vr} - \varphi_{ir}), \qquad l_{ir}\dot{\varphi}_{ir} = -v_{vr} \sin(\theta_{vr} - \varphi_{ir}), \quad (8)$$
where $v_{vr}$ and $\theta_{vr}$ are the linear velocity and orientation of the virtual robot. The linear velocity and orientation angle of $R_v$ can be written as follows:
$$v_{vr} = \sqrt{(\dot{x}_i - \dot{x}_r)^2 + (\dot{y}_i - \dot{y}_r)^2}, \quad (9)$$
$$\tan\theta_{vr} = \frac{\dot{y}_i - \dot{y}_r}{\dot{x}_i - \dot{x}_r}. \quad (10)$$
Note that the tangential and normal equations given in (7) for the dynamic motion planning are rewritten in terms of the virtual robot as an observer, leading to a stationary motion planning problem. More details concerning the virtual planning method can be found in [23].
Collision detection is expressed in the virtual plane, but the final objective is to make the robot navigate toward the
goal along a collision-free path in the real plane. The motion of the robot in the real plane is recovered by
$$\dot{x}_r = \dot{x}_{vr} + \dot{x}_i, \qquad \dot{y}_r = \dot{y}_{vr} + \dot{y}_i. \quad (11)$$
This is the inverse transformation mapping the virtual plane into the real plane, and it gives the velocity of the robot as a function of the velocities of the virtual robot and the moving object. The speed and orientation of the real robot can be computed from the virtual robot and moving object velocities as follows:
$$v_r = \sqrt{(\dot{x}_{vr} + \dot{x}_i)^2 + (\dot{y}_{vr} + \dot{y}_i)^2}, \qquad \tan\theta_r = \frac{\dot{y}_{vr} + \dot{y}_i}{\dot{x}_{vr} + \dot{x}_i}. \quad (12)$$
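To make the virtual plane transformation concrete, a minimal Python sketch is given below (the function names are my own; it simply transcribes (9)-(12)): the forward map builds the virtual robot from the velocity components of the robot and the obstacle, and the inverse map recovers the real robot speed and heading from the virtual robot and obstacle velocities.

import math

def virtual_robot(vx_r, vy_r, vx_i, vy_i):
    # Virtual robot linear velocity and orientation, per (9) and (10)
    v_vr = math.hypot(vx_i - vx_r, vy_i - vy_r)
    th_vr = math.atan2(vy_i - vy_r, vx_i - vx_r)
    return v_vr, th_vr

def real_from_virtual(vx_vr, vy_vr, vx_i, vy_i):
    # Inverse transformation back to the real plane, per (11) and (12)
    v_r = math.hypot(vx_vr + vx_i, vy_vr + vy_i)
    th_r = math.atan2(vy_vr + vy_i, vx_vr + vx_i)
    return v_r, th_r

# Hypothetical velocity components (m/s): robot moving east, obstacle moving north.
v_vr, th_vr = virtual_robot(0.3, 0.0, 0.0, 0.2)
print(f"virtual robot: {v_vr:.2f} m/s at {math.degrees(th_vr):.1f} deg")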
3.2. Navigation Laws. In order to make the robot navigate toward the final goal, a kinematic based linear navigation law is used as [23]
$$\theta_r = M\varphi_{gr} + c_1 + c_0 e^{-at}, \quad (13)$$
where $\varphi_{gr}$ is the line-of-sight angle to the robot's final goal, and $c_1$ and $c_0$ are deviation terms characterizing the final desired orientation angle of the robot and indicating the initial orientation of the robot. The term $M$ is a navigation parameter with $M > 1$, and $a$ is a given positive gain. On the other hand, the collision course in the virtual plane with $D_i(t_0)$ is characterized by
$$\theta_{vr} \in \mathrm{CCVP}_i. \quad (14)$$
The collision cone in the virtual plane (CCVP) is given by
$$\mathrm{CCVP}_i = [\varphi_{ir} - \beta_i, \varphi_{ir} + \beta_i], \quad (15)$$
where $\beta_i$ is the angle between the lines of the upper and lower tangent limit points in $D_i$. The direct collision course between $R$ and $D_i$ is characterized by
$$\tan\theta_{vr} = \frac{\dot{y}_i - \dot{y}_r}{\dot{x}_i - \dot{x}_r} \in [\tan(\varphi_{ir} - \beta_i), \tan(\varphi_{ir} + \beta_i)]. \quad (16)$$
After the orientation angle $\theta_{vr}$ of the virtual robot is computed in terms of the linear velocities of the robot and the moving obstacles as given in (10), it is possible to write the expressions of the orientation angle $\theta_r$ and the speed $v_r$ for the real robot controls in terms of the linear velocity and orientation angle of the moving obstacle and the virtual robot as follows:
$$v_r = \frac{v_i(\tan\theta_{vr}\cos\theta_i - \sin\theta_i)}{\tan\theta_{vr}\cos\theta_r - \sin\theta_r}, \qquad \theta_r = \theta_{vr} - \arcsin\left[\frac{v_i \sin(\theta_{vr} - \theta_i)}{v_r}\right]. \quad (17)$$
For the robot control, the desired value of the orientation angle in the virtual plane can be expressed based on the linear navigation law as
$$\theta_{vr}^{*}(t) = \alpha_{ik} + c_1 + c_0 \exp\{-a(t - t_d)\}, \quad k = 1, 2, \quad (18)$$
Figure 2: Architecture of the Microsoft Kinect sensor (LED, vision camera, microphone array, 3D depth sensor cameras, and motorized tilt).
where $t_d$ denotes the time when the robot starts its deviation for collision avoidance, and $\alpha_{i1}$ and $\alpha_{i2}$ are the left and right line-of-sight angles between the reference deviation points and the points on the collision cone in the virtual plane. Finally, based on the desired orientation in the virtual plane, the corresponding desired speed value $v_r^{*}$ for the robot is calculated by
$$v_r^{*} = \frac{v_i(\tan\theta_{vr}^{*}\cos\theta_i - \sin\theta_i)}{\tan\theta_{vr}^{*}\cos\theta_r - \sin\theta_r}. \quad (19)$$
In a similar way, the corresponding desired orientation value can be expressed by
$$\tan\theta_r^{*} = \frac{v_{vr}\sin(\theta_{vr}^{*}) + v_i\sin(\theta_i)}{v_{vr}\cos(\theta_{vr}^{*}) + v_i\cos(\theta_i)}. \quad (20)$$
Note that for robot navigation with a collision avoidance technique within dynamic environments, either the linear velocity control expressed in (19) or the orientation angle control in (20) can be utilized.
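The pieces of this section can be tied together in a short sketch. The following Python fragment is an illustrative reading of (13), (15), and (19) with assumed gains and geometry, not the authors' implementation: it evaluates the linear navigation law, tests whether a virtual-plane heading lies inside a collision cone, and computes the desired speed for a chosen deviation heading.

import math

def navigation_heading(phi_gr, t, M=1.5, c0=0.0, c1=0.0, a=1.0):
    # Kinematic linear navigation law, per (13); requires M > 1 and a > 0
    return M * phi_gr + c1 + c0 * math.exp(-a * t)

def in_collision_cone(th_vr, phi_ir, beta_i):
    # Membership test against CCVP_i = [phi_ir - beta_i, phi_ir + beta_i], per (14)-(15)
    d = math.atan2(math.sin(th_vr - phi_ir), math.cos(th_vr - phi_ir))  # wrap to [-pi, pi]
    return abs(d) <= beta_i

def desired_speed(v_i, th_v_star, th_i, th_r):
    # Desired robot speed for the chosen virtual-plane heading, per (19)
    num = v_i * (math.tan(th_v_star) * math.cos(th_i) - math.sin(th_i))
    den = math.tan(th_v_star) * math.cos(th_r) - math.sin(th_r)
    return num / den

# Example: a cone of half-width 0.3 rad about phi_ir = 0.2 rad contains heading 0.3 rad.
print(in_collision_cone(0.3, 0.2, 0.3), navigation_heading(0.2, t=0.0))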
3.3. Sensor Fusion Based Range and Pose Estimation. Low-cost range sensors are an attractive alternative to expensive laser scanners in application areas such as motion planning and mapping. The Microsoft Kinect [26] is a sensor which consists of an IR projector, an IR camera, an RGB camera, a multiarray microphone, and an electric motor providing the tilt function of the sensor (shown in Figure 2). The Kinect sensor captures not only depth but also color images simultaneously at a frame rate of up to 30 fps. Some key features are illustrated in [26-29]. The RGB video stream uses 8-bit VGA resolution (640 x 480 pixels) with a Bayer color filter at a frame rate of 30 Hz. The monochrome depth sensing video stream has VGA resolution (640 x 480 pixels) with 11-bit depth, which provides 2048 levels of sensitivity. Depth data is acquired by the combination of the IR projector and the IR camera. The microphone array features four microphone capsules, each channel processing 16-bit audio at a sampling rate of 16 kHz. The motorized pivot is capable of tilting the sensor up to 27 degrees either up or down.
The features of the Kinect device make its application very attractive for autonomous robot navigation. In this work, the Kinect sensor is utilized for measuring the range to moving obstacles and estimating color-based locations of objects for dynamic motion planning.
Table 1: Kinect's focal length and field of view.

Camera   Focal length (pixels)   Field of view, horizontal (deg)   Field of view, vertical (deg)
RGB      525                     63                                50
IR       580                     57                                43
Figure 3: Kinect depth measurement versus actual distance (meters against raw Kinect units, showing the detectable range of the depth camera and the curves of equations (21) and (22)).
Before going into detail, the concept of the calculation of the real coordinates is discussed. The Kinect depth sensor has a useful range from a minimum of 800 mm to a maximum of 4000 mm. The camera focal length is constant and known, and thus the real distance between the camera and a chosen target is easily calculated. The parameters used in the Kinect sensor are summarized in Table 1.
Two similar equations have been proposed by researchers, where one is based on the function $1/D_{\mathrm{value}}$ and the other uses $\tan(D_{\mathrm{value}})$. The distance between the camera and a target object $z_w$ is expressed by
$$z_w = \frac{1}{D_{\mathrm{value}} \times (-0.0030711016) + 3.3309495161}, \quad (21)$$
$$z_w = 0.1236 \times \tan\left(\frac{D_{\mathrm{value}}}{2842.5} + 1.1863\right). \quad (22)$$
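For reference, (21) and (22) transcribe directly into Python; here d_raw stands for the 11-bit raw depth value (0-2047), and the constants are those given above:

import math

def depth_inverse_model(d_raw):
    # Distance in meters from the raw depth value, per (21)
    return 1.0 / (d_raw * (-0.0030711016) + 3.3309495161)

def depth_tan_model(d_raw):
    # Distance in meters from the raw depth value, per (22)
    return 0.1236 * math.tan(d_raw / 2842.5 + 1.1863)

for d_raw in (600, 800, 1000):
    print(d_raw, round(depth_inverse_model(d_raw), 3), round(depth_tan_model(d_raw), 3))

Both models agree closely over the working range and grow steeply near the raw value 1024, where the distance approaches about 5 m.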
Figure 3 shows the detectable range of the depth camera, where the distances in world coordinates based on the above two equations are computed by limiting the raw depth to 1024, which corresponds to about 5 meters.
Figure 4 shows the error results of distance measurement experiments using the Kinect's depth camera. In this experiment, the distance measured with a ruler, noted in green, gives the reference distance; three repetitive experiments were carried out and are drawn in red, light blue, and blue. The experiment shows that the errors of the depth measurements from the Kinect sensor are proportional to the distance.
Figure 4: Kinect depth camera measurement experiment and error (top: distance measurement versus cycle time for the reference distance and Experiments 1-3; bottom: error measurement versus measured distance for Experiments 1-3 and their average value).
Figure 5: Robot and obstacle localization using the Kinect sensor for ranging and positioning computation (camera screen points $P'_g(x_s, y_s, z_s)$ and $P'_r(x_s, y_s, z_s)$ mapped to real-world points $P_g(x_w, y_w, z_w)$ and $P_r(x_w, y_w, z_w)$ through the camera geometry with angles $\alpha$ and $\beta$).
Figure 5 shows the general schematics of the geometric approach to finding the $x$ and $y$ coordinates using the Kinect sensor system, where $h$ is the screen height in pixels and $\beta$ is the field of view of the camera. The coordinates of a point on the image plane of the robot $P'_r$ and the goal $P'_g$ are transformed into the world coordinates $P_r$ and $P_g$, calculated by
$$P = (x_w, y_w) \longrightarrow P' = (x_s, y_s, z_s), \qquad x_w = \frac{x_s}{z_w}, \qquad y_w = \frac{y_s}{z_w}. \quad (23)$$
Figure 6: Robot heading angle computation approach: (a) heading angle $\theta$ from the robot $(x_r, y_r)$ to the goal $(x_g, y_g)$; (b) the four quadrant sections I-IV.
Each of the coordinates $(x_w, y_w, z_w)$ of the two red (robot) and green (goal) points is used as the input into the vision system, and $P'$ is computed by
$$\begin{pmatrix} x_s \\ y_s \\ z_s \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}, \quad (24)$$
where $f$ is the focal length of the camera. $P'$ is calculated from the pixel coordinates divided by $\rho_u$ and $\rho_v$, and the pixel values of the image, $u$ and $v$, are the pixel coordinates obtained from the following equations:
$$\begin{pmatrix} u' \\ v' \\ w' \end{pmatrix} = \begin{pmatrix} 1/\rho_u & 0 & u_0 \\ 0 & 1/\rho_v & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{pmatrix},$$
$$P' = \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u'/w' \\ v'/w' \end{pmatrix}, \qquad \begin{pmatrix} u' \\ v' \\ w' \end{pmatrix} = C \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}. \quad (25)$$
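To make the projection model in (24)-(25) concrete, the following numpy sketch (with assumed intrinsic values, not calibrated Kinect parameters) forms the combined camera matrix C and projects a homogeneous world point to pixel coordinates:

import numpy as np

f, rho_u, rho_v, u0, v0 = 580.0, 1.0, 1.0, 320.0, 240.0  # assumed intrinsics

# Perspective projection matrix, per (24)
P = np.array([[f, 0.0, 0.0, 0.0],
              [0.0, f, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
# Pixel-scaling matrix, per (25)
K = np.array([[1.0 / rho_u, 0.0, u0],
              [0.0, 1.0 / rho_v, v0],
              [0.0, 0.0, 1.0]])
C = K @ P  # combined camera matrix C of (25)

Xw = np.array([0.5, 0.2, 2.7, 1.0])  # homogeneous world point (meters)
u_h, v_h, w_h = C @ Xw
print(f"pixel coordinates: ({u_h / w_h:.1f}, {v_h / w_h:.1f})")  # homogeneous division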
In the experiment, the final goal and robots are recognized by the built-in RGB camera. In addition, the distance of an object is measured within millimeter accuracy using the IR camera, and the target object's pixel coordinates $(x_s, y_s)$ are estimated by using a color-based detection approach. In this work, the distance measured between the planning field and the Kinect sensor is 2700 mm, which becomes the depth camera's detectable range. The horizontal and vertical coordinates of an object are calculated as follows:
(1) horizontal coordinate:
$$\alpha = \arctan\left(\frac{x_s}{f}\right), \qquad x_w = z_w \times \sin\alpha, \quad (26)$$
(2) vertical coordinate:
$$\alpha = \arctan\left(\frac{y_s}{f}\right), \qquad y_w = z_w \times \sin\alpha, \quad (27)$$
where $z_w$ is the distance to the object obtained from the Kinect sensor. The real-world coordinates obtained above are then used in the dynamic path planning procedures.
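A small sketch of the angular back-projection in (26)-(27) follows, using the IR focal length from Table 1; the pixel offsets (x_s, y_s) are assumed to be measured from the image center, and the function name is hypothetical:

import math

F_IR = 580.0  # IR camera focal length in pixels (Table 1)

def pixel_to_world(x_s, y_s, z_w, f=F_IR):
    # Horizontal and vertical world coordinates from centered pixel
    # offsets and the Kinect range z_w, per (26) and (27)
    alpha_h = math.atan(x_s / f)
    alpha_v = math.atan(y_s / f)
    return z_w * math.sin(alpha_h), z_w * math.sin(alpha_v)

# Example: target 100 px right of and 50 px above the image center at 2700 mm range.
x_w, y_w = pixel_to_world(100.0, 50.0, 2700.0)
print(f"x_w = {x_w:.0f} mm, y_w = {y_w:.0f} mm")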
In general, the intrinsic and extrinsic parameters of the color and depth cameras are set to default values, and thus it is necessary to calibrate them for accurate tests. The calibration of the depth and RGB color cameras in the Kinect sensor can be performed by using a mathematical model of depth measurement and creating a depth annotation of a chessboard by physically offsetting the chessboard from its background; details of the calibration procedures can be found in [30, 31].
As indicated in the previous section, for the proposed relative velocity obstacle based dynamic motion planning, accurate estimates of the range and orientation of an object play an important role. In this section, an efficient new approach is proposed to estimate the heading information of the robot using a color detection approach. First, the robot is covered by green and red sections as shown in Figure 6.
Then, using a color detection method [32], the center location of the robot is calculated.
Input: Coordinates of robot, obstacle 1, obstacle 2, and goal
Output: Speeds of robot's right and left wheels
Calculate l_gr using Kinect
while l_gr > 0 do
    Calculate l_gr and phi_gr
    Send robot speed
    if all D_i in CA then
        Calculate l_ir using Kinect
        if all dl_ir/dt > 0 then
            There is no collision risk; keep sending robot speed
        else
            Construct the virtual plane
            Test for collision in the virtual plane
            if there is a collision risk then
                Check the sonar sensor value
                if the sonar value is too small then
                    Choose theta_r for quick motion control
                    Send robot speed
                else
                    Construct the theta-window
                    Choose the appropriate value for theta_r
                    Send robot speed
                end if
            end if
        end if
    end if
end while

Algorithm 1: Hybrid reactive dynamic navigation algorithm.
After finding the heading angle $\theta$ as shown in (28), new heading information in each of the four quadrant sections is computed by using the following equations:
$$\Delta x = x_g - x_r, \qquad \Delta y = y_g - y_r, \qquad \theta = \tan^{-1}\left(\frac{y_g - y_r}{x_g - x_r}\right), \quad (28)$$
$$\begin{aligned} &(1)\ x_g > x_r,\ y_g > y_r{:} \quad \theta = \theta + 0, \\ &(2)\ x_g < x_r,\ y_g > y_r{:} \quad \theta = 3.14 - \theta, \\ &(3)\ x_g < x_r,\ y_g < y_r{:} \quad \theta = 3.14 + \theta, \\ &(4)\ x_g > x_r,\ y_g < y_r{:} \quad \theta = 6.28 - \theta. \end{aligned} \quad (29)$$
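The piecewise quadrant correction in (29), with 3.14 and 6.28 standing for pi and 2*pi, folds the principal-value arctangent of (28) into a full-circle bearing; a compact Python equivalent (a sketch using atan2 in place of the explicit case analysis) might be:

import math

def heading_to_goal(x_r, y_r, x_g, y_g):
    # Bearing from robot to goal in [0, 2*pi), covering the four
    # quadrant cases of (29) in a single expression
    theta = math.atan2(y_g - y_r, x_g - x_r)  # principal value in (-pi, pi]
    return theta if theta >= 0.0 else theta + 2.0 * math.pi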
Finally, the relative velocity obstacle based reactive dynamic navigation algorithm with the capability of collision avoidance is summarized in Algorithm 1.
4 Experimental Results
For the evaluation and verification of the proposed sensor based reactive dynamic navigation algorithms, both a simulation study and experimental tests were carried out with a realistic experimental setup.
4.1. Experimental Scenario and Setup. For the experimental tests, two robots are assigned as moving obstacles and a third one is used as a master robot that generates control commands to avoid the dynamic obstacles based on the suggested reactive motion planning algorithms. For the moving obstacles, two NXT Mindstorm based vehicles that can either move in a random direction or follow a designated path were developed. The HBE-RoboCAR equipped with ultrasonic and encoder sensors is used as the master robot, as shown in Figure 7.
The HBE-RoboCAR [33] has an 8-bit AVR ATmega128L processor. The robot is equipped with multiple embedded processor modules (embedded processor, FPGA, MCU). It provides obstacle detection with ultrasonic and infrared sensors and motion control with an acceleration sensor and motor encoders. HBE-RoboCAR has the ability to communicate with other devices by either wireless or wired technology, such as a Bluetooth module and ISP/UART interfaces, respectively. In this work, HBE-RoboCAR is connected to a computer at the ground control station using Bluetooth wireless communication. Figure 7 shows the hardware specification and sensor systems of the robot platform, and Figure 8 shows the interface and control architecture of the embedded components of HBE-RoboCAR [33].
Figure 7: Mobile robot hardware and sensor systems (HBE-RoboCAR [33]): (1) power switch, (2) power LED and reset switch, (3) interface connector, (4) wireless network connector, (5) ultrasonic sensors, (6) PSD sensors, (7) LEDs, (8) phototransistors, (9) voltmeter, (10) DC in, (11) charge in, (12) battery in.
Figure 8: Block diagram of the HBE-RoboCAR embedded system: embedded processor, FPGA module (Altera, Xilinx), and AVR board with ATmega128 and acceleration sensor (ADXL202E); ATmega8 with voltage gauge and motor driver; interface boards and connectors; sensor and motor connections for the photo sensor (GP2Y0A21), infrared sensor (IL-8L/EL-8L), and ultrasonic sensor (MA40) at front and back; DC geared motors (x2) and DC geared encoder motors (x2); control board (compatible with three kinds), base board, and sensor board with LEDs (+5 V, +3.3 V).
For the dynamic obstacle avoidance, the relative velocity obstacle based navigation laws require range and heading information from the sensors. For the range estimation, the Kinect sensor is utilized. If the Kinect sensor detects the final goal using a color-based detection algorithm [32, 34, 35], it sends the information to the master robot. After receiving the target point, the master robot starts the onboard navigation algorithm to reach the goal while avoiding dynamic obstacles. When the robot navigates in the experimental field, the distance to each moving obstacle is measured by the Kinect sensor, and the range information is fed back to the master robot via Bluetooth communication as input to the reactive motion planning algorithms. The detailed scenario of the experimental setup is illustrated in Figure 9.
4.2. Simulation Results. Figure 10 shows the simulation results of the reactive motion planning in both the virtual plane and the real plane. In this simulation, the trajectories of the two obstacles are drawn as blue and black lines, the trajectory of the master robot is drawn as a red line, and the goal is indicated by a green dot. As can be clearly seen in the real plane and the virtual plane in Figures 10(b) and 10(a), the master robot avoided the first obstacle, which was moving toward the master robot, and successfully reached the target goal after avoiding collision with the second moving obstacle just before reaching the target. While the master robot avoids the obstacles, it generates a collision cone by choosing a deviation point in the virtual plane. In the virtual plane, the radius of the collision cone is the same as the obstacle's, and the distance between the deviation point and the collision cone depends on the radius of the master robot. The ellipses indicate the initial locations of the robot and the obstacles.
Figure 9: Test scenario and architecture of the experimental setup (Kinect and PC linked to the robot via Bluetooth communication; obstacles 1 and 2, robot, and destination in the planning field).
Figure 10: Simulation results in (a) the virtual plane, showing the deviation points and the first and second obstacles' collision cones, and (b) the real plane, showing the robot, the first and second obstacles, the goal, the trajectory of the robot, the coverage area, and the initial locations of the robot and the obstacles.
Figure 11: Orientation angle information versus cycle time (times of program running): target angle, robot heading angle, and variance angle.
Figure 11 illustrates the orientation angle information used for the robot control. The top plot shows the angle of the moving robot to the target from the virtual plane, the second plot shows the robot heading angle commanded for the navigation control, and the third plot shows the angle difference between the target and the robot heading angle. At the final stage of the path planning, the commanded orientation angle and the target angle to the goal point become the same.
Figure 12: Results of the first moving obstacle's orientation angle, speed, and position in x-y coordinates.
Instead of controlling the robot with the orientation angle, the speed of the master robot can also be used to avoid the moving obstacles.
Figures 12 and 13 show each moving obstacle's heading angle, linear velocity, and trajectory. As can be seen, in order to carry out a dynamic path planning experiment, the speed and the heading angle were changed during the simulation, resulting in uncertain cluttered environments.
Figure 13: Results of the second moving obstacle's orientation angle, speed, and position in x-y coordinates.
Figure 14: Results of the robot's trajectory, speed, and right and left wheel speeds.
Figure 14 shows the mobile robot's trajectory from the start point to the goal point, along with the forward velocity and each wheel speed from the encoders. As can be seen, the trajectory of the robot has a sharp turn around the location (1500 mm, 1000 mm) in order to avoid the second moving obstacle.
Figure 15: (a) Experiment environment (initial stage), showing the starting positions of the robot and obstacles 1 and 2, and the goal. (b) Plots of the experimental data at the initial stage in the virtual and real planes, with the robot at R(2751.3368, 2126.9749).
It is seen that the right and left wheel speeds are mirrored along the time axis. The relationship between the variance angle and the robot's right and left wheel speeds can also be seen.
4.3. Field Test and Results. Further verification of the performance of the proposed hybrid dynamic path planning approach in real experiments was carried out with the same scenario used in the previous simulation part. In the experiment, two moving obstacles are used, and the master robot moves without any collision with the obstacles to the target point, as shown in Figure 15(a); the initial locations of the obstacles and the robot are shown in Figure 15(b). The red dot is the initial starting position of the master robot at (2750 mm, 2126 mm), the black dot is the initial location of the second obstacle at (2050 mm, 1900 mm), and the blue dot is the initial location of the first moving obstacle at (1050 mm, 2000 mm). In the virtual plane, the collision cone of the first obstacle is depicted as shown in the top plot of Figure 15(b).
Figure 16: (a) Experiment result of collision avoidance with the first obstacle. (b) Plot of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with the first moving obstacle.
The robot carries out its motion control based on the collision cone in the virtual plane until it avoids the first obstacle.
Figure 16 shows the collision avoidance performance with the first moving obstacle. As can be seen, the master robot avoided the obstacle by steering to the left without any collisions, which is described in detail in the virtual plane in the top plot of Figure 16(b). The trajectory and movement of the robot and the obstacles are depicted in the real plane in Figure 16(b) for detailed analysis.
In a similar way, Figure 17(a) illustrates the collision avoidance with the second moving obstacle, and the detailed path and trajectory are described in Figure 17(b). The top plot of Figure 17(b) shows the motion planning in the virtual plane, where the initial location of the second moving obstacle is recognized at the center of (2050 mm, 1900 mm).
Figure 17: (a) Experiment result of collision avoidance with the second obstacle. (b) Plot of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with the second moving obstacle.
Based on this initial location, the second collision cone is constructed with a big green ellipse that allows the virtual robot to navigate without any collision with the second obstacle. The trajectory of the robot motion planning in the real plane is depicted in the bottom plot of Figure 17(b).
Now at the final phase, after avoiding all the obstacles, the master robot reached the target goal with a motion control as shown in Figure 18(a). The overall trajectories of the robot from the starting point to the final goal target in the virtual plane are depicted in the top plot of Figure 18(b), and the trajectories of the robot in the real plane are depicted in the bottom plot of Figure 18(b). Note that the trajectories of the robot location differ from each other in the virtual plane and the real plane; however, the orientation change gives the same direction change of the robot in both the virtual and the real plane. In this plot, the green dot is the final goal point, and the robot trajectory is depicted with red dotted circles. The smooth trajectory was generated by using the linear navigation laws as explained above.
Figure 18: (a) Experiment result after the collision avoidance with all obstacles. (b) Plot of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with all the moving obstacles.
From this experiment, it is easily seen that the proposed hybrid reactive motion planning approach, designed by the integration of the virtual plane approach and sensor based planning, is very effective for dynamic collision avoidance problems in cluttered, uncertain environments. The effectiveness of the hybrid reactive motion planning method makes its usage very attractive for various dynamic navigation applications of not only mobile robots but also other autonomous vehicles such as flying vehicles and self-driving vehicles.
5 Conclusion
In this paper, we proposed a hybrid reactive motion planning method for an autonomous mobile robot in uncertain
dynamic environments. The hybrid reactive motion planning method combined a reactive path planning method, which can transform dynamic moving obstacles into stationary ones, with a sensor based approach, which can provide relative information of moving obstacles and environments. The key features of the proposed method are twofold: the first is the simplification of complex dynamic motion planning problems into stationary ones using the virtual plane approach, while the second is the robustness of the sensor based motion planning, in which the pose estimation of moving obstacles is made by using a Kinect sensor that provides ranging and color detection. The sensor based approach improves the accuracy and robustness of the reactive motion planning approach by providing the information of the obstacles and environments. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles.
In future work, a sensor fusion approach that could improve the heading estimation of the robot and the speed estimation of moving objects will be investigated further for more robust motion planning.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research was supported by the National Research Foundation of Korea (NRF) (no. 2014-017630 and no. 2014-063396) and was also supported by the Human Resource Training Program for Regional Innovation and Creativity through the Ministry of Education and National Research Foundation of Korea (no. 2014-066733).
References
[1] R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza, Introduction to Autonomous Mobile Robots, The MIT Press, London, UK, 2nd edition, 2011.
[2] J. P. van den Berg and M. H. Overmars, "Roadmap-based motion planning in dynamic environments," IEEE Transactions on Robotics, vol. 21, no. 5, pp. 885-897, 2005.
[3] D. Hsu, R. Kindel, J.-C. Latombe, and S. Rock, "Randomized kinodynamic motion planning with moving obstacles," The International Journal of Robotics Research, vol. 21, no. 3, pp. 233-255, 2002.
[4] R. Gayle, A. Sud, M. C. Lin, and D. Manocha, "Reactive deformation roadmaps: motion planning of multiple robots in dynamic environments," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '07), pp. 3777-3783, San Diego, Calif, USA, November 2007.
[5] S. S. Ge and Y. J. Cui, "Dynamic motion planning for mobile robots using potential field method," Autonomous Robots, vol. 13, no. 3, pp. 207-222, 2002.
[6] K. P. Valavanis, T. Hebert, R. Kolluru, and N. Tsourveloudis, "Mobile robot navigation in 2-D dynamic environments using an electrostatic potential field," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 30, no. 2, pp. 187-196, 2000.
[7] X. Gao, Y. Ou, Y. Fu, et al., "A novel local path planning method considering both robot posture and path smoothness," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 172-178, Shenzhen, China, December 2013.
[8] J. Jiao, Z. Cao, P. Zhao, X. Liu, and M. Tan, "Bezier curve based path planning for a mobile manipulator in unknown environments," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 1864-1868, Shenzhen, China, December 2013.
[9] A. Stentz, "Optimal and efficient path planning for partially-known environments," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3310-3317, May 1994.
[10] G. Song, H. Wang, J. Zhang, and T. Meng, "Automatic docking system for recharging home surveillance robots," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 428-435, 2011.
[11] M. Seder and I. Petrovic, "Dynamic window based approach to mobile robot motion control in the presence of moving obstacles," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 1986-1991, Roma, Italy, April 2007.
[12] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision avoidance," IEEE Robotics and Automation Magazine, vol. 4, no. 1, pp. 23-33, 1997.
[13] J. Borenstein and Y. Koren, "Real-time obstacle avoidance for fast mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179-1187, 1989.
[14] J. Borenstein and Y. Koren, "The vector field histogram - fast obstacle avoidance for mobile robots," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 278-288, 1991.
[15] R. Simmons, "The curvature-velocity method for local obstacle avoidance," in Proceedings of the 13th IEEE International Conference on Robotics and Automation, vol. 4, pp. 3375-3382, Minneapolis, Minn, USA, April 1996.
[16] J. Minguez and L. Montano, "Nearness Diagram (ND) navigation: collision avoidance in troublesome scenarios," IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 45-59, 2004.
[17] E. Owen and L. Montano, "Motion planning in dynamic environments using the velocity space," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pp. 997-1002, August 2005.
[18] A. Chakravarthy and D. Ghose, "Obstacle avoidance in a dynamic environment: a collision cone approach," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 28, no. 5, pp. 562-574, 1998.
[19] F. Belkhouche, T. Jin, and C. H. Sung, "Plane collision course detection between moving objects by transformation of coordinates," in Proceedings of the 17th IEEE International Conference on Control Applications, Part of IEEE Multi-Conference on Systems and Control, San Antonio, Tex, USA, 2008.
[20] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," International Journal of Robotics Research, vol. 17, no. 7, pp. 760-772, 1998.
[21] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760-772, 1998.
[22] Z. Shiller, F. Large, and S. Sekhavat, "Motion planning in dynamic environments: obstacles moving along arbitrary trajectories," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), pp. 3716-3721, Seoul, Republic of Korea, May 2001.
[23] F. Belkhouche, "Reactive path planning in a dynamic environment," IEEE Transactions on Robotics, vol. 25, no. 4, pp. 902-911, 2009.
[24] B. Dorj, D. Tuvshinjargal, K. T. Chong, D. P. Hong, and D. J. Lee, "Multi-sensor fusion based effective obstacle avoidance and path-following technology," Advanced Science Letters, vol. 20, pp. 1751-1756, 2014.
[25] R. Philippsen, B. Jensen, and R. Siegwart, "Towards real-time sensor based path planning in highly dynamic environments," in Autonomous Navigation in Dynamic Environments, vol. 35 of Springer Tracts in Advanced Robotics, pp. 135-148, Springer, Berlin, Germany, 2007.
[26] K.-T. Song and C. C. Chang, "Reactive navigation in dynamic environment using a multisensor predictor," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 870-880, 1999.
[27] K. Khoshelham and S. O. Elberink, "Accuracy and resolution of Kinect depth data for indoor mapping applications," Sensors, vol. 12, no. 2, pp. 1437-1454, 2012.
[28] J. Han, L. Shao, D. Xu, and J. Shotton, "Enhanced computer vision with Microsoft Kinect sensor: a review," IEEE Transactions on Cybernetics, vol. 43, no. 5, pp. 1318-1334, 2013.
[29] F. Kammergruber, A. Ebner, and W. A. Gunthner, "Navigation in virtual reality using Microsoft Kinect," in Proceedings of the 12th International Conference on Construction Application of Virtual Reality, Taipei, Taiwan, November 2012.
[30] Microsoft Kinect for Xbox 360, http://www.xbox.com/en-US/xbox-one/accessories/kinect-for-xbox-one.
[31] K. Konolige and P. Mihelich, "Technical description of Kinect calibration," July 2014, http://www.ros.org/wiki/kinect_calibration/technical.
[32] G. Shashank, M. Sharma, and A. Shukla, "Robot navigation using image processing and isolated word recognition," International Journal of Computer Science and Engineering, vol. 4, no. 5, pp. 769-778, 2012.
[33] T. W. Choe, H. S. Son, J. H. Song, and J. H. Park, Intelligent Robot System HBE-RoboCar, HanBack Electronics, Seoul, Republic of Korea, 2012.
[34] K. R. Jadavm, M. A. Lokhandwala, and A. P. Gharge, "Vision based moving object detection and tracking," in Proceedings of the National Conference on Recent Trends in Engineering and Technology, Anand, India, 2011.
[35] A. Chandak, K. Gosavi, S. Giri, S. Agrawal, and P. Kulkarni, "Path planning for mobile robot navigation using image processing," International Journal of Scientific & Engineering Research, vol. 4, pp. 1490-1495, 2013.
2 Journal of Sensors
that when a mission is assigned or a goal location is giventhe robot does not plan its path but rather navigate itselfby reacting to its immediate environment in real time Themain idea of the reactive paradigm is to separate the controlsystem into small units of sensor-action pairs with a layeredmodular architecture resulting in fast execution of the controlalgorithm [12] There are other types of developments inlocal reactive path planning approaches such as Vector FieldForce (VFF) [13] andVector Field Histogram (VFH) [14]TheVFF and VFHmethods generate histograms from senor datain order to generate control commands for the vehicle butthey do not take into account the dynamic and kinematicconstraints of the vehicle
However there have been a few reactive works that utilizethe kinematic or dynamic information of the environmentto compute the motion commands for avoiding unexpectedchanges in the environment When the velocity informationof the objects obtained from available sensors is utilized therobot navigation system can compute trajectories resulting inimproving the motion performance regarding other obstacleavoidance methods [15ndash19] The curvature velocity method(CVM) [15] and the dynamic windows approach (DWA) [16]search an appropriate control command in velocity space bymaximizing an objective function which has criteria such asspeed distance between obstacles and remaining distancetowards the final destination The CVM and DWA methodhowever could increase the order of complexity resultingfrom the optimization of the cost function In [17ndash19] a veloc-ity information based approach for navigation and collisiondetection based on the kinematic equation is introduced byusing the notion of collision cones in the velocity space In asimilar way the concept of velocity obstacles [20 21] takes thevelocity of the moving obstacles into account which resultsin a shift of the collision cones This method is restrictedto obstacles with linear motion and thus the nonlinearvelocity obstacle approach is introduced to extend to copewith obstacles moving along arbitrary trajectories [22] Thekey concept of velocity obstacles is to transform the dynamicproblem into several static problems in order to increase thecapability of avoiding dynamic obstacle within unexpectedenvironment changes [23] Meanwhile sensor based motionplanning techniques are also widely used for robot navigationapplications in dynamic environments where the pose esti-mates of the moving obstacles are obtained by using sensorysystems [24ndash26] These sensor based navigation approachesalso require the knowledge of the obstaclersquos velocities for anaccurate navigation solution
In this work a new sensor fusion based hybrid reactivenavigation approach for autonomous robots is proposed indynamic environments The contribution of the new motionplanning method lies on the fact that it integrates a localobserver in a virtual plane as a kinematic reactive pathplanner [23] with a sensor fusion based obstacle detectionapproach which can provide a relative information of movingobstacles and environments resulting in an improved robust-ness and accuracy of the dynamic navigation capability Thekey feature of the reactive motion planning method is basedon a local observer in the virtual plane approachwhichmakesthe effective transformation of complex dynamic planning
problems into simple stationary ones along with a collisioncone in the virtual plane approach [23] On the other handa sensor fusion based planning technique provides the poseestimation of moving obstacles by using sensory systems andthus it could improve the accuracy reliability and robustnessof the reactive motion planning approach in uncertaindynamic environmentsThehybrid reactive planningmethodallows an autonomous vehicle to reactively change headingand velocity to cope with an obstacle around in each planningtime As a sensory system Microsoft Kinect device [27]which could obtain distance between the camera and targetobjects is utilized The advantage of using Kinect is on itscapability of calculating the distance between two objectson the world coordinate frame In case that the two objectsare placed closer a sonar sensor mounted on the robot candetect andmake a precise distance calculation in combinationwith the Kinect sensor data The integrated hybrid motionplanning with the integration of the virtual plane approachand sensor based estimation method allows the robot to findthe appropriate windows for the speed and orientation tomove with a collision-free path in dynamic environmentsmaking its usage very attractive and suitable for real-timeembedded applications In order to verify the performanceof the suggested method real experiments are carried out forthe autonomous navigation of a mobile robot in the dynamicenvironments using multiple moving obstacles Here twomobile robots act on the moving obstacles and one has toavoid collision with the other robot
The rest of the work is organized as follows. In Section 2, we introduce the kinematic equations and the geometry of the dynamic motion planning problem. In Section 3, the concept of the hybrid reactive navigation using the virtual plane approach is given, together with the configuration and system architecture of the Kinect based sensing system. Simulation and experimental tests are shown and discussed in Section 4, and conclusions are given in Section 5.
2 Definition of Dynamic Motion Planning
In this section, the relative velocity obstacle based motion planning algorithms for collision detection and the control laws are defined [23]. Figure 1 shows the geometric parameters for the navigation of the mobile robot in a dynamic environment. The world is attached to a global fixed reference frame of coordinates $W$ with origin $O$. It is possible to attach local reference frames to every moving object in the working space. The suggested method is a reactive navigation method with which the robot needs to change its path to avoid either moving or static obstacles within a given radius, that is, the coverage area (CA).
The line of sight of the robot $l_r$ is the imaginary straight line that starts from the origin and is directed toward the reference center point of the robot $R$. The line-of-sight angle $\theta_r$ is the angle made by the line of sight $l_r$. The distance $l_{gr}$ between the robot $R$ and the goal $G$ is calculated by

$$l_{gr} = \sqrt{(x_g - x_r)^2 + (y_g - y_r)^2}, \quad (1)$$
[Figure 1: Geometry of the navigation problem. Illustration of the kinematic and geometric variables.]
where $(x_g, y_g)$ are the coordinates of the final goal point and $(x_r, y_r)$ is the position of the robot in $W$. The mobile robot has a differential driving mechanism using two wheels, and the kinematic equations of the wheeled mobile robot are given by

$$\dot{x}_r = v_r \cos\theta_r, \quad \dot{y}_r = v_r \sin\theta_r, \quad \dot{v}_r = a_r, \quad \dot{\theta}_r = w_r, \quad (2)$$
where $a_r$ is the robot's linear acceleration and $v_r$ and $w_r$ are the linear and angular velocities; $(\theta_r, v_r)$ are the control inputs of the mobile robot. The line-of-sight angle $\varphi_{gr}$, which is the angle made by the line of sight $l_{gr}$, is given by the following equations:

$$\cos\varphi_{gr} = \frac{x_g - x_r}{\sqrt{(x_g - x_r)^2 + (y_g - y_r)^2}}, \quad \tan\varphi_{gr} = \frac{y_g - y_r}{x_g - x_r}. \quad (3)$$
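For illustration, a minimal Python sketch of the unicycle kinematics (2) and the line-of-sight angle (3) follows; the Euler time step, the function names, and the use of atan2 to resolve the quadrant of the tangent are our own choices, and the acceleration state of (2) is omitted for brevity.

```python
import math

def step_unicycle(x, y, theta, v, w, dt):
    """One Euler step of the differential-drive kinematics in (2)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

def line_of_sight_angle(xr, yr, xg, yg):
    """Line-of-sight angle phi_gr of (3), with atan2 fixing the quadrant."""
    return math.atan2(yg - yr, xg - xr)

# Example: robot at the origin heading along +x, goal at (2, 1)
x, y, th = step_unicycle(0.0, 0.0, 0.0, v=0.5, w=0.1, dt=0.1)
print(line_of_sight_angle(x, y, 2.0, 1.0))
```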
Now the kinematic equations of the $i$th obstacle $D_i$ are expressed by

$$\dot{x}_i = v_i \cos\theta_i, \quad \dot{y}_i = v_i \sin\theta_i, \quad \dot{\theta}_i = w_i, \quad (4)$$
where the obstacle has the linear velocity $v_i$ and the angular velocity $w_i$, and $\theta_i$ is its orientation angle. The Euclidean distance of the line of sight $l_{ir}$ between the robot and the $i$th obstacle is calculated by

$$l_{ir} = \sqrt{(x_i - x_r)^2 + (y_i - y_r)^2}, \quad (5)$$

and the line-of-sight angle $\varphi_{ir}$ is expressed by

$$\tan\varphi_{ir} = \frac{y_i - y_r}{x_i - x_r}. \quad (6)$$
The evolution of the range and turning angle between the robot and an obstacle for dynamic collision avoidance is computed by using the tangential and normal components of the relative velocity in polar coordinates as follows:

$$\dot{l}_{ir} = v_i \cos(\theta_i - \varphi_{ir}) - v_r \cos(\theta_r - \varphi_{ir}), \quad l_{ir}\dot{\varphi}_{ir} = v_i \sin(\theta_i - \varphi_{ir}) - v_r \sin(\theta_r - \varphi_{ir}). \quad (7)$$
From these equations, it is seen that a negative sign of $\dot{l}_{ir}$ indicates that the robot is approaching obstacle $D_i$; if the rate is zero, the range between the robot and the obstacle is constant. Meanwhile, a zero rate of the line-of-sight angle indicates that the relative motion of $D_i$ is along a straight line. The relative polar system presents a simple but very effective model that allows real-time representation of the relative motion between the robot and a moving obstacle [23].
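The relative polar model of (5)-(7) can be exercised with a few lines of code; the following sketch, with assumed function names and a head-on example, shows how the sign of the range rate flags an approaching obstacle.

```python
import math

def relative_polar_rates(xr, yr, thr, vr, xi, yi, thi, vi):
    """Range, line-of-sight angle, and their rates, eqs. (5)-(7)."""
    l_ir = math.hypot(xi - xr, yi - yr)            # range, eq. (5)
    phi_ir = math.atan2(yi - yr, xi - xr)          # LOS angle, eq. (6)
    l_dot = vi * math.cos(thi - phi_ir) - vr * math.cos(thr - phi_ir)
    phi_dot = (vi * math.sin(thi - phi_ir) - vr * math.sin(thr - phi_ir)) / l_ir
    return l_ir, phi_ir, l_dot, phi_dot

# Head-on example: the negative range rate signals an approaching obstacle.
l, phi, l_dot, phi_dot = relative_polar_rates(0, 0, 0.0, 0.4, 2, 0, math.pi, 0.3)
print(l, l_dot)  # 2.0, -0.7
```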
3 Hybrid Reactive Motion Planning Approach
3.1 Virtual Plane Based Reactive Motion Planning

In this section, the virtual plane method, which allows transforming a moving object of interest into a stationary object, is briefly reviewed [23]. The transformation used in the virtual plane is achieved by introducing a local observer that allows the robot to find the appropriate windows for the speed and orientation to move along a collision-free path. Through this transformation, the collision course between the robot $R$ and the $i$th obstacle $D_i$ is reduced to a collision course between the virtual robot $R_v$ and the initial position $D_i(t_0)$ of the real obstacle. The components of the relative velocity between $R_v$ and $D_i(t_0)$ along and across $l_{ir}$ are given by

$$\dot{l}_{ir} = -v_{vr} \cos(\theta_{vr} - \varphi_{ir}), \quad l_{ir}\dot{\varphi}_{ir} = -v_{vr} \sin(\theta_{vr} - \varphi_{ir}), \quad (8)$$
where $v_{vr}$ and $\theta_{vr}$ are the linear velocity and orientation of the virtual robot. The linear velocity and orientation angle of $R_v$ can be written as follows:

$$v_{vr} = \sqrt{(\dot{x}_i - \dot{x}_r)^2 + (\dot{y}_i - \dot{y}_r)^2}, \quad (9)$$

$$\tan\theta_{vr} = \frac{\dot{y}_i - \dot{y}_r}{\dot{x}_i - \dot{x}_r}. \quad (10)$$
Note that the tangential and normal equations given in (7) for the dynamic motion planning are rewritten in terms of the virtual robot as an observer, leading to a stationary motion planning problem. More details concerning the virtual planning method can be found in [23].
Collision detection is expressed in the virtual plane, but the final objective is to make the robot navigate toward the goal along a collision-free path in the real plane. The velocity components of the robot in the real plane are calculated by

$$\dot{x}_r = \dot{x}_{vr} + \dot{x}_i, \quad \dot{y}_r = \dot{y}_{vr} + \dot{y}_i. \quad (11)$$
This is the inverse transformation mapping the virtual plane into the real plane, and it gives the velocity of the robot as a function of the velocities of the virtual robot and the moving object. The speed and orientation of the real robot can be computed from the virtual robot and moving object velocities as follows:

$$v_r = \sqrt{(\dot{x}_{vr} + \dot{x}_i)^2 + (\dot{y}_{vr} + \dot{y}_i)^2}, \quad \tan\theta_r = \frac{\dot{y}_{vr} + \dot{y}_i}{\dot{x}_{vr} + \dot{x}_i}. \quad (12)$$
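A small sketch of the virtual plane mapping may help. Here the virtual robot velocity is formed from the relative velocity as in (9)-(10), with atan2 picking the branch of the tangent that makes the inverse map (11)-(12) recover the robot's own velocity, consistent with (8); all names and sample values are illustrative.

```python
import math

def virtual_robot(vxr, vyr, vxi, vyi):
    """Virtual robot speed and heading, eqs. (9)-(10). atan2 resolves the
    quadrant left open by the tangent so that (11)-(12) invert exactly."""
    vx, vy = vxr - vxi, vyr - vyi   # relative velocity of R w.r.t. D_i
    return math.hypot(vx, vy), math.atan2(vy, vx)

def real_robot(v_vr, th_vr, vxi, vyi):
    """Inverse map, eqs. (11)-(12): add the obstacle velocity back."""
    vx = v_vr * math.cos(th_vr) + vxi
    vy = v_vr * math.sin(th_vr) + vyi
    return math.hypot(vx, vy), math.atan2(vy, vx)

# Round trip: robot velocity (0.4, 0.0), obstacle velocity (-0.3, 0.1)
v_vr, th_vr = virtual_robot(0.4, 0.0, -0.3, 0.1)
v_r, th_r = real_robot(v_vr, th_vr, -0.3, 0.1)
print(round(v_r, 3), round(th_r, 3))  # 0.4, 0.0 recovered
```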
3.2 Navigation Laws

In order to make the robot navigate toward the final goal, a kinematic based linear navigation law is used as [23]

$$\theta_r = M\varphi_{gr} + c_1 + c_0 e^{-at}, \quad (13)$$
where $\varphi_{gr}$ is the line-of-sight angle from the robot to the final goal, and $c_1$ and $c_0$ are deviation terms characterizing the final desired orientation angle of the robot and its initial orientation, respectively. The term $M$ is a navigation parameter with $M > 1$, and $a$ is a given positive gain. On the other hand, the collision course in the virtual plane with $D_i(t_0)$ is characterized by

$$\theta_{vr} \in \mathrm{CCVP}_i. \quad (14)$$
The collision cone in the virtual plane (CCVP) is given by

$$\mathrm{CCVP}_i = [\varphi_{ir} - \beta_i, \varphi_{ir} + \beta_i], \quad (15)$$
where $\beta_i$ is the angle between the lines of the upper and lower tangent limit points in $D_i$. The direct collision course between $R$ and $D_i$ is characterized by

$$\tan\theta_{vr} = \frac{y_i - y_r}{x_i - x_r} \in [\tan(\varphi_{ir} - \beta_i), \tan(\varphi_{ir} + \beta_i)]. \quad (16)$$
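A hedged sketch of the collision course test (14)-(16) follows; the circular-obstacle model used for the half-angle $\beta_i$ (arcsine of an effective radius over the range) is an assumption for illustration, not a formula from the text.

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def in_collision_cone(th_vr, phi_ir, beta_i):
    """Collision course test, eqs. (14)-(16): is the virtual robot heading
    inside CCVP_i = [phi_ir - beta_i, phi_ir + beta_i]?"""
    return abs(wrap(th_vr - phi_ir)) <= beta_i

def cone_half_angle(l_ir, r_i):
    """Assumed half-angle beta_i subtended by an obstacle of effective
    radius r_i, modeling the upper and lower tangent limit points."""
    return math.asin(min(1.0, r_i / l_ir))

print(in_collision_cone(0.10, 0.0, cone_half_angle(2.0, 0.3)))  # True
```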
After the orientation angle $\theta_{vr}$ of the virtual robot is computed in terms of the linear velocities of the robot and the moving obstacles as given in (10), it is possible to write the expressions of the orientation angle $\theta_r$ and the speed $v_r$ for the real robot controls in terms of the linear velocity and orientation angle of the moving obstacle and the virtual robot as follows:

$$v_r = \frac{v_i(\tan\theta_{vr}\cos\theta_i - \sin\theta_i)}{\tan\theta_{vr}\cos\theta_r - \sin\theta_r}, \quad \theta_r = \theta_{vr} - \arcsin\left[\frac{v_i \sin(\theta_{vr} - \theta_i)}{v_r}\right]. \quad (17)$$
For the robot control, the desired value of the orientation angle in the virtual plane can be expressed based on the linear navigation law as

$$\theta_{vr}^{*}(t) = \alpha_{ik} + c_1 + c_0 \exp\{-a(t - t_d)\}, \quad k = 1, 2, \quad (18)$$
[Figure 2: Architecture of the Microsoft Kinect sensor: LED, vision camera, microphone array, 3D depth sensor cameras, and motorized tilt.]
where $t_d$ denotes the time when the robot starts its deviation for collision avoidance, and $\alpha_{i1}$ and $\alpha_{i2}$ are the left and right line-of-sight angles between the reference deviation points and the points on the collision cone in the virtual plane. Finally, based on the desired orientation in the virtual plane, the corresponding desired speed value $v_r^{*}$ for the robot is calculated by

$$v_r^{*} = \frac{v_i(\tan\theta_{vr}^{*}\cos\theta_i - \sin\theta_i)}{\tan\theta_{vr}^{*}\cos\theta_r - \sin\theta_r}. \quad (19)$$
In a similar way, the corresponding desired orientation value can be expressed by

$$\tan\theta_r^{*} = \frac{v_{vr}\sin(\theta_{vr}^{*}) + v_i\sin(\theta_i)}{v_{vr}\cos(\theta_{vr}^{*}) + v_i\cos(\theta_i)}. \quad (20)$$
Note that, for robot navigation with collision avoidance in dynamic environments, either the linear velocity control expressed in (19) or the orientation angle control in (20) can be utilized.
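The control laws (18)-(20) reduce to a few closed-form evaluations. The sketch below, with illustrative parameter values and assumed function names, computes the virtual heading command and the corresponding desired speed and heading.

```python
import math

def theta_vr_star(t, alpha_ik, c1, c0, a, td):
    """Desired virtual-plane heading from the linear navigation law, eq. (18)."""
    return alpha_ik + c1 + c0 * math.exp(-a * (t - td))

def desired_speed(th_star, th_i, th_r, v_i):
    """Desired robot speed, eq. (19)."""
    num = v_i * (math.tan(th_star) * math.cos(th_i) - math.sin(th_i))
    den = math.tan(th_star) * math.cos(th_r) - math.sin(th_r)
    return num / den

def desired_heading(th_star, v_vr, th_i, v_i):
    """Desired robot heading, eq. (20), solved with atan2."""
    y = v_vr * math.sin(th_star) + v_i * math.sin(th_i)
    x = v_vr * math.cos(th_star) + v_i * math.cos(th_i)
    return math.atan2(y, x)

th_star = theta_vr_star(t=1.0, alpha_ik=0.6, c1=0.0, c0=-0.2, a=2.0, td=0.5)
print(desired_speed(th_star, th_i=0.2, th_r=0.0, v_i=0.3))
print(desired_heading(th_star, v_vr=0.5, th_i=0.2, v_i=0.3))
```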
3.3 Sensor Fusion Based Range and Pose Estimation

Low-cost range sensors are an attractive alternative to expensive laser scanners in application areas such as motion planning and mapping. The Microsoft Kinect [26] is a sensor which consists of an IR projector, an IR camera, an RGB camera, a multiarray microphone, and an electrical motor providing the tilt function of the sensor (shown in Figure 2). The Kinect sensor captures depth and color images simultaneously at a frame rate of up to 30 fps. Some key features are illustrated in [26-29]. The RGB video stream uses 8-bit VGA resolution (640 x 480 pixels) with a Bayer color filter at a frame rate of 30 Hz. The monochrome depth sensing video stream has VGA resolution (640 x 480 pixels) with 11-bit depth, which provides 2048 levels of sensitivity. Depth data is acquired by the combination of the IR projector and the IR camera. The microphone array features four microphone capsules, each channel processing 16-bit audio at a sampling rate of 16 kHz. The motorized pivot is capable of tilting the sensor up to 27 degrees either up or down.
The features of the Kinect device make it very attractive for autonomous robot navigation. In this work, the Kinect sensor is utilized for measuring the range to moving obstacles and estimating color-based locations of objects for dynamic motion planning.
Table 1: Kinect's focal length and field of view.

Camera   Focal length (pixels)   Field of view, horizontal (deg)   Field of view, vertical (deg)
RGB      525                     63                                50
IR       580                     57                                43
[Figure 3: Kinect depth measurement (Kinect units) versus actual distance (meters), computed with (21) and (22); the detectable range of the depth camera is marked.]
Before going into detail, the computation of the real-world coordinates is discussed. The Kinect depth sensor has a minimum range of 800 mm and a maximum range of 4000 mm. The camera focal length is constant and known, and thus the real distance between the camera and a chosen target is easily calculated. The parameters of the Kinect sensor are summarized in Table 1.
Two similar conversion equations have been proposed by researchers, where one is based on the inverse of the raw depth value $D_{\mathrm{value}}$ and the other uses its tangent. The distance $z_w$ between the camera and a target object is expressed by

$$z_w = \frac{1}{D_{\mathrm{value}} \times (-0.0030711016) + 3.3309495161}, \quad (21)$$

$$z_w = 0.1236 \times \tan\left(\frac{D_{\mathrm{value}}}{2842.5} + 1.1863\right). \quad (22)$$
Figure 3 shows the detectable range of the depth camera, where the distances in world coordinates based on the above two equations are computed by limiting the raw depth to 1024, which corresponds to about 5 meters.
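As a sketch, the two conversion models (21) and (22) can be coded directly; the sample raw values are arbitrary, and the last one (1023) reproduces the roughly 5 m limit noted above.

```python
import math

def depth_inverse(d_raw):
    """Raw Kinect depth to metric distance via the inverse model, eq. (21)."""
    return 1.0 / (d_raw * -0.0030711016 + 3.3309495161)

def depth_tangent(d_raw):
    """Raw Kinect depth to metric distance via the tangent model, eq. (22)."""
    return 0.1236 * math.tan(d_raw / 2842.5 + 1.1863)

for d_raw in (600, 800, 1000, 1023):
    print(d_raw, round(depth_inverse(d_raw), 3), round(depth_tangent(d_raw), 3))
```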
Figure 4 shows the error results of distance measurement experiments using the Kinect's depth camera. In this experiment, the distance measured with a ruler, drawn in green, gives the reference, and three repeated experiments are drawn in red, light blue, and blue. The experiment shows that the errors of the depth measurements from the Kinect sensor are proportional to the distance.
[Figure 4: Kinect depth camera measurement experiment and error: measured distance (mm) over cycle time for the reference and three experiments, and measurement error (mm) versus distance (mm).]
[Figure 5: Robot and obstacle localization using the Kinect sensor for ranging and positioning computation: camera-screen points P'_r(x_s, y_s, z_s) and P'_g(x_s, y_s, z_s) and real-world points P_r(x_w, y_w, z_w) and P_g(x_w, y_w, z_w).]
Figure 5 shows the general schematics of the geometric approach to find the $x$ and $y$ coordinates using the Kinect sensor system, where $h$ is the screen height in pixels and $\beta$ is the field of view of the camera. The coordinates of points on the image plane of the robot $P'_r$ and goal $P'_g$ are transformed into the world coordinates $P_r$ and $P_g$, calculated by

$$P = (x_w, y_w) \longrightarrow P' = (x_s, y_s, z_s), \quad x_w = \frac{x_s}{z_w}, \quad y_w = \frac{y_s}{z_w}. \quad (23)$$
[Figure 6: Robot heading angle computation approach: (a) heading angle theta from the robot (x_r, y_r) to the goal (x_g, y_g); (b) the four quadrant sections I-IV.]
The coordinates $x_w, y_w, z_w$ of the two red (robot) and green (goal) points are used as the input to the vision system, and $P'$ is computed by

$$\begin{pmatrix} x_s \\ y_s \\ z_s \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}, \quad (24)$$
where $f$ is the focal length of the camera. $P'$ is calculated from the pixel coordinates divided by $\rho_u$ and $\rho_v$; the image pixel values $u$ and $v$ are the pixel coordinates, obtained from the following equations:

$$\begin{pmatrix} u' \\ v' \\ w' \end{pmatrix} = \begin{pmatrix} \dfrac{1}{\rho_u} & 0 & u_0 \\ 0 & \dfrac{1}{\rho_v} & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{pmatrix},$$

$$P' = \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u'/w' \\ v'/w' \end{pmatrix}, \quad \begin{pmatrix} u' \\ v' \\ w' \end{pmatrix} = C \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}. \quad (25)$$
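For illustration, the projection chain of (24)-(25) can be composed numerically; the intrinsic values below (focal length, pixel pitches, principal point) are placeholders, not the calibrated Kinect parameters.

```python
import numpy as np

# Assumed illustrative intrinsics: focal length f in pixels, unit pixel
# pitches rho_u = rho_v = 1, principal point (u0, v0) at the image center.
f, rho_u, rho_v, u0, v0 = 580.0, 1.0, 1.0, 320.0, 240.0

P = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]], float)        # perspective projection, eq. (24)
K = np.array([[1 / rho_u, 0, u0],
              [0, 1 / rho_v, v0],
              [0, 0, 1]], float)           # pixel-coordinate scaling, eq. (25)
C = K @ P                                  # combined camera matrix C

Xw = np.array([0.5, 0.2, 2.0, 1.0])        # homogeneous world point
u_, v_, w_ = C @ Xw
print(u_ / w_, v_ / w_)                    # pixel coordinates (u, v)
```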
In the experiment, the final goal and the robots are recognized by the built-in RGB camera. In addition, the distance of an object is measured with millimeter accuracy using the IR camera, and the target object's pixel coordinates $(x_s, y_s)$ are estimated by using a color-based detection approach. In this work, the distance measured between the planning field and the Kinect sensor is 2700 mm, which becomes the depth camera's detectable range. The horizontal and vertical coordinates of an object are calculated as follows:
(1) horizontal coordinate:

$$\alpha = \arctan\left(\frac{x_s}{f}\right), \quad x_w = z_w \times \sin\alpha; \quad (26)$$

(2) vertical coordinate:

$$\alpha = \arctan\left(\frac{y_s}{f}\right), \quad y_w = z_w \times \sin\alpha, \quad (27)$$

where $z_w$ is the distance to the object obtained from the Kinect sensor. These real-world coordinates are then used in the dynamic path planning procedures, as sketched below.
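A compact sketch of (26)-(27), using the IR focal length from Table 1, might look as follows; the pixel offsets in the example are arbitrary.

```python
import math

def pixel_to_world(xs, ys, zw, f):
    """World coordinates of a detected object, eqs. (26)-(27): image-plane
    offsets (xs, ys) in pixels, Kinect range zw, focal length f in pixels."""
    xw = zw * math.sin(math.atan(xs / f))   # horizontal coordinate, eq. (26)
    yw = zw * math.sin(math.atan(ys / f))   # vertical coordinate, eq. (27)
    return xw, yw

# IR camera focal length from Table 1 (580 pixels), range 2700 mm
print(pixel_to_world(100.0, -50.0, 2700.0, 580.0))
```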
In general, the intrinsic and extrinsic parameters of the color and depth cameras are set to default values, and thus it is necessary to calibrate them for accurate tests. The calibration of the depth and RGB color cameras of the Kinect sensor can be carried out by using a mathematical model of depth measurement and creating a depth annotation of a chessboard by physically offsetting the chessboard from its background; details of the calibration procedures can be found in [30, 31].
As indicated in the previous section, for the proposed relative velocity obstacle based dynamic motion planning, accurate estimates of the range and orientation of an object play an important role. In this section, an efficient new approach is proposed to estimate the heading information of the robot using a color detection approach. First, the robot is covered by green and red sections, as shown in Figure 6. Then, using a color detection method [32], the center location of the robot is calculated, and after determining the
Input: coordinates of the robot, obstacle 1, obstacle 2, and the goal
Output: speeds of the robot's right and left wheels
Calculate l_gr using Kinect
while l_gr > 0 do
    Calculate l_gr and phi_gr
    Send robot speed
    if all D_i in CA then
        Calculate l_dot_ir using Kinect
        if all l_dot_ir > 0 then
            There is no collision risk; keep sending robot speed
        else
            Construct the virtual plane
            Test the collision in the virtual plane
            if there is a collision risk then
                Check the sonar sensor value
                if the sonar value is too small then
                    Choose theta_r for quick motion control
                    Send robot speed
                else
                    Construct the theta-window
                    Choose the appropriate values for theta_r
                    Send robot speed
                end if
            end if
        end if
    end if
end while

Algorithm 1: Hybrid reactive dynamic navigation algorithm.
heading angle $\theta$ as shown in (28), new heading information in each of the four quadrant sections is computed by using the following equations:

$$\Delta x = x_g - x_r, \quad \Delta y = y_g - y_r, \quad \theta = \tan^{-1}\left(\frac{y_g - y_r}{x_g - x_r}\right), \quad (28)$$

$$\begin{aligned}
&(1)\ x_g > x_r,\ y_g > y_r{:} \quad \theta = \theta + 0, \\
&(2)\ x_g < x_r,\ y_g > y_r{:} \quad \theta = 3.14 - \theta, \\
&(3)\ x_g < x_r,\ y_g < y_r{:} \quad \theta = 3.14 + \theta, \\
&(4)\ x_g > x_r,\ y_g < y_r{:} \quad \theta = 6.28 - \theta.
\end{aligned} \quad (29)$$
Finally, the relative velocity obstacle based reactive dynamic navigation algorithm with the capability of collision avoidance is summarized in Algorithm 1; a simplified sketch of one control cycle follows below.
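The following self-contained sketch compresses one cycle of Algorithm 1 into a single function. The sonar threshold, obstacle radius, symbolic commands, and the circular collision cone model are illustrative assumptions rather than the authors' implementation.

```python
import math

def navigate_step(robot, obstacles, goal, sonar_range, ca_radius, r_obs=200.0):
    """One cycle of Algorithm 1 (simplified sketch). States are (x, y, theta, v)
    in mm and rad; returns a symbolic command for the wheel-speed layer."""
    xr, yr, thr, vr = robot
    if math.hypot(goal[0] - xr, goal[1] - yr) < 1.0:
        return "stop"                                # goal reached, l_gr ~ 0
    for (xi, yi, thi, vi) in obstacles:
        l_ir = math.hypot(xi - xr, yi - yr)
        phi_ir = math.atan2(yi - yr, xi - xr)
        l_dot = vi * math.cos(thi - phi_ir) - vr * math.cos(thr - phi_ir)
        if l_ir > ca_radius or l_dot >= 0:
            continue                                 # outside CA or receding
        # Virtual plane: heading of the virtual robot (relative velocity)
        th_vr = math.atan2(vr * math.sin(thr) - vi * math.sin(thi),
                           vr * math.cos(thr) - vi * math.cos(thi))
        beta_i = math.asin(min(1.0, r_obs / l_ir))   # collision cone half-angle
        diff = math.atan2(math.sin(th_vr - phi_ir), math.cos(th_vr - phi_ir))
        if abs(diff) <= beta_i:                      # collision risk detected
            return "evade" if sonar_range < 300.0 else "deviate"
    return "track_goal"                              # steer by the law in (13)

print(navigate_step((0, 0, 0, 200), [(1000, 0, math.pi, 150)], (3000, 0),
                    sonar_range=800.0, ca_radius=1500.0))  # 'deviate'
```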
4 Experimental Results
For the evaluation and verification of the proposed sensor based reactive dynamic navigation algorithms, both a simulation study and experimental tests are carried out with a realistic experimental setup.
4.1 Experimental Scenario and Setup

For the experimental tests, two robots are assigned as moving obstacles, and a third one is used as a master robot that generates control commands to avoid the dynamic obstacles based on the suggested reactive motion planning algorithms. For the moving obstacles, two NXT Mindstorm based vehicles that can either move in a random direction or follow a designated path are developed. The HBE-RoboCAR equipped with ultrasonic and encoder sensors is used as the master robot, as shown in Figure 7.
The HBE-RoboCAR [33] has an 8-bit AVR ATmega128L processor. The robot is equipped with multiple embedded processor modules (embedded processor, FPGA, MCU). It provides obstacle detection with ultrasonic and infrared sensors and motion control with an acceleration sensor and motor encoders. The HBE-RoboCAR has the ability to communicate with other devices by either wireless or wired technology, such as a Bluetooth module and ISP/UART interfaces, respectively. In this work, the HBE-RoboCAR is connected to a computer at the ground control station using Bluetooth wireless communication. Figure 7 shows the hardware specification and sensor systems of the robot platform, and Figure 8 shows the interface and control architecture of the embedded components of the HBE-RoboCAR [33].
[Figure 7: Mobile robot hardware and sensor systems (HBE-RoboCAR [33]): power switch, power LED and reset switch, interface connector, wireless network connector, ultrasonic sensors, PSD sensors, LEDs, phototransistors, voltmeter, DC in, charge in, and battery in.]
[Figure 8: Block diagram of the HBE-RoboCAR embedded system: embedded processor, FPGA module (Altera/Xilinx), AVR board with ATmega128 and acceleration sensor (ADXL202E), ATmega8 voltage gauge, motor driver, photo sensor (GP2Y0A21), infrared sensor (IL-8L/EL-8L), ultrasonic sensor (MA40), DC geared motors and DC geared encoder motors, and the HBE-RoboCAR base and sensor boards.]
For the dynamic obstacle avoidance, the relative velocity obstacle based navigation laws require range and heading information from the sensors. For range estimation, the Kinect sensor is utilized. If the Kinect sensor detects the final goal using a color-based detection algorithm [32, 34, 35], it sends the information to the master robot. After receiving the target point, the master robot starts the onboard navigation algorithm to reach the goal while avoiding dynamic obstacles. When the robot navigates in the experimental field, the distance to each moving obstacle is measured by the Kinect sensor, and the range information is fed back to the master robot via Bluetooth communication as input to the reactive motion planning algorithms. The detailed scenario of the experimental setup is illustrated in Figure 9.
4.2 Simulation Results

Figure 10 shows the simulation results of the reactive motion planning in both the virtual plane and the real plane. In this simulation, the trajectories of the two obstacles are drawn as blue and black lines, the trajectory of the master robot is depicted as a red line, and the goal is indicated by a green dot. As can be clearly seen in the real plane and the virtual plane in Figures 10(b) and 10(a), the master robot avoided the first obstacle, which was moving toward it, and successfully reached the target goal after avoiding a collision with the second moving obstacle just before reaching the target. While the master robot avoids the obstacles, it generates a collision cone by choosing a deviation point on the virtual plane.
[Figure 9: Test scenario and architecture of the experimental setup: the Kinect and PC are linked to the robot by Bluetooth communication, with obstacle 1, obstacle 2, and the destination in the field.]
[Figure 10: Simulation results in (a) the virtual plane, showing the deviation points and the first and second obstacles' collision cones, and (b) the real plane, showing the robot trajectory, the coverage area, the goal, and the initial locations of the robot and the obstacles.]
[Figure 11: Orientation angle information over cycle time: target angle (deg), robot heading angle (deg), and variance angle (deg).]
On the virtual plane, the radius of the collision cone is the same as the obstacle's, and the distance between the deviation point and the collision cone depends on the radius of the master robot. The ellipses indicate the initial locations of the robot and the obstacles.

In Figure 11, the orientation angle information used for the robot control is illustrated. The top plot shows the angle from the moving robot to the target in the virtual plane, the second plot shows the robot heading angle commanded for the navigation control, and the third plot shows the angle difference between the target and the robot heading angles. At the final stage of the path planning, the
[Figure 12: Results of the first moving obstacle's orientation angle (degrees), speed, and position in x-y coordinates (mm) over cycle time.]
commanded orientation angle and the target angle to the goal point become the same. Instead of controlling the robot with the orientation angle, the speed of the master robot can also be used to avoid the moving obstacles.

Figures 12 and 13 show each moving obstacle's heading angle, linear velocity, and trajectory. As can be seen, in order
[Figure 13: Results of the second moving obstacle's orientation angle (degrees), speed, and position in x-y coordinates (mm) over cycle time.]
[Figure 14: Results of the robot's trajectory (x-y coordinates in mm), speed, and right and left wheel speeds over cycle time.]
to carry out a dynamic path planning experiment, the speed and the heading angle were changed during the simulation, resulting in an uncertain cluttered environment.

Figure 14 shows the mobile robot's trajectory from the start point to the goal point, along with the forward velocity and each wheel speed from the encoders. As can be seen, the trajectory of the robot has a sharp turn around the location (1500 mm, 1000 mm) in order to avoid the second moving obstacle.
[Figure 15: (a) Experiment environment (initial stage), showing the starting positions of the robot and the two obstacles and the goal. (b) Plots of the experimental data at the initial stage in the virtual and real planes; the robot starts at R (2751.3368, 2126.9749).]
It is seen that the right and left wheel speeds are mirrored along the time axis. The relationship between the variance angle and the robot's right and left wheel speeds can also be observed.
4.3 Field Test and Results

Further verification of the performance of the proposed hybrid dynamic path planning approach in real experiments was carried out with the same scenario used in the previous simulation part. In the experiment, two moving obstacles are used, and the master robot moves to the target point without any collision with the obstacles, as shown in Figure 15(a); the initial locations of the obstacles and the robot are shown in Figure 15(b). The red dot is the initial starting position of the master robot at (2750 mm, 2126 mm), the black dot is the initial location of the second obstacle at (2050 mm, 1900 mm), and the blue dot is the initial location of the first moving obstacle at (1050 mm, 2000 mm). In the virtual plane, the collision cone of the first obstacle is depicted in the top plot of Figure 15(b),
[Figure 16: (a) Experiment result of the collision avoidance with the first obstacle. (b) Plot of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with the first moving obstacle.]
and the robot carries out its motion control based on the collision cone in the virtual plane until it avoids the first obstacle.

Figure 16 shows the collision avoidance performance with the first moving obstacle. As can be seen, the master robot avoided it by a commanded orientation change to the left without any collision, which is described in detail in the virtual plane in the top plot of Figure 16(b). The trajectories and movement of the robot and the obstacles are depicted in the real plane in Figure 16(b) for detailed analysis.

In a similar way, Figure 17(a) illustrates the collision avoidance with the second moving obstacle, and the detailed path and trajectory are described in Figure 17(b). The top plot of Figure 17(b) shows the motion planning in the virtual plane, where the initial location of the second moving obstacle is recognized at the center of (2050 mm, 1900 mm).
[Figure 17: (a) Experiment result of the collision avoidance with the second obstacle. (b) Plot of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with the second moving obstacle.]
Based on this initial location, the second collision cone is constructed with a big green ellipse, which allows the virtual robot to navigate without any collision with the second obstacle. The trajectory of the robot motion planning in the real plane is depicted in the bottom plot of Figure 17(b).

Now, at the final phase, after avoiding all the obstacles, the master robot reached the target goal with a motion control as shown in Figure 18(a). The overall trajectories of the robot from the starting point to the final goal in the virtual plane are depicted in the top plot of Figure 18(b), and the trajectories of the robot in the real plane are depicted in the bottom plot of Figure 18(b). Note that the robot locations differ from each other in the virtual plane and the real plane; however, an orientation change gives the same direction change of the robot in both the virtual and the real planes. In this plot, the green dot is the final goal point, and the robot trajectory is depicted with red dotted
[Figure 18: (a) Experiment result after the collision avoidance with all obstacles. (b) Plot of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with all the moving obstacles.]
circles. The smooth trajectory was generated by using the linear navigation laws as explained above.

From this experiment, it is easily seen that the proposed hybrid reactive motion planning approach, designed by the integration of the virtual plane approach and sensor based planning, is very effective for dynamic collision avoidance problems in cluttered, uncertain environments. The effectiveness of the hybrid reactive motion planning method makes its usage very attractive for various dynamic navigation applications of not only mobile robots but also other autonomous vehicles such as flying vehicles and self-driving vehicles.
5 Conclusion
In this paper, we proposed a hybrid reactive motion planning method for an autonomous mobile robot in uncertain dynamic environments. The hybrid reactive motion planning method combines a reactive path planning method, which can transform dynamic moving obstacles into stationary ones, with a sensor based approach, which can provide relative information on moving obstacles and environments. The key features of the proposed method are twofold: the first is the simplification of complex dynamic motion planning problems into stationary ones using the virtual plane approach, while the second is the robustness of the sensor based motion planning, in which the pose estimation of moving obstacles is made by using a Kinect sensor providing ranging and color detection. The sensor based approach improves the accuracy and robustness of the reactive motion planning approach by providing information on the obstacles and environments. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles.
In future work, a sensor fusion approach that can improve the heading estimation of the robot and the speed estimation of moving objects will be investigated further for more robust motion planning.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research was supported by the National Research Foundation of Korea (NRF) (no. 2014-017630 and no. 2014-063396) and was also supported by the Human Resource Training Program for Regional Innovation and Creativity through the Ministry of Education and National Research Foundation of Korea (no. 2014-066733).
References
[1] R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza, Introduction to Autonomous Mobile Robots, The MIT Press, London, UK, 2nd edition, 2011.
[2] J. P. van den Berg and M. H. Overmars, "Roadmap-based motion planning in dynamic environments," IEEE Transactions on Robotics, vol. 21, no. 5, pp. 885-897, 2005.
[3] D. Hsu, R. Kindel, J.-C. Latombe, and S. Rock, "Randomized kinodynamic motion planning with moving obstacles," The International Journal of Robotics Research, vol. 21, no. 3, pp. 233-255, 2002.
[4] R. Gayle, A. Sud, M. C. Lin, and D. Manocha, "Reactive deformation roadmaps: motion planning of multiple robots in dynamic environments," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '07), pp. 3777-3783, San Diego, Calif, USA, November 2007.
[5] S. S. Ge and Y. J. Cui, "Dynamic motion planning for mobile robots using potential field method," Autonomous Robots, vol. 13, no. 3, pp. 207-222, 2002.
[6] K. P. Valavanis, T. Hebert, R. Kolluru, and N. Tsourveloudis, "Mobile robot navigation in 2-D dynamic environments using an electrostatic potential field," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 30, no. 2, pp. 187-196, 2000.
[7] X. Gao, Y. Ou, Y. Fu, et al., "A novel local path planning method considering both robot posture and path smoothness," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 172-178, Shenzhen, China, December 2013.
[8] J. Jiao, Z. Cao, P. Zhao, X. Liu, and M. Tan, "Bezier curve based path planning for a mobile manipulator in unknown environments," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 1864-1868, Shenzhen, China, December 2013.
[9] A. Stentz, "Optimal and efficient path planning for partially-known environments," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3310-3317, May 1994.
[10] G. Song, H. Wang, J. Zhang, and T. Meng, "Automatic docking system for recharging home surveillance robots," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 428-435, 2011.
[11] M. Seder and I. Petrovic, "Dynamic window based approach to mobile robot motion control in the presence of moving obstacles," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 1986-1991, Roma, Italy, April 2007.
[12] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision avoidance," IEEE Robotics and Automation Magazine, vol. 4, no. 1, pp. 23-33, 1997.
[13] J. Borenstein and Y. Koren, "Real-time obstacle avoidance for fast mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179-1187, 1989.
[14] J. Borenstein and Y. Koren, "The vector field histogram—fast obstacle avoidance for mobile robots," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 278-288, 1991.
[15] R. Simmons, "The curvature-velocity method for local obstacle avoidance," in Proceedings of the 13th IEEE International Conference on Robotics and Automation, vol. 4, pp. 3375-3382, Minneapolis, Minn, USA, April 1996.
[16] J. Minguez and L. Montano, "Nearness Diagram (ND) navigation: collision avoidance in troublesome scenarios," IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 45-59, 2004.
[17] E. Owen and L. Montano, "Motion planning in dynamic environments using the velocity space," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pp. 997-1002, August 2005.
[18] A. Chakravarthy and D. Ghose, "Obstacle avoidance in a dynamic environment: a collision cone approach," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 28, no. 5, pp. 562-574, 1998.
[19] F. Belkhouche, T. Jin, and C. H. Sung, "Plane collision course detection between moving objects by transformation of coordinates," in Proceedings of the 17th IEEE International Conference on Control Applications, Part of IEEE Multi-Conference on Systems and Control, San Antonio, Tex, USA, 2008.
[20] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," International Journal of Robotics Research, vol. 17, no. 7, pp. 760-772, 1998.
[21] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760-772, 1998.
[22] Z. Shiller, F. Large, and S. Sekhavat, "Motion planning in dynamic environments: obstacles moving along arbitrary trajectories," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), pp. 3716-3721, Seoul, Republic of Korea, May 2001.
[23] F. Belkhouche, "Reactive path planning in a dynamic environment," IEEE Transactions on Robotics, vol. 25, no. 4, pp. 902-911, 2009.
[24] B. Dorj, D. Tuvshinjargal, K. T. Chong, D. P. Hong, and D. J. Lee, "Multi-sensor fusion based effective obstacle avoidance and path-following technology," Advanced Science Letters, vol. 20, pp. 1751-1756, 2014.
[25] R. Philippsen, B. Jensen, and R. Siegwart, "Towards real-time sensor based path planning in highly dynamic environments," in Autonomous Navigation in Dynamic Environments, vol. 35 of Springer Tracts in Advanced Robotics, pp. 135-148, Springer, Berlin, Germany, 2007.
[26] K.-T. Song and C. C. Chang, "Reactive navigation in dynamic environment using a multisensor predictor," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 870-880, 1999.
[27] K. Khoshelham and S. O. Elberink, "Accuracy and resolution of Kinect depth data for indoor mapping applications," Sensors, vol. 12, no. 2, pp. 1437-1454, 2012.
[28] J. Han, L. Shao, D. Xu, and J. Shotton, "Enhanced computer vision with Microsoft Kinect sensor: a review," IEEE Transactions on Cybernetics, vol. 43, no. 5, pp. 1318-1334, 2013.
[29] F. Kammergruber, A. Ebner, and W. A. Gunthner, "Navigation in virtual reality using Microsoft Kinect," in Proceedings of the 12th International Conference on Construction Application of Virtual Reality, Taipei, Taiwan, November 2012.
[30] Microsoft, Kinect for Xbox 360, http://www.xbox.com/en-US/xbox-one/accessories/kinect-for-xbox-one.
[31] K. Konolige and P. Mihelich, "Technical description of Kinect calibration," July 2014, http://www.ros.org/wiki/kinect_calibration/technical.
[32] G. Shashank, M. Sharma, and A. Shukla, "Robot navigation using image processing and isolated word recognition," International Journal of Computer Science and Engineering, vol. 4, no. 5, pp. 769-778, 2012.
[33] T. W. Choe, H. S. Son, J. H. Song, and J. H. Park, Intelligent Robot System HBE-RoboCar, HanBack Electronics, Seoul, Republic of Korea, 2012.
[34] K. R. Jadav, M. A. Lokhandwala, and A. P. Gharge, "Vision based moving object detection and tracking," in Proceedings of the National Conference on Recent Trends in Engineering and Technology, Anand, India, 2011.
[35] A. Chandak, K. Gosavi, S. Giri, S. Agrawal, and P. Kulkarni, "Path planning for mobile robot navigation using image processing," International Journal of Scientific & Engineering Research, vol. 4, pp. 1490-1495, 2013.
Journal of Sensors 3
Goal
Coveragearea (CA)
Y
X
D2
120593gr
lgr
120593ir
lir
lr
120593r
D1
2
O W
li
120579r
i
120593i
1
Cr
r
D3
3
R(xr yr 120579r r)
Di(xi yi 120579i i)
Figure 1 Geometry of the navigation problem Illustration of thekinematic and geometric variables
where (119909119892 119910119892) is the coordinates of the final goal point and
(119910119903 119909119903) is the state of the robot in 119882 The mobile robot has
a differential driving mechanism using two wheels and thekinematic equation of the wheeled mobile robot can be givenby
119903= V119903cos 120579119903
119910119903= V119903sin 120579119903
V119903= 119886119903
120579119903= 119908119903
(2)
where 119886119903is the robotrsquos linear acceleration and V
119903and 119908
119903
are the linear and angular velocities (120579119903 V119903) are the control
inputs of the mobile robot The line-of-sight angle 120593119892119903which
is obtained from the anglemade by the line of sight 119897119892119903is given
by the following equations
cos120593119892119903=
119909119892minus 119909119903
radic(119909119892minus 119909119903)2
+ (119910119892minus 119910119903)2
tan120593119892119903=
119910119892minus 119910119903
119909119892minus 119909119903
(3)
Now the kinematic equation of the 119894th obstacle 119863119894is
expressed by
119894= V119894cos 120579119894
119910119894= V119894sin 120579119894
120579119894= 119908119894
(4)
where the obstacle has the linear velocity V119894and the angular
velocities 119908119894 and 120579
119894is the orientation angle The Euclidian
distance of the line of sight 119897119894119903between the robot and the 119894th
obstacle is calculated by
119897119894119903= radic(119909
119894minus 119909119903)2
+ (119910119894minus 119910119903)2 (5)
and the line-of-sight angle 120593119894119903is expressed by
tan120593119894119903=119910119894minus 119910119903
119909119894minus 119909119903
(6)
The evolution of the range and turning angle between therobot and an obstacle for dynamic collision avoidance iscomputed by using the tangential and normal component ofthe relative velocity in the polar coordinates as follows
119897119894119903= V119894cos (120579
119894minus 120593119894119903) minus V119903cos (120579
119903minus 120593119894119903)
119897119894119903119894119903= V119894sin (120579119894minus 120593119894119903) minus V119903sin (120579119903minus 120593119894119903)
(7)
From these equations it is shown that a negative sign of 119897119894119903
indicates that the robot is approaching obstacle 119863119894 and if
the rate is zero the range implies constant distance betweenthe robot and the obstacle Meanwhile a zero rate for theline-of-sight angle indicates the motion of 119863
119894is a straight
line The relative polar system presents a simple but veryeffective model that allows real-time representation of therelative motion between the robot and moving obstacle [23]
3 Hybrid Reactive Motion Planning Approach
31 Virtual Plane Based Reactive Motion Planning In thissection the virtual plane method which allows transforminga moving object of interest into a stationary object is brieflyreviewed [23] The transformation used in the virtual planeis achieved by introducing a local observer that allows therobot to find the appropriate windows for the speed andorientation to move in a collision-free path Through thistransformation the collision course between the robot 119877 andthe 119894th obstacle 119863
119894is reduced to a collision course between
the virtual robot 119877V and the initial position 119863119894(1199050) of a real
obstacle The components of the relative velocity between 119877V
and119863119894(1199050) along and across 119897
119894119903are given by
119897119894119903= minusVV119903cos (120579V
119903minus 120593119894119903)
119897119894119903119894119903= minusVV119903sin (120579V119903minus 120593119894119903)
(8)
where VV119903and 120579V119903are the linear velocity and orientation of the
virtual robot The linear velocity and orientation angle of 119877V
can be written as follows
VV119903= radic(
119894minus 119903)2
+ ( 119910119894minus 119910119903)2 (9)
tan 120579V119903=
119910119894minus 119910119903
119894minus 119903
(10)
Note that the tangential and normal equations given in (7) forthe dynamic motion planning are rewritten in terms of thevirtual robot as an observer leading to a stationary motionplanning problem More details concerning the virtual plan-ning method can be referred to in [23]
Collision detection is expressed in the virtual plane butthe final objective is to make the robot navigate toward the
4 Journal of Sensors
goal with collision-free path in the real planeThe orientationangle of the robot in the real plane is calculated by
119903=
V119903+ 119894
119910119903= 119910
V119903+ 119910119894
(11)
This is the inverse transformation mapping the virtual planeinto the real plane and gives the velocity of the robot as afunction of the velocities of the virtual robot and the movingobject The speed and orientation of the real robot can becomputed from the virtual robot and the moving objectvelocities as follows
V119903= radic(V
119903+ 119894)2
+ ( 119910V119903+ 119910119894)2
tan 120579119903=
119910V119903+ 119910119894
V119903+ 119894
(12)
32 Navigation Laws In order to make the robot navigatetoward the final goal a kinematic based linear navigation lawis used as [23]
120579119903= 119872120593
119892119903+ 1198881+ 1198880119890minus119886119905
(13)
where 120593119892119903
is the line-of-sight angle of the robot final goaland the variables are deviation terms characterizing the finaldesired orientation angle of the robot and indicating theinitial orientation of the robot The term 119872 is a navigationparameter with 119872 gt 1 and 119886 is a given positive gain Onthe other hand the collision course in the virtual plane with119863119894(1199050) is characterized by
120579V119903isin CCVP
119894 (14)
The collision cone in the virtual plane (CCVP) is given by
CCVP119894= [120593119894119903minus 120573119894 120593119894119903+ 120573119894] (15)
where 120573119894is the angle between the lines of the upper and lower
tangent limit points in119863119894The direct collision course between
119877 and119863119894is characterized by
tan 120579V119903=
119910119894minus 119910119903
119894minus 119903
= [tan (120593119894119903minus 120573119894) tan (120593
119894119903+ 120573119894)] (16)
After the orientation angle 120579V119903of the virtual robot is computed
in terms of the linear velocity of the robot and the movingobstacles as given in (10) it is possible to write the expressionsof the orientation angle 120579
119903and the speed V
119903for the real robot
controls V119903or in terms of the linear velocity and orientation
angle of the moving obstacle and the virtual robot as follows
V119903=V119894(tan 120579V
119903cos 120579119894minus sin 120579
119894)
tan 120579V119903cos 120579119903minus sin 120579
119903
120579119903= 120579
V119903minus arcsin[
V119894sin (120579V119903minus 120579119894)
V119903
]
(17)
For the robot control the desired value of the orientationangle in the virtual plane can be expressed based on usingthe linear navigation law as
120579Vlowast119903(119905) = 120572
119894119896+ 1198881+ 1198880exp minus119886 (119905 minus 119905
119889) 119896 = 1 2 (18)
LED Vision camera Microphone array
Microphone array
3D depth sensor cameras Motorized tilt
Figure 2 Architecture of Microsoft Kinect sensor
where 119905119889denotes the time when the robot starts deviation
for collision avoidance and 1205721198941and 120572
1198942are the left and right
line-of-sight angles between the reference deviation pointsand the points on the collision cone in the virtual planeFinally based on the desired orientation in the virtual planethe corresponding desired speed value Vlowast
119903for the robot is
calculated by
Vlowast119903=
V119894(tan 120579V
lowast
119903cos 120579119894minus sin 120579
119894)
tan 120579Vlowast119903cos 120579119903minus sin 120579
119903
(19)
In a similar way the corresponding desired orientation valuecan be expressed by
tan 120579lowast119903=
VV119903sin (120579V
lowast
119903) + V119894sin (120579119894)
VV119903cos (120579Vlowast
119903) + V119894cos (120579
119894) (20)
Note that for the robot navigation including a collisionavoidance technique within dynamic environments eitherthe linear velocity control expressed in (19) or the orientationangle control in (20) can be utilized
33 Sensor Fusion Based Range and Pose Estimation Low-cost range sensors are an attractive alternative to expensivelaser scanners in application areas such as motion planningand mapping The Microsoft Kinect [26] is a sensor whichconsists of an IR sensor an IR camera an RGB camera amultiarray microphone and an electrical motor providingthe tilt function to the sensor (shown in Figure 2) TheKinect sensor captures not only depth but also color imagessimultaneously at a frame rate of up to 30 fps Some keyfeatures are illustrated in [26ndash29] The RGB video streamuses 8-bit VGA resolution (640 times 480 pixels) with a Bayercolor filter at a frame rate 30Hz The monochrome depthsensing video stream has a VGA resolution (640times480 pixels)with 11-bit depth which provides 2048 levels of sensitivityDepth data is acquired by the combination of IR projector andIR camera The microphone array features four microphonecapsules and operates with each channel processing 16-bitaudio at a sampling rate of 16 kHz The motorized pivotis capable of tilting the sensor up to 27∘ either up ordown
The features of Kinect device make its application veryattractive to autonomous robot navigation In this work theKinect sensor is utilized for measuring range to moving
Journal of Sensors 5
Table 1 Kinectrsquos focal length and field of view
Camera Focal length (pixel) Field of view (degrees)Horizontally Vertically
RGB 525 63 50IR 580 57 43
Detectablerange ofdepth camera
0 200 400 600 800 1000 12000
1
2
3
4
5
6
Kinect units
Met
ers
Equation (21)Equation (22)
Figure 3 Kinect depth measurement and actual distance
obstacles and estimating color-based locations of objects fordynamic motion planning
Before going into detail the concept of the calculationof the real coordinates is discussed Kinect camera hassome good advantages such as depth sensor with minimum800mm and maximum range 4000mm Camera focus isconstant and given and thus real distance between cameraand chosen target is easily calculated The parameters usedin Kinect sensor are summarized in Table 1
Two similar equations have been proposed by researcherwhere one is based on the function 1119863value and the other isusing tan(119863value)The distance between a camera and a targetobject 119911
119908is expressed by
119911119908=
1
(119863value times (minus00030711016) + 33309495161)(21)
119911119908= 01236 times tan(
119863value28425 + 11863
) (22)
Figure 3 shows the detectable ranges of a depth camera wherethe distances in world coordinate based on the above twoequations are computed by limiting the raw depth to 1024 thatcorresponds to about 5 meters
Figure 4 shows the error results of distance measure-ment experiments using a Kinectrsquos depth camera In thisexperiment the measured distance using a ruler is noted bygreen which gives a reference distance and three repetitiveexperiments are carried out and they are drawn in red lightblue and blue colors From the experiment it is shown thatthe errors of the depth measurements from the Kinect sensorare proportional to the distance
0 5 10 15 20 25 30 350
1000200030004000
Cycle time
(mm
)D
istan
ce m
easu
rem
ent
Reference distanceExperiment 1
Experiment 2Experiment 3
500 1000 1500 2000 2500 3000 3500 4000minus40minus20
020406080
Distance measurement (mm)
(mm
)Er
ror m
easu
rem
ent
Experiment 1Experiment 2
Experiment 3Average value
Figure 4 Kinect depth camera measurement experiment and error
C
(uo o)
yc
h z
ys
xw
xc
minusyw
zc
120573120572 Camera
screen
Realworld
Pg(xw yw zw)
yw
minusy
xs
P998400g(xs ys zs)
P998400r (xs ys zs)
Pr(xw yw zw)
Figure 5 Robot and obstacle localization using the Kinect sensorfor ranging and positioning computation
Figure 5 shows a general schematics of geometricapproach to find the 119909 and 119910 coordinates using the Kinectsensor system where ℎ is the screen height in pixels and 120573 isthe field of view of the camera The coordinates of a point onthe image plane of the robot 1198751015840
119903and goal 1198751015840
119892are transformed
into the world coordinates 119875119903and 119875
119892 and it is calculated by
119875 = (119909119908 119910119908) 997888rarr 119875
1015840
= (119909119904 119910119904 119911119904)
119909119908=119909119904
119911119908
119910119908=119910119904
119911119908
(23)
6 Journal of Sensors
yg
yr
xr xg
y
x
120579
(a)
III
III IV
y
x
(b)
Figure 6 Robot heading angle computation approach
Each of coordinates 119909119908 119910119908 119911119908of two red (robot) and green
(goal) points is used as the input into the vision system and1198751015840 is computed by
(
119909119904
119910119904
119911119904
) = (
119891 0 0 0
0 119891 0 0
0 0 1 0
)(
119909119908
119910119908
119911119908
1
) (24)
where 119891 is the focal length of the camera 1198751015840 is calculated atthe pixel coordinates divided by 120588
119906and 120588V and the values of
the pixel of the image 119906 and V are the pixel coordinates andthey are obtained from the following equations
(
1199061015840
V1015840
1199081015840
) =(
1
120588119906
0 1199060
01
120588VV0
0 0 1
)(
119891 0 0 0
0 119891 0 0
0 0 1 0
)(
119883119882
119884119882
119885119882
1
)
1198751015840
= (
119906
V) = (
1199061015840
1199081015840
V1015840
1199081015840
) (
1199061015840
V1015840
1199081015840
) = 119862(
119909119908
119910119908
119911119908
1
)
(25)
In the experiment, the final goal and the robots are recognized by the built-in RGB camera. In addition, the distance to an object is measured with millimeter-level accuracy using the IR camera, and the target object's pixel coordinates (x_s, y_s) are estimated by using a color-based detection approach. In this work, the distance measured between the planning field and the Kinect sensor is 2700 mm, which becomes the depth camera's detectable range. The horizontal and vertical coordinates of an object are calculated as follows:
(1) horizontal coordinate:

$$\alpha = \arctan\left(\frac{x_s}{f}\right), \qquad x_w = z_w \times \sin\alpha \qquad (26)$$

(2) vertical coordinate:

$$\alpha = \arctan\left(\frac{y_s}{f}\right), \qquad y_w = z_w \times \sin\alpha \qquad (27)$$

where z_w is the distance to the object obtained from the Kinect sensor. These real-world coordinates are then used in the dynamic path planning procedures.
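Combining the measured range z_w with (26)-(27) gives the world coordinates; a sketch under the assumption that x_s and y_s are pixel offsets from the image center is shown below (the helper name is ours):

    import math

    F_IR = 580.0   # IR camera focal length in pixels (Table 1)

    def pixel_to_world(x_s, y_s, z_w, f=F_IR):
        # Eqs. (26)-(27): horizontal/vertical world coordinates from
        # centered pixel coordinates and the Kinect range z_w
        alpha = math.atan(x_s / f)      # horizontal angle to the object
        x_w = z_w * math.sin(alpha)
        alpha = math.atan(y_s / f)      # vertical angle to the object
        y_w = z_w * math.sin(alpha)
        return x_w, y_w

    # Object 100 px right of and 50 px above center, at the 2700 mm field range
    print(pixel_to_world(100.0, 50.0, 2700.0))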
In general, the intrinsic and extrinsic parameters of the color and depth cameras are set to default values, and thus it is necessary to calibrate them for accurate tests. The calibration of the depth and RGB color cameras in the Kinect sensor can be carried out by using a mathematical model of the depth measurement and by creating a depth annotation of a chessboard, physically offsetting the chessboard from its background; details of the calibration procedures can be found in [30, 31].
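For reference, a minimal chessboard-based intrinsic calibration of the RGB camera can be sketched with OpenCV as below; this is a generic routine, not the exact depth-annotation procedure of [30, 31], and the board size and image file names are assumptions:

    import cv2
    import numpy as np

    pattern = (9, 6)  # assumed number of inner chessboard corners
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in ["kinect_rgb_01.png", "kinect_rgb_02.png"]:  # hypothetical captures
        gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Recover focal lengths, principal point, and distortion coefficients
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print(mtx)  # compare against the default intrinsics of Table 1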
As indicated in the previous section, accurate estimates of the range and orientation of an object play an important role in the proposed relative velocity obstacle based dynamic motion planning. In this section, an efficient new approach is proposed to estimate the heading information of the robot using a color detection approach. First, the robot is covered by green and red sections, as shown in Figure 6. Then, using a color detection method [32], the center location of the robot is calculated, and from it the robot's heading is determined.
Input: Coordinates of robot, obstacle 1, obstacle 2, and goal
Output: Speeds of robot's right and left wheels
Calculate l_gr using Kinect
while l_gr > 0 do
    Calculate l_gr and φ_gr
    Send robot speed
    if all D_i in CA then
        Calculate l_ir using Kinect
        if all l_ir > 0 then
            There is no collision risk; keep sending robot speed
        else
            Construct the virtual plane
            Test the collision in the virtual plane
            if there is a collision risk then
                Check sonar sensor value
                if sonar value is too small then
                    Choose θ_r for quick motion control
                    Send robot speed
                else
                    Construct the θ-window
                    Choose the appropriate value for θ_r
                    Send robot speed
                end if
            end if
        end if
    end if
end while

Algorithm 1: Hybrid reactive dynamic navigation algorithm.
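Read as code, Algorithm 1 is a single sensing-control loop; the following Python rendering is our own sketch, with all sensing and actuation passed in as callables and an assumed sonar threshold, rather than the authors' implementation:

    SONAR_MIN = 150.0  # hypothetical "too small" sonar threshold, in mm

    def navigate(range_to_goal, bearing_to_goal, obstacle_ranges, sonar_value,
                 send_speed, collision_risk_on_virtual_plane,
                 quick_theta, theta_window_theta, max_cycles=10000):
        # range_to_goal() -> l_gr, bearing_to_goal() -> phi_gr, and
        # obstacle_ranges() -> list of l_ir, all measured with the Kinect
        for _ in range(max_cycles):
            l_gr = range_to_goal()
            if l_gr <= 0:                       # goal reached
                return True
            phi_gr = bearing_to_goal()
            if all(l_ir > 0 for l_ir in obstacle_ranges()):
                send_speed(l_gr, phi_gr)        # no collision risk
            elif collision_risk_on_virtual_plane():
                if sonar_value() < SONAR_MIN:   # obstacle dangerously close
                    theta_r = quick_theta()     # quick motion control
                else:
                    theta_r = theta_window_theta()  # choose from the theta-window
                send_speed(l_gr, theta_r)
            else:
                send_speed(l_gr, phi_gr)
        return False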
Once the heading angle θ is found as shown in (28), the new heading in each of the four quadrant sections of Figure 6(b) is computed by using the following equations:
$$\Delta x = x_g - x_r, \qquad \Delta y = y_g - y_r, \qquad \theta = \tan^{-1}\left(\frac{y_g - y_r}{x_g - x_r}\right) \qquad (28)$$

(1) x_g > x_r, y_g > y_r: θ = θ + 0
(2) x_g < x_r, y_g > y_r: θ = 3.14 − θ
(3) x_g < x_r, y_g < y_r: θ = 3.14 + θ
(4) x_g > x_r, y_g < y_r: θ = 6.28 − θ
(29)
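The quadrant rules in (29) simply place the arctangent magnitude of (28) into the correct quadrant on [0, 2π); a short sketch (the helper name is ours, assuming x_g ≠ x_r) and its equivalence to the two-argument arctangent follow:

    import math

    def heading_angle(x_r, y_r, x_g, y_g):
        # Eqs. (28)-(29): heading from robot (x_r, y_r) to goal (x_g, y_g)
        theta = abs(math.atan((y_g - y_r) / (x_g - x_r)))  # magnitude of (28)
        if x_g > x_r and y_g > y_r:        # quadrant I
            return theta
        if x_g < x_r and y_g > y_r:        # quadrant II
            return math.pi - theta
        if x_g < x_r and y_g < y_r:        # quadrant III
            return math.pi + theta
        return 2.0 * math.pi - theta       # quadrant IV

    # Equivalent to the signed two-argument arctangent wrapped to [0, 2*pi)
    assert abs(heading_angle(0, 0, -1, 1)
               - math.atan2(1, -1) % (2.0 * math.pi)) < 1e-9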
Finally, the relative velocity obstacle based reactive dynamic navigation algorithm with collision avoidance capability is summarized in Algorithm 1.
4 Experimental Results
For the evaluation and verification of the proposed sensor based reactive dynamic navigation algorithms, both a simulation study and experimental tests are carried out with a realistic experimental setup.
4.1. Experimental Scenario and Setup. For the experimental tests, two robots are assigned as moving obstacles, and a third one is used as a master robot that generates control commands to avoid the dynamic obstacles based on the suggested reactive motion planning algorithms. For the moving obstacles, two NXT Mindstorm based vehicles that can either move in a random direction or follow a designated path are developed. The HBE-RoboCAR, equipped with ultrasonic and encoder sensors, is used as the master robot, as shown in Figure 7.
The HBE-RoboCAR [33] has an 8-bit AVR ATmega128L processor. The robot is equipped with multiple embedded processor modules (embedded processor, FPGA, MCU). It provides detection of obstacles with ultrasonic and infrared sensors, and motion control with an acceleration sensor and motor encoders. HBE-RoboCAR is able to communicate with other devices by either wireless or wired technology, namely a Bluetooth module and ISP/UART interfaces, respectively. In this work, HBE-RoboCAR is connected to a computer at the ground control station using Bluetooth wireless communication. Figure 7 shows the hardware specification and sensor systems of the robot platform, and Figure 8 shows the interface and control architecture of the embedded components of HBE-RoboCAR [33].
Figure 7: Mobile robot hardware and sensor systems (HBE-RoboCAR [33]): (1) power switch, (2) power LED and reset switch, (3) interface connector, (4) wireless network connector, (5) ultrasonic sensors, (6) PSD sensors, (7) LEDs, (8) phototransistors, (9) voltmeter, (10) DC in, (11) charge in, (12) battery in.
Figure 8: Block diagram of the HBE-RoboCAR embedded system: a control board (compatible with three module kinds: embedded processor, FPGA module (Altera/Xilinx), or AVR board) connects through interface boards to the HBE-RoboCAR base and sensor boards, which carry the ATmega128 with acceleration sensor (ADXL202E), an ATmega8 voltage gauge, the power unit, the motor driver, photo sensors (GP2Y0A21), infrared sensors (IL-8L, EL-8L), ultrasonic sensors (MA40), LEDs, front DC geared motors (2EA), and back DC geared encoder motors (2EA).
For the dynamic obstacle avoidance, the relative velocity obstacle based navigation laws require range and heading information from the sensors. For the range estimation, the Kinect sensor is utilized. If the Kinect sensor detects the final goal using a color-based detection algorithm [32, 34, 35], it sends the information to the master robot. After receiving the target point, the master robot starts the onboard navigation algorithm to reach the goal while avoiding the dynamic obstacles. While the robot navigates in the experimental field, the distance to each moving obstacle is measured by the Kinect sensor, and the range information is fed back to the master robot via Bluetooth communication as input to the reactive motion planning algorithms. The detailed scenario for the experimental setup is illustrated in Figure 9.
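On the ground-station side, the feedback of Kinect range measurements over the Bluetooth serial link can be sketched as follows; the serial port name, baud rate, and message format are illustrative assumptions (using the pyserial package), not details specified in the paper:

    import serial  # pyserial

    link = serial.Serial("/dev/rfcomm0", 9600, timeout=1)  # assumed RFCOMM port

    def send_ranges(goal_mm, obstacle_mms):
        # Forward the goal range and per-obstacle ranges (assumed CSV line format)
        fields = [goal_mm] + list(obstacle_mms)
        line = ",".join("{:.0f}".format(v) for v in fields) + "\n"
        link.write(line.encode("ascii"))

    send_ranges(2700.0, [1050.0, 2050.0])  # example: goal plus two obstacle ranges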
4.2. Simulation Results. Figure 10 shows the simulation results of the reactive motion planning on both the virtual plane and the real plane. In this simulation, the trajectories of the two obstacles are drawn as blue and black lines, the trajectory of the master robot is drawn in red, and the goal is indicated by a green dot. As can be clearly seen in the real and virtual planes in Figures 10(b) and 10(a), the master robot avoided the first obstacle, which was moving toward it, and successfully reached the target goal after avoiding a collision with the second moving obstacle just before reaching the target. While the master robot avoids the obstacles, it generates a collision cone by choosing a deviation point on the virtual plane.
Figure 9: Test scenario and architecture for the experimental setup: the Kinect sensor connected to the PC observes the robot, obstacle 1, obstacle 2, and the destination, and the PC relays the measurements to the robot over Bluetooth communication.
Figure 10: Simulation results on the virtual plane and the real plane ((a) virtual plane, showing the deviation points and the first and second obstacles' collision cones; (b) real plane, showing the robot, the first and second obstacles, the goal, the trajectory of the robot, the coverage area, and the initial locations of the robot and obstacles).
Figure 11: Orientation angle information versus cycle time: target angle (deg), robot heading angle (deg), and variance angle (deg).
On the virtual plane, the radius of each collision cone is the same as that of the corresponding obstacle, and the distance between the deviation point and the collision cone depends on the radius of the master robot. The ellipses indicate the initial locations of the robot and the obstacles.
Figure 11 illustrates the orientation angle information used for the robot control. The top plot shows the angle from the moving robot to the target in the virtual plane, the second plot shows the robot heading angle commanded for the navigation control, and the third plot shows the angle difference between the target and robot heading angles. At the final stage of the path planning, the commanded orientation angle and the target angle to the goal point become the same.
Figure 12: Results of the first moving obstacle's orientation angle (degree) and speed versus cycle time, and its position in x-y coordinates (mm).
Instead of controlling the robot with the orientation angle, the speed of the master robot can also be used to avoid the moving obstacles.
Figures 12 and 13 show each moving obstacle's heading angle, linear velocity, and trajectory. As can be seen, in order to carry out a dynamic path planning experiment, the speed and heading angle of the obstacles were changed during the simulation, resulting in an uncertain, cluttered environment.
Figure 13: Results of the second moving obstacle's orientation angle (degree) and speed versus cycle time, and its position in x-y coordinates (mm).
Figure 14: Results of the robot's trajectory in x-y coordinates (mm), and its speed, right wheel speed, and left wheel speed versus cycle time.
Figure 14 shows the mobile robot's trajectory from the start point to the goal point, together with the forward velocity and each wheel speed from the encoders. As can be seen, the trajectory of the robot has a sharp turn around the location (1500 mm, 1000 mm) in order to avoid the second moving obstacle.
Figure 15: (a) Experiment environment (initial stage), showing the starting positions of the robot, obstacle 1, obstacle 2, and the goal; (b) plots of the experimental data at the initial stage in the virtual and real planes, with the robot at R(2751.3368, 2126.9749).
It is seen that the right and left wheel speeds are mirrored along the time axis, and the relationship between the variance angle and the robot's right and left wheel speeds can also be observed.
4.3. Field Test and Results. Further verification of the performance of the proposed hybrid dynamic path planning approach in real experiments was carried out with the same scenario used in the previous simulation part. In the experiment, two moving obstacles are used, and the master robot moves to the target point without any collision with the obstacles, as shown in Figure 15(a); the initial locations of the obstacles and the robot are shown in Figure 15(b). The red dot is the initial starting position of the master robot at (2750 mm, 2126 mm), the black dot is the initial location of the second obstacle at (2050 mm, 1900 mm), and the blue dot is the initial location of the first moving obstacle at (1050 mm, 2000 mm). In the virtual plane, the collision cone of the first obstacle is depicted as shown in the top plot of Figure 15(b), and the robot carries out its motion control based on the collision cone in the virtual plane until it avoids the first obstacle.
Figure 16: (a) Experiment result of collision avoidance with the first obstacle, showing the robot's current position and each obstacle's current position and direction; (b) plots of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with the first moving obstacle.
Figure 16 shows the collision avoidance performance with the first moving obstacle; as can be seen, the master robot applied the commanded orientation control toward the left direction and avoided the obstacle without any collision, which is described in detail in the virtual plane in the top plot of Figure 16(b). The trajectory and movement of the robot and the obstacles are depicted in the real plane in Figure 16(b) for detailed analysis.
In a similar way, Figure 17(a) illustrates the collision avoidance with the second moving obstacle, and the detailed path and trajectory are described in Figure 17(b). The top plot of Figure 17(b) shows the motion planning in the virtual plane, where the initial location of the second moving obstacle is recognized at the center of (2050 mm, 1900 mm).
Figure 17: (a) Experiment result of collision avoidance with the second obstacle, showing the robot's current position and each obstacle's current position and direction; (b) plots of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with the second moving obstacle.
Based on this initial location, the second collision cone is constructed, drawn as a large green ellipse, which allows the virtual robot to navigate without any collision with the second obstacle. The trajectory of the robot motion planning in the real plane is depicted in the bottom plot of Figure 17(b).
Finally, after avoiding all the obstacles, the master robot reached the target goal under motion control, as shown in Figure 18(a). The overall trajectory of the robot from the starting point to the final goal in the virtual plane is depicted in the top plot of Figure 18(b), and the trajectory of the robot in the real plane is depicted in the bottom plot of Figure 18(b). Note that the trajectories of the robot differ between the virtual plane and the real plane; however, an orientation change gives the same direction change of the robot in both planes. In this plot, the green dot is the final goal point, and the robot trajectory is depicted with red dotted circles.
Figure 18: (a) Experiment result after the collision avoidance with all obstacles, showing the robot's current position and each obstacle's current position and direction; (b) plots of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with all the moving obstacles.
The smooth trajectory was generated by using the linear navigation laws, as explained earlier.
From this experiment, it is easily seen that the proposed hybrid reactive motion planning approach, designed by integrating the virtual plane approach with sensor based planning, is very effective for dynamic collision avoidance problems in cluttered, uncertain environments. This effectiveness makes the hybrid reactive motion planning method attractive for various dynamic navigation applications, not only for mobile robots but also for other autonomous vehicles such as flying vehicles and self-driving vehicles.
5 Conclusion
In this paper, we proposed a hybrid reactive motion planning method for an autonomous mobile robot in uncertain dynamic environments. The hybrid reactive motion planning method combines a reactive path planning method, which can transform dynamic moving obstacles into stationary ones, with a sensor based approach, which provides relative information about the moving obstacles and the environment. The key features of the proposed method are twofold: the first is the simplification of complex dynamic motion planning problems into stationary ones using the virtual plane approach, while the second is the robustness of sensor based motion planning, in which the pose estimation of moving obstacles is performed using a Kinect sensor that provides ranging and color detection. The sensor based approach improves the accuracy and robustness of the reactive motion planning approach by providing information about the obstacles and the environment. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles.
In future work, a sensor fusion approach that could improve the heading estimation of the robot and the speed estimation of moving objects will be investigated for more robust motion planning.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research was supported by the National Research Foundation of Korea (NRF) (no. 2014-017630 and no. 2014-063396) and was also supported by the Human Resource Training Program for Regional Innovation and Creativity through the Ministry of Education and National Research Foundation of Korea (no. 2014-066733).
References
[1] R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza, Introduction to Autonomous Mobile Robots, The MIT Press, London, UK, 2nd edition, 2011.
[2] J. P. van den Berg and M. H. Overmars, "Roadmap-based motion planning in dynamic environments," IEEE Transactions on Robotics, vol. 21, no. 5, pp. 885–897, 2005.
[3] D. Hsu, R. Kindel, J.-C. Latombe, and S. Rock, "Randomized kinodynamic motion planning with moving obstacles," The International Journal of Robotics Research, vol. 21, no. 3, pp. 233–255, 2002.
[4] R. Gayle, A. Sud, M. C. Lin, and D. Manocha, "Reactive deformation roadmaps: motion planning of multiple robots in dynamic environments," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '07), pp. 3777–3783, San Diego, Calif, USA, November 2007.
[5] S. S. Ge and Y. J. Cui, "Dynamic motion planning for mobile robots using potential field method," Autonomous Robots, vol. 13, no. 3, pp. 207–222, 2002.
[6] K. P. Valavanis, T. Hebert, R. Kolluru, and N. Tsourveloudis, "Mobile robot navigation in 2-D dynamic environments using an electrostatic potential field," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 30, no. 2, pp. 187–196, 2000.
[7] X. Gao, Y. Ou, Y. Fu et al., "A novel local path planning method considering both robot posture and path smoothness," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 172–178, Shenzhen, China, December 2013.
[8] J. Jiao, Z. Cao, P. Zhao, X. Liu, and M. Tan, "Bezier curve based path planning for a mobile manipulator in unknown environments," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 1864–1868, Shenzhen, China, December 2013.
[9] A. Stentz, "Optimal and efficient path planning for partially-known environments," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3310–3317, May 1994.
[10] G. Song, H. Wang, J. Zhang, and T. Meng, "Automatic docking system for recharging home surveillance robots," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 428–435, 2011.
[11] M. Seder and I. Petrovic, "Dynamic window based approach to mobile robot motion control in the presence of moving obstacles," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 1986–1991, Roma, Italy, April 2007.
[12] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision avoidance," IEEE Robotics and Automation Magazine, vol. 4, no. 1, pp. 23–33, 1997.
[13] J. Borenstein and Y. Koren, "Real-time obstacle avoidance for fast mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179–1187, 1989.
[14] J. Borenstein and Y. Koren, "The vector field histogram – fast obstacle avoidance for mobile robots," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 278–288, 1991.
[15] R. Simmons, "The curvature-velocity method for local obstacle avoidance," in Proceedings of the 13th IEEE International Conference on Robotics and Automation, vol. 4, pp. 3375–3382, Minneapolis, Minn, USA, April 1996.
[16] J. Minguez and L. Montano, "Nearness Diagram (ND) navigation: collision avoidance in troublesome scenarios," IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 45–59, 2004.
[17] E. Owen and L. Montano, "Motion planning in dynamic environments using the velocity space," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pp. 997–1002, August 2005.
[18] A. Chakravarthy and D. Ghose, "Obstacle avoidance in a dynamic environment: a collision cone approach," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 28, no. 5, pp. 562–574, 1998.
[19] F. Belkhouche, T. Jin, and C. H. Sung, "Plane collision course detection between moving objects by transformation of coordinates," in Proceedings of the 17th IEEE International Conference on Control Applications, Part of the IEEE Multi-Conference on Systems and Control, San Antonio, Tex, USA, 2008.
[20] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[21] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[22] Z. Shiller, F. Large, and S. Sekhavat, "Motion planning in dynamic environments: obstacles moving along arbitrary trajectories," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), pp. 3716–3721, Seoul, Republic of Korea, May 2001.
[23] F. Belkhouche, "Reactive path planning in a dynamic environment," IEEE Transactions on Robotics, vol. 25, no. 4, pp. 902–911, 2009.
[24] B. Dorj, D. Tuvshinjargal, K. T. Chong, D. P. Hong, and D. J. Lee, "Multi-sensor fusion based effective obstacle avoidance and path-following technology," Advanced Science Letters, vol. 20, pp. 1751–1756, 2014.
[25] R. Philippsen, B. Jensen, and R. Siegwart, "Towards real-time sensor based path planning in highly dynamic environments," in Autonomous Navigation in Dynamic Environments, vol. 35 of Springer Tracts in Advanced Robotics, pp. 135–148, Springer, Berlin, Germany, 2007.
[26] K.-T. Song and C. C. Chang, "Reactive navigation in dynamic environment using a multisensor predictor," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 870–880, 1999.
[27] K. Khoshelham and S. O. Elberink, "Accuracy and resolution of Kinect depth data for indoor mapping applications," Sensors, vol. 12, no. 2, pp. 1437–1454, 2012.
[28] J. Han, L. Shao, D. Xu, and J. Shotton, "Enhanced computer vision with Microsoft Kinect sensor: a review," IEEE Transactions on Cybernetics, vol. 43, no. 5, pp. 1318–1334, 2013.
[29] F. Kammergruber, A. Ebner, and W. A. Gunthner, "Navigation in virtual reality using Microsoft Kinect," in Proceedings of the 12th International Conference on Construction Application of Virtual Reality, Taipei, Taiwan, November 2012.
[30] Microsoft, "Kinect for Xbox 360," http://www.xbox.com/en-US/xbox-one/accessories/kinect-for-xbox-one.
[31] K. Konolige and P. Mihelich, "Technical description of Kinect calibration," July 2014, http://www.ros.org/wiki/kinect_calibration/technical.
[32] G. Shashank, M. Sharma, and A. Shukla, "Robot navigation using image processing and isolated word recognition," International Journal of Computer Science and Engineering, vol. 4, no. 5, pp. 769–778, 2012.
[33] T. W. Choe, H. S. Son, J. H. Song, and J. H. Park, Intelligent Robot System HBE-RoboCAR, HanBack Electronics, Seoul, Republic of Korea, 2012.
[34] K. R. Jadav, M. A. Lokhandwala, and A. P. Gharge, "Vision based moving object detection and tracking," in Proceedings of the National Conference on Recent Trends in Engineering and Technology, Anand, India, 2011.
[35] A. Chandak, K. Gosavi, S. Giri, S. Agrawal, and P. Kulkarni, "Path planning for mobile robot navigation using image processing," International Journal of Scientific & Engineering Research, vol. 4, pp. 1490–1495, 2013.
[27] K Khoshelham and S O Elberink ldquoAccuracy and resolutionof kinect depth data for indoor mapping applicationsrdquo Sensorsvol 12 no 2 pp 1437ndash1454 2012
[28] J Han L Shao D Xu and J Shotton ldquoEnhanced computervision with Microsoft Kinect sensor a reviewrdquo IEEE Transac-tions on Cybernetics vol 43 no 5 pp 1318ndash1334 2013
[29] F Kammergruber A Ebner and W A Gunthner ldquoNavigationin virtual reality using microsoft kinectrdquo in Proceedings of the12th International Conference on Construction Application ofVirtual Reality Taipei Taiwan November 2012
[30] Microsoft Kinect for X-BXO 360 httpwwwxboxcomen-USxbox-oneaccessorieskinect-for-xbox-one
[31] K Kronolige and P Mihelich ldquoTechnical Description of KinectCalibrationrdquo July 2014 httpwwwrosorgwikikinect calibra-tiontechnical
[32] G Shashank M Sharma and A Shukla ldquoRobot navigationusing image processing and isolated word recognitionrdquo Interna-tional Journal of Computer Science and Engineering vol 4 no5 pp 769ndash778 2012
[33] TW Choe H S Son J H Song and J H Park Intelligent RobotSystem HBE-RoboCar HanBack Electronics Seoul Republic ofKorea 2012
[34] K R Jadavm M A Lokhandwala and A P Gharge ldquoVisionbased moving object detection and trackingrdquo in Proceedings ofthe National Conference on Recent Trends in Engineering andTechnology Anand India 2011
[35] A Chandak K Gosavi S Giri S Agrawal and P Kulka-rni ldquoPath planning for mobile robot navigation using imageprocessingrdquo International Journal of Scientific amp EngineeringResearch vol 4 pp 1490ndash1495 2013
International Journal of
AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Active and Passive Electronic Components
Control Scienceand Engineering
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
RotatingMachinery
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation httpwwwhindawicom
Journal ofEngineeringVolume 2014
Submit your manuscripts athttpwwwhindawicom
VLSI Design
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Shock and Vibration
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of
Table 1: Kinect's focal length and field of view.

Camera   Focal length (pixels)   Field of view, horizontal (deg)   Field of view, vertical (deg)
RGB      525                     63                                50
IR       580                     57                                43
Figure 3: Kinect depth measurement (meters) versus raw Kinect units for equations (21) and (22), with the detectable range of the depth camera marked.
obstacles and estimating color-based locations of objects for dynamic motion planning.

Before going into detail, the concept of computing the real coordinates is discussed. The Kinect camera has useful properties: its depth sensor covers a range from a minimum of 800 mm to a maximum of 4000 mm, and the camera focal length is constant and known, so the real distance between the camera and a chosen target is easily calculated. The parameters used in the Kinect sensor are summarized in Table 1.
Two similar equations have been proposed by researchers, where one is based on the function 1/D_value and the other uses tan(D_value). The distance z_w between the camera and a target object is expressed by
\[ z_w = \frac{1}{D_{\mathrm{value}} \times (-0.0030711016) + 3.3309495161}, \tag{21} \]
\[ z_w = 0.1236 \times \tan\!\left(\frac{D_{\mathrm{value}}}{2842.5} + 1.1863\right). \tag{22} \]
Figure 3 shows the detectable ranges of the depth camera, where the distances in the world coordinate frame based on the above two equations are computed by limiting the raw depth to 1024, which corresponds to about 5 meters.
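As a rough illustration of how (21) and (22) are applied, the following Python sketch converts a raw depth value to meters using the constants from the two equations; the sample raw values and the 1024 clipping limit follow the discussion above, and everything else is just the formulas:

import math

def depth_inverse(d_raw):
    # Equation (21): z_w = 1 / (D_value * (-0.0030711016) + 3.3309495161)
    return 1.0 / (d_raw * (-0.0030711016) + 3.3309495161)

def depth_tangent(d_raw):
    # Equation (22): z_w = 0.1236 * tan(D_value / 2842.5 + 1.1863)
    return 0.1236 * math.tan(d_raw / 2842.5 + 1.1863)

# Raw depth is limited to 1024, which corresponds to roughly 5 m (Figure 3).
for d_raw in (600, 800, 1000):
    print(d_raw, round(depth_inverse(d_raw), 3), round(depth_tangent(d_raw), 3))

Both conversions agree to within a few centimeters over the working range, which is consistent with the curves in Figure 3.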
Figure 4 shows the error results of distance measurement experiments using the Kinect's depth camera. In this experiment the reference distance, measured with a ruler, is drawn in green, and three repeated experiments are drawn in red, light blue, and blue. The experiment shows that the error of the Kinect depth measurements grows in proportion to the distance.
Figure 4: Kinect depth camera measurement experiment and error: distance measurement (mm) per cycle for the reference distance and three experiments (top), and measurement error (mm) versus measured distance of 500-4000 mm for the three experiments and their average (bottom).
Figure 5: Robot and obstacle localization using the Kinect sensor for ranging and positioning computation, relating the camera-screen points P'_g(x_s, y_s, z_s) and P'_r(x_s, y_s, z_s) to the real-world points P_g(x_w, y_w, z_w) and P_r(x_w, y_w, z_w).
Figure 5 shows the general schematic of the geometric approach to finding the x and y coordinates using the Kinect sensor system, where h is the screen height in pixels and β is the field of view of the camera. The coordinates of a point on the image plane of the robot, P'_r, and of the goal, P'_g, are transformed into the world coordinates P_r and P_g, calculated by
\[ P = (x_w, y_w) \longrightarrow P' = (x_s, y_s, z_s), \qquad x_w = \frac{x_s}{z_w}, \qquad y_w = \frac{y_s}{z_w}. \tag{23} \]
Figure 6: Robot heading angle computation approach: (a) robot coordinates (x_r, y_r), goal coordinates (x_g, y_g), and heading angle θ; (b) the four quadrant sections I-IV of the x-y plane.
Each of the coordinates x_w, y_w, z_w of the two red (robot) and green (goal) points is used as the input to the vision system, and P' is computed by
\[ \begin{pmatrix} x_s \\ y_s \\ z_s \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}, \tag{24} \]
where f is the focal length of the camera. P' is obtained from the pixel coordinates divided by the pixel scale factors ρ_u and ρ_v; the pixel values u and v of the image are the pixel coordinates, and they follow from
\[ \begin{pmatrix} u' \\ v' \\ w' \end{pmatrix} = \begin{pmatrix} \dfrac{1}{\rho_u} & 0 & u_0 \\ 0 & \dfrac{1}{\rho_v} & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{pmatrix}, \]
\[ P' = \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u'/w' \\ v'/w' \end{pmatrix}, \qquad \begin{pmatrix} u' \\ v' \\ w' \end{pmatrix} = C \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}. \tag{25} \]
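To make the projection chain in (24) and (25) concrete, a small numpy sketch is given below; the focal length is taken from Table 1, while the pixel scale factors and principal point (ρ_u, ρ_v, u_0, v_0) are placeholder assumptions, since the paper's calibrated values are not listed here:

import numpy as np

f = 525.0                # focal length in pixels (RGB camera, Table 1)
rho_u = rho_v = 1.0      # assumed pixel scale factors
u0, v0 = 320.0, 240.0    # assumed principal point for a 640 x 480 image

# Pixel-scaling matrix and perspective matrix of equation (25).
K = np.array([[1.0 / rho_u, 0.0, u0],
              [0.0, 1.0 / rho_v, v0],
              [0.0, 0.0, 1.0]])
P = np.array([[f, 0.0, 0.0, 0.0],
              [0.0, f, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
C = K @ P                # combined camera matrix C of equation (25)

X_w = np.array([400.0, 250.0, 2700.0, 1.0])   # homogeneous world point (mm)
u_p, v_p, w_p = C @ X_w
u, v = u_p / w_p, v_p / w_p                   # divide by w' to get pixels
print(u, v)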
In the experiment the final goal and the robots are recognized by the built-in RGB camera. In addition, the distance to an object is measured to millimeter accuracy using the IR camera, and the target object's pixel coordinates (x_s, y_s) are estimated using a color-based detection approach. In this work the distance between the planning field and the Kinect sensor is 2700 mm, which becomes the depth camera's detectable range. The horizontal and vertical coordinates of an object are calculated as follows:

(1) horizontal coordinate:
\[ \alpha = \arctan\!\left(\frac{x_s}{f}\right), \qquad x_w = z_w \times \sin\alpha, \tag{26} \]

(2) vertical coordinate:
\[ \alpha = \arctan\!\left(\frac{y_s}{f}\right), \qquad y_w = z_w \times \sin\alpha, \tag{27} \]

where z_w is the distance to the object obtained from the Kinect sensor. These real-world coordinates are then used in the dynamic path planning procedures.
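A minimal sketch of the recovery step in (26) and (27) follows, assuming x_s and y_s are pixel offsets measured from the image center and f is the focal length in pixels from Table 1:

import math

def pixel_to_world(x_s, y_s, z_w, f=525.0):
    # Equations (26)-(27): angle off the optical axis, then the world
    # offset at the measured range z_w (here in mm).
    alpha_h = math.atan(x_s / f)
    alpha_v = math.atan(y_s / f)
    return z_w * math.sin(alpha_h), z_w * math.sin(alpha_v)

# Example: an object 100 px right of and 40 px above the image center,
# at the field range z_w = 2700 mm used in this work.
print(pixel_to_world(100.0, 40.0, 2700.0))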
In general the intrinsic and extrinsic parameters of the color and depth cameras are set to default values, so they must be calibrated for accurate tests. The depth and RGB color cameras of the Kinect sensor can be calibrated by using a mathematical model of depth measurement and creating a depth annotation of a chessboard by physically offsetting the chessboard from its background; details of the calibration procedures can be found in [30, 31].
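As a sketch of such a calibration, the standard chessboard procedure with OpenCV's calibrateCamera is shown below; the image file pattern and board size are assumptions for illustration, not the paper's setup:

import glob
import cv2
import numpy as np

board = (9, 6)  # assumed count of inner chessboard corners
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for name in glob.glob("rgb_*.png"):          # assumed calibration images
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimates the intrinsic matrix and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
print(rms)
print(K)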
As indicated in the previous section, accurate estimates of the range and orientation of an object play an important role in the proposed relative velocity obstacle based dynamic motion planning. In this section an efficient new approach is proposed to estimate the heading information of the robot using a color detection approach. First, the robot is covered by green and red sections, as shown in Figure 6. Then, using a color detection method [32], the center location of the robot is calculated.
Input: Coordinates of robot, obstacle 1, obstacle 2, and goal
Output: Speed of robot's right and left wheels
Calculate l_gr using Kinect
while l_gr > 0 do
    Calculate l_gr and φ_gr
    Send robot speed
    if all D_i in CA then
        Calculate l_ir using Kinect
        if all l_ir > 0 then
            There is no collision risk; keep sending robot speed
        else
            Construct the virtual plane
            Test the collision in the virtual plane
            if there is a collision risk then
                Check the sonar sensor value
                if the sonar value is too small then
                    Choose θ_r for quick motion control
                    Send robot speed
                else
                    Construct the θ-window
                    Choose the appropriate value for θ_r
                    Send robot speed
                end if
            end if
        end if
    end if
end while

Algorithm 1: Hybrid reactive dynamic navigation algorithm.
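A compact Python rendering of one cycle of Algorithm 1 is sketched below; sensing results are passed in as plain arguments, and the sonar threshold is an assumed value, so this only mirrors the branching structure rather than the paper's implementation:

def reactive_step(l_gr, phi_gr, obstacle_ranges, in_coverage_area,
                  collision_in_virtual_plane, sonar_mm,
                  theta_quick, theta_window_best, sonar_min_mm=150.0):
    """Return the commanded heading for one cycle, or None when l_gr <= 0."""
    if l_gr <= 0:
        return None                        # goal reached: stop the loop
    if in_coverage_area and any(r <= 0 for r in obstacle_ranges):
        # Not all l_ir > 0: the virtual plane must be tested for collision.
        if collision_in_virtual_plane:
            if sonar_mm < sonar_min_mm:    # obstacle dangerously close
                return theta_quick         # theta_r for quick motion control
            return theta_window_best       # theta_r chosen from the theta-window
    return phi_gr                          # no collision risk: head for the goal

# One cycle with a collision risk and a safe sonar reading.
print(reactive_step(1200.0, 0.4, [300.0, -10.0], True, True, 400.0, 1.2, 0.8))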
After the center location is found, the heading angle θ is computed from (28), and new heading information in each of the four quadrant sections is obtained by using the following equations:
\[ \Delta x = x_g - x_r, \qquad \Delta y = y_g - y_r, \qquad \theta = \tan^{-1}\!\left(\frac{\Delta y}{\Delta x}\right), \tag{28} \]
\[ \begin{aligned} &(1)\ x_g > x_r,\ y_g > y_r{:}\quad \theta = \theta + 0,\\ &(2)\ x_g < x_r,\ y_g > y_r{:}\quad \theta = 3.14 - \theta,\\ &(3)\ x_g < x_r,\ y_g < y_r{:}\quad \theta = 3.14 + \theta,\\ &(4)\ x_g > x_r,\ y_g < y_r{:}\quad \theta = 6.28 - \theta. \end{aligned} \tag{29} \]
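Equations (28) and (29) amount to a four-quadrant arctangent; a sketch follows, taking the magnitudes in (28) so that the quadrant corrections of (29) behave as written (math.atan2(Δy, Δx) would return the equivalent angle in (−π, π]):

import math

def heading(x_r, y_r, x_g, y_g):
    # Base angle of (28) from the magnitudes; assumes x_g != x_r and that
    # one of the four strict quadrant cases of (29) applies.
    theta = math.atan(abs(y_g - y_r) / abs(x_g - x_r))
    if x_g > x_r and y_g > y_r:       # quadrant I
        return theta + 0.0
    if x_g < x_r and y_g > y_r:       # quadrant II
        return 3.14 - theta
    if x_g < x_r and y_g < y_r:       # quadrant III
        return 3.14 + theta
    return 6.28 - theta               # quadrant IV

print(heading(0.0, 0.0, -1.0, 1.0))   # about 2.355 rad (135 degrees)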
Finally, the relative velocity obstacle based reactive dynamic navigation algorithm with the capability of collision avoidance is summarized in Algorithm 1.
4 Experimental Results
For the evaluation and verification of the proposed sensor based reactive dynamic navigation algorithms, both a simulation study and experimental tests are carried out with a realistic experimental setup.
4.1 Experimental Scenario and Setup. For the experimental tests, two robots are assigned as moving obstacles and a third one is used as a master robot that generates control commands to avoid the dynamic obstacles based on the suggested reactive motion planning algorithms. For the moving obstacles, two NXT Mindstorms based vehicles that can either move in a random direction or follow a designated path were developed. The HBE-RoboCAR, equipped with ultrasonic and encoder sensors, is used as the master robot, as shown in Figure 7.

The HBE-RoboCAR [33] has an 8-bit AVR ATmega128L processor. The robot is equipped with multiple embedded processor modules (embedded processor, FPGA, MCU). It provides detection of obstacles with ultrasonic and infrared sensors and motion control with an acceleration sensor and motor encoders. HBE-RoboCAR can communicate with other devices by wireless or wired technology, such as a Bluetooth module and ISP/UART interfaces, respectively. In this work, HBE-RoboCAR is connected to a computer at the ground control station using Bluetooth wireless communication. Figure 7 shows the hardware specification and sensor systems of the robot platform, and Figure 8 shows the interface and control architecture of the embedded components of HBE-RoboCAR [33].
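Since the wheel-speed commands reach the HBE-RoboCAR over the Bluetooth serial link, the ground-station side can be sketched with pyserial as below; the port name, baud rate, and packet format are assumptions for illustration, not the robot's documented protocol [33]:

import serial  # pyserial

def send_wheel_speeds(port, left, right):
    # Assumed ASCII packet "L<left>R<right>\n"; the real packet layout is
    # defined by the HBE-RoboCAR firmware.
    with serial.Serial(port, 9600, timeout=1) as link:
        link.write(f"L{left}R{right}\n".encode("ascii"))

send_wheel_speeds("/dev/rfcomm0", 60, 40)  # assumed RFCOMM device name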
Figure 7: Mobile robot hardware and sensor systems (HBE-RoboCAR [33]): (1) power switch, (2) power LED and reset switch, (3) interface connector, (4) wireless network connector, (5) ultrasonic sensors, (6) PSD sensors, (7) LEDs, (8) phototransistors, (9) voltmeter, (10) DC in, (11) charge in, (12) battery in.
Figure 8: Block diagram of the HBE-RoboCAR embedded system: an embedded processor, an FPGA module (Altera/Xilinx), and an AVR board (ATmega128 with an ADXL202E acceleration sensor, ATmega8 voltage gauge, and motor driver) connect through interface boards to the sensor board (GP2Y0A21 photo sensors, IL-8L/EL-8L infrared sensors, MA40 ultrasonic sensors) and to the front and back DC geared motors with encoders.
For the dynamic obstacle avoidance, the relative velocity obstacle based navigation laws require range and heading information from the sensors. For the range estimation the Kinect sensor is utilized. If the Kinect sensor detects the final goal using a color-based detection algorithm [32, 34, 35], it sends the information to the master robot. After receiving the target point, the master robot starts the onboard navigation algorithm to reach the goal while avoiding dynamic obstacles. When the robot navigates in the experimental field, the distance to each moving obstacle is measured by the Kinect sensor, and the range information is fed back to the master robot via Bluetooth communication as input to the reactive motion planning algorithms. The detailed scenario for the experimental setup is illustrated in Figure 9.
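The color-based detection step can be realized with ordinary HSV thresholding; in the OpenCV sketch below, the HSV bounds for a green marker and the snapshot file name are assumptions, and the returned centroid is the pixel coordinate that feeds the range and position computations described earlier:

import cv2
import numpy as np

def find_color_centroid(frame_bgr, lo=(45, 80, 80), hi=(75, 255, 255)):
    # Threshold in HSV and take the centroid of the binary mask.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None               # the color is not visible in this frame
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

frame = cv2.imread("field_snapshot.png")   # assumed image of the field
print(find_color_centroid(frame))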
4.2 Simulation Results. Figure 10 shows the simulation results of the reactive motion planning on both the virtual plane and the real plane. In this simulation the trajectories of the two obstacles are drawn as blue and black lines, the trajectory of the master robot is drawn in red, and the goal is indicated by a green dot. As can be clearly seen in the real plane and the virtual plane in Figures 10(b) and 10(a), the master robot avoided the first obstacle, which was moving toward it, and successfully reached the target after also avoiding a collision with the second moving obstacle just before reaching the goal. While the master robot avoids the obstacles, it generates a collision cone by choosing a deviation point on the virtual plane.
Figure 9: Test scenario and architecture for the experimental setup: the Kinect sensor and PC communicate with the robot over Bluetooth while it navigates past obstacles 1 and 2 toward the destination.
Figure 10: Simulation results on the virtual plane (a), showing the deviation points and the collision cones of the first and second obstacles, and on the real plane (b), showing the robot's trajectory, the coverage area, and the initial locations of the robot and the obstacles.
Figure 11: Orientation angle information over cycle time: target angle, robot heading angle, and variance angle (deg).
On the virtual plane, the radius of the collision cone is the same as the obstacle's, and the distance between the deviation point and the collision cone depends on the radius of the master robot. The ellipses indicate the initial locations of the robot and the obstacles.

Figure 11 illustrates the orientation angle information used for the robot control. The top plot shows the angle from the moving robot to the target in the virtual plane, the second plot shows the robot heading angle commanded for the navigation control, and the third plot shows the angle difference between the target angle and the robot heading angle. At the final stage of the path planning, the commanded orientation angle and the target angle to the goal point become the same.
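For the geometry shown in Figure 10, the collision test in the virtual plane reduces to asking whether the relative velocity points inside the collision cone; the following self-contained sketch uses the standard velocity-obstacle construction (the paper's full virtual-plane transformation from its earlier sections is not reproduced here):

import math

def in_collision_cone(p_rel, v_rel, r_sum):
    # p_rel: obstacle position relative to the robot; v_rel: robot velocity
    # relative to the obstacle; r_sum: obstacle radius plus robot radius.
    dist = math.hypot(p_rel[0], p_rel[1])
    if dist <= r_sum:
        return True                              # already in contact
    half_angle = math.asin(r_sum / dist)         # half-angle of the cone
    bearing = math.atan2(p_rel[1], p_rel[0])     # direction to the obstacle
    course = math.atan2(v_rel[1], v_rel[0])      # direction of relative motion
    diff = abs((course - bearing + math.pi) % (2.0 * math.pi) - math.pi)
    return diff < half_angle                     # inside the cone -> risk

print(in_collision_cone((1000.0, 200.0), (80.0, 10.0), 250.0))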
Figure 12: Results for the first moving obstacle: orientation angle (deg) and speed over cycle time, and position in x-y coordinates (mm).
Instead of controlling the robot with the orientation angle, the speed of the master robot can also be used to avoid the moving obstacles.

Figures 12 and 13 show each moving obstacle's heading angle, linear velocity, and trajectory.
Figure 13: Results for the second moving obstacle: orientation angle (deg) and speed over cycle time, and position in x-y coordinates (mm).
Figure 14: Results for the robot: position in x-y coordinates (mm), and speed, right wheel speed, and left wheel speed over cycle time.
As can be seen, in order to carry out a dynamic path planning experiment, the speed and the heading angle were changed during the simulation, resulting in an uncertain, cluttered environment.

Figure 14 shows the mobile robot's trajectory from the start point to the goal point, together with the forward velocity and each wheel speed from the encoders. As can be seen, the trajectory of the robot makes a sharp turn around the location (1500 mm, 1000 mm) in order to avoid the second moving obstacle.
Figure 15: (a) Experiment environment at the initial stage, with the starting positions of the robot, obstacle 1, obstacle 2, and the goal; (b) plots of the experimental data at the initial stage in the virtual and real planes, with the robot at R (2751.3368, 2126.9749).
It can be seen that the right and left wheel speeds mirror each other about the time axis, and the relationship between the variance angle and the robot's right and left wheel speeds is also visible.
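The mirrored wheel-speed traces in Figure 14 follow from the differential-drive relation, sketched below with an assumed wheel base; a positive turn rate raises one wheel speed and lowers the other by the same amount, which is exactly the mirroring seen in the plots:

def wheel_speeds(v, omega, wheel_base_mm=150.0):
    # Differential drive: the wheels deviate symmetrically from the
    # forward speed v by omega * L / 2.
    v_right = v + omega * wheel_base_mm / 2.0
    v_left = v - omega * wheel_base_mm / 2.0
    return v_left, v_right

print(wheel_speeds(100.0, 0.4))   # turning left: (70.0, 130.0)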
4.3 Field Test and Results. For further verification of the performance of the proposed hybrid dynamic path planning approach, real experiments were carried out with the same scenario used in the preceding simulation part. In the experiment two moving obstacles are used, and the master robot moves to the target point without any collision with the obstacles, as shown in Figure 15(a); the initial locations of the obstacles and the robot are shown in Figure 15(b). The red dot is the initial starting position of the master robot at (2750 mm, 2126 mm), the black dot is the initial location of the second obstacle at (2050 mm, 1900 mm), and the blue dot is the initial location of the first moving obstacle at (1050 mm, 2000 mm). In the virtual plane the collision cone of the first obstacle is depicted in the top plot of Figure 15(b), and the robot carries out its motion control based on this collision cone in the virtual plane until it avoids the first obstacle.
Figure 16: (a) Experiment result of the collision avoidance with the first obstacle; (b) plots of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with the first moving obstacle.
Figure 16 shows the collision avoidance with the first moving obstacle: the master robot avoided it through a commanded orientation change to the left without any collision, which is described in detail in the virtual plane in the top plot of Figure 16(b). The trajectories and movement of the robot and the obstacles are depicted in the real plane in Figure 16(b) for detailed analysis.

In a similar way, Figure 17(a) illustrates the collision avoidance with the second moving obstacle, and the detailed path and trajectory are described in Figure 17(b). The top plot of Figure 17(b) shows the motion planning in the virtual plane, where the initial location of the second moving obstacle is recognized at the center of (2050 mm, 1900 mm).
Figure 17: (a) Experiment result of the collision avoidance with the second obstacle; (b) plots of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with the second moving obstacle.
Based on this initial location, the second collision cone is constructed as the large green ellipse, which allows the virtual robot to navigate without any collision with the second obstacle. The trajectory of the robot's motion planning in the real plane is depicted in the bottom plot of Figure 17(b).

In the final phase, after avoiding all the obstacles, the master robot reached the target goal under motion control, as shown in Figure 18(a). The overall trajectory of the robot from the starting point to the final goal in the virtual plane is depicted in the top plot of Figure 18(b), and the trajectory of the robot in the real plane is depicted in the bottom plot of Figure 18(b). Note that the trajectories of the robot differ between the virtual plane and the real plane; however, an orientation change gives the same direction change of the robot in both planes. In this plot the green dot is the final goal point, and the robot trajectory is depicted with red dotted circles.
Figure 18: (a) Experiment result after the collision avoidance with all obstacles; (b) plots of the trajectories of the robot and the obstacles in the virtual and real planes during the collision avoidance with all the moving obstacles.
The smooth trajectory was generated by using the linear navigation laws explained earlier.

From this experiment it is clearly seen that the proposed hybrid reactive motion planning approach, designed by integrating the virtual plane approach with sensor based planning, is very effective for dynamic collision avoidance problems in cluttered, uncertain environments. This effectiveness makes the hybrid reactive motion planning method attractive for various dynamic navigation applications, not only for mobile robots but also for other autonomous vehicles such as flying vehicles and self-driving vehicles.
5 Conclusion

In this paper we proposed a hybrid reactive motion planning method for an autonomous mobile robot in uncertain dynamic environments. The hybrid reactive motion planning method combines a reactive path planning method, which can transform dynamic moving obstacles into stationary ones, with a sensor based approach, which provides relative information about the moving obstacles and the environment. The key features of the proposed method are twofold: the first is the simplification of complex dynamic motion planning problems into stationary ones using the virtual plane approach, and the second is the robustness of the sensor based motion planning, in which the pose estimation of moving obstacles is performed with a Kinect sensor that provides ranging and color detection. The sensor based approach improves the accuracy and robustness of the reactive motion planning approach by providing information about the obstacles and the environment. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles.

In future work, a sensor fusion approach that could improve the heading estimation of the robot and the speed estimation of moving objects will be investigated further for more robust motion planning.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This research was supported by the National Research Foundation of Korea (NRF) (no. 2014-017630 and no. 2014-063396) and was also supported by the Human Resource Training Program for Regional Innovation and Creativity through the Ministry of Education and National Research Foundation of Korea (no. 2014-066733).
References

[1] R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza, Introduction to Autonomous Mobile Robots, The MIT Press, London, UK, 2nd edition, 2011.
[2] J. P. van den Berg and M. H. Overmars, "Roadmap-based motion planning in dynamic environments," IEEE Transactions on Robotics, vol. 21, no. 5, pp. 885–897, 2005.
[3] D. Hsu, R. Kindel, J.-C. Latombe, and S. Rock, "Randomized kinodynamic motion planning with moving obstacles," The International Journal of Robotics Research, vol. 21, no. 3, pp. 233–255, 2002.
[4] R. Gayle, A. Sud, M. C. Lin, and D. Manocha, "Reactive deformation roadmaps: motion planning of multiple robots in dynamic environments," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '07), pp. 3777–3783, San Diego, Calif, USA, November 2007.
[5] S. S. Ge and Y. J. Cui, "Dynamic motion planning for mobile robots using potential field method," Autonomous Robots, vol. 13, no. 3, pp. 207–222, 2002.
[6] K. P. Valavanis, T. Hebert, R. Kolluru, and N. Tsourveloudis, "Mobile robot navigation in 2-D dynamic environments using an electrostatic potential field," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 30, no. 2, pp. 187–196, 2000.
[7] X. Gao, Y. Ou, Y. Fu et al., "A novel local path planning method considering both robot posture and path smoothness," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 172–178, Shenzhen, China, December 2013.
[8] J. Jiao, Z. Cao, P. Zhao, X. Liu, and M. Tan, "Bezier curve based path planning for a mobile manipulator in unknown environments," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 1864–1868, Shenzhen, China, December 2013.
[9] A. Stentz, "Optimal and efficient path planning for partially-known environments," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3310–3317, May 1994.
[10] G. Song, H. Wang, J. Zhang, and T. Meng, "Automatic docking system for recharging home surveillance robots," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 428–435, 2011.
[11] M. Seder and I. Petrovic, "Dynamic window based approach to mobile robot motion control in the presence of moving obstacles," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 1986–1991, Roma, Italy, April 2007.
[12] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision avoidance," IEEE Robotics and Automation Magazine, vol. 4, no. 1, pp. 23–33, 1997.
[13] J. Borenstein and Y. Koren, "Real-time obstacle avoidance for fast mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179–1187, 1989.
[14] J. Borenstein and Y. Koren, "The vector field histogram—fast obstacle avoidance for mobile robots," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 278–288, 1991.
[15] R. Simmons, "The curvature-velocity method for local obstacle avoidance," in Proceedings of the 13th IEEE International Conference on Robotics and Automation, vol. 4, pp. 3375–3382, Minneapolis, Minn, USA, April 1996.
[16] J. Minguez and L. Montano, "Nearness Diagram (ND) navigation: collision avoidance in troublesome scenarios," IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 45–59, 2004.
[17] E. Owen and L. Montano, "Motion planning in dynamic environments using the velocity space," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pp. 997–1002, August 2005.
[18] A. Chakravarthy and D. Ghose, "Obstacle avoidance in a dynamic environment: a collision cone approach," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 28, no. 5, pp. 562–574, 1998.
[19] F. Belkhouche, T. Jin, and C. H. Sung, "Plane collision course detection between moving objects by transformation of coordinates," in Proceedings of the 17th IEEE International Conference on Control Applications, Part of IEEE Multi-Conference on Systems and Control, San Antonio, Tex, USA, 2008.
[20] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[21] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[22] Z. Shiller, F. Large, and S. Sekhavat, "Motion planning in dynamic environments: obstacles moving along arbitrary trajectories," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), pp. 3716–3721, Seoul, Republic of Korea, May 2001.
[23] F. Belkhouche, "Reactive path planning in a dynamic environment," IEEE Transactions on Robotics, vol. 25, no. 4, pp. 902–911, 2009.
[24] B. Dorj, D. Tuvshinjargal, K. T. Chong, D. P. Hong, and D. J. Lee, "Multi-sensor fusion based effective obstacle avoidance and path-following technology," Advanced Science Letters, vol. 20, pp. 1751–1756, 2014.
[25] R. Philippsen, B. Jensen, and R. Siegwart, "Towards real-time sensor based path planning in highly dynamic environments," in Autonomous Navigation in Dynamic Environments, vol. 35 of Tracts in Advanced Robotics, pp. 135–148, Springer, Berlin, Germany, 2007.
[26] K.-T. Song and C. C. Chang, "Reactive navigation in dynamic environment using a multisensor predictor," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 870–880, 1999.
[27] K. Khoshelham and S. O. Elberink, "Accuracy and resolution of Kinect depth data for indoor mapping applications," Sensors, vol. 12, no. 2, pp. 1437–1454, 2012.
[28] J. Han, L. Shao, D. Xu, and J. Shotton, "Enhanced computer vision with Microsoft Kinect sensor: a review," IEEE Transactions on Cybernetics, vol. 43, no. 5, pp. 1318–1334, 2013.
[29] F. Kammergruber, A. Ebner, and W. A. Gunthner, "Navigation in virtual reality using Microsoft Kinect," in Proceedings of the 12th International Conference on Construction Application of Virtual Reality, Taipei, Taiwan, November 2012.
[30] Microsoft, Kinect for Xbox 360, http://www.xbox.com/en-US/xbox-one/accessories/kinect-for-xbox-one.
[31] K. Konolige and P. Mihelich, "Technical description of Kinect calibration," July 2014, http://www.ros.org/wiki/kinect_calibration/technical.
[32] G. Shashank, M. Sharma, and A. Shukla, "Robot navigation using image processing and isolated word recognition," International Journal of Computer Science and Engineering, vol. 4, no. 5, pp. 769–778, 2012.
[33] T. W. Choe, H. S. Son, J. H. Song, and J. H. Park, Intelligent Robot System HBE-RoboCar, HanBack Electronics, Seoul, Republic of Korea, 2012.
[34] K. R. Jadav, M. A. Lokhandwala, and A. P. Gharge, "Vision based moving object detection and tracking," in Proceedings of the National Conference on Recent Trends in Engineering and Technology, Anand, India, 2011.
[35] A. Chandak, K. Gosavi, S. Giri, S. Agrawal, and P. Kulkarni, "Path planning for mobile robot navigation using image processing," International Journal of Scientific & Engineering Research, vol. 4, pp. 1490–1495, 2013.
International Journal of
AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Active and Passive Electronic Components
Control Scienceand Engineering
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
RotatingMachinery
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation httpwwwhindawicom
Journal ofEngineeringVolume 2014
Submit your manuscripts athttpwwwhindawicom
VLSI Design
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Shock and Vibration
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of
6 Journal of Sensors
yg
yr
xr xg
y
x
120579
(a)
III
III IV
y
x
(b)
Figure 6 Robot heading angle computation approach
Each of coordinates 119909119908 119910119908 119911119908of two red (robot) and green
(goal) points is used as the input into the vision system and1198751015840 is computed by
(
119909119904
119910119904
119911119904
) = (
119891 0 0 0
0 119891 0 0
0 0 1 0
)(
119909119908
119910119908
119911119908
1
) (24)
where 119891 is the focal length of the camera 1198751015840 is calculated atthe pixel coordinates divided by 120588
119906and 120588V and the values of
the pixel of the image 119906 and V are the pixel coordinates andthey are obtained from the following equations
(
1199061015840
V1015840
1199081015840
) =(
1
120588119906
0 1199060
01
120588VV0
0 0 1
)(
119891 0 0 0
0 119891 0 0
0 0 1 0
)(
119883119882
119884119882
119885119882
1
)
1198751015840
= (
119906
V) = (
1199061015840
1199081015840
V1015840
1199081015840
) (
1199061015840
V1015840
1199081015840
) = 119862(
119909119908
119910119908
119911119908
1
)
(25)
In the experiment the final goal and robots are recognized bya built-in RGB camera In addition the distance of an objectis measured within mm accuracy using IR camera and thetarget objectrsquos pixel coordinates (119909
119904 119910119904) are estimated by using
a color-based detection approach In this work the distancemeasured between the planning field and the Kinect sensoris 2700mm which becomes the depth camerarsquos detectablerange The horizontal and vertical coordinates of an objectare calculated as follows
(1) horizontal coordinate
120572 = arctan(119909119904
119891)
119909119908= 119911119908times sin120572
(26)
(2) vertical coordinate
120572 = arctan(119910119904
119891)
119910119908= 119911119908times sin120572
(27)
where 119911119908is the distance to the object obtained from
Kinect sensor Now those real-world coordinatesobtained in the above are used in dynamic pathplanning procedures
In general the camera intrinsic and extrinsic parametersof the color and depth cameras are set as default valuesand thus it is necessary to calibrate them for accurate testsThe calibration of the depth and RGB color cameras in theKinect sensor can be applied by using a mathematical modelof depth measurement and creating a depth annotation of achessboard by physically offsetting the chessboard from itsbackground and details of the calibration procedures can bereferred to in [30 31]
As indicated in the previous section for the proposedrelative velocity obstacle based dynamic motion planningthe accurate estimates of the range and orientation of anobject play an important role In this section an efficient newapproach is proposed to estimate the heading information ofthe robot using a color detection approach First the robot iscovered by green and red sections shown in Figure 6
Then using a color detectionmethod [32] the center loca-tion of the robot is calculated and after finding denominate
Journal of Sensors 7
Input Coordinate of Robot obstacle 1 obstacle 2 and goalOutput Speed of robotrsquos right and left wheelsCalculate 119897
119892119903using Kinect
While 119897119892119903gt 0 do
Calculate 119897119892119903and 120601
119892119903
Send robot speedif All119863
119894in CA
Calculate 119897119894119903using Kinect
if All 119897119894119903gt 0 then
There is no collision risk keep send robot speedelseConstruct the virtual planeTest the collision in the virtual planeif there is a collision risk thenCheck sonar sensor valueif sonar value is too small thenChose 120579
119903of quick motion control
Send robot speedelseConstruct the 120579-windowChose the appropriate values for 120579
119903
Send robot speedend if
end ifend if
end while
Algorithm 1 Hybrid reactive dynamic navigation algorithm
heading angle 120579 as shown in (28) new heading informationin each four different phase sections is computed by using thefollowing equations
Δ119909 = 119909119892minus 119909119903
Δ119910 = 119910119892minus 119910119903
120579 = tanminus1 (119910119892minus 119910119903
119909119892minus 119909119903
)
(28)
(1) 119909119892gt 119909119903 119910119892gt 119910119903
120579 = 120579 + 0
(2) 119909119892lt 119909119903 119910119892gt 119910119903
120579 = 314 minus 120579
(3) 119909119892lt 119909119903 119910119892lt 119910119903
120579 = 314 + 120579
(4) 119909119892gt 119909119903 119910119892lt 119910119903
120579 = 628 minus 120579
(29)
Finally the relative velocity obstacle based reactivedynamic navigation algorithm with the capability of collisionavoidance is summarized in Algorithm 1
4 Experimental Results
For the evaluation and verification of the proposed sensorbased reactive dynamic navigation algorithms both simu-lation study and experimental tests are carried out with arealistic experimental setup
41 Experimental Scenario and Setup For experimental teststwo robots are assigned asmoving obstacles and the third oneis used as a master robot that generates control commands toavoid the dynamic obstacles based on the suggested reactivemotion planning algorithms For the moving obstacles twoNXT Mindstorm based vehicles that can either move in arandom direction or follow a designated path are developedThe HBE-RoboCAR equipped with ultrasonic and encodersensors is used as a master robot as shown in Figure 7
TheHBE-RoboCAR [33] has 8-bit AVRATmega128L pro-cessor The robot is equipped with multiembedded processormodules (embedded processor FPGA MCU) It providesdetection of obstacles with ultrasonic and infrared sensormotion control with acceleration sensor and motor encoderHBE-RoboCAR has the ability to communicate with otherdevice either wireless or wired technology such as Bluetoothmodule and ISP UART interfaces respectively In this workHBE-RoboCAR is connected to a computer on the groundcontrol station using Bluetooth wireless communicationFigure 7 shows the hardware specification and sensor systemsfor the robot platform and Figure 8 shows the interface andcontrol architecture for the embedded components of HBE-RoboCAR [33]
8 Journal of Sensors
Power switch1Power LED and reset switch2Interface connector3Wireless network connector4Ultrasonic sensor5PSD sensor6
LED 7Phototransistors8Voltmeter9DC inCharge inBattery in
101112
1
23
4
5 5
55
5 5
55
6
6
7 7
7 7
7 7
77
8
9
101112
Figure 7 Mobile robot hardware and sensor systems (HBE-RoboCAR [33])
Embedded processor
FPGA module
(Altera Xilinx)
AVRboard
Inte
rface
bo
ard
Inte
rface
bo
ard
Inte
rface
bo
ard
Con
nect
or
ATmega128
Acceleration sensor
(ADXL202E)
POWER
ATmega8 Voltage gauge
Motor driver
Sens
or C
ON
Mot
or C
ON
Photo sensor
(GP2Y0A21)
Infrared sensor
(IL-8L EL-8L)
Ultrasonic sensor
(MA40)
Front Back
DC geared motor(2EA)
DC geared encoder motor
(2EA)
Control board(compatibility with 3 kinds)
HBE-RoboCAR board
Base board Sensor board
LED
Motor(+5V +33V)
times 2)(Write (Red times 2)
Figure 8 Block diagram of the HBE-RoboCAR embedded system
For the dynamic obstacle avoidance the relative velocityobstacle based navigation laws require the range and headinginformation from sensors For the range estimation Kinectsensor is utilized If the Kinect sensor detects the final goalusing a color-based detection algorithm [32 34 35] it sendsthe information to the master robot After receiving thetarget point the master robot starts the onboard navigationalgorithm to reach the goal while avoiding dynamic obstaclesWhen the robot navigates in the experimental field thedistance to each moving obstacle is measured by the Kinectsensor and the range information is fed back to the masterrobot via Bluetooth communication as inputs to the reactivemotion planning algorithms The detailed scenario for theexperimental setup is illustrated in Figure 9
42 Simulation Results Figure 10 shows the simulationresults of the reactive motion planning on both the virtualplane and the real plane In this simulation the trajectoriesof two obstacles were described by the blue and black colorlines the trajectory of the master robot was depicted in thered line and the goal was indicated by green dot As canbe clearly seen in the real plane and the virtual plane inFigures 10(b) and 10(a) the master robot avoided the firstobstaclewhichwasmoving into themaster robot and success-fully reached the target goal after avoiding the collision withthe second moving obstacle just before reaching the targetWhile the master robot avoids the obstacles it generates acollision cone by choosing a deviation point on the virtualplane On the virtual plane the radius of the collision cone
Journal of Sensors 9
Bluetoothcommunication
Kinect
PC
Obstacle 1
Obstacle 2
Robot
Destination
Figure 9 Test scenario and architecture for experimental setup
Deviation point
Deviation point
First obstaclersquoscollision coneSecond obstaclersquos
collision cone
0 500 1000 1500 2000 2500 3000minus500
0500
1000150020002500
x coordinate
y coo
rdin
ate
Virtual plane
(a)
Second obstacle
First obstacleRobot
Goal
Trajectory of robot
Coverage area
Initial locations ofrobot and obstacles
minus500 0 500 1000 1500 2000 2500 3000 3500 4000minus1000
01000200030004000
y coo
rdin
ate
Real plane
x coordinate
(b)
Figure 10 Simulation results on virtual plane and real plane
0 10 20 30 40 50 60 7050
100150200
Cycle time (times of program running)
Targ
et an
gle
(deg
) an
gle (
deg)
(d
eg)
0 10 20 30 40 50 60 7050
100150200
Cycle time (times of program running)
Robo
t hea
ding
0 10 20 30 40 50 60 70minus100
0100
Cycle time (times of program running)
Varia
nce a
ngle
Figure 11 Orientation angle information target angle robot head-ing angle and variance angle
is the same as the obstaclersquos one and the distance betweenthe deviation point and the collision cone is dependent onthe radius of the master robotThe ellipses indicate the initiallocations of the robot and the obstacles
In Figure 11 the orientation angle information used forthe robot control was illustrated The upper top plot showedthe angle of the moving robot to the target from the virtualplane the second plot showed the robot heading anglecommanded for the navigation control and the third plotshowed the angle difference between the target and robotheading angle At the final stage of the path planning the
0 10 20 30 40 50 60 70100110120130140
Cycle time (times of program running)
Obs
tacle
1 an
gle
(deg
ree)
1840 1860 1880 1900 1920 1940 1960 1980 2000 2020 2040500
1000
1500
2000
x coordinate (mm)
y coo
rdin
ate (
mm
) Obstacle 1 position
0 10 20 30 40 50 60 700
20
40
60
Cycle time (times of program running)
Obs
tacle
1 sp
eed
Figure 12 Results of firstmoving obstaclersquos orientation angle speedand position in 119909-119910 coordinates
commanded orientation angle and the target angle to the goalpoint become the same Instead of controlling the robot withthe orientation angle the speed of the master robot can beused to avoid the moving obstacles
Figures 12 and 13 show each moving obstaclersquos headingangle linear velocity and trajectory As can be seen in order
10 Journal of Sensors
0 10 20 30 40 50 60 70150
200
250
300
Cycle time (times of program running)
Obs
tacle
2 an
gle
(deg
ree)
1000 1010 1020 1030 1040 1050 1060 10700
500
1000
1500
x coordinate (mm)
y coo
rdin
ate (
mm
) Obstacle 2 position
0 10 20 30 40 50 60 700
20
40
60
Obs
tacle
2 sp
eed
Cycle time (times of program running)
Figure 13 Results of second moving obstaclersquos orientation anglespeed and position in 119909-119910 coordinates
500 1000 1500 2000 2500 30000
2000
4000
x coordinate (mm)
y co
ordi
nate
(mm
)
Robot position
0 10 20 30 40 50 60 700
100
200
Cycle time (times of program running)
Robo
t spe
ed
0 10 20 30 40 50 60 700
50
100
Cycle time (times of program running)
Righ
t whe
el sp
eed
0 10 20 30 40 50 60 700
50
100
Cycle time (times of program running)
Left
whe
el sp
eed
Figure 14 Results of robotrsquos trajectory speed and right and leftwheels speed
to carry out a dynamic path planning experiment the speedand the heading angle were changed during the simulationresulting in uncertain cluttered environments
Figure 14 shows the mobile robotrsquos trajectory from thestart point to the goal point and also the forward velocity andeach wheel speed from encoder As can be seen the trajectoryof the robot has a sharp turn around the location (1500mm
Obstacle 1rsquos startingposition
Obstacle 2rsquos startingposition
Goal x
y
Robotrsquos startingposition
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
Real plane
R (27513368 21269749)
3000
x (mm)
y (m
m)
(b)
Figure 15 (a) Experiment environment (initial stage) (b) Plots ofthe experimented data at initial stage
1000mm) in order to avoid the second moving obstacle It isseen that the right and left wheel speeds are mirrored along atime axis Also we can see relationship of the variance angleand robotrsquos right and left wheels speed
43 Field Test and Results Further verification of the per-formance of the proposed hybrid dynamic path planningapproach for real experiments was carried out with thesame scenario used in the previous simulation part In theexperiment two moving obstacles are used and a masterrobot moves without any collision with the obstacles to thetarget point as shown in Figure 15(a) and the initial locationsof the obstacles and the robot are shown in Figure 15(b) Thered dot is the initial starting position of the master robot at(2750mm 2126mm) the black dot is the initial location ofthe second obstacle at (2050mm 1900mm) and the blue dotis the initial location of the first moving obstacle at (1050mm2000mm) In the virtual plane the collision cone of the firstobstacle is depicted as shown in the top plot of Figure 15(b)
Journal of Sensors 11
Goal x
y
Robotrsquos currentposition
Obstacle 2rsquos current Obstacle 1rsquos currentposition and direction position and direction
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
3000
Real plane
x (mm)
y (m
m)
(b)
Figure 16 (a) Experiment result of collision avoidance with the firstobstacle (b) Plot of the trajectories of the robot and the obstacles inthe virtual and real planes during the collision avoidance with thefirst moving obstacle
and the robot carries out its motion control based on thecollision cone in the virtual plane until it avoids the firstobstacle
Figure 16 showed the collision avoidance performancewith the fist moving obstacle and as can be seen the masterrobot avoided the commanded orientation control into theleft direction without any collisions which is described indetail in the virtual plane in the top plot of Figure 16(b) Thetrajectory and movement of the robot and the obstacles weredepicted in the real plane in Figure 16(b) for the detailedanalysis
In a similar way Figure 17(a) illustrated the collisionavoidance with the second moving obstacle and the detailedpath and trajectory are described in Figure 17(b) The topplot of Figure 17(b) shows the motion planning in thevirtual plane where the initial location of the second movingobstacle is recognized at the center of (2050mm 1900mm)
Goal x
y
Robotrsquos currentposition
Obstacle 2rsquos currentObstacle 1rsquos currentposition and direction
position and direction
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
3000
Real plane
x (mm)
y (m
m)
(b)
Figure 17 (a) Experiment result of collision avoidance with thesecond obstacle (b) Plot of the trajectories of the robot and theobstacles in the virtual and real planes during the collision avoidancewith the second moving obstacle
Based on this initial location the second collision cone isconstructed with a big green ellipse that allows the virtualrobot to navigate without any collision with the secondobstacle The trajectory of the robot motion planning in thereal plane is depicted in the bottom plot of Figure 17(b)
Now at the final phase after avoiding all the obstacles themaster robot reached the target goal with a motion controlas shown in Figure 18(a) The overall trajectories of the robotfrom the starting point to the final goal target in the virtualplane were depicted in the top plot of Figure 18(a) and thetrajectories of the robot in the real plane were depicted inthe bottom plot of Figure 18(b) Note that the trajectories ofthe robot location differ from each other in the virtual planeand the real plane However the orientation change gives thesame direction change of the robot in both the virtual andthe real plane In this plot the green dot is the final goalpoint and the robot trajectory is depicted with the red dotted
12 Journal of Sensors
Goalx
y
Obstacle 2rsquos current
Obstacle 1rsquos current
position and direction
position and direction
Robotrsquos currentposition
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000minus1000
01000200030004000
Real plane
x (mm)
y (m
m)
(b)
Figure 18 (a) Experiment result after the collision avoidance withall obstacles (b) Plot of the trajectories of the robot and the obstaclesin the virtual and real planes during the collision avoidance with allthe moving obstacles
circles The smooth trajectory was generated by using thelinear navigation laws as explained
From this experiment it is easily seen that the pro-posed hybrid reactive motion planning approach designedby the integration of the virtual plane approach and asensor based planning is very effective to dynamic collisionavoidance problems in cluttered uncertain environmentsThe effectiveness of the hybrid reactive motion planningmethod makes its usage very attractive to various dynamicnavigation applications of not only mobile robots but alsoother autonomous vehicles such as flying vehicles and self-driving vehicles
5 Conclusion
In this paper we proposed a hybrid reactive motion plan-ning method for an autonomous mobile robot in uncertain
dynamic environmentsThe hybrid reactive motion planningmethod combined a reactive path planning method whichcould transform dynamic moving obstacles into stationaryones with a sensor based approach which can provide relativeinformation of moving obstacles and environments The keyfeatures of the proposed method are twofold the first keyfeature is the simplification of complex dynamic motionplanning problems into stationary ones using the virtualplane approach while the second feature is the robustness ofa sensor basedmotion planning in which the pose estimationof moving obstacles is made by using a Kinect sensor whichprovides a ranging and color detection The sensor basedapproach improves the accuracy and robustness of the reac-tive motion planning approach by providing the informationof the obstacles and environments The performance ofthe proposed method was demonstrated through not onlysimulation studies but also field experiments using multiplemoving obstacles
In future work, a sensor fusion approach that improves the heading estimation of the robot and the speed estimation of moving objects will be investigated for more robust motion planning.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research was supported by the National Research Foundation of Korea (NRF) (no. 2014-017630 and no. 2014-063396) and was also supported by the Human Resource Training Program for Regional Innovation and Creativity through the Ministry of Education and National Research Foundation of Korea (no. 2014-066733).
References
[1] R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza, Introduction to Autonomous Mobile Robots, The MIT Press, London, UK, 2nd edition, 2011.
[2] J. P. van den Berg and M. H. Overmars, "Roadmap-based motion planning in dynamic environments," IEEE Transactions on Robotics, vol. 21, no. 5, pp. 885–897, 2005.
[3] D. Hsu, R. Kindel, J.-C. Latombe, and S. Rock, "Randomized kinodynamic motion planning with moving obstacles," The International Journal of Robotics Research, vol. 21, no. 3, pp. 233–255, 2002.
[4] R. Gayle, A. Sud, M. C. Lin, and D. Manocha, "Reactive deformation roadmaps: motion planning of multiple robots in dynamic environments," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '07), pp. 3777–3783, San Diego, Calif, USA, November 2007.
[5] S. S. Ge and Y. J. Cui, "Dynamic motion planning for mobile robots using potential field method," Autonomous Robots, vol. 13, no. 3, pp. 207–222, 2002.
[6] K. P. Valavanis, T. Hebert, R. Kolluru, and N. Tsourveloudis, "Mobile robot navigation in 2-D dynamic environments using an electrostatic potential field," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 30, no. 2, pp. 187–196, 2000.
[7] X. Gao, Y. Ou, Y. Fu et al., "A novel local path planning method considering both robot posture and path smoothness," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 172–178, Shenzhen, China, December 2013.
[8] J. Jiao, Z. Cao, P. Zhao, X. Liu, and M. Tan, "Bezier curve based path planning for a mobile manipulator in unknown environments," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 1864–1868, Shenzhen, China, December 2013.
[9] A. Stentz, "Optimal and efficient path planning for partially-known environments," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3310–3317, May 1994.
[10] G. Song, H. Wang, J. Zhang, and T. Meng, "Automatic docking system for recharging home surveillance robots," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 428–435, 2011.
[11] M. Seder and I. Petrovic, "Dynamic window based approach to mobile robot motion control in the presence of moving obstacles," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 1986–1991, Roma, Italy, April 2007.
[12] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision avoidance," IEEE Robotics and Automation Magazine, vol. 4, no. 1, pp. 23–33, 1997.
[13] J. Borenstein and Y. Koren, "Real-time obstacle avoidance for fast mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179–1187, 1989.
[14] J. Borenstein and Y. Koren, "The vector field histogram–fast obstacle avoidance for mobile robots," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 278–288, 1991.
[15] R. Simmons, "The curvature-velocity method for local obstacle avoidance," in Proceedings of the 13th IEEE International Conference on Robotics and Automation, vol. 4, pp. 3375–3382, Minneapolis, Minn, USA, April 1996.
[16] J. Minguez and L. Montano, "Nearness Diagram (ND) navigation: collision avoidance in troublesome scenarios," IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 45–59, 2004.
[17] E. Owen and L. Montano, "Motion planning in dynamic environments using the velocity space," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pp. 997–1002, August 2005.
[18] A. Chakravarthy and D. Ghose, "Obstacle avoidance in a dynamic environment: a collision cone approach," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 28, no. 5, pp. 562–574, 1998.
[19] F. Belkhouche, T. Jin, and C. H. Sung, "Plane collision course detection between moving objects by transformation of coordinates," in Proceedings of the 17th IEEE International Conference on Control Applications, part of the IEEE Multi-Conference on Systems and Control, San Antonio, Tex, USA, 2008.
[20] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[21] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[22] Z. Shiller, F. Large, and S. Sekhavat, "Motion planning in dynamic environments: obstacles moving along arbitrary trajectories," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), pp. 3716–3721, Seoul, Republic of Korea, May 2001.
[23] F. Belkhouche, "Reactive path planning in a dynamic environment," IEEE Transactions on Robotics, vol. 25, no. 4, pp. 902–911, 2009.
[24] B. Dorj, D. Tuvshinjargal, K. T. Chong, D. P. Hong, and D. J. Lee, "Multi-sensor fusion based effective obstacle avoidance and path-following technology," Advanced Science Letters, vol. 20, pp. 1751–1756, 2014.
[25] R. Philippsen, B. Jensen, and R. Siegwart, "Towards real-time sensor based path planning in highly dynamic environments," in Autonomous Navigation in Dynamic Environments, vol. 35 of Springer Tracts in Advanced Robotics, pp. 135–148, Springer, Berlin, Germany, 2007.
[26] K.-T. Song and C. C. Chang, "Reactive navigation in dynamic environment using a multisensor predictor," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 870–880, 1999.
[27] K. Khoshelham and S. O. Elberink, "Accuracy and resolution of Kinect depth data for indoor mapping applications," Sensors, vol. 12, no. 2, pp. 1437–1454, 2012.
[28] J. Han, L. Shao, D. Xu, and J. Shotton, "Enhanced computer vision with Microsoft Kinect sensor: a review," IEEE Transactions on Cybernetics, vol. 43, no. 5, pp. 1318–1334, 2013.
[29] F. Kammergruber, A. Ebner, and W. A. Gunthner, "Navigation in virtual reality using Microsoft Kinect," in Proceedings of the 12th International Conference on Construction Application of Virtual Reality, Taipei, Taiwan, November 2012.
[30] Microsoft, "Kinect for Xbox 360," http://www.xbox.com/en-US/xbox-one/accessories/kinect-for-xbox-one.
[31] K. Konolige and P. Mihelich, "Technical description of Kinect calibration," July 2014, http://www.ros.org/wiki/kinect_calibration/technical.
[32] G. Shashank, M. Sharma, and A. Shukla, "Robot navigation using image processing and isolated word recognition," International Journal of Computer Science and Engineering, vol. 4, no. 5, pp. 769–778, 2012.
[33] T. W. Choe, H. S. Son, J. H. Song, and J. H. Park, Intelligent Robot System HBE-RoboCar, HanBack Electronics, Seoul, Republic of Korea, 2012.
[34] K. R. Jadav, M. A. Lokhandwala, and A. P. Gharge, "Vision based moving object detection and tracking," in Proceedings of the National Conference on Recent Trends in Engineering and Technology, Anand, India, 2011.
[35] A. Chandak, K. Gosavi, S. Giri, S. Agrawal, and P. Kulkarni, "Path planning for mobile robot navigation using image processing," International Journal of Scientific & Engineering Research, vol. 4, pp. 1490–1495, 2013.
International Journal of
AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Active and Passive Electronic Components
Control Scienceand Engineering
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
RotatingMachinery
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation httpwwwhindawicom
Journal ofEngineeringVolume 2014
Submit your manuscripts athttpwwwhindawicom
VLSI Design
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Shock and Vibration
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of
8 Journal of Sensors
Power switch1Power LED and reset switch2Interface connector3Wireless network connector4Ultrasonic sensor5PSD sensor6
LED 7Phototransistors8Voltmeter9DC inCharge inBattery in
101112
1
23
4
5 5
55
5 5
55
6
6
7 7
7 7
7 7
77
8
9
101112
Figure 7 Mobile robot hardware and sensor systems (HBE-RoboCAR [33])
Embedded processor
FPGA module
(Altera Xilinx)
AVRboard
Inte
rface
bo
ard
Inte
rface
bo
ard
Inte
rface
bo
ard
Con
nect
or
ATmega128
Acceleration sensor
(ADXL202E)
POWER
ATmega8 Voltage gauge
Motor driver
Sens
or C
ON
Mot
or C
ON
Photo sensor
(GP2Y0A21)
Infrared sensor
(IL-8L EL-8L)
Ultrasonic sensor
(MA40)
Front Back
DC geared motor(2EA)
DC geared encoder motor
(2EA)
Control board(compatibility with 3 kinds)
HBE-RoboCAR board
Base board Sensor board
LED
Motor(+5V +33V)
times 2)(Write (Red times 2)
Figure 8 Block diagram of the HBE-RoboCAR embedded system
For the dynamic obstacle avoidance the relative velocityobstacle based navigation laws require the range and headinginformation from sensors For the range estimation Kinectsensor is utilized If the Kinect sensor detects the final goalusing a color-based detection algorithm [32 34 35] it sendsthe information to the master robot After receiving thetarget point the master robot starts the onboard navigationalgorithm to reach the goal while avoiding dynamic obstaclesWhen the robot navigates in the experimental field thedistance to each moving obstacle is measured by the Kinectsensor and the range information is fed back to the masterrobot via Bluetooth communication as inputs to the reactivemotion planning algorithms The detailed scenario for theexperimental setup is illustrated in Figure 9
42 Simulation Results Figure 10 shows the simulationresults of the reactive motion planning on both the virtualplane and the real plane In this simulation the trajectoriesof two obstacles were described by the blue and black colorlines the trajectory of the master robot was depicted in thered line and the goal was indicated by green dot As canbe clearly seen in the real plane and the virtual plane inFigures 10(b) and 10(a) the master robot avoided the firstobstaclewhichwasmoving into themaster robot and success-fully reached the target goal after avoiding the collision withthe second moving obstacle just before reaching the targetWhile the master robot avoids the obstacles it generates acollision cone by choosing a deviation point on the virtualplane On the virtual plane the radius of the collision cone
Journal of Sensors 9
Bluetoothcommunication
Kinect
PC
Obstacle 1
Obstacle 2
Robot
Destination
Figure 9 Test scenario and architecture for experimental setup
Deviation point
Deviation point
First obstaclersquoscollision coneSecond obstaclersquos
collision cone
0 500 1000 1500 2000 2500 3000minus500
0500
1000150020002500
x coordinate
y coo
rdin
ate
Virtual plane
(a)
Second obstacle
First obstacleRobot
Goal
Trajectory of robot
Coverage area
Initial locations ofrobot and obstacles
minus500 0 500 1000 1500 2000 2500 3000 3500 4000minus1000
01000200030004000
y coo
rdin
ate
Real plane
x coordinate
(b)
Figure 10 Simulation results on virtual plane and real plane
0 10 20 30 40 50 60 7050
100150200
Cycle time (times of program running)
Targ
et an
gle
(deg
) an
gle (
deg)
(d
eg)
0 10 20 30 40 50 60 7050
100150200
Cycle time (times of program running)
Robo
t hea
ding
0 10 20 30 40 50 60 70minus100
0100
Cycle time (times of program running)
Varia
nce a
ngle
Figure 11 Orientation angle information target angle robot head-ing angle and variance angle
is the same as the obstaclersquos one and the distance betweenthe deviation point and the collision cone is dependent onthe radius of the master robotThe ellipses indicate the initiallocations of the robot and the obstacles
In Figure 11 the orientation angle information used forthe robot control was illustrated The upper top plot showedthe angle of the moving robot to the target from the virtualplane the second plot showed the robot heading anglecommanded for the navigation control and the third plotshowed the angle difference between the target and robotheading angle At the final stage of the path planning the
0 10 20 30 40 50 60 70100110120130140
Cycle time (times of program running)
Obs
tacle
1 an
gle
(deg
ree)
1840 1860 1880 1900 1920 1940 1960 1980 2000 2020 2040500
1000
1500
2000
x coordinate (mm)
y coo
rdin
ate (
mm
) Obstacle 1 position
0 10 20 30 40 50 60 700
20
40
60
Cycle time (times of program running)
Obs
tacle
1 sp
eed
Figure 12 Results of firstmoving obstaclersquos orientation angle speedand position in 119909-119910 coordinates
commanded orientation angle and the target angle to the goalpoint become the same Instead of controlling the robot withthe orientation angle the speed of the master robot can beused to avoid the moving obstacles
Figures 12 and 13 show each moving obstaclersquos headingangle linear velocity and trajectory As can be seen in order
10 Journal of Sensors
0 10 20 30 40 50 60 70150
200
250
300
Cycle time (times of program running)
Obs
tacle
2 an
gle
(deg
ree)
1000 1010 1020 1030 1040 1050 1060 10700
500
1000
1500
x coordinate (mm)
y coo
rdin
ate (
mm
) Obstacle 2 position
0 10 20 30 40 50 60 700
20
40
60
Obs
tacle
2 sp
eed
Cycle time (times of program running)
Figure 13 Results of second moving obstaclersquos orientation anglespeed and position in 119909-119910 coordinates
500 1000 1500 2000 2500 30000
2000
4000
x coordinate (mm)
y co
ordi
nate
(mm
)
Robot position
0 10 20 30 40 50 60 700
100
200
Cycle time (times of program running)
Robo
t spe
ed
0 10 20 30 40 50 60 700
50
100
Cycle time (times of program running)
Righ
t whe
el sp
eed
0 10 20 30 40 50 60 700
50
100
Cycle time (times of program running)
Left
whe
el sp
eed
Figure 14 Results of robotrsquos trajectory speed and right and leftwheels speed
to carry out a dynamic path planning experiment the speedand the heading angle were changed during the simulationresulting in uncertain cluttered environments
Figure 14 shows the mobile robotrsquos trajectory from thestart point to the goal point and also the forward velocity andeach wheel speed from encoder As can be seen the trajectoryof the robot has a sharp turn around the location (1500mm
Obstacle 1rsquos startingposition
Obstacle 2rsquos startingposition
Goal x
y
Robotrsquos startingposition
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
Real plane
R (27513368 21269749)
3000
x (mm)
y (m
m)
(b)
Figure 15 (a) Experiment environment (initial stage) (b) Plots ofthe experimented data at initial stage
1000mm) in order to avoid the second moving obstacle It isseen that the right and left wheel speeds are mirrored along atime axis Also we can see relationship of the variance angleand robotrsquos right and left wheels speed
43 Field Test and Results Further verification of the per-formance of the proposed hybrid dynamic path planningapproach for real experiments was carried out with thesame scenario used in the previous simulation part In theexperiment two moving obstacles are used and a masterrobot moves without any collision with the obstacles to thetarget point as shown in Figure 15(a) and the initial locationsof the obstacles and the robot are shown in Figure 15(b) Thered dot is the initial starting position of the master robot at(2750mm 2126mm) the black dot is the initial location ofthe second obstacle at (2050mm 1900mm) and the blue dotis the initial location of the first moving obstacle at (1050mm2000mm) In the virtual plane the collision cone of the firstobstacle is depicted as shown in the top plot of Figure 15(b)
Journal of Sensors 11
Goal x
y
Robotrsquos currentposition
Obstacle 2rsquos current Obstacle 1rsquos currentposition and direction position and direction
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
3000
Real plane
x (mm)
y (m
m)
(b)
Figure 16 (a) Experiment result of collision avoidance with the firstobstacle (b) Plot of the trajectories of the robot and the obstacles inthe virtual and real planes during the collision avoidance with thefirst moving obstacle
and the robot carries out its motion control based on thecollision cone in the virtual plane until it avoids the firstobstacle
Figure 16 showed the collision avoidance performancewith the fist moving obstacle and as can be seen the masterrobot avoided the commanded orientation control into theleft direction without any collisions which is described indetail in the virtual plane in the top plot of Figure 16(b) Thetrajectory and movement of the robot and the obstacles weredepicted in the real plane in Figure 16(b) for the detailedanalysis
In a similar way Figure 17(a) illustrated the collisionavoidance with the second moving obstacle and the detailedpath and trajectory are described in Figure 17(b) The topplot of Figure 17(b) shows the motion planning in thevirtual plane where the initial location of the second movingobstacle is recognized at the center of (2050mm 1900mm)
Goal x
y
Robotrsquos currentposition
Obstacle 2rsquos currentObstacle 1rsquos currentposition and direction
position and direction
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
3000
Real plane
x (mm)
y (m
m)
(b)
Figure 17 (a) Experiment result of collision avoidance with thesecond obstacle (b) Plot of the trajectories of the robot and theobstacles in the virtual and real planes during the collision avoidancewith the second moving obstacle
Based on this initial location the second collision cone isconstructed with a big green ellipse that allows the virtualrobot to navigate without any collision with the secondobstacle The trajectory of the robot motion planning in thereal plane is depicted in the bottom plot of Figure 17(b)
Now at the final phase after avoiding all the obstacles themaster robot reached the target goal with a motion controlas shown in Figure 18(a) The overall trajectories of the robotfrom the starting point to the final goal target in the virtualplane were depicted in the top plot of Figure 18(a) and thetrajectories of the robot in the real plane were depicted inthe bottom plot of Figure 18(b) Note that the trajectories ofthe robot location differ from each other in the virtual planeand the real plane However the orientation change gives thesame direction change of the robot in both the virtual andthe real plane In this plot the green dot is the final goalpoint and the robot trajectory is depicted with the red dotted
12 Journal of Sensors
Goalx
y
Obstacle 2rsquos current
Obstacle 1rsquos current
position and direction
position and direction
Robotrsquos currentposition
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000minus1000
01000200030004000
Real plane
x (mm)
y (m
m)
(b)
Figure 18 (a) Experiment result after the collision avoidance withall obstacles (b) Plot of the trajectories of the robot and the obstaclesin the virtual and real planes during the collision avoidance with allthe moving obstacles
circles The smooth trajectory was generated by using thelinear navigation laws as explained
From this experiment it is easily seen that the pro-posed hybrid reactive motion planning approach designedby the integration of the virtual plane approach and asensor based planning is very effective to dynamic collisionavoidance problems in cluttered uncertain environmentsThe effectiveness of the hybrid reactive motion planningmethod makes its usage very attractive to various dynamicnavigation applications of not only mobile robots but alsoother autonomous vehicles such as flying vehicles and self-driving vehicles
5 Conclusion
In this paper we proposed a hybrid reactive motion plan-ning method for an autonomous mobile robot in uncertain
dynamic environmentsThe hybrid reactive motion planningmethod combined a reactive path planning method whichcould transform dynamic moving obstacles into stationaryones with a sensor based approach which can provide relativeinformation of moving obstacles and environments The keyfeatures of the proposed method are twofold the first keyfeature is the simplification of complex dynamic motionplanning problems into stationary ones using the virtualplane approach while the second feature is the robustness ofa sensor basedmotion planning in which the pose estimationof moving obstacles is made by using a Kinect sensor whichprovides a ranging and color detection The sensor basedapproach improves the accuracy and robustness of the reac-tive motion planning approach by providing the informationof the obstacles and environments The performance ofthe proposed method was demonstrated through not onlysimulation studies but also field experiments using multiplemoving obstacles
In the further work a sensor fusion approach which couldimprove the heading estimation of a robot and the speedestimation of moving objects will be investigated more formore robust motion planning
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was supported by the National Research Foun-dation ofKorea (NRF) (no 2014-017630 andno 2014-063396)and was also supported by the Human Resource TrainingProgram for Regional Innovation and Creativity through theMinistry of Education and National Research Foundation ofKorea (no 2014-066733)
References
[1] R Siegwart I R Nourbakhsh and D Scaramuzza Introductionto AutonomousMobile RobotsTheMITPress LondonUK 2ndedition 2011
[2] J P van den Berg and M H Overmars ldquoRoadmap-basedmotion planning in dynamic environmentsrdquo IEEE Transactionson Robotics vol 21 no 5 pp 885ndash897 2005
[3] D Hsu R Kindel J-C Latombe and S Rock ldquoRandomizedkinodynamic motion planning with moving obstaclesrdquo TheInternational Journal of Robotics Research vol 21 no 3 pp 233ndash255 2002
[4] R Gayle A Sud M C Lin and D Manocha ldquoReactivedeformation roadmaps motion planning of multiple robots indynamic environmentsrdquo in IEEERSJ International Conferenceon Intelligent Robots and Systems (IROS rsquo07) pp 3777ndash3783 SanDiego Calif USA November 2007
[5] S S Ge and Y J Cui ldquoDynamic motion planning for mobilerobots using potential field methodrdquo Autonomous Robots vol13 no 3 pp 207ndash222 2002
[6] K P Valavanis T Hebert R Kolluru and N TsourveloudisldquoMobile robot navigation in 2-D dynamic environments using
Journal of Sensors 13
an electrostatic potential fieldrdquo IEEE Transactions on SystemsMan and Cybernetics Part A Systems and Humans vol 30 no2 pp 187ndash196 2000
[7] X Gao Y Ou Y Fu et al ldquoA novel local path planningmethod considering both robot posture and path smoothnessrdquoin Proceeding of the IEEE International Conference on Roboticsand Biomimetics (ROBIO 13) pp 172ndash178 Shenzhen ChinaDecember 2013
[8] J Jiao Z Cao P Zhao X Liu and M Tan ldquoBezier curvebased path planning for a mobile manipulator in unknownenvironmentsrdquo in Proceedings of the IEEE International Confer-ence on Robotics and Biomimetics (ROBIO rsquo13) pp 1864ndash1868Shenzhen China December 2013
[9] A Stentz ldquoOptimal and efficient path planning for partially-known environmentsrdquo in Proceedings of the IEEE InternationalConference on Robotics and Automation vol 4 pp 3310ndash3317May 1994
[10] G Song H Wang J Zhang and T Meng ldquoAutomatic dockingsystem for recharging home surveillance robotsrdquo IEEE Transac-tions on Consumer Electronics vol 57 no 2 pp 428ndash435 2011
[11] M Seder and I Petrovic ldquoDynamic window based approachto mobile robot motion control in the presence of movingobstaclesrdquo in Proceedings of the IEEE International Conferenceon Robotics and Automation (ICRA rsquo07) pp 1986ndash1991 RomaItaly April 2007
[12] D Fox W Burgard and S Thrun ldquoThe dynamic windowapproach to collision avoidancerdquo IEEERobotics andAutomationMagazine vol 4 no 1 pp 23ndash33 1997
[13] J Borenstein and Y Koren ldquoReal-time obstacle avoidance forfast mobile robotsrdquo IEEE Transactions on Systems Man andCybernetics vol 19 no 5 pp 1179ndash1187 1989
[14] J Borenstein and Y Koren ldquoThe vector field histogrammdashfastobstacle avoidance for mobile robotsrdquo IEEE Transactions onRobotics and Automation vol 7 no 3 pp 278ndash288 1991
[15] R Simmons ldquoThe curvature-velocity method for local obsta-cle avoidancerdquo in Proceedings of the 13th IEEE InternationalConference on Robotics and Automation vol 4 pp 3375ndash3382Minneapolis Minn USA April 1996
[16] J Minguez and L Montano ldquoNearness Diagram (ND) nav-igation collision avoidance in troublesome scenariosrdquo IEEETransactions on Robotics and Automation vol 20 no 1 pp 45ndash59 2004
[17] E Owen and L Montano ldquoMotion planning in dynamicenvironments using the velocity spacerdquo in Proceedings of theIEEE IRSRSJ International Conference on Intelligent Robots andSystems (IROS 05) pp 997ndash1002 August 2005
[18] A Chakravarthy and D Ghose ldquoObstacle avoidance in adynamic environment a collision cone approachrdquo IEEE Trans-actions on Systems Man and Cybernetics Part A Systems andHumans vol 28 no 5 pp 562ndash574 1998
[19] F Belkhouche T Jin and C H Sung ldquoPlane collision coursedetection betweenmoving objects by transformation of coordi-natesrdquo in Proceedings of the 17th IEEE International Conferenceon Control Applications Part of IEEE Multi-Conference onSystems and Control San Antonio Tex USA 2008
[20] P Fiorini and Z Shiller ldquoMotion planning in dynamic environ-ments using velocity obstaclesrdquo International Journal of RoboticsResearch vol 17 no 7 pp 760ndash772 1998
[21] P Fiorini and Z Shiller ldquoMotion planning in dynamic envi-ronments using velocity obstaclesrdquoThe International Journal ofRobotics Research vol 17 no 7 pp 760ndash772 1998
[22] Z Shiller F Large and S Sekhavat ldquoMotion planning indynamic environments obstacles moving along arbitrary tra-jectoriesrdquo in Proceedings of the IEEE International Conferenceon Robotics and Automation (ICRA rsquo01) pp 3716ndash3721 SeoulRepublic of Korea May 2001
[23] F Belkhouche ldquoReactive path planning in a dynamic environ-mentrdquo IEEE Transactions on Robotics vol 25 no 4 pp 902ndash9112009
[24] B Dorj D Tuvshinjargal K T Chong D P Hong and DJ Lee ldquoMulti-sensor fusion based effective obstacle avoidanceand path-following technologyrdquo Advanced Science Letters vol20 pp 1751ndash1756 2014
[25] R Philippsen B Jensen and R Siegwart ldquoTowards real-timesensor based path planning in highly dynamic environmentsrdquoin Autonomous Navigation in Dynamic Environments vol 35of Tracts in Advanced Robotics pp 135ndash148 Springer BerlinGermany 2007
[26] K-T Song and C C Chang ldquoReactive navigation in dynamicenvironment using a multisensor predictorrdquo IEEE Transactionson Systems Man and Cybernetics Part B Cybernetics vol 29no 6 pp 870ndash880 1999
[27] K Khoshelham and S O Elberink ldquoAccuracy and resolutionof kinect depth data for indoor mapping applicationsrdquo Sensorsvol 12 no 2 pp 1437ndash1454 2012
[28] J Han L Shao D Xu and J Shotton ldquoEnhanced computervision with Microsoft Kinect sensor a reviewrdquo IEEE Transac-tions on Cybernetics vol 43 no 5 pp 1318ndash1334 2013
[29] F Kammergruber A Ebner and W A Gunthner ldquoNavigationin virtual reality using microsoft kinectrdquo in Proceedings of the12th International Conference on Construction Application ofVirtual Reality Taipei Taiwan November 2012
[30] Microsoft Kinect for X-BXO 360 httpwwwxboxcomen-USxbox-oneaccessorieskinect-for-xbox-one
[31] K Kronolige and P Mihelich ldquoTechnical Description of KinectCalibrationrdquo July 2014 httpwwwrosorgwikikinect calibra-tiontechnical
[32] G Shashank M Sharma and A Shukla ldquoRobot navigationusing image processing and isolated word recognitionrdquo Interna-tional Journal of Computer Science and Engineering vol 4 no5 pp 769ndash778 2012
[33] TW Choe H S Son J H Song and J H Park Intelligent RobotSystem HBE-RoboCar HanBack Electronics Seoul Republic ofKorea 2012
[34] K R Jadavm M A Lokhandwala and A P Gharge ldquoVisionbased moving object detection and trackingrdquo in Proceedings ofthe National Conference on Recent Trends in Engineering andTechnology Anand India 2011
[35] A Chandak K Gosavi S Giri S Agrawal and P Kulka-rni ldquoPath planning for mobile robot navigation using imageprocessingrdquo International Journal of Scientific amp EngineeringResearch vol 4 pp 1490ndash1495 2013
[Figure 9: Test scenario and architecture for the experimental setup. Labeled elements: PC, Kinect, Bluetooth communication, robot, obstacle 1, obstacle 2, and destination.]
[Figure 10: Simulation results on the virtual plane and the real plane. (a) Virtual plane, showing the deviation points and the collision cones of the first and second obstacles (x-y coordinates). (b) Real plane, showing the robot trajectory toward the goal, the coverage area, and the initial locations of the robot and the obstacles.]
[Figure 11: Orientation angle information versus cycle time (times of program running): target angle (deg), robot heading angle (deg), and variance angle (deg).]
is the same as the obstacle's, and the distance between the deviation point and the collision cone depends on the radius of the master robot. The ellipses indicate the initial locations of the robot and the obstacles.
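To make the collision cone test concrete, the sketch below checks whether a relative velocity falls inside the cone subtended by an obstacle disc, following the collision cone concept of [18]. This is a minimal illustration, not the authors' implementation; the function name, the 2-D tuple interfaces, and the combined-radius parameter are our own choices.

```python
import math

def in_collision_cone(p_rob, p_obs, v_rel, r_combined):
    """Return True if the relative velocity v_rel points inside the
    collision cone subtended, at the robot position p_rob, by an
    obstacle disc of radius r_combined (obstacle radius plus robot
    radius) centred at p_obs.  Positions and velocities are 2-D tuples."""
    dx, dy = p_obs[0] - p_rob[0], p_obs[1] - p_rob[1]
    dist = math.hypot(dx, dy)
    if dist <= r_combined:
        return True  # already overlapping the obstacle disc
    # Cone half-angle: headings closer than this to the line of sight
    # intersect the obstacle disc.
    half_angle = math.asin(r_combined / dist)
    los_angle = math.atan2(dy, dx)              # line-of-sight angle
    vel_angle = math.atan2(v_rel[1], v_rel[0])  # relative-velocity angle
    # Signed angular difference wrapped to [-pi, pi]
    diff = (vel_angle - los_angle + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) < half_angle
```

A deviation point then corresponds to the nearest heading just outside this cone, offset by the robot radius as described above.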
In Figure 11, the orientation angle information used for the robot control is illustrated. The top plot shows the angle from the moving robot to the target in the virtual plane, the second plot shows the robot heading angle commanded for the navigation control, and the third plot shows the angle difference between the target angle and the robot heading angle. At the final stage of the path planning, the commanded orientation angle and the target angle to the goal point become the same. Instead of controlling the robot with the orientation angle, the speed of the master robot can also be used to avoid the moving obstacles.

[Figure 12: Results of the first moving obstacle's orientation angle (deg), position in x-y coordinates (mm), and speed versus cycle time.]
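The convergence of the commanded angle to the target angle is the expected behavior of a linear navigation law (cf. [23]), in which the commanded heading is a linear function of the line-of-sight angle to the goal. A minimal sketch, with the navigation gain N and deviation constant c as illustrative parameters:

```python
import math

def heading_command(robot_xy, goal_xy, N=1.0, c=0.0):
    """Linear navigation law: command the heading as a linear function
    of the robot-to-goal line-of-sight angle sigma.  With N = 1 and
    c = 0 the commanded heading equals the target angle, matching the
    final-stage behavior seen in Figure 11."""
    sigma = math.atan2(goal_xy[1] - robot_xy[1], goal_xy[0] - robot_xy[0])
    return N * sigma + c

def variance_angle(theta_cmd, theta_robot):
    """Difference between commanded heading and current robot heading,
    wrapped to [-pi, pi] (the third plot of Figure 11)."""
    return (theta_cmd - theta_robot + math.pi) % (2 * math.pi) - math.pi
```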
Figures 12 and 13 show each moving obstacle's heading angle, linear velocity, and trajectory. As can be seen, in order to carry out a dynamic path planning experiment, the speed and the heading angle of the obstacles were changed during the simulation, resulting in an uncertain, cluttered environment.

[Figure 13: Results of the second moving obstacle's orientation angle (deg), position in x-y coordinates (mm), and speed versus cycle time.]

[Figure 14: Results of the robot's position in x-y coordinates (mm), speed, and right and left wheel speeds versus cycle time.]

Figure 14 shows the mobile robot's trajectory from the start point to the goal point, together with its forward velocity and each wheel speed obtained from the encoders. As can be seen, the trajectory of the robot makes a sharp turn around the location (1500 mm, 1000 mm) in order to avoid the second moving obstacle. It is also seen that the right and left wheel speeds are mirrored about each other along the time axis, and the relationship between the variance angle and the robot's right and left wheel speeds is clearly visible.
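The mirrored wheel speeds follow from differential-drive kinematics: a turn-rate command proportional to the variance angle raises one wheel speed and lowers the other symmetrically about the forward speed. A minimal sketch under assumed values (the gain k_turn and the wheel track are illustrative, not the HBE-RoboCar's actual parameters):

```python
def wheel_speeds(v, var_angle, k_turn=0.5, track=0.2):
    """Convert a forward speed v (m/s) and the variance angle (rad)
    into left/right wheel speeds of a differential-drive robot.
    The steering rate proportional to the variance angle makes the two
    wheel speeds mirror each other about v, as observed in Figure 14."""
    omega = k_turn * var_angle            # turn-rate command (rad/s)
    v_right = v + omega * track / 2.0     # outer wheel speeds up
    v_left = v - omega * track / 2.0      # inner wheel slows down
    return v_left, v_right
```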
4.3. Field Test and Results. Further verification of the performance of the proposed hybrid dynamic path planning approach was carried out in real experiments with the same scenario used in the preceding simulation study. In the experiment, two moving obstacles are used, and the master robot moves to the target point without any collision with the obstacles, as shown in Figure 15(a); the initial locations of the obstacles and the robot are shown in Figure 15(b). The red dot is the initial starting position of the master robot at (2750 mm, 2126 mm), the black dot is the initial location of the second obstacle at (2050 mm, 1900 mm), and the blue dot is the initial location of the first moving obstacle at (1050 mm, 2000 mm). In the virtual plane, the collision cone of the first obstacle is depicted in the top plot of Figure 15(b).

[Figure 15: (a) Experiment environment at the initial stage, showing the starting positions of the robot and both obstacles and the goal. (b) Plots of the experimental data at the initial stage on the virtual plane (top) and the real plane (bottom), with the robot at R (2751.3368, 2126.9749).]
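The obstacle positions plotted in Figures 15-18 come from the Kinect-based pose estimation, which back-projects depth pixels into metric coordinates with a standard pinhole model (see [27, 31] for calibration details). The sketch below uses nominal, illustrative intrinsic values rather than calibrated ones:

```python
# Nominal Kinect depth-camera intrinsics (illustrative; see [27, 31]).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point in pixels

def depth_pixel_to_point(u, v, depth_mm):
    """Back-project a depth pixel (u, v) with measured range depth_mm
    into a 3-D point in the camera frame (millimetres), pinhole model.
    Projecting the result onto the ground plane yields planar (x, y)
    obstacle positions of the kind plotted in Figures 15-18."""
    x = (u - CX) * depth_mm / FX
    y = (v - CY) * depth_mm / FY
    z = depth_mm
    return x, y, z
```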
[Figure 16: (a) Experiment result of collision avoidance with the first obstacle. (b) Trajectories of the robot and the obstacles on the virtual plane (top) and the real plane (bottom) during the collision avoidance with the first moving obstacle.]
The robot then carries out its motion control based on this collision cone in the virtual plane until it has avoided the first obstacle.

Figure 16 shows the collision avoidance performance with the first moving obstacle. As can be seen, the master robot avoided it without any collision by commanding an orientation change to the left, which is described in detail in the virtual plane in the top plot of Figure 16(b). The trajectory and movement of the robot and the obstacles are depicted in the real plane in Figure 16(b) for detailed analysis.

In a similar way, Figure 17(a) illustrates the collision avoidance with the second moving obstacle, and the detailed path and trajectory are described in Figure 17(b). The top plot of Figure 17(b) shows the motion planning in the virtual plane, where the initial location of the second moving obstacle is recognized at the center of (2050 mm, 1900 mm).
[Figure 17: (a) Experiment result of collision avoidance with the second obstacle. (b) Trajectories of the robot and the obstacles on the virtual plane (top) and the real plane (bottom) during the collision avoidance with the second moving obstacle.]
Based on this initial location, the second collision cone is constructed, drawn as a large green ellipse, allowing the virtual robot to navigate without any collision with the second obstacle. The trajectory of the robot's motion planning in the real plane is depicted in the bottom plot of Figure 17(b).
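One plausible reading of the virtual plane observer used throughout these experiments is sketched below: each moving obstacle is frozen at its measured position while the robot is assigned the relative velocity, so the stationary-obstacle collision cone test applies directly. This is a sketch of the concept, not the authors' exact formulation:

```python
def to_virtual_plane(robot_pos, robot_vel, obs_pos, obs_vel):
    """Local-observer transformation of the virtual plane approach:
    subtracting the obstacle velocity freezes the moving obstacle, so
    the dynamic avoidance problem becomes a stationary one.  The
    returned relative velocity can be fed to a stationary test such as
    in_collision_cone() sketched earlier."""
    v_rel = (robot_vel[0] - obs_vel[0], robot_vel[1] - obs_vel[1])
    return robot_pos, v_rel, obs_pos  # obstacle is stationary here
```

This is why the robot's trajectory on the virtual plane differs from its trajectory on the real plane while the commanded orientation changes coincide.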
Now, at the final phase, after avoiding all the obstacles, the master robot reached the target goal under motion control, as shown in Figure 18(a). The overall trajectories of the robot from the starting point to the final goal in the virtual plane are depicted in the top plot of Figure 18(b), and the trajectories of the robot in the real plane are depicted in the bottom plot of Figure 18(b). Note that the trajectories of the robot differ between the virtual plane and the real plane; however, an orientation change produces the same direction change of the robot in both planes. In these plots, the green dot is the final goal point, and the robot trajectory is depicted with red dotted circles. The smooth trajectory was generated by the linear navigation laws explained earlier.

[Figure 18: (a) Experiment result after the collision avoidance with all obstacles. (b) Trajectories of the robot and the obstacles on the virtual plane (top) and the real plane (bottom) during the collision avoidance with all the moving obstacles.]
From this experiment, it is readily seen that the proposed hybrid reactive motion planning approach, designed by integrating the virtual plane approach with sensor based planning, is very effective for dynamic collision avoidance problems in cluttered, uncertain environments. This effectiveness makes the method attractive for various dynamic navigation applications, not only for mobile robots but also for other autonomous vehicles such as flying vehicles and self-driving cars.
5 Conclusion
In this paper, we proposed a hybrid reactive motion planning method for an autonomous mobile robot in uncertain dynamic environments. The hybrid reactive motion planning method combines a reactive path planning method, which can transform dynamic moving obstacles into stationary ones, with a sensor based approach, which provides relative information on the moving obstacles and the environment. The key features of the proposed method are twofold: the first is the simplification of complex dynamic motion planning problems into stationary ones using the virtual plane approach, while the second is the robustness of sensor based motion planning, in which the pose estimation of moving obstacles is performed using a Kinect sensor that provides ranging and color detection. The sensor based approach improves the accuracy and robustness of the reactive motion planning approach by providing information on the obstacles and the environment. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles.
In future work, a sensor fusion approach that could improve the heading estimation of the robot and the speed estimation of moving objects will be investigated further for more robust motion planning.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research was supported by the National Research Foundation of Korea (NRF) (no. 2014-017630 and no. 2014-063396) and was also supported by the Human Resource Training Program for Regional Innovation and Creativity through the Ministry of Education and the National Research Foundation of Korea (no. 2014-066733).
References
[1] R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza, Introduction to Autonomous Mobile Robots, The MIT Press, London, UK, 2nd edition, 2011.
[2] J. P. van den Berg and M. H. Overmars, "Roadmap-based motion planning in dynamic environments," IEEE Transactions on Robotics, vol. 21, no. 5, pp. 885–897, 2005.
[3] D. Hsu, R. Kindel, J.-C. Latombe, and S. Rock, "Randomized kinodynamic motion planning with moving obstacles," The International Journal of Robotics Research, vol. 21, no. 3, pp. 233–255, 2002.
[4] R. Gayle, A. Sud, M. C. Lin, and D. Manocha, "Reactive deformation roadmaps: motion planning of multiple robots in dynamic environments," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '07), pp. 3777–3783, San Diego, Calif, USA, November 2007.
[5] S. S. Ge and Y. J. Cui, "Dynamic motion planning for mobile robots using potential field method," Autonomous Robots, vol. 13, no. 3, pp. 207–222, 2002.
[6] K. P. Valavanis, T. Hebert, R. Kolluru, and N. Tsourveloudis, "Mobile robot navigation in 2-D dynamic environments using an electrostatic potential field," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 30, no. 2, pp. 187–196, 2000.
[7] X. Gao, Y. Ou, Y. Fu et al., "A novel local path planning method considering both robot posture and path smoothness," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 172–178, Shenzhen, China, December 2013.
[8] J. Jiao, Z. Cao, P. Zhao, X. Liu, and M. Tan, "Bezier curve based path planning for a mobile manipulator in unknown environments," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 1864–1868, Shenzhen, China, December 2013.
[9] A. Stentz, "Optimal and efficient path planning for partially-known environments," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3310–3317, May 1994.
[10] G. Song, H. Wang, J. Zhang, and T. Meng, "Automatic docking system for recharging home surveillance robots," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 428–435, 2011.
[11] M. Seder and I. Petrovic, "Dynamic window based approach to mobile robot motion control in the presence of moving obstacles," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 1986–1991, Roma, Italy, April 2007.
[12] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision avoidance," IEEE Robotics and Automation Magazine, vol. 4, no. 1, pp. 23–33, 1997.
[13] J. Borenstein and Y. Koren, "Real-time obstacle avoidance for fast mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179–1187, 1989.
[14] J. Borenstein and Y. Koren, "The vector field histogram – fast obstacle avoidance for mobile robots," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 278–288, 1991.
[15] R. Simmons, "The curvature-velocity method for local obstacle avoidance," in Proceedings of the 13th IEEE International Conference on Robotics and Automation, vol. 4, pp. 3375–3382, Minneapolis, Minn, USA, April 1996.
[16] J. Minguez and L. Montano, "Nearness Diagram (ND) navigation: collision avoidance in troublesome scenarios," IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 45–59, 2004.
[17] E. Owen and L. Montano, "Motion planning in dynamic environments using the velocity space," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pp. 997–1002, August 2005.
[18] A. Chakravarthy and D. Ghose, "Obstacle avoidance in a dynamic environment: a collision cone approach," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 28, no. 5, pp. 562–574, 1998.
[19] F. Belkhouche, T. Jin, and C. H. Sung, "Plane collision course detection between moving objects by transformation of coordinates," in Proceedings of the 17th IEEE International Conference on Control Applications, Part of IEEE Multi-Conference on Systems and Control, San Antonio, Tex, USA, 2008.
[20] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[21] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[22] Z. Shiller, F. Large, and S. Sekhavat, "Motion planning in dynamic environments: obstacles moving along arbitrary trajectories," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), pp. 3716–3721, Seoul, Republic of Korea, May 2001.
[23] F. Belkhouche, "Reactive path planning in a dynamic environment," IEEE Transactions on Robotics, vol. 25, no. 4, pp. 902–911, 2009.
[24] B. Dorj, D. Tuvshinjargal, K. T. Chong, D. P. Hong, and D. J. Lee, "Multi-sensor fusion based effective obstacle avoidance and path-following technology," Advanced Science Letters, vol. 20, pp. 1751–1756, 2014.
[25] R. Philippsen, B. Jensen, and R. Siegwart, "Towards real-time sensor based path planning in highly dynamic environments," in Autonomous Navigation in Dynamic Environments, vol. 35 of Springer Tracts in Advanced Robotics, pp. 135–148, Springer, Berlin, Germany, 2007.
[26] K.-T. Song and C. C. Chang, "Reactive navigation in dynamic environment using a multisensor predictor," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 870–880, 1999.
[27] K. Khoshelham and S. O. Elberink, "Accuracy and resolution of Kinect depth data for indoor mapping applications," Sensors, vol. 12, no. 2, pp. 1437–1454, 2012.
[28] J. Han, L. Shao, D. Xu, and J. Shotton, "Enhanced computer vision with Microsoft Kinect sensor: a review," IEEE Transactions on Cybernetics, vol. 43, no. 5, pp. 1318–1334, 2013.
[29] F. Kammergruber, A. Ebner, and W. A. Gunthner, "Navigation in virtual reality using Microsoft Kinect," in Proceedings of the 12th International Conference on Construction Application of Virtual Reality, Taipei, Taiwan, November 2012.
[30] Microsoft, Kinect for Xbox 360, http://www.xbox.com/en-US/xbox-one/accessories/kinect-for-xbox-one.
[31] K. Konolige and P. Mihelich, "Technical description of Kinect calibration," July 2014, http://www.ros.org/wiki/kinect_calibration/technical.
[32] G. Shashank, M. Sharma, and A. Shukla, "Robot navigation using image processing and isolated word recognition," International Journal of Computer Science and Engineering, vol. 4, no. 5, pp. 769–778, 2012.
[33] T. W. Choe, H. S. Son, J. H. Song, and J. H. Park, Intelligent Robot System HBE-RoboCar, HanBack Electronics, Seoul, Republic of Korea, 2012.
[34] K. R. Jadav, M. A. Lokhandwala, and A. P. Gharge, "Vision based moving object detection and tracking," in Proceedings of the National Conference on Recent Trends in Engineering and Technology, Anand, India, 2011.
[35] A. Chandak, K. Gosavi, S. Giri, S. Agrawal, and P. Kulkarni, "Path planning for mobile robot navigation using image processing," International Journal of Scientific & Engineering Research, vol. 4, pp. 1490–1495, 2013.
International Journal of
AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Active and Passive Electronic Components
Control Scienceand Engineering
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
RotatingMachinery
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation httpwwwhindawicom
Journal ofEngineeringVolume 2014
Submit your manuscripts athttpwwwhindawicom
VLSI Design
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Shock and Vibration
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of
10 Journal of Sensors
0 10 20 30 40 50 60 70150
200
250
300
Cycle time (times of program running)
Obs
tacle
2 an
gle
(deg
ree)
1000 1010 1020 1030 1040 1050 1060 10700
500
1000
1500
x coordinate (mm)
y coo
rdin
ate (
mm
) Obstacle 2 position
0 10 20 30 40 50 60 700
20
40
60
Obs
tacle
2 sp
eed
Cycle time (times of program running)
Figure 13 Results of second moving obstaclersquos orientation anglespeed and position in 119909-119910 coordinates
500 1000 1500 2000 2500 30000
2000
4000
x coordinate (mm)
y co
ordi
nate
(mm
)
Robot position
0 10 20 30 40 50 60 700
100
200
Cycle time (times of program running)
Robo
t spe
ed
0 10 20 30 40 50 60 700
50
100
Cycle time (times of program running)
Righ
t whe
el sp
eed
0 10 20 30 40 50 60 700
50
100
Cycle time (times of program running)
Left
whe
el sp
eed
Figure 14 Results of robotrsquos trajectory speed and right and leftwheels speed
to carry out a dynamic path planning experiment the speedand the heading angle were changed during the simulationresulting in uncertain cluttered environments
Figure 14 shows the mobile robotrsquos trajectory from thestart point to the goal point and also the forward velocity andeach wheel speed from encoder As can be seen the trajectoryof the robot has a sharp turn around the location (1500mm
Obstacle 1rsquos startingposition
Obstacle 2rsquos startingposition
Goal x
y
Robotrsquos startingposition
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
Real plane
R (27513368 21269749)
3000
x (mm)
y (m
m)
(b)
Figure 15 (a) Experiment environment (initial stage) (b) Plots ofthe experimented data at initial stage
1000mm) in order to avoid the second moving obstacle It isseen that the right and left wheel speeds are mirrored along atime axis Also we can see relationship of the variance angleand robotrsquos right and left wheels speed
43 Field Test and Results Further verification of the per-formance of the proposed hybrid dynamic path planningapproach for real experiments was carried out with thesame scenario used in the previous simulation part In theexperiment two moving obstacles are used and a masterrobot moves without any collision with the obstacles to thetarget point as shown in Figure 15(a) and the initial locationsof the obstacles and the robot are shown in Figure 15(b) Thered dot is the initial starting position of the master robot at(2750mm 2126mm) the black dot is the initial location ofthe second obstacle at (2050mm 1900mm) and the blue dotis the initial location of the first moving obstacle at (1050mm2000mm) In the virtual plane the collision cone of the firstobstacle is depicted as shown in the top plot of Figure 15(b)
Journal of Sensors 11
Goal x
y
Robotrsquos currentposition
Obstacle 2rsquos current Obstacle 1rsquos currentposition and direction position and direction
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
3000
Real plane
x (mm)
y (m
m)
(b)
Figure 16 (a) Experiment result of collision avoidance with the firstobstacle (b) Plot of the trajectories of the robot and the obstacles inthe virtual and real planes during the collision avoidance with thefirst moving obstacle
and the robot carries out its motion control based on thecollision cone in the virtual plane until it avoids the firstobstacle
Figure 16 showed the collision avoidance performancewith the fist moving obstacle and as can be seen the masterrobot avoided the commanded orientation control into theleft direction without any collisions which is described indetail in the virtual plane in the top plot of Figure 16(b) Thetrajectory and movement of the robot and the obstacles weredepicted in the real plane in Figure 16(b) for the detailedanalysis
In a similar way Figure 17(a) illustrated the collisionavoidance with the second moving obstacle and the detailedpath and trajectory are described in Figure 17(b) The topplot of Figure 17(b) shows the motion planning in thevirtual plane where the initial location of the second movingobstacle is recognized at the center of (2050mm 1900mm)
Goal x
y
Robotrsquos currentposition
Obstacle 2rsquos currentObstacle 1rsquos currentposition and direction
position and direction
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
3000
Real plane
x (mm)
y (m
m)
(b)
Figure 17 (a) Experiment result of collision avoidance with thesecond obstacle (b) Plot of the trajectories of the robot and theobstacles in the virtual and real planes during the collision avoidancewith the second moving obstacle
Based on this initial location the second collision cone isconstructed with a big green ellipse that allows the virtualrobot to navigate without any collision with the secondobstacle The trajectory of the robot motion planning in thereal plane is depicted in the bottom plot of Figure 17(b)
Now at the final phase after avoiding all the obstacles themaster robot reached the target goal with a motion controlas shown in Figure 18(a) The overall trajectories of the robotfrom the starting point to the final goal target in the virtualplane were depicted in the top plot of Figure 18(a) and thetrajectories of the robot in the real plane were depicted inthe bottom plot of Figure 18(b) Note that the trajectories ofthe robot location differ from each other in the virtual planeand the real plane However the orientation change gives thesame direction change of the robot in both the virtual andthe real plane In this plot the green dot is the final goalpoint and the robot trajectory is depicted with the red dotted
12 Journal of Sensors
Goalx
y
Obstacle 2rsquos current
Obstacle 1rsquos current
position and direction
position and direction
Robotrsquos currentposition
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000minus1000
01000200030004000
Real plane
x (mm)
y (m
m)
(b)
Figure 18 (a) Experiment result after the collision avoidance withall obstacles (b) Plot of the trajectories of the robot and the obstaclesin the virtual and real planes during the collision avoidance with allthe moving obstacles
circles The smooth trajectory was generated by using thelinear navigation laws as explained
From this experiment it is easily seen that the pro-posed hybrid reactive motion planning approach designedby the integration of the virtual plane approach and asensor based planning is very effective to dynamic collisionavoidance problems in cluttered uncertain environmentsThe effectiveness of the hybrid reactive motion planningmethod makes its usage very attractive to various dynamicnavigation applications of not only mobile robots but alsoother autonomous vehicles such as flying vehicles and self-driving vehicles
5 Conclusion
In this paper we proposed a hybrid reactive motion plan-ning method for an autonomous mobile robot in uncertain
dynamic environmentsThe hybrid reactive motion planningmethod combined a reactive path planning method whichcould transform dynamic moving obstacles into stationaryones with a sensor based approach which can provide relativeinformation of moving obstacles and environments The keyfeatures of the proposed method are twofold the first keyfeature is the simplification of complex dynamic motionplanning problems into stationary ones using the virtualplane approach while the second feature is the robustness ofa sensor basedmotion planning in which the pose estimationof moving obstacles is made by using a Kinect sensor whichprovides a ranging and color detection The sensor basedapproach improves the accuracy and robustness of the reac-tive motion planning approach by providing the informationof the obstacles and environments The performance ofthe proposed method was demonstrated through not onlysimulation studies but also field experiments using multiplemoving obstacles
In the further work a sensor fusion approach which couldimprove the heading estimation of a robot and the speedestimation of moving objects will be investigated more formore robust motion planning
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was supported by the National Research Foun-dation ofKorea (NRF) (no 2014-017630 andno 2014-063396)and was also supported by the Human Resource TrainingProgram for Regional Innovation and Creativity through theMinistry of Education and National Research Foundation ofKorea (no 2014-066733)
References
[1] R Siegwart I R Nourbakhsh and D Scaramuzza Introductionto AutonomousMobile RobotsTheMITPress LondonUK 2ndedition 2011
[2] J P van den Berg and M H Overmars ldquoRoadmap-basedmotion planning in dynamic environmentsrdquo IEEE Transactionson Robotics vol 21 no 5 pp 885ndash897 2005
[3] D Hsu R Kindel J-C Latombe and S Rock ldquoRandomizedkinodynamic motion planning with moving obstaclesrdquo TheInternational Journal of Robotics Research vol 21 no 3 pp 233ndash255 2002
[4] R Gayle A Sud M C Lin and D Manocha ldquoReactivedeformation roadmaps motion planning of multiple robots indynamic environmentsrdquo in IEEERSJ International Conferenceon Intelligent Robots and Systems (IROS rsquo07) pp 3777ndash3783 SanDiego Calif USA November 2007
[5] S S Ge and Y J Cui ldquoDynamic motion planning for mobilerobots using potential field methodrdquo Autonomous Robots vol13 no 3 pp 207ndash222 2002
[6] K P Valavanis T Hebert R Kolluru and N TsourveloudisldquoMobile robot navigation in 2-D dynamic environments using
Journal of Sensors 13
an electrostatic potential fieldrdquo IEEE Transactions on SystemsMan and Cybernetics Part A Systems and Humans vol 30 no2 pp 187ndash196 2000
[7] X Gao Y Ou Y Fu et al ldquoA novel local path planningmethod considering both robot posture and path smoothnessrdquoin Proceeding of the IEEE International Conference on Roboticsand Biomimetics (ROBIO 13) pp 172ndash178 Shenzhen ChinaDecember 2013
[8] J Jiao Z Cao P Zhao X Liu and M Tan ldquoBezier curvebased path planning for a mobile manipulator in unknownenvironmentsrdquo in Proceedings of the IEEE International Confer-ence on Robotics and Biomimetics (ROBIO rsquo13) pp 1864ndash1868Shenzhen China December 2013
[9] A Stentz ldquoOptimal and efficient path planning for partially-known environmentsrdquo in Proceedings of the IEEE InternationalConference on Robotics and Automation vol 4 pp 3310ndash3317May 1994
[10] G Song H Wang J Zhang and T Meng ldquoAutomatic dockingsystem for recharging home surveillance robotsrdquo IEEE Transac-tions on Consumer Electronics vol 57 no 2 pp 428ndash435 2011
[11] M Seder and I Petrovic ldquoDynamic window based approachto mobile robot motion control in the presence of movingobstaclesrdquo in Proceedings of the IEEE International Conferenceon Robotics and Automation (ICRA rsquo07) pp 1986ndash1991 RomaItaly April 2007
[12] D Fox W Burgard and S Thrun ldquoThe dynamic windowapproach to collision avoidancerdquo IEEERobotics andAutomationMagazine vol 4 no 1 pp 23ndash33 1997
[13] J Borenstein and Y Koren ldquoReal-time obstacle avoidance forfast mobile robotsrdquo IEEE Transactions on Systems Man andCybernetics vol 19 no 5 pp 1179ndash1187 1989
[14] J Borenstein and Y Koren ldquoThe vector field histogrammdashfastobstacle avoidance for mobile robotsrdquo IEEE Transactions onRobotics and Automation vol 7 no 3 pp 278ndash288 1991
[15] R Simmons ldquoThe curvature-velocity method for local obsta-cle avoidancerdquo in Proceedings of the 13th IEEE InternationalConference on Robotics and Automation vol 4 pp 3375ndash3382Minneapolis Minn USA April 1996
[16] J Minguez and L Montano ldquoNearness Diagram (ND) nav-igation collision avoidance in troublesome scenariosrdquo IEEETransactions on Robotics and Automation vol 20 no 1 pp 45ndash59 2004
[17] E Owen and L Montano ldquoMotion planning in dynamicenvironments using the velocity spacerdquo in Proceedings of theIEEE IRSRSJ International Conference on Intelligent Robots andSystems (IROS 05) pp 997ndash1002 August 2005
[18] A Chakravarthy and D Ghose ldquoObstacle avoidance in adynamic environment a collision cone approachrdquo IEEE Trans-actions on Systems Man and Cybernetics Part A Systems andHumans vol 28 no 5 pp 562ndash574 1998
[19] F Belkhouche T Jin and C H Sung ldquoPlane collision coursedetection betweenmoving objects by transformation of coordi-natesrdquo in Proceedings of the 17th IEEE International Conferenceon Control Applications Part of IEEE Multi-Conference onSystems and Control San Antonio Tex USA 2008
[20] P Fiorini and Z Shiller ldquoMotion planning in dynamic environ-ments using velocity obstaclesrdquo International Journal of RoboticsResearch vol 17 no 7 pp 760ndash772 1998
[21] P Fiorini and Z Shiller ldquoMotion planning in dynamic envi-ronments using velocity obstaclesrdquoThe International Journal ofRobotics Research vol 17 no 7 pp 760ndash772 1998
[22] Z Shiller F Large and S Sekhavat ldquoMotion planning indynamic environments obstacles moving along arbitrary tra-jectoriesrdquo in Proceedings of the IEEE International Conferenceon Robotics and Automation (ICRA rsquo01) pp 3716ndash3721 SeoulRepublic of Korea May 2001
[23] F Belkhouche ldquoReactive path planning in a dynamic environ-mentrdquo IEEE Transactions on Robotics vol 25 no 4 pp 902ndash9112009
[24] B Dorj D Tuvshinjargal K T Chong D P Hong and DJ Lee ldquoMulti-sensor fusion based effective obstacle avoidanceand path-following technologyrdquo Advanced Science Letters vol20 pp 1751ndash1756 2014
[25] R Philippsen B Jensen and R Siegwart ldquoTowards real-timesensor based path planning in highly dynamic environmentsrdquoin Autonomous Navigation in Dynamic Environments vol 35of Tracts in Advanced Robotics pp 135ndash148 Springer BerlinGermany 2007
[26] K-T Song and C C Chang ldquoReactive navigation in dynamicenvironment using a multisensor predictorrdquo IEEE Transactionson Systems Man and Cybernetics Part B Cybernetics vol 29no 6 pp 870ndash880 1999
[27] K Khoshelham and S O Elberink ldquoAccuracy and resolutionof kinect depth data for indoor mapping applicationsrdquo Sensorsvol 12 no 2 pp 1437ndash1454 2012
[28] J Han L Shao D Xu and J Shotton ldquoEnhanced computervision with Microsoft Kinect sensor a reviewrdquo IEEE Transac-tions on Cybernetics vol 43 no 5 pp 1318ndash1334 2013
[29] F Kammergruber A Ebner and W A Gunthner ldquoNavigationin virtual reality using microsoft kinectrdquo in Proceedings of the12th International Conference on Construction Application ofVirtual Reality Taipei Taiwan November 2012
[30] Microsoft Kinect for X-BXO 360 httpwwwxboxcomen-USxbox-oneaccessorieskinect-for-xbox-one
[31] K Kronolige and P Mihelich ldquoTechnical Description of KinectCalibrationrdquo July 2014 httpwwwrosorgwikikinect calibra-tiontechnical
[32] G Shashank M Sharma and A Shukla ldquoRobot navigationusing image processing and isolated word recognitionrdquo Interna-tional Journal of Computer Science and Engineering vol 4 no5 pp 769ndash778 2012
[33] TW Choe H S Son J H Song and J H Park Intelligent RobotSystem HBE-RoboCar HanBack Electronics Seoul Republic ofKorea 2012
[34] K R Jadavm M A Lokhandwala and A P Gharge ldquoVisionbased moving object detection and trackingrdquo in Proceedings ofthe National Conference on Recent Trends in Engineering andTechnology Anand India 2011
[35] A Chandak K Gosavi S Giri S Agrawal and P Kulka-rni ldquoPath planning for mobile robot navigation using imageprocessingrdquo International Journal of Scientific amp EngineeringResearch vol 4 pp 1490ndash1495 2013
International Journal of
AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Active and Passive Electronic Components
Control Scienceand Engineering
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
RotatingMachinery
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation httpwwwhindawicom
Journal ofEngineeringVolume 2014
Submit your manuscripts athttpwwwhindawicom
VLSI Design
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Shock and Vibration
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of
Journal of Sensors 11
Goal x
y
Robotrsquos currentposition
Obstacle 2rsquos current Obstacle 1rsquos currentposition and direction position and direction
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
3000
Real plane
x (mm)
y (m
m)
(b)
Figure 16 (a) Experiment result of collision avoidance with the firstobstacle (b) Plot of the trajectories of the robot and the obstacles inthe virtual and real planes during the collision avoidance with thefirst moving obstacle
and the robot carries out its motion control based on thecollision cone in the virtual plane until it avoids the firstobstacle
Figure 16 showed the collision avoidance performancewith the fist moving obstacle and as can be seen the masterrobot avoided the commanded orientation control into theleft direction without any collisions which is described indetail in the virtual plane in the top plot of Figure 16(b) Thetrajectory and movement of the robot and the obstacles weredepicted in the real plane in Figure 16(b) for the detailedanalysis
In a similar way Figure 17(a) illustrated the collisionavoidance with the second moving obstacle and the detailedpath and trajectory are described in Figure 17(b) The topplot of Figure 17(b) shows the motion planning in thevirtual plane where the initial location of the second movingobstacle is recognized at the center of (2050mm 1900mm)
Goal x
y
Robotrsquos currentposition
Obstacle 2rsquos currentObstacle 1rsquos currentposition and direction
position and direction
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000
0
1000
2000
3000
Real plane
x (mm)
y (m
m)
(b)
Figure 17 (a) Experiment result of collision avoidance with thesecond obstacle (b) Plot of the trajectories of the robot and theobstacles in the virtual and real planes during the collision avoidancewith the second moving obstacle
Based on this initial location the second collision cone isconstructed with a big green ellipse that allows the virtualrobot to navigate without any collision with the secondobstacle The trajectory of the robot motion planning in thereal plane is depicted in the bottom plot of Figure 17(b)
Now at the final phase after avoiding all the obstacles themaster robot reached the target goal with a motion controlas shown in Figure 18(a) The overall trajectories of the robotfrom the starting point to the final goal target in the virtualplane were depicted in the top plot of Figure 18(a) and thetrajectories of the robot in the real plane were depicted inthe bottom plot of Figure 18(b) Note that the trajectories ofthe robot location differ from each other in the virtual planeand the real plane However the orientation change gives thesame direction change of the robot in both the virtual andthe real plane In this plot the green dot is the final goalpoint and the robot trajectory is depicted with the red dotted
12 Journal of Sensors
Goalx
y
Obstacle 2rsquos current
Obstacle 1rsquos current
position and direction
position and direction
Robotrsquos currentposition
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000minus1000
01000200030004000
Real plane
x (mm)
y (m
m)
(b)
Figure 18 (a) Experiment result after the collision avoidance withall obstacles (b) Plot of the trajectories of the robot and the obstaclesin the virtual and real planes during the collision avoidance with allthe moving obstacles
circles The smooth trajectory was generated by using thelinear navigation laws as explained
From this experiment it is easily seen that the pro-posed hybrid reactive motion planning approach designedby the integration of the virtual plane approach and asensor based planning is very effective to dynamic collisionavoidance problems in cluttered uncertain environmentsThe effectiveness of the hybrid reactive motion planningmethod makes its usage very attractive to various dynamicnavigation applications of not only mobile robots but alsoother autonomous vehicles such as flying vehicles and self-driving vehicles
5 Conclusion
In this paper we proposed a hybrid reactive motion plan-ning method for an autonomous mobile robot in uncertain
dynamic environmentsThe hybrid reactive motion planningmethod combined a reactive path planning method whichcould transform dynamic moving obstacles into stationaryones with a sensor based approach which can provide relativeinformation of moving obstacles and environments The keyfeatures of the proposed method are twofold the first keyfeature is the simplification of complex dynamic motionplanning problems into stationary ones using the virtualplane approach while the second feature is the robustness ofa sensor basedmotion planning in which the pose estimationof moving obstacles is made by using a Kinect sensor whichprovides a ranging and color detection The sensor basedapproach improves the accuracy and robustness of the reac-tive motion planning approach by providing the informationof the obstacles and environments The performance ofthe proposed method was demonstrated through not onlysimulation studies but also field experiments using multiplemoving obstacles
In the further work a sensor fusion approach which couldimprove the heading estimation of a robot and the speedestimation of moving objects will be investigated more formore robust motion planning
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was supported by the National Research Foun-dation ofKorea (NRF) (no 2014-017630 andno 2014-063396)and was also supported by the Human Resource TrainingProgram for Regional Innovation and Creativity through theMinistry of Education and National Research Foundation ofKorea (no 2014-066733)
References
[1] R Siegwart I R Nourbakhsh and D Scaramuzza Introductionto AutonomousMobile RobotsTheMITPress LondonUK 2ndedition 2011
[2] J P van den Berg and M H Overmars ldquoRoadmap-basedmotion planning in dynamic environmentsrdquo IEEE Transactionson Robotics vol 21 no 5 pp 885ndash897 2005
[3] D Hsu R Kindel J-C Latombe and S Rock ldquoRandomizedkinodynamic motion planning with moving obstaclesrdquo TheInternational Journal of Robotics Research vol 21 no 3 pp 233ndash255 2002
[4] R Gayle A Sud M C Lin and D Manocha ldquoReactivedeformation roadmaps motion planning of multiple robots indynamic environmentsrdquo in IEEERSJ International Conferenceon Intelligent Robots and Systems (IROS rsquo07) pp 3777ndash3783 SanDiego Calif USA November 2007
[5] S S Ge and Y J Cui ldquoDynamic motion planning for mobilerobots using potential field methodrdquo Autonomous Robots vol13 no 3 pp 207ndash222 2002
[6] K P Valavanis T Hebert R Kolluru and N TsourveloudisldquoMobile robot navigation in 2-D dynamic environments using
Journal of Sensors 13
an electrostatic potential fieldrdquo IEEE Transactions on SystemsMan and Cybernetics Part A Systems and Humans vol 30 no2 pp 187ndash196 2000
[7] X Gao Y Ou Y Fu et al ldquoA novel local path planningmethod considering both robot posture and path smoothnessrdquoin Proceeding of the IEEE International Conference on Roboticsand Biomimetics (ROBIO 13) pp 172ndash178 Shenzhen ChinaDecember 2013
[8] J Jiao Z Cao P Zhao X Liu and M Tan ldquoBezier curvebased path planning for a mobile manipulator in unknownenvironmentsrdquo in Proceedings of the IEEE International Confer-ence on Robotics and Biomimetics (ROBIO rsquo13) pp 1864ndash1868Shenzhen China December 2013
[9] A Stentz ldquoOptimal and efficient path planning for partially-known environmentsrdquo in Proceedings of the IEEE InternationalConference on Robotics and Automation vol 4 pp 3310ndash3317May 1994
[10] G Song H Wang J Zhang and T Meng ldquoAutomatic dockingsystem for recharging home surveillance robotsrdquo IEEE Transac-tions on Consumer Electronics vol 57 no 2 pp 428ndash435 2011
[11] M Seder and I Petrovic ldquoDynamic window based approachto mobile robot motion control in the presence of movingobstaclesrdquo in Proceedings of the IEEE International Conferenceon Robotics and Automation (ICRA rsquo07) pp 1986ndash1991 RomaItaly April 2007
[12] D Fox W Burgard and S Thrun ldquoThe dynamic windowapproach to collision avoidancerdquo IEEERobotics andAutomationMagazine vol 4 no 1 pp 23ndash33 1997
[13] J Borenstein and Y Koren ldquoReal-time obstacle avoidance forfast mobile robotsrdquo IEEE Transactions on Systems Man andCybernetics vol 19 no 5 pp 1179ndash1187 1989
[14] J Borenstein and Y Koren ldquoThe vector field histogrammdashfastobstacle avoidance for mobile robotsrdquo IEEE Transactions onRobotics and Automation vol 7 no 3 pp 278ndash288 1991
[15] R Simmons ldquoThe curvature-velocity method for local obsta-cle avoidancerdquo in Proceedings of the 13th IEEE InternationalConference on Robotics and Automation vol 4 pp 3375ndash3382Minneapolis Minn USA April 1996
[16] J Minguez and L Montano ldquoNearness Diagram (ND) nav-igation collision avoidance in troublesome scenariosrdquo IEEETransactions on Robotics and Automation vol 20 no 1 pp 45ndash59 2004
[17] E Owen and L Montano ldquoMotion planning in dynamicenvironments using the velocity spacerdquo in Proceedings of theIEEE IRSRSJ International Conference on Intelligent Robots andSystems (IROS 05) pp 997ndash1002 August 2005
[18] A Chakravarthy and D Ghose ldquoObstacle avoidance in adynamic environment a collision cone approachrdquo IEEE Trans-actions on Systems Man and Cybernetics Part A Systems andHumans vol 28 no 5 pp 562ndash574 1998
[19] F Belkhouche T Jin and C H Sung ldquoPlane collision coursedetection betweenmoving objects by transformation of coordi-natesrdquo in Proceedings of the 17th IEEE International Conferenceon Control Applications Part of IEEE Multi-Conference onSystems and Control San Antonio Tex USA 2008
[20] P Fiorini and Z Shiller ldquoMotion planning in dynamic environ-ments using velocity obstaclesrdquo International Journal of RoboticsResearch vol 17 no 7 pp 760ndash772 1998
[21] P Fiorini and Z Shiller ldquoMotion planning in dynamic envi-ronments using velocity obstaclesrdquoThe International Journal ofRobotics Research vol 17 no 7 pp 760ndash772 1998
[22] Z Shiller F Large and S Sekhavat ldquoMotion planning indynamic environments obstacles moving along arbitrary tra-jectoriesrdquo in Proceedings of the IEEE International Conferenceon Robotics and Automation (ICRA rsquo01) pp 3716ndash3721 SeoulRepublic of Korea May 2001
[23] F Belkhouche ldquoReactive path planning in a dynamic environ-mentrdquo IEEE Transactions on Robotics vol 25 no 4 pp 902ndash9112009
[24] B Dorj D Tuvshinjargal K T Chong D P Hong and DJ Lee ldquoMulti-sensor fusion based effective obstacle avoidanceand path-following technologyrdquo Advanced Science Letters vol20 pp 1751ndash1756 2014
[25] R Philippsen B Jensen and R Siegwart ldquoTowards real-timesensor based path planning in highly dynamic environmentsrdquoin Autonomous Navigation in Dynamic Environments vol 35of Tracts in Advanced Robotics pp 135ndash148 Springer BerlinGermany 2007
[26] K-T Song and C C Chang ldquoReactive navigation in dynamicenvironment using a multisensor predictorrdquo IEEE Transactionson Systems Man and Cybernetics Part B Cybernetics vol 29no 6 pp 870ndash880 1999
[27] K Khoshelham and S O Elberink ldquoAccuracy and resolutionof kinect depth data for indoor mapping applicationsrdquo Sensorsvol 12 no 2 pp 1437ndash1454 2012
[28] J Han L Shao D Xu and J Shotton ldquoEnhanced computervision with Microsoft Kinect sensor a reviewrdquo IEEE Transac-tions on Cybernetics vol 43 no 5 pp 1318ndash1334 2013
[29] F Kammergruber A Ebner and W A Gunthner ldquoNavigationin virtual reality using microsoft kinectrdquo in Proceedings of the12th International Conference on Construction Application ofVirtual Reality Taipei Taiwan November 2012
[30] Microsoft Kinect for X-BXO 360 httpwwwxboxcomen-USxbox-oneaccessorieskinect-for-xbox-one
[31] K Kronolige and P Mihelich ldquoTechnical Description of KinectCalibrationrdquo July 2014 httpwwwrosorgwikikinect calibra-tiontechnical
[32] G Shashank M Sharma and A Shukla ldquoRobot navigationusing image processing and isolated word recognitionrdquo Interna-tional Journal of Computer Science and Engineering vol 4 no5 pp 769ndash778 2012
[33] TW Choe H S Son J H Song and J H Park Intelligent RobotSystem HBE-RoboCar HanBack Electronics Seoul Republic ofKorea 2012
[34] K R Jadavm M A Lokhandwala and A P Gharge ldquoVisionbased moving object detection and trackingrdquo in Proceedings ofthe National Conference on Recent Trends in Engineering andTechnology Anand India 2011
[35] A Chandak K Gosavi S Giri S Agrawal and P Kulka-rni ldquoPath planning for mobile robot navigation using imageprocessingrdquo International Journal of Scientific amp EngineeringResearch vol 4 pp 1490ndash1495 2013
International Journal of
AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Active and Passive Electronic Components
Control Scienceand Engineering
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
RotatingMachinery
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation httpwwwhindawicom
Journal ofEngineeringVolume 2014
Submit your manuscripts athttpwwwhindawicom
VLSI Design
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Shock and Vibration
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of
12 Journal of Sensors
Goalx
y
Obstacle 2rsquos current
Obstacle 1rsquos current
position and direction
position and direction
Robotrsquos currentposition
(a)
0 500 1000 1500 2000 2500 30000
1000
2000
3000Virtual plane
x (mm)
y (m
m)
minus1000 minus500 0 500 1000 1500 2000 2500 3000 3500 4000minus1000
01000200030004000
Real plane
x (mm)
y (m
m)
(b)
Figure 18 (a) Experiment result after the collision avoidance withall obstacles (b) Plot of the trajectories of the robot and the obstaclesin the virtual and real planes during the collision avoidance with allthe moving obstacles
circles The smooth trajectory was generated by using thelinear navigation laws as explained
From this experiment it is easily seen that the pro-posed hybrid reactive motion planning approach designedby the integration of the virtual plane approach and asensor based planning is very effective to dynamic collisionavoidance problems in cluttered uncertain environmentsThe effectiveness of the hybrid reactive motion planningmethod makes its usage very attractive to various dynamicnavigation applications of not only mobile robots but alsoother autonomous vehicles such as flying vehicles and self-driving vehicles
5 Conclusion
In this paper we proposed a hybrid reactive motion plan-ning method for an autonomous mobile robot in uncertain
dynamic environmentsThe hybrid reactive motion planningmethod combined a reactive path planning method whichcould transform dynamic moving obstacles into stationaryones with a sensor based approach which can provide relativeinformation of moving obstacles and environments The keyfeatures of the proposed method are twofold the first keyfeature is the simplification of complex dynamic motionplanning problems into stationary ones using the virtualplane approach while the second feature is the robustness ofa sensor basedmotion planning in which the pose estimationof moving obstacles is made by using a Kinect sensor whichprovides a ranging and color detection The sensor basedapproach improves the accuracy and robustness of the reac-tive motion planning approach by providing the informationof the obstacles and environments The performance ofthe proposed method was demonstrated through not onlysimulation studies but also field experiments using multiplemoving obstacles
In the further work a sensor fusion approach which couldimprove the heading estimation of a robot and the speedestimation of moving objects will be investigated more formore robust motion planning
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This research was supported by the National Research Foun-dation ofKorea (NRF) (no 2014-017630 andno 2014-063396)and was also supported by the Human Resource TrainingProgram for Regional Innovation and Creativity through theMinistry of Education and National Research Foundation ofKorea (no 2014-066733)
References
[1] R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza, Introduction to Autonomous Mobile Robots, The MIT Press, London, UK, 2nd edition, 2011.
[2] J. P. van den Berg and M. H. Overmars, "Roadmap-based motion planning in dynamic environments," IEEE Transactions on Robotics, vol. 21, no. 5, pp. 885–897, 2005.
[3] D. Hsu, R. Kindel, J.-C. Latombe, and S. Rock, "Randomized kinodynamic motion planning with moving obstacles," The International Journal of Robotics Research, vol. 21, no. 3, pp. 233–255, 2002.
[4] R. Gayle, A. Sud, M. C. Lin, and D. Manocha, "Reactive deformation roadmaps: motion planning of multiple robots in dynamic environments," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '07), pp. 3777–3783, San Diego, Calif, USA, November 2007.
[5] S. S. Ge and Y. J. Cui, "Dynamic motion planning for mobile robots using potential field method," Autonomous Robots, vol. 13, no. 3, pp. 207–222, 2002.
[6] K. P. Valavanis, T. Hebert, R. Kolluru, and N. Tsourveloudis, "Mobile robot navigation in 2-D dynamic environments using an electrostatic potential field," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 30, no. 2, pp. 187–196, 2000.
[7] X. Gao, Y. Ou, Y. Fu et al., "A novel local path planning method considering both robot posture and path smoothness," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 172–178, Shenzhen, China, December 2013.
[8] J. Jiao, Z. Cao, P. Zhao, X. Liu, and M. Tan, "Bezier curve based path planning for a mobile manipulator in unknown environments," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '13), pp. 1864–1868, Shenzhen, China, December 2013.
[9] A. Stentz, "Optimal and efficient path planning for partially-known environments," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3310–3317, May 1994.
[10] G. Song, H. Wang, J. Zhang, and T. Meng, "Automatic docking system for recharging home surveillance robots," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 428–435, 2011.
[11] M. Seder and I. Petrovic, "Dynamic window based approach to mobile robot motion control in the presence of moving obstacles," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 1986–1991, Roma, Italy, April 2007.
[12] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision avoidance," IEEE Robotics and Automation Magazine, vol. 4, no. 1, pp. 23–33, 1997.
[13] J. Borenstein and Y. Koren, "Real-time obstacle avoidance for fast mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179–1187, 1989.
[14] J. Borenstein and Y. Koren, "The vector field histogram—fast obstacle avoidance for mobile robots," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 278–288, 1991.
[15] R. Simmons, "The curvature-velocity method for local obstacle avoidance," in Proceedings of the 13th IEEE International Conference on Robotics and Automation, vol. 4, pp. 3375–3382, Minneapolis, Minn, USA, April 1996.
[16] J. Minguez and L. Montano, "Nearness Diagram (ND) navigation: collision avoidance in troublesome scenarios," IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 45–59, 2004.
[17] E. Owen and L. Montano, "Motion planning in dynamic environments using the velocity space," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pp. 997–1002, August 2005.
[18] A. Chakravarthy and D. Ghose, "Obstacle avoidance in a dynamic environment: a collision cone approach," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 28, no. 5, pp. 562–574, 1998.
[19] F. Belkhouche, T. Jin, and C. H. Sung, "Plane collision course detection between moving objects by transformation of coordinates," in Proceedings of the 17th IEEE International Conference on Control Applications, Part of IEEE Multi-Conference on Systems and Control, San Antonio, Tex, USA, 2008.
[20] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[21] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[22] Z. Shiller, F. Large, and S. Sekhavat, "Motion planning in dynamic environments: obstacles moving along arbitrary trajectories," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), pp. 3716–3721, Seoul, Republic of Korea, May 2001.
[23] F. Belkhouche, "Reactive path planning in a dynamic environment," IEEE Transactions on Robotics, vol. 25, no. 4, pp. 902–911, 2009.
[24] B. Dorj, D. Tuvshinjargal, K. T. Chong, D. P. Hong, and D. J. Lee, "Multi-sensor fusion based effective obstacle avoidance and path-following technology," Advanced Science Letters, vol. 20, pp. 1751–1756, 2014.
[25] R. Philippsen, B. Jensen, and R. Siegwart, "Towards real-time sensor based path planning in highly dynamic environments," in Autonomous Navigation in Dynamic Environments, vol. 35 of Tracts in Advanced Robotics, pp. 135–148, Springer, Berlin, Germany, 2007.
[26] K.-T. Song and C. C. Chang, "Reactive navigation in dynamic environment using a multisensor predictor," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 870–880, 1999.
[27] K. Khoshelham and S. O. Elberink, "Accuracy and resolution of Kinect depth data for indoor mapping applications," Sensors, vol. 12, no. 2, pp. 1437–1454, 2012.
[28] J. Han, L. Shao, D. Xu, and J. Shotton, "Enhanced computer vision with Microsoft Kinect sensor: a review," IEEE Transactions on Cybernetics, vol. 43, no. 5, pp. 1318–1334, 2013.
[29] F. Kammergruber, A. Ebner, and W. A. Gunthner, "Navigation in virtual reality using Microsoft Kinect," in Proceedings of the 12th International Conference on Construction Application of Virtual Reality, Taipei, Taiwan, November 2012.
[30] Microsoft, Kinect for Xbox 360, http://www.xbox.com/en-US/xbox-one/accessories/kinect-for-xbox-one.
[31] K. Konolige and P. Mihelich, "Technical description of Kinect calibration," July 2014, http://www.ros.org/wiki/kinect_calibration/technical.
[32] G. Shashank, M. Sharma, and A. Shukla, "Robot navigation using image processing and isolated word recognition," International Journal of Computer Science and Engineering, vol. 4, no. 5, pp. 769–778, 2012.
[33] T. W. Choe, H. S. Son, J. H. Song, and J. H. Park, Intelligent Robot System HBE-RoboCar, HanBack Electronics, Seoul, Republic of Korea, 2012.
[34] K. R. Jadav, M. A. Lokhandwala, and A. P. Gharge, "Vision based moving object detection and tracking," in Proceedings of the National Conference on Recent Trends in Engineering and Technology, Anand, India, 2011.
[35] A. Chandak, K. Gosavi, S. Giri, S. Agrawal, and P. Kulkarni, "Path planning for mobile robot navigation using image processing," International Journal of Scientific & Engineering Research, vol. 4, pp. 1490–1495, 2013.