
Page 1: CmpE-537 Computer Vision

CmpE-537 Computer Vision

Term Project

Color and Illumination Independent Landmark Detection for Robot Soccer Domain

By

Tekin Meriçli

Artificial Intelligence Laboratory
Department of Computer Engineering

Boğaziçi University

27/12/2007

Page 2: CmpE-537 Computer Vision

Outline

• Introduction
• Related Work
• Proposed Approach
• Experimental Setup
• Results
• Conclusion
• References

Page 3: CmpE-537 Computer Vision


Introduction

• Three fundamental questions of mobile robotics:
  – "Where am I?"
  – "Where am I going?"
  – "How can I get there?"

• The aim of this project is to answer the first question in the robot soccer domain
  – Specifically, the RoboCup Standard Platform League (former 4-Legged League)
  – Robots with vision sensors (i.e., cameras) are used

Page 4: CmpE-537 Computer Vision


Introduction

Page 5: CmpE-537 Computer Vision


Introduction

• All important objects on the field, that is, the ball, the beacons, and the goals, are color-coded

• This makes the vision, and hence the localization, modules highly dependent on illumination
  – Even a small change in the illumination level may cause the robots to fail to detect the beacons at all, or to miscalculate their distances and orientations to the beacons

• The main motivation is to make the vision / localization processes color and illumination independent in the Standard Platform League domain

Page 6: CmpE-537 Computer Vision


Related Work

• Color / illumination dependent approach
  – Color segmentation / pixel classification on the image
  – Connected component analysis to build regions
  – Sanity checks to remove noise and illogical perceptions
    • aspect ratio, minimum area, etc.

• Most of the RoboCup teams use this approach [1–4]; a sketch of this pipeline follows below
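A minimal sketch of this classical pipeline, assuming OpenCV; the input file name, color bounds, and sanity-check thresholds are hypothetical, since the slides give no concrete values:

    import cv2
    import numpy as np

    img = cv2.imread("field.png")  # hypothetical input image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Pixel classification: keep pixels falling in an assumed orange-ball range.
    mask = cv2.inRange(hsv, np.array([5, 120, 120]), np.array([20, 255, 255]))

    # Connected component analysis builds candidate regions from the mask.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

    # Sanity checks: minimum area and a rough aspect-ratio test.
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 50 and 0.5 < w / h < 2.0:
            print("candidate region at", centroids[i])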

Page 7: CmpE-537 Computer Vision


Related Work

• Feature detection / recognition based approach
  – Used for simultaneous localization and mapping (SLAM) purposes [8–12]
  – Scale-invariant feature transform (SIFT) can be used in algorithms for tasks like matching different views of an object or scene (e.g., for stereo vision) and object recognition [7]

• SURF, which stands for Speeded-Up Robust Features, approximates SIFT [6]; a feature-extraction sketch follows below
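As a minimal sketch, SURF keypoints and descriptors can be extracted with OpenCV's contrib module (SURF is patented and is not shipped in default OpenCV builds); the image file name and hessianThreshold value are assumptions:

    import cv2

    img = cv2.imread("beacon_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

    # hessianThreshold controls how many interest points are kept;
    # 400 is a common starting value, not a value from the slides.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    # keypoints carry position/scale/orientation; descriptors is an
    # N x 64 array (SURF's default descriptor length).
    keypoints, descriptors = surf.detectAndCompute(img, None)
    print(len(keypoints), descriptors.shape)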

Page 8: CmpE-537 Computer Vision


Proposed Approach

• The image labeling process used in the color segmentation-based approach is replaced with region labeling, in which the landmarks and their immediate surroundings are covered
  – The robot is placed at a location where it can see the landmark, and a region is then selected around the landmark to specify where the robot should find the SURF features and associate them with that particular landmark (see the sketch below)
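A minimal sketch of this region-labeling step, reusing the keypoints and descriptors from the extraction sketch above; the helper name, rectangle coordinates, and landmark identifier are hypothetical:

    def label_region(keypoints, descriptors, rect, landmark_id):
        """Keep only the features whose keypoints fall inside the
        selected rectangle and tag them with the landmark's identity."""
        x0, y0, x1, y1 = rect
        labeled = []
        for kp, desc in zip(keypoints, descriptors):
            x, y = kp.pt
            if x0 <= x <= x1 and y0 <= y <= y1:
                labeled.append((desc, landmark_id))
        return labeled

    # Region selected by the operator around a landmark (hypothetical values).
    training_pairs = label_region(keypoints, descriptors,
                                  rect=(120, 40, 220, 180),
                                  landmark_id="yellow_beacon")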

Page 9: CmpE-537 Computer Vision


Proposed Approach

Page 10: CmpE-537 Computer Vision


Proposed Approach

• This process is repeated for all landmarks on the soccer field from different angles and distances

• Supervised learning is used to learn the associations between the feature descriptors and the landmarks (a sketch of one possible learner follows below)

• The distance values for landmarks are calculated using the inter-feature distances
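The slides do not name the learning method, so as an illustrative assumption, a simple nearest-neighbour classifier over the labeled training descriptors could serve:

    import numpy as np

    # Stack the labeled pairs gathered during region labeling.
    train_descs = np.array([d for d, _ in training_pairs])
    train_labels = [lbl for _, lbl in training_pairs]

    def classify(test_desc, train_descs, train_labels):
        """Assign a test descriptor the label of its nearest training
        descriptor (L2 distance in the 64-dimensional SURF space)."""
        dists = np.linalg.norm(train_descs - test_desc, axis=1)
        return train_labels[int(np.argmin(dists))]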

Page 11: CmpE-537 Computer Vision


Experimental Setup

• A real Aibo ERS-7 robot is placed on the field facing a particular landmark at different angles and distances to take pictures

• An offline visualizer tool is implemented to show the SURF points on the image and run tests on various images

Page 12: CmpE-537 Computer Vision


Experimental Setup

Page 13: CmpE-537 Computer Vision


Experimental Setup

• SURF points are shown as little circles

• Details of the descriptors are listed in the text area

• Similar feature points are observed in different images even though the distance and angle values differ (see the matching sketch below)
  – Similarity is defined as the distance between feature points in the 64-dimensional descriptor space
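A minimal sketch of this similarity test, using OpenCV's brute-force matcher with the L2 norm over the 64-dimensional descriptors; the 0.7 ratio test is a common heuristic from Lowe's SIFT work [7], not a parameter from the slides:

    import cv2

    # descriptors_a and descriptors_b come from detectAndCompute on two
    # images of the same landmark taken at different distances/angles.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors_a, descriptors_b, k=2)

    # Keep a match only if it is clearly better than the runner-up.
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    print(len(good), "feature points recur across the two views")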

Page 14: CmpE-537 Computer Vision


Experimental Setup

• The first step is to process the training images and define the landmark regions by clicking on the image

• The next step is to run tests on images to check whether the landmark in the image is recognized and whether the distance and angle estimates are correct (a usage sketch follows below)
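A minimal usage sketch of this test step, reusing the hypothetical surf extractor and classify helper from the earlier sketches and taking a majority vote over the per-descriptor labels:

    from collections import Counter

    import cv2

    test_img = cv2.imread("test_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    test_kp, test_descs = surf.detectAndCompute(test_img, None)

    # Classify each test descriptor, then vote on the landmark identity.
    votes = Counter(classify(d, train_descs, train_labels) for d in test_descs)
    landmark, count = votes.most_common(1)[0]
    print("recognized:", landmark, "with", count, "supporting features")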

Page 15: CmpE-537 Computer Vision


Results

• SURF computation took an average of 56 ms on 354x290 images
  – Aibo robots capture 208x160 images but have a slower processor; hence, SURF computation takes 59 ms on average, which is approximately 17 fps

• Landmark recognition performance was better than that of the distance estimates
  – Due to the cylindrical shape of the landmarks, some feature points may appear closer to or farther from each other depending on the viewing angle, or may be hidden entirely
  – Doing the computations on groups of feature points rather than on individual points may improve the performance

Page 16: CmpE-537 Computer Vision


Conclusion

• A feature-based landmark detection approach is explored

• It runs at a reasonable frame rate

• The main contribution is that this approach provides color (and, to some extent, illumination) independence in the vision and localization processes in the robot soccer domain
  – It has not been tried by any of the RoboCup teams so far

• Trying different SURF parameters and running experiments on physical robots are left as future work

Page 17: CmpE-537 Computer Vision

References

• [1] H. L. Akın et al. "Cerberus 2006 Team Report", 2006.

• [2] Kaplan, K., B. Celik, T. Mericli, C. Mericli, and H. L. Akın. “Practical Extensions to Vision-Based Monte Carlo Localization Methods for Robot Soccer Domain”, In RoboCup International Symposium 2005, Osaka, July 18-19, 2005.

• [3] Peter Stone, Peggy Fidelman, Nate Kohl, Gregory Kuhlmann, Tekin Mericli, Mohan Sridharan, and Shao-en Yu. “The UT Austin Villa 2006 RoboCup Four-Legged Team”. Technical Report UT-AI-TR-06-337, The University of Texas at Austin, Department of Computer Sciences, AI Laboratory, 2006.

• [4] M. J. Quinlan et al. "The 2006 NUbots Team Report", 2007.

• [5] Thomas Roefer et al. "GermanTeam2006", 2006.

• [6] Herbert Bay, Tinne Tuytelaars, and Luc J. Van Gool. "SURF: Speeded Up Robust Features", In ECCV'06, pp. 404-417, 2006.

• [7] D. G. Lowe. "Distinctive Image Features from Scale-Invariant Keypoints", In International Journal of Computer Vision, 60(2), pp. 91-110, 2004.

Page 18: CmpE-537 Computer Vision

References

• [8] M. Ballesta, A. Gil, O. Martínez Mozos, and O. Reinoso. "Local Descriptors for Visual SLAM", In Proc. of the Workshop on Robotics and Mathematics, Coimbra, Portugal, 2007.

• [9] T. D. Barfoot. "Online Visual Motion Estimation using FastSLAM with SIFT Features", In Proc. of the Int. Conf. on Intelligent Robots and Systems (IROS), Edmonton, Alberta, August 2-6, 2005.

• [10] Pantelis Elinas and James J. Little. "Stereo Vision SLAM: Near Real-Time Learning of 3D Point-Landmark and 2D Occupancy-Grid Maps Using Particle Filters", In IROS'07, 2007.

• [11] J. Little, S. Se, and D. G. Lowe. "Vision-Based Mobile Robot Localization and Mapping Using Scale-Invariant Features", In IEEE Int. Conf. on Robotics & Automation, 2001.

• [12] O. Martínez Mozos, A. Gil, M. Ballesta, and O. Reinoso. "Interest Point Detectors for Visual SLAM", In Lecture Notes in Artificial Intelligence, vol. 4788, 2007.

Page 19: CmpE-537 Computer Vision


?