
November 10, 2004

Prof. Christopher Rasmussen, [email protected]

Lab web page: vision.cis.udel.edu

Research in the DV lab

• Tracking, segmentation
• Model-building, mapping, and learning
• Cue combination and selection
• Auto-calibration of sensors
• Current projects:
  – Road following, architectural modeling

Road Following: Background

• Edge-based methods: Fit curves to lane lines or road borders
  – [Taylor et al., 1996; Southall & Taylor, 2001; Apostoloff & Zelinsky, 2003]

• Region-based methods: Segment the image based on a discriminating characteristic such as color or texture
  – [Crisman & Thorpe, 1991; Zhang & Nagel, 1994; Rasmussen, 2002; Apostoloff & Zelinsky, 2003]

(figure from Apostoloff & Zelinsky, 2003)

Problematic Scenes for Standard Approaches

No good contrast or edges, but the organizing feature is the vanishing point, which indicates the road direction

(images: Grand Challenge sample terrain; Antarctic “ice highway”)

Results: Curve Tracking

Integrate vanishing point directions to get points along curves parallel to (but not necessarily on) the road
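A minimal sketch of the integration step, assuming per-frame road-direction estimates from the vanishing point are available as heading angles and that the vehicle advances a fixed step between frames; the angles and step length below are illustrative values, not the lab's data.

```python
# Minimal sketch (not the lab's code): integrating per-frame vanishing-point
# directions into a 2-D curve that runs parallel to the road.
import math

def integrate_vp_directions(headings_rad, step=1.0, start=(0.0, 0.0)):
    """Chain steps along successive vanishing-point headings.

    headings_rad : per-frame road direction estimates (radians, world frame)
    step         : assumed distance travelled between frames
    Returns the list of (x, y) points traced out, starting at `start`.
    """
    x, y = start
    points = [(x, y)]
    for theta in headings_rad:
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        points.append((x, y))
    return points

if __name__ == "__main__":
    # A gentle left curve: headings drift from 0 to ~0.3 rad over 30 frames.
    headings = [0.01 * k for k in range(30)]
    for pt in integrate_vp_directions(headings, step=2.0)[::10]:
        print("%.2f %.2f" % pt)
```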

Panoramic camera v2.0a (~1.5 inches)

Correspondence-based Mosaicing

• A minimum of 4 corresponding points in two images is sufficient to define the transformation (a homography) warping one into the other (see the sketch below)

• Can be done manually or automatically
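As a concrete illustration of the four-point fit (the standard direct linear transform, not necessarily the implementation behind the slide), the sketch below estimates the homography from point correspondences; the example points are made up.

```python
# Illustrative sketch of correspondence-based homography estimation (DLT).
import numpy as np

def homography_from_points(src, dst):
    """Estimate H such that dst ~ H @ src (homogeneous), from >= 4 pairs.

    src, dst : (N, 2) arrays of corresponding pixel coordinates, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

if __name__ == "__main__":
    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    dst = np.array([[2, 1], [4, 1], [4, 3], [2, 3]], dtype=float)  # scale + shift
    print(homography_from_points(src, dst).round(3))
```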

Correspondence-based Mosaicing

Translation only

Road Shape Estimation (3 cameras)

• Road edge tracking
  – Estimate quadratic curvature via a Kalman filter with Sobel edge measurements (a sketch follows below)
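A minimal sketch of this kind of tracker, assuming the state is the quadratic's coefficients and the measurements are lateral edge positions at a few fixed look-ahead rows; the noise levels and the simulated edge detector are placeholders, not the values used on the vehicle.

```python
# Kalman-filter sketch of the quadratic road-edge model: state = [a, b, c]
# with lateral offset x(z) = a*z^2 + b*z + c at look-ahead distance z.
import numpy as np

class QuadraticEdgeTracker:
    def __init__(self, lookahead_rows):
        self.z = np.asarray(lookahead_rows, dtype=float)   # look-ahead distances
        self.x = np.zeros(3)                               # state: [a, b, c]
        self.P = np.eye(3) * 10.0                          # state covariance
        self.Q = np.eye(3) * 1e-3                          # process noise
        self.R = np.eye(len(self.z)) * 4.0                 # measurement noise (px^2)
        # Each measurement row maps [a, b, c] -> a*z^2 + b*z + c.
        self.H = np.stack([self.z ** 2, self.z, np.ones_like(self.z)], axis=1)

    def step(self, edge_positions):
        # Predict (constant-shape model: state unchanged, uncertainty grows).
        self.P = self.P + self.Q
        # Update with the lateral edge positions measured at each row
        # (in the real system these would come from Sobel responses).
        y = np.asarray(edge_positions, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x

if __name__ == "__main__":
    rows = [5.0, 10.0, 15.0, 20.0]
    tracker = QuadraticEdgeTracker(rows)
    true = lambda z: 0.02 * z ** 2 - 0.5 * z + 3.0
    for _ in range(50):
        meas = [true(z) + np.random.randn() * 2.0 for z in rows]
        est = tracker.step(meas)
    print("estimated [a, b, c]:", est.round(3))
```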

Motion-based Mosaicing

• It’s possible to make mosaics from cameras with non-overlapping fields of view, provided we have sequences from them (Irani et al., 2001)
  – Overlapping pixels are wasted pixels
• We’re working on approaches for n > 2 cameras

Motivation: DARPA Grand Challenge

• Organized by DARPA (the U.S. Defense Advanced Research Projects Agency)

• A robot road race through the desert from Barstow, CA to Las Vegas, NV on March 13, 2004

• Prize for the winning team: $1 million (nobody won)

• Running again next October with a $2 million prize

Caltech’s 2004 DGC entry “Bob”

Problem: How to Use Roads as Cues?

Bob’s track relative to course corridors

(No road following)

We’re working on integrating camera views from the vehicle with aerial photos

Tracing Roads in Aerial Photos

Structure-based Obstacle Avoidance with a LADAR

Merging Structure into Local Map

• Integrate raw depth measurements from several successive frames using vehicle inertial estimates (a sketch follows below)
• Combine with camera information
• We’re working on calibration techniques

courtesy of A. Zelinsky
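A rough sketch of the merging step under simplifying assumptions: a 2-D occupancy grid, planar (x, y, heading) poses from the inertial estimate, and single-line range scans. The grid size, resolution, and simulated wall are illustrative only.

```python
# Sketch of merging successive range scans into a local map using pose estimates.
import math
import numpy as np

class LocalMap:
    def __init__(self, size_m=40.0, resolution_m=0.5):
        n = int(size_m / resolution_m)
        self.hits = np.zeros((n, n), dtype=np.int32)   # hit counts per cell
        self.res = resolution_m
        self.origin = size_m / 2.0                     # map centered on start pose

    def add_scan(self, pose, ranges, angles):
        """Transform one scan into the map frame and accumulate hits.

        pose   : (x, y, heading) of the vehicle when the scan was taken
        ranges : range readings (meters); angles : beam angles (radians,
                 in the sensor frame)
        """
        px, py, heading = pose
        for r, a in zip(ranges, angles):
            # Beam endpoint in the world frame.
            wx = px + r * math.cos(heading + a)
            wy = py + r * math.sin(heading + a)
            i = int((wx + self.origin) / self.res)
            j = int((wy + self.origin) / self.res)
            if 0 <= i < self.hits.shape[0] and 0 <= j < self.hits.shape[1]:
                self.hits[i, j] += 1

if __name__ == "__main__":
    m = LocalMap()
    angles = np.linspace(-math.pi / 2, math.pi / 2, 180)
    # Two successive frames of a wall 10 m ahead, with the vehicle moving 1 m.
    for pose in [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]:
        ranges = (10.0 - pose[0]) / np.maximum(np.cos(angles), 1e-3)
        m.add_scan(pose, ranges, angles)
    print("occupied cells:", int((m.hits > 0).sum()))
```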

Laser-Camera Registration

Range image (180 x 32): 90° horiz. x 15° vert.

Video frame (360 x 240)

Registered laser + camera
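Once the rigid laser-to-camera transform is known, registration amounts to projecting range returns into the video frame. The sketch below assumes a pinhole camera model; the intrinsics, extrinsics, and point coordinates are stand-in values, not the calibration from the slide.

```python
# Sketch of the registration step: projecting laser points into the video frame.
import numpy as np

def project_laser_to_image(points_laser, R, t, K, image_size=(360, 240)):
    """Map 3-D laser points (N, 3) into pixel coordinates.

    R, t : rotation (3x3) and translation (3,) taking laser frame -> camera frame
    K    : 3x3 camera intrinsic matrix
    Returns (M, 2) pixel coordinates of the points that land inside the image.
    """
    pts_cam = points_laser @ R.T + t            # laser frame -> camera frame
    in_front = pts_cam[:, 2] > 0                # keep points ahead of the camera
    pts_cam = pts_cam[in_front]
    uv = pts_cam @ K.T                          # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]
    w, h = image_size
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside]

if __name__ == "__main__":
    K = np.array([[300.0, 0.0, 180.0], [0.0, 300.0, 120.0], [0.0, 0.0, 1.0]])
    R = np.eye(3)                               # assume axes already aligned
    t = np.array([0.0, -0.3, 0.0])              # e.g. camera 30 cm above the laser
    pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 8.0], [-2.0, 0.0, 4.0]])
    print(project_laser_to_image(pts, R, t, K).round(1))
```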

3-D Building Models from Images

courtesy of F. van den Heuvel

Show VRML model

Robot Platform for Mapping Project

PTZ camera

Wireless ethernet

GPS antenna

Onboard computer

Analog video capture card

Not shown: electronic compass, tilt sensor

View Planning

• Where to take the photos from?
• Hard constraints: Need overlapping fields of view for stereo correspondences
• Soft constraints: Balance accuracy of the estimated 3-D model and quality of appearance (texture maps) against acquisition and computation time (a toy scoring sketch follows below)
  – Based on camera field of view, height of building, placement of occluding objects like trees and other buildings
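One way to picture the trade-off is a toy viewpoint-scoring function: a hard overlap check followed by a weighted soft cost. The thresholds, weights, and the accuracy/texture proxies below are invented for illustration and ignore occluders.

```python
# Toy illustration of weighing hard/soft constraints when scoring a viewpoint.
import math

def score_viewpoint(cand, neighbor, building_dist, fov_deg=45.0,
                    max_baseline_frac=0.7, w_accuracy=1.0, w_texture=0.5):
    """Return None if hard constraints fail, else a soft-constraint cost.

    cand, neighbor : (x, y) camera positions
    building_dist  : distance from cand to the facade being imaged (m)
    """
    baseline = math.dist(cand, neighbor)
    # Hard constraint: keep enough overlap for stereo correspondences
    # (baseline small relative to the facade distance and field of view).
    if baseline > max_baseline_frac * building_dist * math.tan(math.radians(fov_deg / 2)):
        return None
    # Soft constraints: wide baselines help depth accuracy, short distances
    # help texture resolution; trade them off with simple weights.
    accuracy_term = 1.0 / max(baseline, 0.1)        # penalize tiny baselines
    texture_term = building_dist                    # penalize distant views
    return w_accuracy * accuracy_term + w_texture * texture_term

if __name__ == "__main__":
    for b in [1.0, 3.0, 6.0]:
        print(b, score_viewpoint((0.0, 0.0), (b, 0.0), building_dist=15.0))
```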

Path Planning

• How to get a robot from point A to point B?
  – Criteria: distance, difficulty, uncertainty (see the sketch below)
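A small sketch of grid-based planning over those criteria, using A* with a per-cell traversal cost standing in for difficulty (uncertainty could be folded in the same way); the grid and costs are illustrative, not the campus map.

```python
# A* sketch for the point-A-to-point-B problem on a weighted grid.
import heapq

def astar(grid_cost, start, goal):
    """grid_cost[r][c] is the cost of entering a cell (None = blocked)."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid_cost[r][c] is not None:
                g2 = g + grid_cost[r][c]
                if g2 < best.get((r, c), float("inf")):
                    best[(r, c)] = g2
                    heapq.heappush(open_set, (g2 + h((r, c)), g2, (r, c), path + [(r, c)]))
    return None

if __name__ == "__main__":
    X = None   # blocked cell (e.g. a building)
    grid = [[1, 1, 1, 1],
            [1, X, 5, 1],   # 5 = traversable but difficult (e.g. rough ground)
            [1, X, 1, 1],
            [1, 1, 1, 1]]
    print(astar(grid, (0, 0), (3, 3)))
```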

Path Planning

GPS-referenced CAD map of campus buildings is available

Aerial photos contain information about paths and vegetation as well as buildings

Obstacle Avoidance

How to detect trash cans, people, walls, bushes, trees, etc., and smoothly combine detours around them with the global path planned from the map and executed with GPS?

Segmentation-Based Path Following

Segmentation of Road Images Using Different Cues

(segmentation results shown per cue and per combination: Texture, Color, Laser, C+T+L)
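A simple way to read the cue-combination panels is as a per-pixel fusion of road likelihoods from each cue. The sketch below uses a weighted average with random placeholder maps; in the real system each map would come from its own classifier, and the weighting scheme here is an assumption.

```python
# Sketch of combining per-pixel cue scores (color, texture, laser) into one mask.
import numpy as np

def combine_cues(cue_maps, weights=None, threshold=0.5):
    """Fuse per-pixel road likelihoods in [0, 1] from several cues.

    cue_maps : dict of name -> (H, W) likelihood map
    weights  : dict of name -> weight (defaults to equal weighting)
    Returns a boolean (H, W) road mask from the weighted average.
    """
    names = list(cue_maps)
    if weights is None:
        weights = {n: 1.0 for n in names}
    total = sum(weights[n] for n in names)
    fused = sum(weights[n] * cue_maps[n] for n in names) / total
    return fused > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 48, 64
    cues = {"color": rng.random((h, w)),
            "texture": rng.random((h, w)),
            "laser": rng.random((h, w))}
    mask = combine_cues(cues, weights={"color": 1.0, "texture": 1.0, "laser": 2.0})
    print("road pixels:", int(mask.sum()), "of", h * w)
```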