



Surveillance with Visual Tagging and Camera Placement
J. Zhao and S.-C. Cheung — Center for Visualization and Virtual Environment, University of Kentucky

INTRODUCTION

Summary and Future work

Visual Tagging
- To identify and locate common objects across disparate camera views
- Based on identifying “semantically rich” visual features such as faces, gaits or artificial markers

The “Camera Placement” Question: Given a surveillance environment, how many cameras are needed, and how should they be placed, to achieve the best visual tagging performance?

Contributions:
- A general statistical framework for calculating the visual tagging performance of a camera network
- An analytical solution for a single camera
- A Monte-Carlo based solution for any placement with an arbitrary number of cameras
- An iterative integer-programming based algorithm to compute the “optimal” camera placement
- An application in a “privacy-protected” camera network

I. Statistical Visibility Model

II. Visibility from a single camera

Visibility Model

It is unnecessary for the tag to be visible to all cameras. All it takes is TWO cameras! Two cases:

1. Uniquely Identified Tags (e.g. faces)
   - need homographies between camera pairs
   - get tag location by intersecting epipolar lines

2. Ambiguous Tags (e.g. colored tags)
   - need full calibration
   - get tag location by intersecting light rays (see the triangulation sketch below)
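For the second case, the tag's 3D location can be recovered by intersecting the rays from two calibrated cameras. Below is a minimal linear-triangulation (DLT) sketch in Python; the projection matrices and image points are synthetic examples, not data from the poster.

```python
# Minimal sketch: recover a tag's 3D location from two calibrated views by
# intersecting the back-projected rays (linear DLT triangulation).
# The projection matrices and pixel coordinates below are synthetic examples.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras observing the point (1, 2, 10).
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])   # translated camera

X_true = np.array([1., 2., 10., 1.])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))    # ~ [1. 2. 10.]
```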

III. Visibility for arbitrary numbers of cameras

Optimal Camera Placement

II. Deciding the grid density

Experimental results

Summary:
- A generic metric model for camera placement for the “Visual Tagging” problem
- Optimal placement by adaptive grid-based BP (binary programming)
- Application in privacy-protected surveillance

Future work:
- Occlusion from multiple objects
- Ambiguity caused by similar tags

The binary visibility function indicates whether the tag P can be successfully detected from the camera C. For proper detection, we need the projected tag to be at least T pixels long (a sketch of this indicator follows).
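One way to write this indicator (a minimal sketch consistent with the text above; the exact expression, including field-of-view and environment constraints, is given in the paper):

$$
I(P, C) =
\begin{cases}
1, & \text{if the projected tag length } l(P, C) \ge T \text{ and the tag lies in the field of view of } C,\\
0, & \text{otherwise.}
\end{cases}
$$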

Problem: a solution may not exist for a dense tag grid.

Adaptive Algorithm (sketched below):
1. Start from a sparse grid lattice.
2. Increase the density of gridC & gridP until a predefined average target visibility is reached, or the density of gridC exceeds a limit.
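A minimal Python sketch of this refinement loop. The helpers `make_camera_grid`, `make_tag_grid`, `solve_placement`, and `average_visibility`, as well as the thresholds and refinement factor, are illustrative placeholders, not the poster's actual implementation.

```python
# Sketch of the adaptive grid refinement: start coarse, refine the camera and
# tag grids until the placement reaches a target average visibility or the
# camera grid becomes too dense. All helpers and thresholds are placeholders.
def adaptive_placement(env, target_visibility=0.95, max_cam_density=64, refine=2):
    cam_density, tag_density = 4, 4                   # sparse initial lattices
    best = None
    while cam_density <= max_cam_density:
        grid_c = env.make_camera_grid(cam_density)    # hypothetical helpers
        grid_p = env.make_tag_grid(tag_density)
        placement = solve_placement(grid_c, grid_p)   # e.g. the binary program sketched later
        if placement is not None:                     # may be infeasible on a dense tag grid
            best = placement
            if average_visibility(placement, grid_p) >= target_visibility:
                break
        cam_density *= refine
        tag_density *= refine
    return best
```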

Fixed Parameters (easily measured):
- room topology
- cameras’ intrinsic parameters
- dimensions (lengths) of a tag
- number of tags

Design Parameters (we can control):
- number of cameras
- position of each camera
- orientation of each camera

Random Parameters (little or no control):
- position (x, y) of a tag
- orientation of a tag
- an a priori statistical model is assumed for these (sampled in the Monte-Carlo sketch below)
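The Monte-Carlo solution mentioned in the contributions can be sketched directly from this model: sample tag poses from the assumed prior and count how often a sampled tag is visible to at least two cameras. The uniform priors, room size, and `visible` test below are illustrative assumptions.

```python
# Monte-Carlo estimate of the mean visibility of a given camera placement:
# sample tag position/orientation from an assumed prior and count the fraction
# of samples visible to at least two cameras. Priors and the visibility test
# stand in for the statistical model described on the poster.
import numpy as np

def mean_visibility(cameras, visible, room=(10.0, 8.0), n_samples=10_000, seed=0):
    """cameras: list of camera parameters; visible(pos, angle, cam) -> bool."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        pos = rng.uniform((0.0, 0.0), room)      # uniform prior over the room
        angle = rng.uniform(0.0, 2 * np.pi)      # uniform prior over orientation
        seen = sum(visible(pos, angle, cam) for cam in cameras)
        hits += (seen >= 2)                      # visual tagging needs two views
    return hits / n_samples
```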

Simple 2D geometry (worked out in the paper) shows that the length l of the image of the tag is given by a closed-form expression in the tag and camera poses; a rough pinhole approximation is sketched below.
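As a rough illustration only, under a standard pinhole, small-tag approximation (an assumption, not the exact expression derived in the paper): a tag of length 2w at distance d from the camera, tilted by an angle φ relative to the image plane, projects to approximately

$$
l \approx \frac{2\,w\,f\,\cos\varphi}{d}
$$

pixels, with f the focal length in pixels; the detection condition l ≥ T then limits both the distance and the obliqueness at which a tag remains usable.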

Goal: find the optimal placement maximizing the visibility metric.

Very challenging because:
- the problem is nonlinear
- there is no analytic solution

Proposed approximate solution:
- discretize the domain into grid points
- progressively refine the grid density

I. Solving the discrete problem

Divide the environment into a lattice:
- gridP: N_P grid points for the tag
- gridC: N_C grid points for cameras

Visibility = tag visible to at least two cameras, expressed in terms of max_k I(P_i, C_k), where I(P_i, C_k) is the binary visibility of tag grid point P_i from camera grid point C_k (full expressions in the paper).

Visibility map (high → low)

Objective function: defined over binary variables b_i (full expression in the paper).

Constraints:
- b_i indicates whether to put a camera on the i-th grid point
- each tag must be visible to at least 2 cameras
- each physical position has at most one camera

Standard binary programming, solved by lp_solve (a sketch of one such formulation follows).
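A minimal sketch of a binary program consistent with the constraints above, using PuLP's bundled CBC solver in place of lp_solve. The visibility matrix is random placeholder data, and the objective (minimize the number of cameras) is an illustrative choice; the exact objective is given in the paper.

```python
# Binary program consistent with the constraints above, solved with PuLP/CBC
# instead of lp_solve. Visibility data and the objective are illustrative.
import numpy as np
import pulp

N_P, N_C = 50, 30                                   # tag / camera grid sizes (example values)
rng = np.random.default_rng(0)
vis = rng.integers(0, 2, size=(N_P, N_C))           # vis[i, k] = I(P_i, C_k), placeholder

prob = pulp.LpProblem("camera_placement", pulp.LpMinimize)
b = [pulp.LpVariable(f"b_{k}", cat="Binary") for k in range(N_C)]  # camera at grid point k?

prob += pulp.lpSum(b)                               # objective: as few cameras as possible

for i in range(N_P):                                # every tag point seen by >= 2 cameras
    prob += pulp.lpSum(int(vis[i, k]) * b[k] for k in range(N_C)) >= 2

# "At most one camera per physical position": if several grid points share a
# physical location (same spot, different orientations), add
#     prob += pulp.lpSum(b[k] for k in group) <= 1
# for each such group (grouping omitted in this sketch).

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[prob.status])
print("cameras:", [k for k in range(N_C) if b[k].value() == 1])
```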

III. Results

The following figures show the results after 1, 3, and 5 iterations:

Legend: camera grid, tag grid, computed camera position & pose.

Corresponding visibility map and average visibility:

Simulation of Optimal Camera Placement:
- Twelve “optimal” camera views (iteration 5) of a randomly moving humanoid with a tag

Application in Privacy Protected Surveillance:
- Even though the tag is not visible in Cam3, its location is determined using epipolar geometry.

Contact: [email protected], [email protected]
Visit: http://www.vis.uky.edu/mialab