Scale Invariant Feature Transform

Post on 17-Dec-2014


DESCRIPTION

Scale Invariant Feature Transform Algorithm

TRANSCRIPT


Scale Invariant Feature Transform

Team Members: Chinmay Samant

Rajdeep Mandrekar

Shanker Naik

Laxman Pednekar

Guide: Prof. Rachael Dhanraj

Sub-Image Matching

• Sub-image matching is the main part of our project.

• The chain-code algorithm was rejected.

• The Scale Invariant Feature Transform (SIFT) algorithm is used instead.


Scale Invariant Feature Transform Algorithm

• Creating scale-space and Difference of Gaussian pyramid
• Extrema detection
• Noise elimination
• Orientation assignment
• Descriptor computation
• Keypoint matching



Creating Scale-space and Difference of Gaussian pyramid

• To build the scale space, we take the image and generate progressively blurred versions of it; the original image is then resized to half its size and the blurring is repeated.

• Images of the same size but different scale (amount of blur) together form an octave; each halving of the image starts a new octave.


How is blurring performed?

• Mathematically, blurring is defined as the convolution of a Gaussian operator with the image:

L(x, y, σ) = G(x, y, σ) * I(x, y)

• where G is the Gaussian blur operator, G(x, y, σ) = (1 / 2πσ²) e^(−(x² + y²) / 2σ²), I is the input image, and L is the blurred image.


Difference of Gaussian (DoG)

• Adjacent blurred images within an octave are subtracted to obtain the DoG images: D(x, y, σ) = L(x, y, kσ) − L(x, y, σ).
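Assuming the blurred images of one octave are available as a list (the names here are illustrative), the DoG pyramid is just pairwise subtraction:

```python
import numpy as np

def difference_of_gaussian(octave):
    """Subtract each blurred image from the next, more-blurred one:
    D_i = L_{i+1} - L_i, giving len(octave) - 1 DoG images."""
    return [octave[i + 1] - octave[i] for i in range(len(octave) - 1)]
```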


Extrema detection


In the DoG images, the current pixel X is compared with its 26 neighbours: 8 in its own scale and 9 in each of the adjacent scales above and below. X is marked as a keypoint if it is greater than, or less than, all 26 of them.

The first and last scales of each octave are not checked for keypoints, as they do not have enough neighbours to compare against.
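The 26-neighbour comparison can be sketched as follows (a minimal check; the caller is responsible for skipping the first and last DoG scales, as noted above):

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """Return True if pixel (y, x) at DoG scale s is strictly greater than or
    strictly less than all 26 neighbours: 8 in its own scale and 9 in each
    adjacent scale. `dog` is a list of same-size 2-D arrays (one per scale)."""
    value = dog[s][y, x]
    # 3x3x3 cube of the pixel and its neighbours across three scales
    cube = np.stack([dog[s - 1][y-1:y+2, x-1:x+2],
                     dog[s    ][y-1:y+2, x-1:x+2],
                     dog[s + 1][y-1:y+2, x-1:x+2]])
    if value > 0:
        return value == cube.max() and (cube == value).sum() == 1
    return value == cube.min() and (cube == value).sum() == 1
```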

Noise Elimination

1. Removing low-contrast features: if the magnitude of the DoG intensity at the current pixel is less than a certain threshold, the keypoint is rejected.

2. Removing edges: for poorly defined peaks in the DoG function, the principal curvature across the edge is much larger than the principal curvature along it. To detect such edges, the Hessian matrix is used.

Tr(H) = Dxx + Dyy

Det(H) = Dxx Dyy − (Dxy)²

R = Tr(H)² / Det(H)

If the value of R for a candidate keypoint is greater than a threshold, the keypoint lies along an edge, is poorly localized, and is therefore rejected.
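The edge test above can be sketched as follows (the slides only say "greater than a threshold"; the curvature-ratio value r = 10 used here is a common choice and is an assumption, not from the slides):

```python
def passes_edge_test(dxx, dyy, dxy, r=10.0):
    """Accept a keypoint only if R = Tr(H)^2 / Det(H) stays below (r+1)^2 / r.
    dxx, dyy, dxy are second derivatives of the DoG at the keypoint;
    r = 10 is an assumed curvature-ratio threshold."""
    tr = dxx + dyy
    det = dxx * dyy - dxy**2
    if det <= 0:  # principal curvatures of opposite sign: not a well-formed peak
        return False
    return tr**2 / det < (r + 1)**2 / r
```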


Orientation assignment

• The gradient magnitude, m(x, y), and orientation, θ(x, y), are precomputed using pixel differences:

m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]

θ(x, y) = tan⁻¹[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]
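The pixel-difference formulas translate directly into code (a minimal sketch for a single interior pixel of a blurred image L):

```python
import numpy as np

def gradient_magnitude_orientation(L, y, x):
    """Pixel-difference gradient at interior pixel (y, x) of blurred image L:
    m = sqrt(dx^2 + dy^2), theta = atan2(dy, dx)."""
    dx = L[y, x + 1] - L[y, x - 1]
    dy = L[y + 1, x] - L[y - 1, x]
    m = np.hypot(dx, dy)         # gradient magnitude
    theta = np.arctan2(dy, dx)   # gradient orientation in radians
    return m, theta
```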



Descriptor Computation
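The slide itself carries no transcript detail, but the standard SIFT descriptor (a 4×4 grid of 8-bin gradient-orientation histograms over a 16×16 patch, giving 128 values) can be sketched as follows. This is a simplified illustration: it skips the Gaussian weighting and the rotation to the keypoint orientation that the full algorithm applies.

```python
import numpy as np

def descriptor(L, y, x):
    """Simplified 128-element SIFT-style descriptor around (y, x):
    16x16 patch -> 4x4 cells -> 8-bin gradient-orientation histograms."""
    patch = L[y - 8:y + 8, x - 8:x + 8]
    # pixel-difference gradients on the 14x14 interior of the patch
    dx = patch[1:-1, 2:] - patch[1:-1, :-2]
    dy = patch[2:, 1:-1] - patch[:-2, 1:-1]
    mag = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx) % (2 * np.pi)           # orientation in [0, 2*pi)
    bins = (ang / (2 * np.pi) * 8).astype(int) % 8   # 8 orientation bins
    hist = np.zeros((4, 4, 8))
    for i in range(14):
        for j in range(14):
            hist[i // 4, j // 4, bins[i, j]] += mag[i, j]
    vec = hist.ravel()                               # 4 * 4 * 8 = 128 values
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec           # normalize for contrast invariance
```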


Keypoints matching

• Each keypoint in the original image is compared with every keypoint in the transformed image using their descriptors.

• A match is declared when the two descriptors are the closest pair, i.e. the distance between them is the smallest among all candidates.
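The brute-force matching described above can be sketched as a nearest-neighbour search over descriptor vectors (Euclidean distance is assumed as the closeness measure; the slides do not name one):

```python
import numpy as np

def match_keypoints(desc_a, desc_b):
    """For each descriptor in desc_a (rows of a 2-D array), find the closest
    descriptor in desc_b by Euclidean distance; return (index_a, index_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        matches.append((i, int(np.argmin(dists))))   # keep the nearest one
    return matches
```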


Thank You

