Page 1: Digital Photogrammetry 7

Digital Image Processing Methods Used in Photogrammetry

Page 2: Digital Photogrammetry 7

Fundamentals

Photogrammetric image processing methods are developed and applied in the fields of image acquisition, pre-processing and segmentation.

Methods:
Image measuring
Line following
Image matching
Object recognition

Page 3: Digital Photogrammetry 7

Handling image data

Image pyramids: each level is Gaussian smoothed and reduced in size; the complete pyramid adds roughly 30% to the stored data volume.
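
A minimal sketch of a Gaussian image pyramid, assuming Python with OpenCV (cv2); the file name image.png is a placeholder.

    import cv2

    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

    # Each pyramid level is Gaussian smoothed and subsampled by a factor of 2.
    pyramid = [img]
    for _ in range(4):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    # The extra levels add roughly 1/4 + 1/16 + ... (about one third) to the data volume.
    overhead = sum(level.size for level in pyramid) / img.size - 1
    print(f"pyramid overhead: {overhead:.0%}")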

Page 4: Digital Photogrammetry 7

Compression

Lossless compression

Lossy compression

Page 5: Digital Photogrammetry 7

Image pre-processing: Histogram

Provides the frequency distribution of the pixel values in the image

The most important are:

Relative frequency

Min, max

Contrast

Mean, variance

Entropy, symmetry
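
The quantities above can be computed directly from the grey-value histogram; a short sketch in Python with numpy follows (the random image is only a placeholder).

    import numpy as np

    img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # placeholder 8-bit image

    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    rel_freq = hist / hist.sum()                 # relative frequency of each grey value
    g_min, g_max = int(img.min()), int(img.max())
    contrast = g_max - g_min                     # simple contrast measure
    mean, variance = img.mean(), img.var()
    p = rel_freq[rel_freq > 0]
    entropy = -(p * np.log2(p)).sum()            # entropy in bits per pixel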

Page 6: Digital Photogrammetry 7

Contrast enhancement

Manipulating the brightness and contrast of an image changes the distribution of pixel values; the effect can be seen, for example, in the grey-value profile across an image edge.

Linear contrast stretching
Histogram equalization
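
An illustrative sketch of both techniques, assuming Python with numpy and OpenCV; image.png is a placeholder.

    import cv2
    import numpy as np

    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

    # Linear contrast stretching: map [min, max] linearly onto [0, 255].
    g_min, g_max = float(img.min()), float(img.max())
    stretched = ((img - g_min) / (g_max - g_min) * 255).astype(np.uint8)

    # Histogram equalization: redistribute grey values so the cumulative
    # histogram becomes nearly linear.
    equalized = cv2.equalizeHist(img)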

Page 7: Digital Photogrammetry 7

Other simple operations

Thresholding

Image combination (arithmetic, logical, bitwise)
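
A minimal sketch of thresholding and a bitwise image combination, assuming OpenCV; the file name and the threshold value of 128 are placeholders.

    import cv2

    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

    # Global threshold: pixels above 128 become 255, all others 0.
    _, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

    # Bitwise combination: keep only the pixels selected by the binary mask.
    masked = cv2.bitwise_and(img, img, mask=binary)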

Page 8: Digital Photogrammetry 7

Filter operations

Smoothing filters (low-pass filters) are mainly used for pixel noise suppression. The Gaussian filter possesses optimal smoothing properties.
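
A short sketch of Gaussian low-pass smoothing with OpenCV; the 5x5 kernel and sigma = 1.0 are illustrative values.

    import cv2

    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

    # Gaussian low-pass filter: suppresses pixel noise while preserving larger structures.
    smoothed = cv2.GaussianBlur(img, (5, 5), 1.0)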

Page 9: Digital Photogrammetry 7

Smoothing filter

Page 10: Digital Photogrammetry 7

Smoothing filter

Page 11: Digital Photogrammetry 7

Morphological operators

Application of a non-linear filter for the enhancement or suppression of black and white regions with (known) properties.

Two fundamental functions based on Boolean operators are defined:

Erosion, which leads to the shrinking of regions: the value 1 is set only if all pixels in the filter region (e.g. 3x3 elements) match the structuring element; otherwise 0.

Dilation, which leads to the extension of connected regions: the value 1 is set if at least one matching pixel is present.

Page 12: Digital Photogrammetry 7

Morphological operators

Erosion and dilation can be applied in sequence:

Opening: erosion followed by dilation, removes small objects.

Closing: dilation followed by erosion, closes gaps.
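
A sketch of the four operations on a binary image with a 3x3 structuring element, assuming OpenCV; binary.png is a placeholder.

    import cv2
    import numpy as np

    binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
    kernel = np.ones((3, 3), np.uint8)          # 3x3 structuring element

    eroded  = cv2.erode(binary, kernel)                          # shrinks white regions
    dilated = cv2.dilate(binary, kernel)                         # grows white regions
    opened  = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # erosion then dilation: removes small objects
    closed  = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # dilation then erosion: closes gaps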

Page 13: Digital Photogrammetry 7

Edge extraction

Edges are primary image structures used by the human visual system for object recognition.

Characteristics:

A significant change in adjacent pixel values perpendicular to the edge direction.

Edges have direction and magnitude.

Edges are formed by small image structures (high frequencies in the frequency domain).

Page 14: Digital Photogrammetry 7

Edge extraction

First order differential filter

Sobel operator
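
A minimal sketch of the Sobel operator with OpenCV: the x and y derivatives give edge magnitude and direction.

    import cv2
    import numpy as np

    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)   # first derivative in x
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)   # first derivative in y

    magnitude = np.hypot(gx, gy)        # edge magnitude
    direction = np.arctan2(gy, gx)      # edge direction in radians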

Page 15: Digital Photogrammetry 7

Edge extraction

Second order differential filter

Laplacian filter

Page 16: Digital Photogrammetry 7

Edge extraction

Laplacian of Gaussian filter

The Laplacian filter is sensitive to noise; better results are obtained if the image is smoothed in advance with a Gaussian filter.
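
A small sketch of the Laplacian-of-Gaussian filter, assuming Python with scipy; sigma = 2.0 and the random image are placeholders.

    import numpy as np
    from scipy import ndimage

    img = np.random.rand(256, 256)   # placeholder image

    # Gaussian smoothing followed by the Laplacian, applied in a single operation.
    log = ndimage.gaussian_laplace(img, sigma=2.0)

    # Zero crossings of the LoG response indicate candidate edge positions.
    zero_cross = np.sign(log[:, :-1]) != np.sign(log[:, 1:])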

Page 17: Digital Photogrammetry 7

Hough Transform

The Hough transform is based on the condition that all points on an analytical curve can be defined by one common set of parameters.

Based on that fact, the straight line y = mx + b can be represented as a point (r, φ) in parameter space, using the normal form r = x·cos φ + y·sin φ.
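
An illustrative sketch of straight-line detection with the Hough transform in OpenCV; the Canny thresholds and the vote threshold of 100 are placeholder values.

    import cv2
    import numpy as np

    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)

    # Accumulator resolution: 1 pixel in r, 1 degree in phi; at least 100 votes per line.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
    if lines is not None:
        for r, phi in lines[:, 0]:
            print(f"r = {r:.1f} px, phi = {np.degrees(phi):.1f} deg")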

Page 18: Digital Photogrammetry 7

Sample

Page 19: Digital Photogrammetry 7

Enhanced edge operators

Simple edge detection methods often do not deliver satisfactory results. An edge filter should have the following characteristics: robustness, simple parametrization (without interactive input), high sub-pixel accuracy, and minimum computational effort.

Canny and Deriche operators

Edge extraction in image pyramids (Köthe 1997)

Least-squares edge operators (El Hakim 1996)
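
A minimal sketch of the Canny operator named above, assuming OpenCV; the hysteresis thresholds 50 and 150 are illustrative.

    import cv2

    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

    # Gaussian smoothing, gradient computation, non-maximum suppression and
    # hysteresis thresholding are all performed inside cv2.Canny.
    edges = cv2.Canny(img, 50, 150)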

Page 20: Digital Photogrammetry 7

Geometric Image transformation

The term rectification denotes a general modification of pixel coordinates, e.g. for:

Translation and rotation
Change of scale or size
Correction of distortion effects
Projective rectification / orthophoto production
Texture mapping

It is generally performed in two stages:
Transformation of pixel coordinates
Calculation of (output) pixel values

Page 21: Digital Photogrammetry 7

Geometric Image transformation

Stereo image rectification

This process is useful for stereo vision, because the 2-D stereo correspondence problem is reduced to a 1-D problem.

Steps: Find homologous points

Remove outliers

Compute fundamental matrix

Rectify images

Fusiello, Andrea (2000-03-17). "Epipolar Rectification".
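
A sketch of the steps listed above for an uncalibrated image pair, assuming OpenCV; the feature detector (ORB) and file names are placeholder choices, not prescribed by the slides.

    import cv2
    import numpy as np

    left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # 1. Find homologous points (ORB features, brute-force matching).
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(left, None)
    kp2, des2 = orb.detectAndCompute(right, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2./3. Compute the fundamental matrix; RANSAC removes the outliers.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]

    # 4. Rectify: the homographies map epipolar lines onto image rows,
    #    so stereo correspondence becomes a 1-D search along each row.
    h, w = left.shape
    _, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    left_rect  = cv2.warpPerspective(left,  H1, (w, h))
    right_rect = cv2.warpPerspective(right, H2, (w, h))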

Page 22: Digital Photogrammetry 7

Geometric Image transformation

Geometric Rectification

Raw remotely sensed data gathered by satellite or aircraft are representations of the irregular surface of the Earth. Remotely sensed images are distorted by both the curvature of the Earth and the sensor being used. The process of shifting pixel locations to remove distortion is known as rectification or georectification.

Spatial interpolation

Intensity interpolation

The geometric relationship between the input pixel coordinates (column and row; referred to as x’, y’) and the associated map coordinates of this same point (X, Y) must be identified.

Polynomial equations are used to convert the source image coordinates into the reference map coordinates.
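
A minimal sketch of a first-order polynomial (affine) rectification: the coefficients that map pixel coordinates (x', y') to map coordinates (X, Y) are estimated from ground control points by least squares. The GCP values below are purely illustrative.

    import numpy as np

    # Ground control points: pixel (x', y') -> map (X, Y); illustrative values.
    xy = np.array([[10, 20], [400, 30], [50, 380], [420, 400]], dtype=float)
    XY = np.array([[500100.0, 4100900.0], [500900.0, 4100880.0],
                   [500180.0, 4100180.0], [500920.0, 4100150.0]])

    # First-order polynomial: X = a0 + a1*x' + a2*y' (and likewise for Y).
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    coef_X, *_ = np.linalg.lstsq(A, XY[:, 0], rcond=None)
    coef_Y, *_ = np.linalg.lstsq(A, XY[:, 1], rcond=None)

    # Convert an arbitrary pixel position into map coordinates.
    x, y = 200.0, 150.0
    X = coef_X @ [1.0, x, y]
    Y = coef_Y @ [1.0, x, y]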

Page 23: Digital Photogrammetry 7

Geometric Image transformation

Intensity interpolation involves the extraction of a brightness value from an x′, y′ location in the original (distorted) input image and its relocation to the appropriate x, y coordinate location in the rectified output image.

There are several methods of brightness value (BV) intensity interpolation that can be applied, including:

nearest neighbor,

bilinear interpolation, and

cubic convolution.

Page 24: Digital Photogrammetry 7

NEAREST NEIGHBOUR

Nearest neighbour resampling uses the digital value from the pixel in the original image which is nearest to the new pixel location in the corrected image. This is the simplest method and does not alter the original values, but may result in some pixel values being duplicated while others are lost. This method also tends to result in a disjointed or blocky image appearance.

Page 25: Digital Photogrammetry 7

BILINEAR INTERPOLATION

Bilinear interpolation resampling takes a weighted average of 4 pixels in the original image nearest to the new pixel location. The averaging process alters the original pixel values and creates entirely new digital values in the output image. This may be undesirable if further processing and analysis, such as classification based on spectral response, is to be done. If this is the case, resampling may best be done after the classification process.

Page 26: Digital Photogrammetry 7

CUBIC CONVOLUTION

Cubic convolution resampling calculates a distance-weighted average of a block of sixteen pixels from the original image which surround the new output pixel location. As with bilinear interpolation, this method results in completely new pixel values. However, these two methods both produce images which have a much sharper appearance and avoid the blocky appearance of the nearest neighbour method.
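
A short sketch comparing the three resampling methods while applying a geometric transformation with OpenCV; the small rotation only stands in for a real rectification transform.

    import cv2

    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
    h, w = img.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 5, 1.0)   # placeholder transform

    nearest  = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_NEAREST)  # keeps original values, blocky
    bilinear = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)   # weighted average of 4 pixels
    cubic    = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC)    # weighted average of 16 pixels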