Chapter 5 - Motion Detection, Segmentation and Wavelets


Upload: ashwin-josiah-samuel

Post on 03-Nov-2015


TRANSCRIPT

  • DIGITAL IMAGE PROCESSING, CHAPTER 5 - MOTION DETECTION, SEGMENTATION AND WAVELETS (TMM24)

  • Motion Detection: Motion detection is the process of detecting a change in position of an object relative to its surroundings, or a change in the surroundings relative to an object. Motion detection is usually performed from a static camera, is common in surveillance systems, and is often performed at the pixel level only (due to speed constraints).

  • Motion Detection: Motion detection plays a fundamental role in any object tracking or video surveillance algorithm, to the extent that nearly all such algorithms start with motion detection. The reliability with which potential moving foreground objects can be identified directly impacts the efficiency and performance achievable by the subsequent processing stages of tracking and/or recognition. However, detecting regions of change in images of the same scene is not a straightforward task, since it depends not only on the features of the foreground elements but also on the characteristics of the background, such as the presence of fluctuating elements. From this starting point, any detected changed pixel is considered part of a foreground object.


  • Motion Detection (SAD): SAD (Sum of Absolute Differences) is an algorithm for measuring the similarity between two video frames. It finds motion by first subtracting the two frames, then taking the absolute value of the result, and finally summing these differences to create a simple metric of image motion. For example, when a sequence of frames is processed, the current frame and the next frame are considered at every computation; the frames then advance (the next frame becomes the current frame, and the frame after it becomes the next frame). The SAD algorithm is attractive because it is fast, and requires little memory, time, and few steps to perform the calculation.
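As an illustrative sketch (not part of the original slides), the SAD metric and a simple per-frame motion flag could be written in Python with NumPy; the function names and the threshold value below are assumptions for the example:

```python
import numpy as np

def sad(frame_a, frame_b):
    """Sum of Absolute Differences between two equal-sized frames."""
    a = frame_a.astype(np.int32)  # widen first: uint8 subtraction would wrap around
    b = frame_b.astype(np.int32)
    return int(np.abs(a - b).sum())

def detect_motion(frames, threshold):
    """Flag motion between each pair of consecutive frames when SAD
    exceeds a (scene-dependent) threshold."""
    flags = []
    for current, nxt in zip(frames, frames[1:]):
        flags.append(sad(current, nxt) > threshold)
    return flags

# Example: a single pixel changes between frame 0 and frame 1
f0 = np.zeros((2, 2), np.uint8)
f1 = f0.copy()
f1[0, 0] = 255
print(detect_motion([f0, f1, f1], threshold=100))  # → [True, False]
```

A real system would normally smooth or mask the frames first, since SAD over the whole image responds to noise and illumination changes as well as to object motion.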


  • Motion Detection: The applications of motion detection include:

    Detection of unauthorized entry. Detection of the end of area occupancy, to switch off the lights. Detection of a moving object, which triggers a camera to record subsequent events.

  • Binarization: Image binarization converts a grey-level or colour image to a black-and-white image. Frequently, binarization is used as a pre-processing step before Optical Character Recognition (OCR); in fact, most OCR packages on the market work only on bi-level (black and white) images. The simplest way to perform image binarization is to choose a threshold value and classify all pixels with values above this threshold as white, and all other pixels as black. The problem then is how to select the correct threshold. In many cases, finding one threshold suitable for the entire image is very difficult, and sometimes even impossible.

  • Thresholding: Thresholding produces a binary image from a grey-scale or colour image by setting pixel values to 1 or 0 depending on whether they are above or below the threshold value. This is commonly used to separate or segment a region or object within the image based upon its pixel values, as shown in the following figure. (Figure: Thresholding for object identification.)

  • Thresholding: In its basic operation, thresholding operates on an image I as follows: each output pixel g(i, j) is set to 1 if I(i, j) is greater than or equal to the threshold T, and to 0 otherwise.

  • Thresholding: In Matlab, this can be carried out using the function im2bw and a threshold in the range 0 to 1. The im2bw function automatically converts colour images (such as the input in the example) to grey-scale and scales the supplied threshold value (from 0 to 1) according to the range of the image being processed. For grey-scale images, whose pixels contain a single intensity value, a single threshold must be chosen. For colour images, a separate threshold can be defined for each channel (to correspond to a particular colour or to isolate different parts of each channel).
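For readers without Matlab, a rough Python/NumPy equivalent of this grey-scale case may help. This is only a sketch that assumes an 8-bit input image; it is not the actual im2bw implementation, and the function name is invented for the example:

```python
import numpy as np

def to_binary(image, level):
    """Threshold a grey-scale image, loosely mirroring Matlab's im2bw.

    `level` is a fraction of the full intensity range (0 to 1), so for an
    8-bit image a level of 0.5 corresponds to an intensity of 127.5.
    Pixels above the threshold map to 1, all others to 0.
    """
    img = np.asarray(image, dtype=np.float64)
    threshold = level * 255.0          # assumes an 8-bit intensity range
    return (img > threshold).astype(np.uint8)

img = np.array([[0, 100], [200, 255]], np.uint8)
print(to_binary(img, 0.5))  # → [[0 0]
                            #    [1 1]]
```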

  • Thresholding: In many applications, colour images are converted to grey-scale prior to thresholding for simplicity. Thresholding is the work-horse operator for the separation of image foreground from background. One question that remains is how to select a good threshold; this topic is addressed under image segmentation. (Figure: Thresholding of a complex image.)

  • Segmentation: Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as colour, intensity, or texture.

  • Segmentation: The applications of image segmentation are:

    Medical imaging: locating tumours, measuring tissue volumes. Object detection: locating objects in satellite images (roads, forests). Recognition tasks: fingerprint recognition, iris recognition. Traffic control systems. Content-based image retrieval.

  • Segmentation: In general, completely independent segmentation is one of the most difficult tasks in the design of computer vision systems and remains an active field of image processing and machine vision research. Segmentation occupies a very important role in image processing because it is so often the vital first step which must be successfully taken before subsequent tasks such as feature extraction, classification, description, etc. can be sensibly attempted. After all, if you cannot identify the objects in the first place, how can you classify or describe them? The basic goal of segmentation, then, is to partition the image into mutually exclusive regions to which we can subsequently attach meaningful labels. The segmented objects are often termed the foreground, and the rest of the image the background.

  • Segmentation: Note that, for any given image, we cannot generally speak of a single, correct segmentation. Rather, the correct segmentation of the image depends strongly on the types of object or region we are interested in identifying. What relationship must a given pixel have with respect to its neighbours and other pixels in the image in order for it to be assigned to one region or another? This really is the central question in image segmentation, and it is usually approached through one of two basic routes. Edge/boundary methods: this approach is based on the detection of edges as a means of identifying the boundary between regions, so it looks for sharp differences between groups of pixels. Region-based methods: this approach assigns pixels to a given region based on their degree of mutual similarity.

  • Use of image properties and features in segmentation: In the most basic of segmentation techniques (intensity thresholding), the segmentation is based only on the absolute intensity of the individual pixels. However, more sophisticated properties and features of the image are usually required for successful segmentation. There are three basic properties/qualities in images which we can exploit in our attempts to segment images. Colour is, in certain cases, the simplest and most obvious way of discriminating between objects and background. Objects which are characterized by certain colour properties (i.e. are confined to a certain region of a colour space) may be separated from the background. For example, segmenting an orange from a background comprising a blue tablecloth is a trivial task.

  • Use of image properties and features in segmentation: Texture is a somewhat loose concept in image processing. It does not have a single definition but, nonetheless, accords reasonably well with our everyday notions of a rough or smooth object. Thus, texture refers to the typical spatial variation in intensity or colour values in the image over a certain spatial scale. A number of texture metrics are based on calculation of the variance or other statistical moments of the intensity over a certain neighbourhood/spatial scale in the image. Motion of an object in a sequence of image frames can be a powerful cue: when it takes place against a stationary background, simple frame-by-frame subtraction techniques are often sufficient to yield an accurate outline of the moving object. In summary, most segmentation procedures will use and combine information on one or more of the properties colour, texture and motion.
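The variance-based texture measures mentioned above can be sketched very simply: compute the standard deviation of intensity in a small window around each pixel. The following Python/NumPy fragment is an illustrative sketch (the function name and the unpadded "valid" output are choices made for this example, not something from the slides):

```python
import numpy as np

def local_std(image, radius=1):
    """Standard deviation of intensity over a (2*radius+1)^2 neighbourhood
    around each pixel, a crude texture measure. Only the interior ("valid")
    region is computed, so the output is smaller than the input."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    size = 2 * radius + 1
    out = np.zeros((h - size + 1, w - size + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + size, j:j + size].std()
    return out
```

A smooth (constant) patch yields 0, while a rough patch yields a large value, so thresholding this map separates smooth from textured regions.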

  • Problems with thresholding: There are several serious limitations to simple thresholding: there is no guarantee that the thresholded pixels will be contiguous (thresholding does not consider the spatial relationships between pixels); it is sensitive to accidental and uncontrolled variations in the illumination field; and it is only really applicable to those simple cases in which the entire image is divisible into a foreground of objects of similar intensity and a background of distinct intensity to the objects.

  • Region growing and region splitting: Region growing is an approach to segmentation in which pixels are grouped into larger regions based on their similarity according to predefined similarity criteria. It should be apparent that specifying similarity criteria alone is not an effective basis for segmentation, and it is necessary to consider the spatial adjacency relationships between pixels. In region growing, we typically start from a number of seed pixels randomly distributed over the image and append pixels in the neighbourhood to the same region if they satisfy similarity criteria relating to their intensity, colour or related statistical properties of their own neighbourhood.

  • Region growing and region splitting: Simple examples of similarity criteria might be: (1) the absolute intensity difference between a candidate pixel and the seed pixel must lie within a specified range; (2) the absolute intensity difference between a candidate pixel and the running average intensity of the growing region must lie within a specified range; (3) the difference between the standard deviation in intensity over a specified local neighbourhood of the candidate pixel and that over a local neighbourhood of the growing region must (or must not) exceed a certain threshold; this is a basic roughness/smoothness criterion.
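Criterion (1) above is enough to build a minimal region grower. The following Python sketch (an illustration written for this transcript, not the course's code) grows a 4-connected region from a single seed, admitting a neighbour when its intensity is within a tolerance of the seed's intensity:

```python
from collections import deque

import numpy as np

def region_grow(image, seed, tol):
    """Grow a region from `seed` (row, col): a 4-connected neighbour joins
    the region when its absolute intensity difference from the seed pixel
    is at most `tol` (criterion (1)). Returns a boolean mask."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    seed_val = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(img[nr, nc] - seed_val) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask
```

Swapping the comparison against `seed_val` for a running region mean would give criterion (2) instead; the breadth-first traversal is what enforces the adjacency requirement discussed above.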


  • Edge Detection: Edge detection is one of the most important and widely studied aspects of image processing. If we can find the boundary of an object by locating all its edges, then we have effectively segmented it. Superficially, edge detection seems a relatively straightforward affair; after all, edges are simply regions of intensity transition between one object and another. However, despite its conceptual simplicity, edge detection remains an active field of research. Most edge detectors are fundamentally based on the use of gradient (differential) filters.

  • Edge Detection: When trying to actually find an edge, several factors may complicate the situation. The first relates to edge strength or, if you prefer, the context: how large does the gradient have to be for the point to be designated part of an edge? The second is the effect of noise: differential filters are very sensitive to noise and can return a large response at noisy points which do not actually belong to the edge. Third, where exactly does the edge occur? Most real edges are not discontinuous; they are smooth, in the sense that the gradient gradually increases and then decreases over a finite region.

  • Edge Detection: The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. Canny's aim was to discover the optimal edge detection algorithm. In this context, an "optimal" edge detector means: good detection (the algorithm should mark as many real edges in the image as possible); good localization (edges marked should be as close as possible to the edge in the real image); and minimal response (a given edge in the image should only be marked once, and, where possible, image noise should not create false edges).
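The gradient stage that underlies detectors such as Canny can be sketched with 3x3 Sobel filters. Note this is only the first differential-filtering step, not the full multi-stage Canny pipeline (no smoothing, non-maximum suppression, or hysteresis); the function name and the "valid"-region convention are choices made for this example:

```python
import numpy as np

def sobel_gradient_magnitude(image):
    """Gradient magnitude via 3x3 Sobel filters. Only the interior
    ("valid") region is computed, so the output is 2 pixels smaller
    than the input in each dimension."""
    img = np.asarray(image, dtype=np.float64)
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)  # horizontal derivative
    ky = kx.T                                      # vertical derivative
    h, w = img.shape
    mag = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = (patch * kx).sum()
            gy = (patch * ky).sum()
            mag[i, j] = np.hypot(gx, gy)
    return mag
```

Thresholding this magnitude map gives a crude edge image; Canny's contribution is in what happens next (smoothing beforehand, thinning the response to single-pixel edges, and linking them with two hysteresis thresholds).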


  • Edge Detection: Types of detected edges: A viewpoint-independent edge typically reflects inherent properties of the three-dimensional objects, such as surface markings and surface shape. A viewpoint-dependent edge may change as the viewpoint changes, and typically reflects the geometry of the scene, such as objects occluding one another.

  • Edge Detection: The purpose of using edge detection methods: the purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the world. In the ideal case, applying an edge detector to an image leads to a set of connected curves that indicate the boundaries of objects and of surface markings, as well as curves that correspond to discontinuities in surface orientation. Thus, applying an edge detection algorithm to an image may significantly reduce the amount of data to be processed, filtering out information that may be regarded as less relevant while preserving the important structural properties of the image.

  • THE END