Traffic Project Details


Upload: shekhar-imvu

Post on 19-Apr-2017


Vertical Projection

Tracking Cars Using Background Estimation

Results

The model uses the background estimation technique that you specify in the Edit Parameters block to estimate the background. Here are descriptions of the available techniques:

Estimating median over time - This algorithm updates the median value of the time series data based upon the new data sample. The example increments or decrements the median by an amount that is related to the running standard deviation and the size of the time series data. The example also applies a correction to the median value if it detects a local ramp in the time series data. Overall, the estimated median is constrained within Chebyshev's bounds, which are sqrt(3/5) of the standard deviation on either side of the mean of the data.

Computing median over time - This method computes the median of the values at each pixel location over a time window of 30 frames.

Eliminating moving objects - This algorithm identifies the moving objects in the first few image frames and labels the corresponding pixels as foreground pixels. Next, the algorithm identifies the incomplete background as the pixels that do not belong to the foreground pixels. As the foreground objects move, the algorithm estimates more and more of the background pixels.
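The example itself is a Simulink model, so no code appears in this write-up. As an illustrative sketch only (not the model's implementation), the second technique — the per-pixel median over a 30-frame window — might look like this in Python/NumPy:

```python
import numpy as np

def median_background(frames, window=30):
    """Per-pixel median over a sliding window of frames -- the
    'computing median over time' technique described above.

    frames : (T, H, W) array of grayscale intensities.
    Returns the (H, W) background estimate from the last `window` frames.
    """
    recent = frames[-window:]          # keep only the most recent frames
    return np.median(recent, axis=0)   # median over the time axis

# Toy demo: a static background (value 10) with a bright "car"
# (value 200) sweeping along one row. The car occupies any given
# pixel in only a few of the 30 frames, so the median rejects it.
T, H, W = 30, 4, 8
frames = np.full((T, H, W), 10.0)
for t in range(T):
    frames[t, 2, t % W] = 200.0
background = median_background(frames)
```

The median is robust here precisely because each foreground object covers a pixel for only a minority of the window, which is the property the example relies on.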

Once the example estimates the background, it subtracts the background from each video frame to produce foreground images. By thresholding and performing morphological closing on each foreground image, the model produces binary feature images. The model locates the cars in each binary feature image using the Blob Analysis block. Then it uses the Draw Shapes block to draw a green rectangle around the cars that pass beneath the white line. The counter in the upper left corner of the Results window tracks the number of cars in the region of interest.
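The subtract-threshold-close-and-box pipeline above can be sketched in plain NumPy. This is a minimal stand-in, not the model's code: the 3x3 structuring element, the threshold value, and the single-blob bounding box are simplifying assumptions (the actual model uses the Blob Analysis block, which handles multiple blobs):

```python
import numpy as np

def dilate3(mask):
    """3x3 binary dilation of a boolean mask (borders padded with False)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode3(mask):
    """3x3 binary erosion, written as the dual of dilation.
    (Border pixels are treated loosely; fine away from the image edge.)"""
    return ~dilate3(~mask)

def segment_foreground(frame, background, thresh=50.0):
    """Subtract the background, threshold, then morphologically close
    the result -- the foreground-extraction steps described above."""
    diff = np.abs(frame - background) > thresh
    return erode3(dilate3(diff))       # closing = dilation then erosion

def bounding_box(mask):
    """(top, left, bottom, right) box of the foreground pixels, standing
    in for the Blob Analysis block's bounding-box output."""
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

# Toy demo: a rectangular "car" with a one-pixel hole; the closing
# step fills the hole before the box is computed.
background = np.full((10, 10), 10.0)
frame = background.copy()
frame[3:6, 2:7] = 200.0
frame[4, 4] = 10.0                     # dropout pixel inside the car
mask = segment_foreground(frame, background)
box = bounding_box(mask)
```

Closing is used (rather than opening) because the failure mode after thresholding is holes and gaps inside vehicles, not isolated noise pixels.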

Traffic Warning Sign Templates

The example uses two sets of templates - one for detection and the other for recognition.

To save computation, the detection templates are low resolution, and the example uses one detection template per sign. Also, because the red pixels are the distinguishing feature of the traffic warning signs, the example uses these pixels in the detection step.

For the recognition step, accuracy is the highest priority, so the example uses three high-resolution templates for each sign. Each of these templates shows the sign in a slightly different orientation. Because the white pixels are the key to recognizing each traffic warning sign, the example uses these pixels in the recognition step.

The Detection Templates window shows the traffic warning sign detection templates.

The Recognition Templates window shows the traffic warning sign recognition templates.

The templates were generated using vipwarningsigns_templates.m and were stored in vipwarningsigns_templates.mat.
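The write-up does not state which similarity measure compares a candidate blob against these templates; normalized cross-correlation is one common choice for intensity templates, sketched here in NumPy (function names are ours, not the model's):

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation in [-1, 1] between a candidate
    patch and a same-size template (higher = more similar)."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def best_match(patch, templates):
    """Index and score of the template most similar to the patch."""
    scores = [ncc(patch, t) for t in templates]
    best = int(np.argmax(scores))
    return best, scores[best]

# Toy demo: three random 4x4 "templates"; the patch is a brightened,
# contrast-stretched copy of template 1, which NCC still matches
# perfectly because it is invariant to affine intensity changes.
rng = np.random.default_rng(0)
templates = [rng.random((4, 4)) for _ in range(3)]
patch = 2.0 * templates[1] + 5.0
idx, score = best_match(patch, templates)
```

That intensity invariance is why a correlation-style score suits templates captured under varying lighting.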

Detection

The example analyzes each video frame in the YCbCr color space. By thresholding and performing morphological operations on the Cr channel, the example extracts the portions of the video frame that contain blobs of red pixels. Using the Blob Analysis block, the example finds the pixels and bounding box for each blob. The example then compares the blob with each warning sign detection template. If a blob is similar to any of the traffic warning sign detection templates, it is a potential traffic warning sign.
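As a hedged sketch of the color step only (the conversion coefficients are the standard full-range BT.601 ones; the model's exact threshold is not given, so the value here is arbitrary):

```python
import numpy as np

def cr_channel(rgb):
    """Cr (red-difference chroma) plane of an 8-bit RGB image, using
    the full-range BT.601 conversion. Neutral grays sit at Cr = 128;
    strongly red pixels push Cr well above that."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return 0.5 * r - 0.4187 * g - 0.0813 * b + 128.0

def red_mask(rgb, thresh=170.0):
    """Threshold Cr to keep only strongly red regions -- the step that
    isolates candidate warning-sign blobs before blob analysis."""
    return cr_channel(rgb) > thresh

# Toy demo: a gray image with one pure-red 3x3 patch.
img = np.full((6, 6, 3), 128, dtype=np.uint8)
img[1:4, 1:4] = (255, 0, 0)
mask = red_mask(img)
```

Working on Cr alone is what makes the detection cheap: red saturation is separated from brightness, so shadows on a sign do not break the threshold.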

Tracking and Recognition

The example compares the bounding boxes of the potential traffic warning signs in the current video frame with those in the previous frame. Then the example counts the number of appearances of each potential traffic warning sign.

If a potential sign is detected in 4 contiguous video frames, the example compares it to the traffic warning sign recognition templates. If the potential traffic warning sign is similar enough to a traffic warning sign recognition template in 3 contiguous frames, the example considers the potential traffic warning sign to be an actual traffic warning sign.

When the example has recognized a sign, it continues to track it. However, to save computation, it no longer continues to recognize it.
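The frame-count rules above (4 contiguous detections to confirm a candidate, then 3 contiguous template matches to recognize it, after which recognition stops) can be sketched as a small state machine. The class and method names are ours; the real model also matches candidates across frames by bounding-box comparison, which is omitted here:

```python
class SignCandidate:
    """Toy per-candidate state machine for the frame-count rules:
    4 contiguous detections promote a candidate, and 3 contiguous
    template matches after that mark it as recognized."""
    DETECT_FRAMES = 4
    RECOGNIZE_FRAMES = 3

    def __init__(self):
        self.detect_run = 0      # contiguous frames with a detection
        self.match_run = 0       # contiguous frames with a template match
        self.recognized = False

    def update(self, detected, template_matched):
        """Feed one frame's observations; returns True once recognized."""
        self.detect_run = self.detect_run + 1 if detected else 0
        if self.detect_run >= self.DETECT_FRAMES and not self.recognized:
            self.match_run = self.match_run + 1 if template_matched else 0
            if self.match_run >= self.RECOGNIZE_FRAMES:
                self.recognized = True   # keep tracking, stop recognizing
        return self.recognized

# Toy demo: six consecutive frames that both detect and match the sign.
# Frames 1-3 build the detection run; frames 4-6 build the match run.
sign = SignCandidate()
history = [sign.update(True, True) for _ in range(6)]
```

Gating recognition behind several contiguous detections is what keeps one-frame false positives from ever reaching the (more expensive) recognition comparison.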

Display

After a potential sign has been detected in 4 or more video frames, the example uses the Draw Shapes block to draw a yellow rectangle around it. When a sign has been recognized, the example uses the Insert Text block to write the name of the sign on the video stream. The example uses the term 'Tag' to indicate the order in which the sign is detected.