The 8th International Conference on Computer Science & Education (ICCSE 2013), April 26-28, 2013, Colombo, Sri Lanka
Detecting Changed Areas in Images from Different
View Points
D.M.R Kulasekara
Department of Physics
University of Colombo
Colombo, Sri Lanka

S.M.B. Harshanath
Department of Information Technology
SLIIT
Malabe, Sri Lanka
Abstract- This paper presents a method to extract changed areas in images taken from different viewpoints using image processing techniques. It approximates or even outperforms previously proposed schemes in being inexpensive, descriptive and efficient, and it provides a much faster image comparison method. A recent report on the discolouration of the world-heritage Sigiri frescoes revealed that there is no proper method in Sri Lanka to identify the discolouration and distortions that may occur on archaeologically valuable pictures. Although many feature detection and feature matching methods are available, these methods perform only one-to-one feature matching, whereas archaeologists need the whole image for comparison. Building on these methods, the images are first brought to an equal geometrical elevation using homographic transformations. Image subtraction is then applied, and from the subtraction outcome the areas that differ from the reference image are identified. The approach in this project is to identify the differences between images taken from different viewpoints by comparing the object image with the reference image, regardless of large or small colour gaps between the original and the new image and without considering the camera, the lighting conditions or the age difference between the images.
Index Terms- viewpoint, homographic transformations, image subtraction, data range, feature detection.
I. INTRODUCTION
Nowadays many archaeologists and scientists use the naked eye to investigate and compare images from different viewpoints. This method is time consuming, expensive and inefficient. The proposed method compares old images with new images while overcoming the drawbacks of the old method.
The main objective of the project is to compare a recently
taken image of a certain object with a reference image of the
same object and identify the changed areas of the recently
taken image.
The recently taken image can differ from the reference image in age, in the position from which it was taken and in the angle at which it was taken, as shown in Figure 1.
There are many feature detection and feature matching methods, such as the Gradient Location and Orientation Histogram, the Scale-Invariant Feature Transform, Principal Component Analysis Scale-Invariant Feature Transform and Speeded Up
Robust Features. However, these methods perform only one-to-one feature matching between images taken at different angles, and there is no proper method to compare the whole image descriptively.
An image changes with the position and the angle of the camera that takes it, so it is virtually impossible to have two identical images of the same object. To address this, the common areas of the different images must be used to compare them with each other.
It is important to adjust the sizes of the different images to a common size before the comparison process.
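As an illustration only, a minimal OpenCV C++ sketch of this resizing step could look as follows; the function name, the choice of the reference image's size as the common size and the interpolation mode are assumptions, not details given in the paper.

#include <opencv2/opencv.hpp>

// Resize the object image so that it has the same size as the reference
// image before any comparison is attempted. Names are illustrative only.
cv::Mat resizeToReference(const cv::Mat& reference, const cv::Mat& object)
{
    if (object.size() == reference.size())
        return object.clone();
    cv::Mat resized;
    cv::resize(object, resized, reference.size(), 0, 0, cv::INTER_LINEAR);
    return resized;
}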
Fig. 1. 2D appearance of objects can change radically with viewpoint.
II. THEORY
• Highly distinctive feature detection and one-to-one feature matching are done by the Speeded-Up Robust Features (SURF) algorithm [7], [9].
• A homography transforms, or maps, points on two image planes that correspond to the same location on a planar object in the image. It can be shown that such a mapping is represented by a single 3-by-3 matrix [9], [13].
• Transformation from one image plane to another image plane is done by a perspective transformation [9].
• A simple blur operation is used for smoothing [9].
• Image transformation using erosion and dilation is done with a 3-by-3 kernel and one iteration [9].
• Image subtraction uses pixel-by-pixel, three-channel subtraction [9], [15].
• How spread out the data are is identified, and the range of the pixel colour values is calculated using the standard deviation and the mean [17], [18].
Fig. 2. Three types of data distribution: (a) negatively skewed, (b) normal (no skew), (c) positively skewed [17].
III. METHODOLOGY AND IMPLEMENTATION
Implementation of the project was done using the Open Source Computer Vision Library 2.1, commonly known as OpenCV, together with Microsoft Visual Studio 2008 and Visual C++.
First, interest points were selected at distinctive locations in the image, such as corners, blobs and T-junctions. Then the descriptor vectors were matched between the reference image and the object image. The matching is often based on a distance between the vectors, such as the Mahalanobis or the Euclidean distance. Once they were matched, the corresponding points between the two images were identified.
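A minimal sketch of this detection and matching step is given below, written against the OpenCV 2.4-style C++ API (the paper itself used OpenCV 2.1, whose interface differs); the Hessian threshold of 400 and the brute-force L2 matcher are assumed choices.

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SURF lives in the nonfree module in OpenCV 2.4.x
#include <vector>

// Detect SURF interest points in both grey-scale images and match their
// descriptors with a brute-force Euclidean (L2) matcher.
void detectAndMatch(const cv::Mat& referenceGray, const cv::Mat& objectGray,
                    std::vector<cv::KeyPoint>& kpRef,
                    std::vector<cv::KeyPoint>& kpObj,
                    std::vector<cv::DMatch>& matches)
{
    cv::SURF surf(400.0);                   // assumed Hessian threshold
    cv::Mat descRef, descObj;
    surf(referenceGray, cv::Mat(), kpRef, descRef);
    surf(objectGray, cv::Mat(), kpObj, descObj);

    cv::BFMatcher matcher(cv::NORM_L2);     // Euclidean distance between descriptors
    matcher.match(descRef, descObj, matches);
}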
These corresponding points were then used to estimate the homography that maps points on the two image planes corresponding to the same location on the planar object; the output of this mapping is represented by a single 3-by-3 matrix [19]. This matrix transforms one image plane to another through a perspective transformation, so the object image plane was converted to the reference image plane. The object image corners were also passed through the matrix to identify the common area of the two images.
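Under the same assumptions as above, this plane-to-plane step might be sketched as follows; the use of RANSAC and its reprojection threshold are assumptions rather than details reported in the paper.

#include <opencv2/opencv.hpp>
#include <vector>

// Estimate the 3-by-3 homography from matched point pairs, warp the object
// image onto the reference image plane, and map the four object-image
// corners to locate the common area.
cv::Mat warpObjectToReference(const cv::Mat& object,
                              const std::vector<cv::Point2f>& ptsObj,
                              const std::vector<cv::Point2f>& ptsRef,
                              const cv::Size& referenceSize,
                              std::vector<cv::Point2f>& warpedCorners)
{
    cv::Mat H = cv::findHomography(ptsObj, ptsRef, cv::RANSAC, 3.0);

    cv::Mat warped;
    cv::warpPerspective(object, warped, H, referenceSize);

    // Map the object corners into the reference plane.
    std::vector<cv::Point2f> corners(4);
    corners[0] = cv::Point2f(0.0f, 0.0f);
    corners[1] = cv::Point2f((float)object.cols, 0.0f);
    corners[2] = cv::Point2f((float)object.cols, (float)object.rows);
    corners[3] = cv::Point2f(0.0f, (float)object.rows);
    cv::perspectiveTransform(corners, warpedCorners, H);

    return warped;
}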
A simple blur operation was then applied to both images for smoothing, which removed camera noise. The reference image was subtracted from the object image; this colour image subtraction was done pixel by pixel.
Q(i, j) = P1(i, j) - P2(i, j)
The output of the subtraction was converted to grey scale, and pixels whose grey values were below the threshold value were set to zero:

dst[x, y] = 0 if src[x, y] < threshold value, 255 otherwise,

threshold value = (sum of src[x, y] over all pixels) / number of pixels.
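These smoothing, subtraction and thresholding steps could be sketched as below; absdiff is used here to keep the difference non-negative, and the 3-by-3 blur kernel is an assumption, so this is an approximation of the described procedure rather than the authors' exact code.

#include <opencv2/opencv.hpp>

// Smooth both images, subtract them pixel by pixel (three channels),
// convert the difference to grey scale and binarise it using the mean
// grey value as the threshold.
cv::Mat differenceMask(const cv::Mat& reference, const cv::Mat& warpedObject)
{
    cv::Mat refBlur, objBlur;
    cv::blur(reference, refBlur, cv::Size(3, 3));        // simple blur for smoothing
    cv::blur(warpedObject, objBlur, cv::Size(3, 3));

    cv::Mat diff, gray, mask;
    cv::absdiff(objBlur, refBlur, diff);                 // pixel-by-pixel difference
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);

    double thresholdValue = cv::mean(gray)[0];           // mean grey value as threshold
    cv::threshold(gray, mask, thresholdValue, 255, cv::THRESH_BINARY);
    return mask;
}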
This grey-level image contained some small white dots, so erosion and dilation were applied to the image.
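A minimal sketch of this clean-up step, assuming a rectangular 3-by-3 structuring element as stated in the theory section, could be:

#include <opencv2/opencv.hpp>

// Remove small white speckles from the binary mask with one erosion
// followed by one dilation, using a 3-by-3 kernel and one iteration.
void cleanMask(cv::Mat& mask)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::erode(mask, mask, kernel, cv::Point(-1, -1), 1);
    cv::dilate(mask, mask, kernel, cv::Point(-1, -1), 1);
}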
The regions of white pixels in the output were used for identification. Each such region of the reference image was then compared with the corresponding region of the object image: the respective means and variances of the regions in the two images were compared, and the range and the standard deviation measured how spread out the pixel values were in the detected regions. If a region differed in the type of spread, the respective change in that particular region was identified and represented in the output.
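One way to sketch this comparison of data spreading, assuming a rectangular region of interest and a simple relative tolerance (neither of which is specified in the paper), is:

#include <opencv2/opencv.hpp>
#include <cmath>

// Compare how the pixel values are spread out inside one detected region
// of the reference image and the corresponding region of the object image,
// using the mean and the standard deviation. The tolerance is illustrative.
bool regionHasChanged(const cv::Mat& referenceGray, const cv::Mat& objectGray,
                      const cv::Rect& region, double tol = 0.15)
{
    cv::Scalar meanRef, stdRef, meanObj, stdObj;
    cv::meanStdDev(referenceGray(region), meanRef, stdRef);
    cv::meanStdDev(objectGray(region), meanObj, stdObj);

    double meanDiff = std::abs(meanRef[0] - meanObj[0]);
    double stdDiff  = std::abs(stdRef[0] - stdObj[0]);
    return meanDiff > tol * 255.0 || stdDiff > tol * 255.0;
}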
Figure 3 is a graphical explanation of the implemented system.
Reference image and object image -> smoothing -> image subtraction -> apply erosion and dilation -> compare data spreading in corresponding regions -> detecting changed areas in object image.
Fig. 3. System diagram.
IV. RESULTS
Fig. 4. Reference image taken from the Kandewiharaya temple [19].
The image in Figure 4 is considered the reference (previous/initial) image. Its dimensions are 3648 x 2736 and it was taken at the Kandewiharaya temple.
Fig. 5. Object image taken from a changed position, with discoloured places added to the image [19].
The second image was taken after changing the position of the camera and is shown in Figure 5. Its dimensions are 3648 x 2736. Noise was added, and the brightness, contrast and fill light were changed in this image, which was considered the object image.
The output image is shown in Figure 6. Discoloured and/or deformed areas are marked A, B, C, D, E, F, G, H, I, J, K, L, M and N. Since some of these areas (A, B) are invisible to the naked eye, the proposed system is the best way to analyse them. The selected areas have different intensities compared with the corresponding areas of the reference image.
Figure 6 is the output image and contains the common area of the reference image (Figure 4) and the object image (Figure 5). Places in that common area with different colour values were marked with a black circle. It is not always necessary to mark them with a black circle; the deformed area can instead be shown marked as it is in the original.
Fig. 6. The discolouration and deformation places are shown with a black mark [19].
V. CONCLUSION
Archaeologists use naked-eye comparison to compare images. This method is inefficient and less accurate: it does not cover the whole image, so some small areas in the image can be missed, and it cannot be applied pixel by pixel. The size of a colour-changed area also depends on the size of the image. This is why naked-eye comparison and feature detection fail.
This research visually identifies the discolouration and deformation of two-dimensional images with the implemented system. The system compares an image pixel by pixel against the reference image and detects even small areas where small changes have occurred in the referred image, including small points like dots. If an area has a colour-value difference greater than the threshold value and the area is larger than 9 pixels, it is identified as a changed area.
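This minimum-area rule could be sketched as follows; the use of findContours with bounding rectangles is an assumed implementation detail rather than something stated in the paper.

#include <opencv2/opencv.hpp>
#include <vector>

// Keep only connected white regions of the binary difference mask that are
// larger than 9 pixels, matching the minimum-area rule described above.
std::vector<cv::Rect> changedRegions(const cv::Mat& mask)
{
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat work = mask.clone();                    // findContours modifies its input
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> regions;
    for (size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > 9.0)     // ignore areas of 9 pixels or fewer
            regions.push_back(cv::boundingRect(contours[i]));
    return regions;
}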
When colours change suddenly within a small area, the system may select this area as a changed place even though it is not. Such wrong decisions are caused by the homographic transformations. To rectify this, several object images taken from different angles were used.
This system transforms an image into a large collection of local feature vectors, each of which is invariant to image translation, scaling and rotation, and partially invariant to illumination changes and affine transformations. Even if the image is rotated, it is not a problem for the method. The brightness, contrast and focusing ability of the camera differ in each image, so brightness and contrast are adjusted before the analysis. Since these images were captured by cameras, they contain noise; the system uses smoothing techniques to remove this noise.
ACKNOWLEDGMENT
Since childhood, it was a habit to visit places of historical
interest. There were various old paintings called frescoes to be
seen. This influenced the decision to work on a project to
compare the discolouration in those frescoes using a computer
based method. An interest in image processing drove me
towards this.
D.M.R Kulasekara offers his heartfelt gratitude to Dr.
Chandana Jayaratne for motivating and giving advice to
complete this project.
Finally we appreciate everyone who helped with our
project. It is a great pleasure for us to acknowledge the
contributions of all the individuals who lent us a helping hand.
REFERENCES
[1] Tony Lindeberg, "Feature detection with automatic scale selection", pp. 1-24.
[2] David G. Lowe, "Distinctive image features from scale-invariant keypoints", pp. 1-25, 2004.
[3] Krystian Mikolajczyk, Cordelia Schmid, "Indexing based on scale invariant interest points", pp. 1-7.
[4] Tony Lindeberg, "Discrete Scale-Space Theory and the Scale-Space Primal Sketch", 1991.
[5] Tony Lindeberg, Lars Bretzner, "Real-time scale selection in hybrid multi-scale representations", pp. 2-13, 2003.
[6] Matthew Brown, David Lowe, "Invariant features from interest point groups", pp. 1-6.
[7] Herbert Bay, Tinne Tuytelaars, Luc Van Gool, "Speeded Up Robust Features", pp. 1-12, 2008.
[8] Jianbo Shi, Carlo Tomasi, "Good Features to Track", pp. 1-13, 1994.
[9] Gary Bradski, Adrian Kaehler, "Learning OpenCV", First Edition, pp. 31-76, 90-10, 109-114, 129-130, 135-140, 153-171, 321-336, 2008.
[10] OpenCV 2.1 C Reference, 2009, Drawing Functions. [Online] Available: http://opencv.willowgarage.com/documentation/drawing_functions.html
[11] OpenCV, 2011, Using OpenCV 2.1 with MS Visual Studio. [Online] Available: http://opencv.willowgarage.com/wiki/
[12] Managed C++ and Windows Forms Image Viewer, 2011. [Online] Available: http://www.codeproject.com/KB/miscctrl/mcppwinforms02.aspx
[13] OpenCV 2.1 documentation, 2010, Feature Detection. [Online] Available: http://opencv.willowgarage.com/documentation/cpp/feature_detection.html
[14] CodeProject, 2011, Image Resizing. [Online] Available: http://www.codeproject.com/KB/GDI-plus/imgresizoutperfgdiplus.aspx
[15] OpenCV 2.0 C Reference, 2009, Image Processing and Analysis Reference. [Online] Available: http://www710.univ-lyon1.fr/~bouakaz/OpenCV-0.9.5/docs/ref/OpenCVRef_ImageProcessing.htm
[16] David Stavens, "The OpenCV Library: Computing Optical Flow", pp. 1-19, 2006.
[17] "Skewness". [Online] Available: http://tophqbooks.com/books/783005
[18] A Comparison of the Mean, Median, and Mode. [Online] Available: http://www.southalabama.edu/coe/bset/johnson/lectures/lec15.htm
[19] D.M.R Kulasekara and S.M.B. Harshanath, "Image Processing Technique to Detect Discoloration and Deformations in Ancient Pictures", presented at the Jaffna University International Research Conference, Jaffna, Sri Lanka, 2012.