
Local Covariant Region Detection based on Image Contour Corner

Ling Xu, School of Software Engineering, Chongqing University, Chongqing, China, [email protected]

Mengning Yang, School of Software Engineering, Chongqing University, Chongqing, China, [email protected]

Hongxing Wang, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, [email protected]

Xiaoze Lin, School of Software Engineering, Chongqing University, Chongqing, China, [email protected]

Abstract—In this paper, a local covariant region detection algorithm based on image contour corners is proposed. We define the corner response function as the DoB (Difference of B-spline) norm of the evolution difference of the image contour. A new feature region detection algorithm is designed by combining the corner points obtained from the DoB operator with the scale invariance of the contour direction in the corner neighborhood. Finally, the feature covariant regions in multiple groups of distorted images are matched using the repetition rate criterion. The experimental results show that the proposed algorithm is computationally simple, easy to implement, and robust.

Keywords—image contours; local features; corner detection; covariant regions

1 INTRODUCTION

Image covariant feature extraction, the research foundation of many problems in computer vision, has been widely applied in target recognition, image matching, image retrieval, panorama stitching, and many other fields.

Over the last years, considerable progress has been made in local covariant feature region detection. Lindeberg and Garding [1] proposed blob-like affine invariant feature detection using an iterative scheme. Tuytelaars and Van Gool [2][3] put forward two kinds of affine covariant region extraction methods. Mikolajczyk and Schmid [4] proposed the Harris-Laplace scale invariant feature detection algorithm. Matas et al. [5] applied the watershed method to invariant region detection and put forward the maximally stable extremal region detector. Lowe [6] used the extrema of a DoG (Difference of Gaussian) filter in scale space, forming SIFT. Mikolajczyk and Schmid [7] proposed the HA (Harris/Hessian-Affine) detectors, which have good affine invariance. Bay et al. [8] proposed the SURF (Speeded Up Robust Features) extraction method based on the Hessian matrix with integer arithmetic in 2006. Tuytelaars and Van Gool [9] used the EBR extraction algorithm to extract contours and detect affine invariant regions.

The methods above mostly extract regional features from gray-level information, so their temporal complexity is high. However, the edge contour of an image is generally stable, involves little data, is easy to process in multi-scale space, and is suitable for real-time detection. In this paper we propose a new method to detect covariant feature regions based on the corners of the image edge contour. This article is organized as follows: Section 2 presents a corner detector based on the DoB (Difference of B-spline) of the edge contour; a local covariant feature region detection algorithm is introduced in Section 3; finally, the experimental results and analysis are presented.

2 FEATURE CORNER EXTRACTION ALGORITHM

2.1 B-spline fast convolution algorithm

Let $P = \{p_i = (x_i, y_i),\ i = 1, 2, 3, \ldots, n\}$ denote the contour extracted from an image, which is composed of n pairs of coordinates. $B_m^n$ denotes the n-order, m-scale discrete B-spline function, and P(m) is the contour P after the m-scale evolution. When the scale is m, the evolution of the contour is defined by

$x(m) = x * B_m^n$, $\quad y(m) = y * B_m^n$ (1)

where * denotes the convolution operator, and x and y are the abscissas and ordinates of all points of the discrete contour, respectively.


The particular structure of the B-spline function yields a highly efficient convolution algorithm, namely

$B_m^n * f(k) = \underbrace{B_m^0 * B_m^0 * \cdots * B_m^0}_{n+1} * f(k)$ (2)

where f(k) denotes the discrete signal. The convolution of the n-order, m-scale discrete B-spline function with f(k) can thus be transformed into the convolution of f(k) with n+1 zero-order, m-scale discrete B-spline functions. This makes the convolution very simple, because every zero-order discrete B-spline component has the same form.

The running average-sum technique is used to compute $B_m^0 * R_{i-1}(k)$:

$R_i(k) = B_m^0 * R_{i-1}(k)$ (3)

According to the literature [9],

$R_i(k) = R_i(k-1) + R_{i-1}(k+1) - R_{i-1}(k+1-m)$ (4)

Let the initial value of the iterative algorithm be $R_0(k) = f(k)$; then only addition operations are involved in Equation (4). From Equation (4), the computational complexity is independent of the scale factor m when a discrete B-spline function is convolved with a signal or an image, and depends only on the length of the signal or image.

In contrast, the computational complexity of the convolution increases significantly with the growing scale factor σ if the Gaussian function is used. Thus, convolution using the B-spline function is more efficient than convolution using the Gaussian function.
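As a concrete illustration, the following Python sketch implements the fast convolution of Equations (2)-(4): an m-wide running sum (a zero-order B-spline, up to normalization) is applied n+1 times, so each pass costs O(N) additions regardless of m. The function names, the closed-contour wraparound, and the 1/m normalization are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def box_running_sum(f, m):
    """One pass of an m-wide running sum (zero-order, m-scale B-spline up to a
    1/m normalization), computed with additions only as in Eq. (4); the signal
    is treated as a closed contour, so indices wrap around."""
    n_samples = len(f)
    r = np.zeros(n_samples)
    r[0] = sum(f[j % n_samples] for j in range(2 - m, 2))  # window [k+2-m, k+1] at k = 0
    for k in range(1, n_samples):
        r[k] = r[k - 1] + f[(k + 1) % n_samples] - f[(k + 1 - m) % n_samples]
    return r / m  # normalize so cascaded passes preserve the signal's amplitude

def bspline_smooth(signal, n_order, m_scale):
    """Approximate B^n_m * signal by n+1 cascaded box filters, as in Eq. (2);
    each pass is O(N) independently of the scale m."""
    out = np.asarray(signal, dtype=float)
    for _ in range(n_order + 1):
        out = box_running_sum(out, m_scale)
    return out

# Example of Eq. (1): evolve the coordinates of a (noisy) closed contour.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x = 100 * np.cos(t) + np.random.randn(200)
y = 100 * np.sin(t) + np.random.randn(200)
x_m = bspline_smooth(x, n_order=3, m_scale=5)
y_m = bspline_smooth(y, n_order=3, m_scale=5)
```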

2.2 DoB operator of the contour curve

According to the previous analysis, the evolution differences of the image contour in the B-spline scale space reflect the changes of the local image contour well. We define the corner response function as the DoB norm of the evolution difference of the image contour. In this way, the characteristics of different scales can be efficiently merged.

The multi-scale space of the contour can be defined as the convolution of the contour and the B-spline function at each scale, as shown in Equation (5).

$C(u, m) = B_m^n * C(u) = \big(B_m^n * x(u),\ B_m^n * y(u)\big) = \big(X(u, m),\ Y(u, m)\big)$ (5)

$D(u) = \big\|\big(x(u, m_2) - x(u, m_1),\ y(u, m_2) - y(u, m_1)\big)\big\|_2 = \big\|\big(B_{m_2}^n * x(u) - B_{m_1}^n * x(u),\ B_{m_2}^n * y(u) - B_{m_1}^n * y(u)\big)\big\|_2 = \sqrt{[DoB * x(u)]^2 + [DoB * y(u)]^2}$ (6)

where $X(u, m)$ denotes the evolution version of the m-scale contour abscissa, $Y(u, m)$ denotes the evolution version of the m-scale contour ordinate, $\|\cdot\|_2$ is the 2-norm of a vector, and $DoB = B_{m_2}^n - B_{m_1}^n$.

Equation (5) is usually called the B-spline scale space of the contour, whose characteristics are very similar to those of the Gaussian scale space driven by the thermal diffusion equation. The reason is that, by the central limit theorem, the B-spline approximates the Gaussian function as the order n of the B-spline tends to infinity. Moreover, according to the previous analysis, the computationally expensive convolution in Equation (6) can be transformed into additions with complexity O(N), which yields an efficient computation method.

According to the norm features of the DoB discussed above, the corner response function is defined as follows:

$R(t) = [DoB(x(t))]^2 + [DoB(y(t))]^2$ (7)

where t is the parameter of the plane curve.
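To make Equations (5)-(7) concrete, the sketch below computes a corner response along a closed contour by evolving the coordinates at two scales m1 and m2 with the box-cascade smoother from the previous sketch and taking the squared evolution difference, matching Eq. (7) (the square root in Eq. (6) is monotone and can be omitted for detection). The scales, the threshold, and the local-maximum rule are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def dob_corner_response(x, y, n_order=3, m1=4, m2=8):
    """Corner response along the contour: the coordinates are evolved at two
    scales m1 < m2 (Eq. (5)) and the squared DoB difference is taken (Eq. (7));
    bspline_smooth is the helper from the previous sketch."""
    x1 = bspline_smooth(x, n_order, m1)
    y1 = bspline_smooth(y, n_order, m1)
    x2 = bspline_smooth(x, n_order, m2)
    y2 = bspline_smooth(y, n_order, m2)
    return (x2 - x1) ** 2 + (y2 - y1) ** 2

def local_maxima(response, threshold):
    """Indices of contour points whose response is a local maximum above the
    threshold (the contour is closed, so neighbours wrap around)."""
    left = np.roll(response, 1)
    right = np.roll(response, -1)
    return np.where((response > threshold) & (response >= left) & (response >= right))[0]

# corner_indices = local_maxima(dob_corner_response(x, y), threshold=1e-3)
```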

3 EXTRACTION METHOD AND PRINCIPLE ANALYSIS OF THE COVARIANT FEATURE REGIONS

3.1 Extraction of the feature regions

Suppose the contour containing a single corner (x0, y0) has been smoothed so that it is differentiable everywhere. We denote the corner point as P0(x0, y0) and a contour point near the corner as P(x, y). The tangent direction at P(x, y) is given by Equation (8), the tangent direction at the corner P0(x0, y0) by Equation (9), and the relative direction between the contour point and the corner by Equation (10).

$\theta(x, y) = \tan^{-1}(dy/dx)$ (8)

$\theta(x_0, y_0) = \tan^{-1}(dy_0/dx_0)$ (9)

$\Delta\theta = \tan^{-1}\!\big[\tan\big(\theta(x, y) - \theta(x_0, y_0)\big)\big]$ (10)

Since the corner has good rotational invariance, it is taken as the reference point of the contour curve. When the image is rotated, the angles between the corner and the contour points do not change.
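A minimal sketch of the rotation-invariant relative direction of Equations (8)-(10), assuming the contour coordinates are given as discrete arrays; finite differences stand in for dy/dx, and np.arctan2 keeps the angles well defined when dx is zero. The helper names are illustrative.

```python
import numpy as np

def tangent_angles(x, y):
    """Tangent direction theta = arctan(dy/dx) at every contour point (Eqs. (8)-(9)),
    using central differences on a closed contour; arctan2 handles vertical tangents."""
    dx = np.roll(x, -1) - np.roll(x, 1)
    dy = np.roll(y, -1) - np.roll(y, 1)
    return np.arctan2(dy, dx)

def relative_direction(theta, corner_index):
    """Direction of every contour point relative to the corner (Eq. (10)), wrapped
    to (-pi, pi]; a global image rotation adds the same constant to every tangent
    angle, so these relative values are unchanged."""
    diff = theta - theta[corner_index]
    return np.arctan2(np.sin(diff), np.cos(diff))
```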

3.2 Flow of the feature region detection algorithm

Based on the above analysis, we propose our algorithm as follows (a code sketch of the flow is given after the steps):

Step 1: The image edges are extracted with the Canny algorithm, and the contour curves are smoothed using the B-spline function.

Step 2: According to Equation (7), the corners of the contour curves are detected using the DoB operator.

Step 3: Each corner is set as a local reference point, and the direction of every point on the smoothed contour segment containing the corner is computed by Equation (10).

Step 4: Moving along the contour on both sides of the corner, the first inflection point or another corner on each side is marked; the search then stops, and the marker with the greater pixel distance from the corner is selected. Let l denote the Euclidean distance between this marker and the corner, and let k ∈ [0, 1] be a multiplier factor; the radius of the feature circle is then r = kl.

Step 5: The circular region is described using a descriptor.
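The following sketch outlines Steps 1-5 under stated assumptions: OpenCV's Canny and findContours supply the edge contours, the helpers are the ones sketched earlier (bspline_smooth, dob_corner_response, local_maxima, tangent_angles, relative_direction), the Canny thresholds and default parameters are illustrative, and Step 4 is simplified to use only other detected corners as stopping markers rather than the paper's full inflection-point search.

```python
import cv2
import numpy as np

def farthest_neighbour_corner_distance(corner_indices, idx, xs, ys):
    """Step 4 (simplified): the nearest other corner on each side is taken as the
    marker, and the greater of the two Euclidean distances from the corner is kept."""
    n_points = len(xs)
    others = [c for c in corner_indices if c != idx]
    if not others:
        return 0.0
    fwd_idx = (idx + min((c - idx) % n_points for c in others)) % n_points
    bwd_idx = (idx - min((idx - c) % n_points for c in others)) % n_points
    d_fwd = np.hypot(xs[fwd_idx] - xs[idx], ys[fwd_idx] - ys[idx])
    d_bwd = np.hypot(xs[bwd_idx] - xs[idx], ys[bwd_idx] - ys[idx])
    return float(max(d_fwd, d_bwd))

def detect_covariant_regions(image_gray, k=0.8, n_order=3, m1=4, m2=8, thresh=1e-3):
    """Sketch of Steps 1-5: returns a list of feature circles (cx, cy, radius)."""
    edges = cv2.Canny(image_gray, 100, 200)                    # Step 1: Canny edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    regions = []
    for contour in contours:
        if len(contour) < 4 * m2:                              # skip very short contours
            continue
        x = contour[:, 0, 0].astype(float)
        y = contour[:, 0, 1].astype(float)
        xs = bspline_smooth(x, n_order, m1)                    # Step 1: B-spline smoothing
        ys = bspline_smooth(y, n_order, m1)
        response = dob_corner_response(x, y, n_order, m1, m2)  # Step 2: DoB corner response
        corner_indices = local_maxima(response, thresh)
        theta = tangent_angles(xs, ys)
        for idx in corner_indices:
            # Step 3: directions relative to the corner (would feed the descriptor)
            _rel = relative_direction(theta, idx)
            l = farthest_neighbour_corner_distance(corner_indices, idx, xs, ys)
            if l > 0:
                regions.append((xs[idx], ys[idx], k * l))      # Step 4: feature circle, r = k * l
    return regions  # Step 5: each circle would then be described by a descriptor
```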


4 EXPERIMENTAL RESULTS AND ANALYSIS

In this part, matching experiments are carried out on original images and transformed images (including rotation, scaling, noise, and illumination changes). Every experiment is run on the Windows 7 operating system with an Intel Pentium 3.00 GHz processor and Matlab 7.0.1. Some experimental results are shown in Figure 1. From the results, we can see that the areas of the matched feature regions are basically consistent and contain the same image content, which illustrates that the feature regions are invariant to rotation and scaling transformations, that enough matching features can be obtained under illumination changes and noise, and that the method is robust.

Figure 1. Extraction of the image feature regions: (a) rotation transformation; (b) scaling transformation; (c) noise jamming; (d) light transformation.

The repetition rate effectively measures a detector's ability to extract feature regions. The repetition rate criterion [10] is based on Equation (11):

$\mathrm{repeatability} = \dfrac{\#\mathrm{correspondences}}{\min\{\mathrm{number\_image1},\ \mathrm{number\_image2}\}} \times 100\%$ (11)

where #correspondences stands for the number of matched feature regions in the same scene of the two images, and number_image1 and number_image2 are the numbers of feature regions detected in the same scene of the two images, respectively.
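A small sketch of the repeatability score of Equation (11), assuming the correspondences have already been established (for example, by an overlap-error test between the detected regions); the function name and the numbers in the usage line are illustrative only.

```python
def repeatability(num_correspondences, num_regions_image1, num_regions_image2):
    """Repetition rate of Eq. (11): the number of matched regions divided by the
    smaller of the two per-image detection counts, expressed as a percentage."""
    denominator = min(num_regions_image1, num_regions_image2)
    if denominator == 0:
        return 0.0
    return 100.0 * num_correspondences / denominator

# Illustrative numbers only: 38 matches out of 55 and 60 detections -> about 69.1%.
score = repeatability(38, 55, 60)
```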

The experiment parameters are set as follows: the three scales of the DoB corner detection are 2, 2.5, and 3, respectively; the number of boundary points k used to fit the feature direction is 8; the value n used to obtain relatively covariant points is 3; and the overlap error is 60%.

Figure 2 shows the image set used for assessment in this paper [10]. The eight groups of images contain different variations (viewpoint change, zoom and rotation, image blur, and light change). In order to facilitate the analysis of the results, we call the proposed algorithm the RCBR (Round Contour Based Region) algorithm. The RCBR algorithm is compared with the representative contour-based region detection algorithm EBR and with the Harris-Affine algorithm. The experimental results are shown in Figure 3.

According to the detection results, changes of image content influence the performance of a contour-based method depending on whether the contour remains stable. When the viewpoint changes, the contour is the most unstable, so the performance of the method drops quickly. Under the other changes, the contour is relatively stable and the repetition rate curve falls more slowly. According to the curves, the image contour is partly affected by the light change, so the Harris-Affine method performs better than RCBR and EBR in this case. However, under the other changes, the proposed method is clearly better than EBR and Harris-Affine.

Figure 2. Sets of test images: (a) viewpoint change; (b) zoom and rotation; (c) image blur; (d) light change.

Figure 3. Comparison of the repetition rates of the proposed algorithm, EBR, and Harris-Affine: (a) viewpoint change; (b) zoom and rotation; (c) image blur; (d) light change.

In addition to the test sets in Figure 2, we have carried out many further experiments to validate the effectiveness of the method. In general, the RCBR method is easy to implement, fast, and robust to a variety of changes, so it has wide practical applicability.

REFERENCES

[1] T. Lindeberg, "Feature detection with automatic scale selection," International Journal of Computer Vision, vol. 30, pp. 79-116, 1998.

[2] T. Tuytelaars and L. Van Gool, "Content-based image retrieval based on local affinely invariant regions," International Conference on Visual Information Systems, ACM Press, 1999, pp. 493-500.

[3] T. Tuytelaars and L. Van Gool, "Wide baseline stereo matching based on local, affinely invariant regions," The Eleventh British Machine Vision Conference, University of Bristol, UK, 2000.

[4] K. Mikolajczyk and C. Schmid, "Indexing based on scale invariant interest points," International Conference on Computer Vision, IEEE Press, 2001, pp. 525-531.

[5] J. Matas, O. Chum, M. Urban, and T. Pajdla, "Robust wide baseline stereo from maximally stable extremal regions," Image and Vision Computing, vol. 22, pp. 761-767, 2002.

[6] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, pp. 91-110, 2004.

[7] K. Mikolajczyk and C. Schmid, "Scale & affine invariant interest point detectors," International Journal of Computer Vision, vol. 60, pp. 63-86, 2004.

[8] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," Proceedings of the 9th European Conference on Computer Vision, Graz, Austria, 2006, pp. 404-417.

[9] T. Tuytelaars and L. Van Gool, "Content-based image retrieval based on local affinely invariant regions," International Conference on Visual Information Systems, 1999, pp. 493-500.

[10] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool, "A comparison of affine region detectors," International Journal of Computer Vision, vol. 65, pp. 43-72, 2005.
