IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 8, AUGUST 2015 1705

Image Forgery Detection Using Adaptive Oversegmentation and Feature Point Matching

Chi-Man Pun, Senior Member, IEEE, Xiao-Chen Yuan, Member, IEEE, and Xiu-Li Bi

Abstract— A novel copy–move forgery detection scheme using adaptive oversegmentation and feature point matching is proposed in this paper. The proposed scheme integrates both block-based and keypoint-based forgery detection methods. First, the proposed adaptive oversegmentation algorithm segments the host image into nonoverlapping and irregular blocks adaptively. Then, the feature points are extracted from each block as block features, and the block features are matched with one another to locate the labeled feature points; this procedure can approximately indicate the suspected forgery regions. To detect the forgery regions more accurately, we propose the forgery region extraction algorithm, which replaces the feature points with small superpixels as feature blocks and then merges the neighboring blocks that have similar local color features into the feature blocks to generate the merged regions. Finally, it applies the morphological operation to the merged regions to generate the detected forgery regions. The experimental results indicate that the proposed copy–move forgery detection scheme can achieve much better detection results even under various challenging conditions compared with the existing state-of-the-art copy–move forgery detection methods.

Index Terms— Copy-move forgery detection, adaptive over-segmentation, local color feature, forgery region extraction.

I. INTRODUCTION

WITH the development of computer technology and image processing software, digital image forgery has become increasingly easy to perform. However, digital images are a popular source of information, and the reliability of digital images is thus becoming an important issue. In recent years, more and more researchers have begun to focus on the problem of digital image tampering. Of the existing types of image tampering, a common manipulation of a digital image is copy-move forgery [1], which is to paste one or several copied region(s) of an image into other part(s) of the same image. During the copy and move operations, some image processing methods such as rotation, scaling, blurring, compression, and noise addition are occasionally applied to make convincing forgeries.

Manuscript received August 6, 2014; revised October 9, 2014, November 24, 2014, January 15, 2015, and March 9, 2015; accepted April 11, 2015. Date of publication April 15, 2015; date of current version June 25, 2015. This work was supported in part by the Research Committee through the University of Macau, Macau, China, under Grant MYRG134(Y1-L2)-FST11-PCM, Grant MYRG181(Y1-L3)FST11-PCM, Grant MYRG2015-00012-FST, and Grant MYRG2015-00011-FST, and in part by the Science and Technology Development Fund of Macau under Grant FDCT 008/2013/A1 and Grant FDCT 093-2014-A2. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Vrizlynn L. L. Thing.

The authors are with the Department of Computer and Information Science, University of Macau, Macau, China (e-mail: [email protected]; [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIFS.2015.2423261

Because the copy and move parts are copied from the same image, the noise component, color characteristics, and other important properties are compatible with the remainder of the image, so forgery detection methods that rely on such image properties are not applicable in this case. In previous years, many forgery detection methods have been proposed for copy-move forgery detection. The existing copy-move forgery detection methods can be categorized into two main categories: block-based algorithms [1]–[13] and feature keypoint-based algorithms [14]–[19].

The existing block-based forgery detection methods divide the input image into overlapping and regular image blocks; then, the tampered region can be obtained by matching blocks of image pixels or transform coefficients. Fridrich et al. [1] proposed a forgery detection method in which the input image was divided into overlapping rectangular blocks, from which the quantized Discrete Cosine Transform (DCT) coefficients of the blocks were matched to find the tampered regions. Popescu and Farid [2] applied Principal Component Analysis (PCA) to reduce the feature dimensions. Luo et al. [3] used the RGB color components and direction information as block features. Li et al. [4] used the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) to extract the image features. Mahdian and Saic [5] calculated 24 blur-invariant moments as features. Kang and Wei [6] calculated the singular values of a reduced-rank approximation of each block. Bayram et al. [7] used the Fourier-Mellin Transform (FMT) to obtain features. Wang et al. [8], [9] used the mean intensities of circles with different radii around the block center to represent the block features. Lin et al. [10] used the gray average results of each block and its sub-blocks as the block features. Ryu et al. [11], [12] used Zernike moments as block features. Bravo-Solorio and Nandi [13] used information entropy as block features.

As an alternative to the block-based methods, keypoint-based forgery detection methods were proposed, in which image keypoints are extracted and matched over the whole image to resist some image transformations while identifying duplicated regions. In [14]–[16] and [18], the Scale-Invariant Feature Transform (SIFT) [20] was applied to the host images to extract feature points, which were then matched to one another. When the value of the shift vector exceeded the threshold, the sets of corresponding SIFT feature points were defined as the forgery region. In [17] and [19], Speeded Up Robust Features (SURF) [21] were applied to extract features instead of SIFT.



However, although these methods can locate the matched keypoints, most of them cannot locate the forgery regions very well; therefore, they cannot achieve satisfactory detection results and, at the same time, a sustained high recall rate [22].

Most of the existing block-based forgery detection algorithms use a similar framework; the only difference is that they apply different feature extraction methods to extract the block features. Although these algorithms are effective in forgery detection, they have three main drawbacks: 1) the host image is divided into overlapping rectangular blocks, which becomes computationally expensive as the size of the image increases; 2) the methods cannot address significant geometrical transformations of the forgery regions; and 3) their recall rate is low because their blocks have a regular shape. Although the existing keypoint-based forgery detection methods avoid the first two problems (they reduce the computational complexity and can successfully detect the forgery even when some attacks exist in the host images), their recall results are very poor.

To address the above-mentioned problems, in this paper we propose a novel copy-move forgery detection scheme using adaptive over-segmentation and feature point matching. The proposed scheme integrates both the traditional block-based and keypoint-based forgery detection methods. Similar to the block-based forgery detection methods, we propose an image-blocking method called the Adaptive Over-Segmentation algorithm, which divides the host image into non-overlapping and irregular blocks adaptively. Then, similar to the keypoint-based forgery detection methods, the feature points are extracted from each image block as block features, instead of being extracted from the whole host image as in the traditional keypoint-based methods. Subsequently, the block features are matched with one another to locate the labeled feature points, which can approximately indicate the suspected forgery regions. To detect more accurate forgery regions, we propose the Forgery Region Extraction algorithm, which replaces the feature points with small superpixels as feature blocks and then merges the neighboring blocks with similar local color features into the feature blocks to generate the merged regions; finally, it applies a morphological operation to the merged regions to generate the detected forgery regions.

In the following sections, Section II presents the framework of the proposed copy-move forgery detection scheme and then explains each step in detail. In Section III, a series of experiments is conducted to demonstrate the effectiveness of the proposed scheme. Finally, the conclusions are drawn in Section IV.

II. IMAGE FORGERY DETECTION USING ADAPTIVE OVER-SEGMENTATION AND FEATURE POINT MATCHING

This section describes the proposed image forgery detection using adaptive over-segmentation and feature point matching in detail. Fig. 1 shows the framework of the proposed image forgery detection scheme.

Fig. 1. Framework of the proposed copy-move forgery detection scheme.

First, an adaptive over-segmentation method is proposed to segment the host image into non-overlapping and irregular blocks called Image Blocks (IB). Then, we apply the Scale-Invariant Feature Transform (SIFT) to each block to extract the SIFT feature points as Block Features (BF). Subsequently, the block features are matched with one another, and the feature points that are successfully matched to one another are determined to be Labeled Feature Points (LFP), which can approximately indicate the suspected forgery regions. Finally, we propose the Forgery Region Extraction method to detect the forgery region from the host image according to the extracted LFP. In the remainder of this section, Section II-A explains the proposed Adaptive Over-Segmentation method in detail; Section II-B introduces the Feature Point Extraction using SIFT; Section II-C describes the Block Feature Matching procedure; and Section II-D presents the proposed Forgery Region Extraction method.

A. Adaptive Over-Segmentation Algorithm

In our copy-move forgery detection scheme, we first propose the Adaptive Over-Segmentation algorithm, which is similar to the traditional block-based forgery detection methods in that it divides the host image into blocks. In previous years, a large number of block-based forgery detection algorithms have been proposed [1]–[13]. In the existing block-based forgery detection schemes, the host image was usually divided into


overlapping regular blocks, with the block size defined and fixed beforehand, as shown in Fig. 2(a) and (b). Then, the forgery regions were detected by matching those blocks. In this way, the detected regions are always composed of regular blocks, which cannot represent the actual forgery region well; as a consequence, the recall rate of the block-based methods is always very low, as in [8] and [9], for example. Moreover, when the size of the host image increases, the matching computation over the overlapping blocks becomes much more expensive. To address these problems, we propose the Adaptive Over-Segmentation method, which can segment the host image into non-overlapping regions of irregular shape as image blocks, as shown in Fig. 2(c); afterward, the forgery regions can be detected by matching those non-overlapping and irregular regions.

Fig. 2. Different blocking/segmentation methods: (a) overlapping and rectangular blocking; (b) overlapping and circular blocking; and (c) non-overlapping and irregular blocking.

Because we must divide the host image into non-overlapping regions of irregular shape, and because superpixels are perceptually meaningful atomic regions that can be obtained by over-segmentation, we employ the simple linear iterative clustering (SLIC) algorithm [23] to segment the host image into meaningful irregular superpixels as individual blocks. The SLIC algorithm adapts a k-means clustering approach to generate the superpixels efficiently, and it adheres to image boundaries very well. Fig. 2 shows the different blocking/segmentation methods, where (a) shows the overlapping and rectangular blocking, (b) shows the overlapping and circular blocking, and (c) shows the non-overlapping and irregular blocking obtained with the SLIC segmentation method. Using the SLIC segmentation method, the non-overlapping segmentation decreases the computational expense compared with overlapping blocking; furthermore, in most cases, the irregular and meaningful regions can represent the forgery region better than regular blocks. However, the initial size of the superpixels in SLIC is difficult to decide.

In practical applications of copy-move forgery detection, the host images and the copy-move regions are of different sizes and have different content. In our forgery detection method, different initial superpixel sizes produce different forgery detection results; consequently, different host images should be blocked into superpixels of different initial sizes, and this choice is highly related to the forgery detection results. In general, when the initial size of the superpixels is too small, the computational expense becomes large; when it is too large, the forgery detection results are not sufficiently accurate. Therefore, a balance between the computational expense and the detection accuracy must be found when employing the SLIC segmentation method for image blocking.

In general, a proper initial superpixel size is very important for obtaining good forgery detection results for different types of forgery regions. However, there is currently no good solution for determining the initial size of the superpixels in the existing over-segmentation algorithms. In this paper, we propose a novel Adaptive Over-Segmentation method that can determine the initial size of the superpixels adaptively based on the texture of the host image. When the texture of the host image is smooth, the initial size of the superpixels can be set relatively large, which ensures not only that the superpixels adhere closely to the edges but also that the superpixels contain sufficient feature points to be used for forgery detection; furthermore, larger superpixels imply a smaller number of blocks, which reduces the computational expense when the blocks are matched with one another. In contrast, when the texture of the host image has more detail, the initial size of the superpixels can be set relatively small to ensure good forgery detection results. In the proposed method, the Discrete Wavelet Transform (DWT) is employed to analyze the frequency distribution of the host image. Roughly, when the low-frequency energy accounts for the majority of the frequency energy, the host image appears to be a smooth image; otherwise, if the low-frequency energy accounts for only a minority of the frequency energy, the host image appears to be a detailed image.

We have performed a large number of experiments to find the relationship between the frequency distribution of the host images and the initial size of the superpixels that yields good forgery detection results. We perform a four-level DWT, using the 'Haar' wavelet, on the host image; then, the low-frequency energy E_LF and the high-frequency energy E_HF can be calculated using (1) and (2), respectively. With E_LF and E_HF, we can calculate the percentage of the low-frequency distribution P_LF using (3), according to which the initial size S of the superpixels is defined as in (4).

$$E_{LF} = \sum \lvert CA_4 \rvert \qquad (1)$$

$$E_{HF} = \sum_{i=1}^{4} \Big( \sum \lvert CD_i \rvert + \sum \lvert CH_i \rvert + \sum \lvert CV_i \rvert \Big) \qquad (2)$$

where CA_4 denotes the approximation coefficients at the fourth level of the DWT, and CD_i, CH_i, and CV_i denote the detail coefficients at the i-th level of the DWT, i = 1, 2, ..., 4.


Fig. 3. Flowchart of the Adaptive Over-Segmentation algorithm.


$$P_{LF} = \frac{E_{LF}}{E_{LF} + E_{HF}} \times 100\% \qquad (3)$$

$$S = \begin{cases} \sqrt{0.02 \times M \times N}, & P_{LF} > 50\% \\ \sqrt{0.01 \times M \times N}, & P_{LF} \le 50\% \end{cases} \qquad (4)$$

where S is the initial size of the superpixels, M × N is the size of the host image, and P_LF is the percentage of the low-frequency distribution.

In summary, the flow chart of the proposed Adaptive Over-Segmentation method is shown in Fig. 3. First, we apply the DWT to the host image to obtain the coefficients of the low- and high-frequency sub-bands. Then, we calculate the percentage of the low-frequency distribution P_LF using (3), according to which we determine the initial size S using (4). Finally, we employ the SLIC segmentation algorithm with the calculated initial size S to segment the host image into the image blocks (IB).
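A minimal Python sketch of this procedure is given below, assuming the PyWavelets and scikit-image libraries; it follows (1)-(4) directly, while the SLIC compactness value is an illustrative assumption rather than a parameter specified in the paper.

```python
import numpy as np
import pywt                                    # PyWavelets
from skimage.color import rgb2gray
from skimage.segmentation import slic


def adaptive_initial_size(image_rgb):
    """Compute the adaptive superpixel size S from Eqs. (1)-(4)."""
    gray = rgb2gray(image_rgb)
    # coeffs = [cA4, (cH4, cV4, cD4), ..., (cH1, cV1, cD1)]
    coeffs = pywt.wavedec2(gray, "haar", level=4)
    e_lf = np.sum(np.abs(coeffs[0]))                                      # Eq. (1)
    e_hf = sum(np.sum(np.abs(c)) for level in coeffs[1:] for c in level)  # Eq. (2)
    p_lf = e_lf / (e_lf + e_hf)                                           # Eq. (3), as a fraction
    m, n = gray.shape
    factor = 0.02 if p_lf > 0.5 else 0.01
    return int(round(np.sqrt(factor * m * n)))                            # Eq. (4)


def adaptive_oversegmentation(image_rgb):
    """Segment the host image into non-overlapping, irregular image blocks (IB)."""
    s = adaptive_initial_size(image_rgb)
    n_segments = max((image_rgb.shape[0] * image_rgb.shape[1]) // (s * s), 1)
    # compactness=10 is an illustrative default, not a value taken from the paper
    return slic(image_rgb, n_segments=n_segments, compactness=10, start_label=0)
```

The returned label map plays the role of the image blocks (IB) used in the following subsections.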

Using the Adaptive Over-Segmentation method described above, for the host image I1 in Fig. 4(A1), the size is M1 × N1 = 1632 × 1224; according to (3), P_LF_1 = 50.19%, and therefore the adaptive initial size of the superpixels calculated using (4) is S_1 = 199. Similarly, for the host image I2 in Fig. 4(B1), with the size M2 × N2 = 1306 × 1950, P_LF_2 = 39.89% and S_2 = 159; for the host image I3 in Fig. 4(C1), with the size M3 × N3 = 1936 × 1296, P_LF_3 = 59.92% and S_3 = 224. Fig. 4(A4), (B4), and (C4) show the host image segmentations obtained with the proposed Adaptive Over-Segmentation method, and (a4), (b4), and (c4) show the corresponding detected forgery regions. We can see that in Fig. 4(A), with the calculated adaptive size S_1 = 199, the forgery detection result in Fig. 4(a4) is better than the results obtained with the fixed sizes S = 150 and S = 250 (shown in Fig. 4(a2) and (a3), respectively). In Fig. 4(B), with the calculated adaptive size S_2 = 159, the forgery detection result in Fig. 4(b4) is similar to the result in Fig. 4(b2) when S = 150 and better than the result in Fig. 4(b3) when S = 250. In Fig. 4(C), with the calculated adaptive size S_3 = 224, the forgery detection result in Fig. 4(c4) is close to the result in Fig. 4(c3) when S = 250 and better than the result in Fig. 4(c2) when S = 150.

As discussed above, the proposed Adaptive Over-Segmentation method can divide the host image into blocks with adaptive initial sizes according to the given host image, so that an appropriate initial block size is determined for each image to enhance the forgery detection results. The proposed Adaptive Over-Segmentation method can therefore lead to better forgery detection results than forgery detection methods that segment the host images into fixed-size blocks and, at the same time, reduce the computational expense compared with most of the existing forgery detection methods, which segment the host images into overlapping blocks.

B. Block Feature Extraction Algorithm

In this section, we extract block features from the image blocks (IB). The traditional block-based forgery detection methods extract features of the same length as the block features or directly use the pixels of the image block as the block features; however, those features mainly reflect the content of the image blocks, leaving out the location information. In addition, such features are not resistant to various image transformations. Therefore, in this paper, we extract feature points from each image block as block features, and the feature points should be robust to various distortions, such as image scaling, rotation, and JPEG compression.

In recent years, the feature point extraction methods SIFT [20] and SURF [21] have been widely used in the field of computer vision. The feature points extracted by SIFT and SURF have proven to be robust against common image processing operations such as rotation, scaling, blurring, and compression; consequently, SIFT and SURF are often used as feature point extraction methods in the existing keypoint-based copy-move forgery detection methods. Christlein et al. [22] showed that SIFT exhibited more consistent and better performance than the other 13 image feature extraction methods in their comparative experiments. As a result, in our proposed algorithm, we choose SIFT as the feature point extraction method to extract the feature points from each image block, and each block is characterized by the SIFT feature points extracted in the corresponding block. Therefore, each block feature contains the irregular block region information and the extracted SIFT feature points.
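As an illustration, the block features can be gathered with OpenCV's SIFT implementation (available in opencv-python 4.4 and later) roughly as follows; assigning each detected keypoint to the superpixel that contains it is our reading of the block-feature definition, not code published by the authors.

```python
import cv2
import numpy as np


def extract_block_features(image_bgr, labels):
    """Collect SIFT keypoints and descriptors per image block.

    labels: superpixel label map from the adaptive over-segmentation step.
    Returns a dict mapping block label -> (keypoint coordinates, descriptors).
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = cv2.SIFT_create().detectAndCompute(gray, None)
    if descriptors is None:
        return {}

    block_features = {}
    for kp, desc in zip(keypoints, descriptors):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        pts, descs = block_features.setdefault(int(labels[y, x]), ([], []))
        pts.append((x, y))
        descs.append(desc)
    return {lbl: (np.array(pts), np.array(descs))
            for lbl, (pts, descs) in block_features.items()}
```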

C. Block Feature Matching Algorithm

After we have obtained the block features (BF), we must locate the matched blocks through the block features. In most of the existing block-based methods, the block matching process outputs a specific block pair only if there are many other matching pairs in the same mutual position, assuming that they have the same shift vector. When the shift vector exceeds a user-specified threshold, the matched blocks that contributed to that specific shift vector are identified as regions that might have been copied and moved. In our algorithm, because the block feature is composed of a set of feature points, we propose a different method to locate the matched blocks.


Fig. 4. Superpixels of different initial sizes and the corresponding forgery detection results of (A) Ship, (B) Sailing, and (C) Kore. (A1), (B1), and (C1): the copy-move host images I1, I2, and I3; (a1), (b1), (c1): the corresponding forgery regions of I1, I2, and I3, respectively. (A2), (B2), (C2): the host images blocked into superpixels with initial size S = 150; (a2), (b2), (c2): the corresponding detected forgery regions when S = 150. (A3), (B3), (C3): the host images blocked into superpixels with initial size S = 250; (a3), (b3), (c3): the corresponding detected forgery regions when S = 250. (A4), (B4), (C4): the host images blocked into superpixels with the proposed Adaptive Over-segmentation method, by which the initial superpixel sizes are calculated as S = 199, S = 159, and S = 224, respectively; (a4), (b4), (c4): the corresponding detected forgery regions with the proposed Adaptive Over-segmentation method.

Fig. 5 shows the flowchart of the Block Feature Matching algorithm. First, the number of matched feature points is calculated, and the correlation coefficient map is generated; then, the corresponding block matching threshold is calculated adaptively; with this threshold, the matched block pairs are located; and finally, the matched feature points in the matched block pairs are extracted and labeled to locate the position of the suspected forgery region. The detailed steps are explained as follows.


Fig. 5. Flowchart of the Block Feature Matching algorithm.


Algorithm 1 Block Feature Matching Algorithm
Input: Block Features (BF);
Output: Labeled Feature Points (LFP).
STEP-1: Load the block features BF = {BF_1, BF_2, ..., BF_N}, where N is the number of image blocks, and calculate the correlation coefficients CC of the image blocks.
STEP-2: Calculate the block matching threshold TR_B according to the distribution of the correlation coefficients.
STEP-3: Locate the matched blocks MB according to the block matching threshold TR_B.
STEP-4: Label the matched feature points in the matched blocks MB to indicate the suspected forgery regions.

In STEP-1, the correlation coefficient CC between two image blocks indicates the number of matched feature points between the corresponding blocks. Assuming that there are N blocks after the adaptive over-segmentation, we can generate N(N − 1)/2 correlation coefficients, which form the correlation coefficient map. Within a pair of blocks, a feature point f_a(x_a, y_a) is matched to a feature point f_b(x_b, y_b) only if they satisfy the condition defined in (5), i.e., their distance, scaled by the predefined feature-point matching threshold TR_p, does not exceed the distance from f_a to any other keypoint in the corresponding block.

$$d(f_a, f_b) \cdot TR_p \le d(f_a, f_i) \qquad (5)$$

where d(f_a, f_b) is the Euclidean distance between the feature points f_a and f_b, as defined in (6); d(f_a, f_i) is the Euclidean distance between the keypoint f_a and each of the other keypoints in the corresponding block, as defined in (7), where i indexes the feature points and n is the number of feature points in the corresponding block; and TR_p is the feature-point matching threshold. When TR_p becomes larger, the matching accuracy is higher, but at the same time the miss probability increases. Therefore, in the experiments, we set TR_p = 2 to provide a good trade-off between matching accuracy and miss probability.

$$d(f_a, f_b) = \sqrt{(x_a - x_b)^2 + (y_a - y_b)^2} \qquad (6)$$

$$d(f_a, f_i) = \sqrt{(x_a - x_i)^2 + (y_a - y_i)^2}, \quad i = 1, 2, \ldots, n;\; i \ne a,\; i \ne b \qquad (7)$$
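The matching test above can be made concrete with the following sketch, which applies (5)-(7) to the feature points of two blocks and builds the correlation coefficient map over all N(N − 1)/2 block pairs. This is our own reading of the algorithm rather than the authors' code; per (6)-(7) the feature vectors are the keypoint coordinates, but SIFT descriptors could be passed instead without changing the logic.

```python
import numpy as np


def count_matched_points(feats_a, feats_b, tr_p=2.0):
    """Count feature points of block A matched into block B under Eq. (5)."""
    feats_a = np.asarray(feats_a, dtype=float)
    feats_b = np.asarray(feats_b, dtype=float)
    matches = 0
    for fa in feats_a:
        if feats_b.shape[0] < 2:
            break                                     # the test needs at least one "other" keypoint
        d_ab = np.linalg.norm(feats_b - fa, axis=1)   # distances d(f_a, f_b), Eq. (6)
        j = int(np.argmin(d_ab))                      # nearest candidate f_b
        others = np.delete(d_ab, j)                   # d(f_a, f_i), i != b, Eq. (7)
        if d_ab[j] * tr_p <= others.min():            # matching condition, Eq. (5)
            matches += 1
    return matches


def correlation_coefficient_map(block_features):
    """Correlation coefficients CC: matched-point counts for every block pair."""
    labels = sorted(block_features)
    return {(a, b): count_matched_points(block_features[a][0], block_features[b][0])
            for i, a in enumerate(labels) for b in labels[i + 1:]}
```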

In STEP-2, to calculate the block matching threshold TR_B, the distinct values of the correlation coefficients are first sorted in ascending order as CC_S = {CC_1, CC_2, CC_3, ..., CC_t}, where t ≤ N(N − 1)/2. Then, the first derivative ∇(CC_S), the second derivative ∇²(CC_S), and the mean value of the first-derivative vector are calculated. Finally, we select the minimum correlation coefficient from among those whose second derivative is larger than the mean value of the first-derivative vector, as defined in (8); the selected correlation coefficient value is taken as the block matching threshold TR_B.

$$\nabla^2(CC\_S) > \overline{\nabla(CC\_S)} \qquad (8)$$
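Under our reading of STEP-2, the adaptive threshold can be computed as follows; np.gradient is used as the discrete derivative because the paper does not state the differencing scheme, and the fallback for degenerate inputs is our own choice.

```python
import numpy as np


def block_matching_threshold(cc_values):
    """Select TR_B adaptively from the correlation coefficients (STEP-2, Eq. (8))."""
    cc_sorted = np.unique(np.asarray(list(cc_values), dtype=float))   # CC_S, ascending, distinct
    if cc_sorted.size < 3:
        return cc_sorted[-1] if cc_sorted.size else 0.0               # degenerate case (assumption)
    first = np.gradient(cc_sorted)                                    # discrete first derivative
    second = np.gradient(first)                                       # discrete second derivative
    candidates = cc_sorted[second > first.mean()]                     # Eq. (8)
    return candidates.min() if candidates.size else cc_sorted[-1]     # smallest CC satisfying (8)
```

The matched block pairs of STEP-3 are then simply those whose correlation coefficient exceeds TR_B.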

In STEP-3, with the calculated block matching threshold TR_B, if the correlation coefficient of a block pair is larger than TR_B, the corresponding block pair is determined to be a matched block pair; and in STEP-4, the matched feature points in the matched blocks are labeled to indicate the suspected forgery regions.

In the proposed Block Feature Matching algorithm, we have defined two thresholds for matching the blocks: the feature-point matching threshold TR_p and the block matching threshold TR_B, which help avoid possible mismatched features, particularly for copy-move forgery regions that are similar to the image background. Within the blocks, two feature points are matched only when they satisfy the condition in (5), and TR_p can be adjusted to reduce the number of falsely matched feature points. Furthermore, two blocks are matched only when their correlation coefficient is larger than TR_B. In summary, with these two thresholds, most of the false matching can be avoided in the proposed scheme.

D. Forgery Region Extraction Algorithm

Although we have extracted the labeled feature points (LFP), which give only the locations of the forgery regions, we must still locate the forgery regions themselves. Considering that superpixels can segment the host image very well, we propose a method that replaces the LFP with small superpixels to obtain the suspected regions (SR), which are combinations of labeled small superpixels. Furthermore, to improve the precision and recall results, we measure the local color feature of the superpixels that neighbor the suspected regions (SR); if their color feature is similar to that of the suspected regions, we merge the neighboring superpixels into the corresponding suspected regions, which generates the merged regions (MR). Finally, a morphological close operation is applied to the merged regions to generate the detected copy-move forgery regions. Fig. 6 shows the flow chart of the Forgery Region Extraction algorithm, which is explained in detail as follows.


Fig. 6. Flow chart of the Forgery Region Extraction algorithm.


Algorithm 2 Forgery Region Extraction
Input: Labeled Feature Points (LFP)
Output: Detected Forgery Regions.
STEP-1: Load the Labeled Feature Points (LFP), apply the SLIC algorithm with the initial size S to the host image to segment it into small superpixels as feature blocks, and replace each labeled feature point with its corresponding feature block, thus generating the Suspected Regions (SR).
STEP-2: Measure the local color feature of the superpixels neighboring the SR, called neighbor blocks; when their color feature is similar to that of the suspected regions, merge the neighbor blocks into the corresponding SR, thereby creating the Merged Regions (MR).
STEP-3: Apply the morphological close operation to the MR to finally generate the detected forgery regions.

In STEP-1, assume that $LFP = \{\langle LP_1, \overline{LP}_1\rangle, \langle LP_2, \overline{LP}_2\rangle, \ldots, \langle LP_n, \overline{LP}_n\rangle\}$, where $\langle LP_i, \overline{LP}_i\rangle$ represents a matched feature point pair, i indexes the labeled feature point pairs, i = 1, 2, ..., n, and n is the total number of labeled feature point pairs in the LFP; then the suspected regions are $SR = \{\langle LS_1, \overline{LS}_1\rangle, \langle LS_2, \overline{LS}_2\rangle, \ldots, \langle LS_n, \overline{LS}_n\rangle\}$. The initial size S of the SLIC algorithm used here to segment the host image into small superpixels is related to the size of the host image: for high-resolution host images, for example when the size of the host image is approximately 3000 × 3000, the initial size is set to S = 20 by experiment; for low-resolution host images, for example when the size of the host image is approximately 1500 × 1500, the initial size is set to S = 10 by experiment.
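STEP-1 can be sketched with scikit-image as follows, using the stated initial sizes (S = 10 or S = 20 depending on resolution); mapping each labeled feature point to the fine superpixel that contains it is our interpretation, and the SLIC compactness value is again an assumption.

```python
import numpy as np
from skimage.segmentation import slic


def suspected_regions(image_rgb, labeled_points, s=10):
    """Replace the labeled feature points (LFP) with small superpixels (STEP-1).

    labeled_points: iterable of (x, y) coordinates of the labeled feature points.
    s: initial superpixel size, e.g., 10 for ~1500x1500 images and 20 for ~3000x3000.
    Returns a boolean mask of the suspected regions (SR) and the fine label map.
    """
    h, w = image_rgb.shape[:2]
    fine_labels = slic(image_rgb, n_segments=max((h * w) // (s * s), 1),
                       compactness=10, start_label=0)
    hit_labels = {int(fine_labels[y, x]) for (x, y) in labeled_points}
    sr_mask = np.isin(fine_labels, list(hit_labels))
    return sr_mask, fine_labels
```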

In STEP-2, for each suspected region $SR_i = \langle LS_i, \overline{LS}_i \rangle$, the neighboring blocks are defined as $SR_i\_neighbor = \langle LS_{i\_\theta}, \overline{LS}_{i\_\theta} \rangle$, where θ = {45°, 90°, 135°, 180°, 225°, 270°, 315°, 360°}; then, we measure the local color features of the corresponding suspected region $SR_i$ and of its neighboring blocks $SR_i\_neighbor$ using (9) and (10), respectively.

$$F_{C\_LS_i} = \frac{R(LS_i) + G(LS_i) + B(LS_i)}{3}, \qquad F_{C\_\overline{LS}_i} = \frac{R(\overline{LS}_i) + G(\overline{LS}_i) + B(\overline{LS}_i)}{3} \qquad (9)$$

$$F_{C\_LS_{i\_\theta}} = \frac{R(LS_{i\_\theta}) + G(LS_{i\_\theta}) + B(LS_{i\_\theta})}{3}, \qquad F_{C\_\overline{LS}_{i\_\theta}} = \frac{R(\overline{LS}_{i\_\theta}) + G(\overline{LS}_{i\_\theta}) + B(\overline{LS}_{i\_\theta})}{3} \qquad (10)$$

where R(·), G(·), and B(·) denote the R, G, and B color components of the corresponding block, respectively. When the local color feature of a neighboring block is similar to that of the corresponding suspected region, i.e., when the condition defined in (11) is met, the neighboring block is merged into the corresponding suspected region.

$$\left| F_{C\_LS_i} - F_{C\_LS_{i\_\theta}} \right| \le TR_{sim}, \qquad \left| F_{C\_\overline{LS}_i} - F_{C\_\overline{LS}_{i\_\theta}} \right| \le TR_{sim} \qquad (11)$$

where $F_{C\_LS_i}$ and $F_{C\_\overline{LS}_i}$ are the local color features of the corresponding suspected region $SR_i = \langle LS_i, \overline{LS}_i \rangle$; $F_{C\_LS_{i\_\theta}}$ and $F_{C\_\overline{LS}_{i\_\theta}}$ are the local color features of its neighboring blocks $SR_i\_neighbor = \langle LS_{i\_\theta}, \overline{LS}_{i\_\theta} \rangle$; and TR_sim is the threshold used to measure the similarity between the local color features. In this paper, we set TR_sim = 15 in the experiments.

Finally, in STEP-3, the structuring element that we use in the close operation is defined as a circle whose radius is related to the size of the host image. The close operation can fill the gaps in the merged regions and, at the same time, keep the shape of the regions unchanged.
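A simplified sketch of the merging and closing steps is given below. The three-channel mean over each superpixel serves as the local color feature in the spirit of (9) and (10); however, the neighbor set is taken from the adjacency of the suspected-region mask rather than the eight fixed directions, the color comparison is made against any suspected block rather than only the corresponding one, the merging is done in a single pass, and the closing radius of 5 pixels is illustrative, so this is an approximation of the procedure rather than a faithful reimplementation.

```python
import numpy as np
from skimage.morphology import binary_closing, disk
from skimage.segmentation import find_boundaries


def merge_and_close(image_rgb, sr_mask, fine_labels, tr_sim=15, close_radius=5):
    """Merge color-similar neighboring superpixels into the SR and apply a close (STEP-2/3)."""
    img = image_rgb.astype(float)            # assumes 0-255 RGB values, matching TR_sim = 15
    # local color feature per superpixel: mean over all channel values, cf. (9)-(10)
    color_feat = {int(lbl): img[fine_labels == lbl].mean() for lbl in np.unique(fine_labels)}

    merged = sr_mask.copy()
    sr_labels = set(np.unique(fine_labels[sr_mask]).tolist())
    boundary = find_boundaries(merged, mode="outer")          # pixels just outside the SR
    for lbl in np.unique(fine_labels[boundary]):              # candidate neighbor blocks
        lbl = int(lbl)
        if lbl in sr_labels:
            continue
        # merge when the color difference to some suspected block is within TR_sim, cf. (11)
        if any(abs(color_feat[lbl] - color_feat[s]) <= tr_sim for s in sr_labels):
            merged |= fine_labels == lbl
    return binary_closing(merged, disk(close_radius))          # morphological close (STEP-3)
```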

III. EXPERIMENTS AND DISCUSSION

In this section, a series of experiments is conducted to evaluate the effectiveness and robustness of the proposed image forgery detection scheme using adaptive over-segmentation and feature point matching. In the following experiments, the image dataset in [22] is used to test the proposed method. This dataset is built from 48 high-resolution uncompressed PNG true-color images, and the average size of the images is 1500 × 1500. In the dataset, the copied regions come from the categories living, nature, man-made, and mixed, and they range from overly smooth to highly textured; the copy-move forgeries are created by copying, scaling, and rotating semantically meaningful image regions. In total, the dataset contains 1826 images with realistic copy-move forgeries. Therefore, we chose this dataset to evaluate our method objectively.


Fig. 7. The copy-move forgery detection results of the proposed scheme. (a1) ∼ (e1): the five host images from the dataset; (a2) ∼ (e2): the ground-truth forgery regions of the corresponding host images; (a3) ∼ (e3): the detected forgery regions of the corresponding images, using the proposed forgery detection scheme.

Fig. 7 shows the copy-move forgery detection results of the proposed scheme. In Fig. 7, (a1), (b1), (c1), (d1), and (e1) display the host images, which are forged images selected from the dataset; (a2), (b2), (c2), (d2), and (e2) show the corresponding forgery regions; and (a3), (b3), (c3), (d3), and (e3) show the forgery regions detected with the proposed scheme.

In the following experiments, two metrics, precision and recall [16], [22], are used to evaluate the performance of the proposed forgery detection scheme. Precision is the probability that the detected regions are relevant, and it is defined as the ratio of the number of correctly detected forged pixels to the total number of detected forged pixels, as stated in (12). Recall is the probability that the relevant regions are detected, and it is defined as the ratio of the number of correctly detected forged pixels to the number of forged pixels in the ground-truth forged image, as stated in (13).

$$\text{precision} = \frac{\left| \Omega \cap \Omega' \right|}{\left| \Omega \right|} \qquad (12)$$

$$\text{recall} = \frac{\left| \Omega \cap \Omega' \right|}{\left| \Omega' \right|} \qquad (13)$$

where Ω denotes the forgery regions detected with the proposed scheme, for example, (a3) ∼ (e3) in Fig. 7; and Ω' denotes the ground-truth forgery regions of the dataset, for example, (a2) ∼ (e2) in Fig. 7.

In addition to the precision and recall, we give the F1 score as a reference parameter to measure the forgery detection result; the F1 score combines both precision and recall into a single value, and it can be calculated using (14).

$$F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} \qquad (14)$$

To reduce the effect of the randomness of the samples, the average precision and recall are computed over all of the images in the dataset. We evaluate the method at both the pixel level and the image level. Although the pixel-level metrics are useful for assessing the general localization performance of the algorithm when ground-truth data are available, the image-level decisions are especially interesting with respect to the automated detection of manipulated images. At the pixel level, the precision and recall are calculated by counting the numbers of pixels in the corresponding regions. At the image level, the precision is the probability that a detected forgery is truly a forgery, and the recall is the probability that a forged image is detected. In general, higher precision and higher recall indicate superior performance.
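At the pixel level, (12)-(14) reduce to simple counting operations on binary masks, as in the following sketch:

```python
import numpy as np


def pixel_level_scores(detected_mask, ground_truth_mask):
    """Pixel-level precision, recall, and F1 from binary masks, per (12)-(14)."""
    detected = np.asarray(detected_mask, dtype=bool)
    truth = np.asarray(ground_truth_mask, dtype=bool)
    overlap = np.logical_and(detected, truth).sum()                     # |Omega ∩ Omega'|
    precision = overlap / detected.sum() if detected.any() else 0.0     # Eq. (12)
    recall = overlap / truth.sum() if truth.any() else 0.0              # Eq. (13)
    f1 = (2 * precision * recall / (precision + recall)                 # Eq. (14)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1
```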


Because Christlein et al. [22] evaluated all of the benchmark methods and because we used the same dataset that they provided, we can compare our experimental results with the copy-move detection evaluation results in their paper. We chose several state-of-the-art block-based and keypoint-based forgery detection schemes for comparison with our proposed scheme. For example, Wang et al. [8], [9] and Bravo-Solorio and Nandi [13] used block-based forgery detection methods, while the SIFT- and SURF-based forgery detection schemes, which are both discussed in [22], are keypoint-based forgery detection methods. Furthermore, to measure the advantage of the proposed Adaptive Over-Segmentation algorithm, we also block the host images with a fixed initial size, which we set to S = 240, instead of blocking the host images adaptively; the detection results for this setting are also reported.

In the following sections, the proposed Adaptive Over-Segmentation algorithm is first evaluated in Section III-A; then, the proposed copy-move forgery detection scheme is evaluated under different types of tests: the baseline test, i.e., plain copy-move; geometric transforms, such as scaling and rotation; common signal processing, such as JPEG compression; and down-sampling. Sections III-B and III-C present these test results.

A. Evaluation of the Proposed Adaptive Over-Segmentation Algorithm

As described in Section II-A, the proposed Adaptive Over-Segmentation algorithm can divide the host image into blocks with an adaptive initial size according to the given host image. Compared with other forgery detection methods that segment the host images into fixed-size blocks, the forgery detection results can be improved with the proposed Adaptive Over-Segmentation algorithm. In Fig. 4, three different host images are selected, shown in (A) Ship, (B) Sailing, and (C) Kore, to illustrate the forgery detection results when the host images are blocked into superpixels of different initial sizes. (A1), (B1), and (C1) show the host images I1, I2, and I3, respectively; and (a1), (b1), and (c1) show the corresponding ground-truth forgery regions. (A2), (B2), and (C2) show the host images blocked into superpixels with the fixed initial size S = 150; and (a2), (b2), and (c2) show the corresponding detected forgery regions. (A3), (B3), and (C3) show the host images blocked into superpixels with the fixed initial size S = 250; and (a3), (b3), and (c3) show the corresponding detected forgery regions. (A4), (B4), and (C4) show the host images blocked into superpixels with the proposed Adaptive Over-segmentation method, for which the initial superpixel sizes are calculated as S = 199, S = 159, and S = 224, respectively; and (a4), (b4), and (c4) show the corresponding detected forgery regions.

Table I shows the comparison of the forgery detection results with and without the proposed Adaptive Over-Segmentation algorithm.

TABLE I
FORGERY DETECTION RESULTS WITH/WITHOUT THE PROPOSED ADAPTIVE OVER-SEGMENTATION ALGORITHM

TABLE II
DETECTION RESULTS UNDER PLAIN COPY-MOVE AT THE IMAGE LEVEL

It can be easily observed that for host image I1, the proposed Adaptive Over-segmentation method produces more accurate forgery detection results with a higher precision (93.85%) and, at the same time, a much better recall (99.12%); for host image I2, it produces more accurate results with a higher precision (96.60%); and for host image I3, it produces more accurate results with a higher recall (95.19%) while maintaining a good precision (95.28%). The comparison results indicate that the proposed Adaptive Over-Segmentation algorithm can achieve much better forgery detection results than forgery detection methods that use fixed-size blocks.

B. Detection Results Under Plain Copy-Move

We first evaluate the proposed scheme under ideal conditions; in other words, we have 48 original images and 48 forged images in which a one-to-one copy-move is implemented, and we must distinguish the original and forged images in this case. Tables II and III show the detection results for the 96 images under plain copy-move, at the image and pixel levels, respectively. From Table II, it can be easily observed that our scheme achieves a precision of 96% and a recall of 100%, and thus F1 = 97.96% at the image level, which is much better than the existing state-of-the-art schemes. In Table III, at the pixel level, our scheme achieves a precision of 97.22% and a recall of 83.73%, and thus F1 = 89.97%; it can be easily observed that our proposed scheme performs much better than the keypoint-based forgery detection methods, SIFT [15], [16] and SURF [17], [19].


TABLE III
DETECTION RESULTS UNDER PLAIN COPY-MOVE AT THE PIXEL LEVEL

At the same time, it performs similarly to the block-based forgery detection methods, i.e., those of Wang et al. [8], [9] and Bravo-Solorio and Nandi [13]. In addition, we can see that the proposed scheme with the adaptive over-segmentation method performs better than with fixed-size blocking, at both the image and pixel levels.

In some special cases, it is possible that two copy-move regions could be assigned to one single irregular region and, hence, could not be detected, when the two following situations occur together: 1) the two duplicated regions are very close to one another or connected; and 2) the size of the connected regions is smaller than the initial block size S (defined in Eq. (4) of the proposed Adaptive Over-Segmentation algorithm). However, the possibility of both situations occurring together is quite low and can be ignored, because this circumstance means that the two copy-move forgery regions are extremely small and close. In our experiments, we have tested a total of 1824 images in various situations from the Image Manipulation Dataset [22], and no two regions were assigned to one single region.

C. Detection Results Under Different Transforms

In addition to the plain copy-move forgery, we have tested our proposed scheme when the copied regions are distorted by various attacks. In this case, the forged images are generated from each of the 48 images in the dataset, and the copied regions are attacked by geometric distortions, including scaling and rotation, and by common signal processing, such as JPEG compression.

1) Down-sampling: All 48 of the forged host images in the dataset are scaled down from 90% to 10%, in steps of 20%. In this case, we must test a total of 48 × 5 = 240 images.

2) Scaling: The copied regions are scaled with the scale factor varying from 91% to 109%, in steps of 2%, and with scale factors of 50%, 80%, 120%, and 200% as well. In this case, we must test a total of 48 × 14 = 672 images.

3) Rotation: The copied regions are rotated with the rotation angle varying from 2° to 10°, in steps of 2°, and with rotation angles of 20°, 60°, and 180° as well. In this case, we must test a total of 48 × 8 = 384 images.

4) JPEG compression: The forged images are JPEG compressed with the quality factor varying from 100 to 20, in steps of −10. In this case, we must test a total of 48 × 9 = 432 images.

Fig. 8. Precision results at the pixel level: (a) Down-sampling; (b) Scaling; (c) Rotation; and (d) JPEG Compression.

Fig. 9. Recall results at the pixel level: (a) Down-sampling; (b) Scaling; (c) Rotation; and (d) JPEG Compression.


Figs. 8, 9, and 10 show the detection results at the pixel level under different attacks: (a) down-sampling, (b) scaling, (c) rotation, and (d) JPEG compression. Here, the results that are represented in blue and marked 'Proposed' indicate the results of the proposed scheme with adaptive blocking.


Fig. 10. F1 scores at the pixel level: (a) Down-sampling; (b) Scaling; (c) Rotation; and (d) JPEG Compression.

The results that are represented in yellow and marked 'RSIFT' indicate the results of the proposed scheme with fixed-size blocking. The results that are represented in green and red and marked 'SIFT' and 'SURF' indicate the results of the keypoint-based forgery detection methods based on SIFT [15], [16] and SURF [17], [19], respectively; and the results that are represented in pink and sky-blue and marked 'Bravo' and 'Circle' indicate the results of the block-based forgery detection methods proposed by Bravo-Solorio and Nandi [13] and by Wang et al. [8], [9], respectively.

In Figs. 8–10, the x-axis in (a) represents the down-sampling factor, in (b) the scale factor, in (c) the rotation angle, and in (d) the JPEG quality factor. Fig. 8 shows the precision results of the proposed scheme compared with the existing methods; it can be easily observed that the precision of the proposed scheme exceeds that of the existing keypoint-based methods, SIFT [15], [16] and SURF [17], [19], by a large margin and is as good as that of the existing block-based methods (the schemes proposed by Wang et al. [8], [9] and Bravo-Solorio and Nandi [13]) under various attacks, including geometric distortions and common signal processing. Moreover, according to the precision results, the proposed scheme with the adaptive over-segmentation method performs better than that with fixed-size blocking.

At the same time, Fig. 9 shows the recall results of the proposed scheme compared with the existing methods. It can be easily observed that the recall of the proposed scheme is much better than that of the existing methods under various attacks; this finding is expected because, in our scheme, we propose the Forgery Region Extraction algorithm to locate the forgery regions, which replaces the feature points with small superpixels as feature blocks and then merges the neighboring blocks with similar local color features into the feature blocks. The Forgery Region Extraction algorithm can greatly reduce the possibility of the forgery being undetected and can thus improve the recall by a large amount.

Fig. 10 shows the F1 scores, which combine both the precision and recall into a single value, for the proposed scheme compared with the existing methods. This figure indicates that the forgery detection results of the proposed scheme are better than those of the existing state-of-the-art methods under the various attacks.

Although our rotation experiments only involve relatively small rotation angles, our proposed method can work well against any rotation angle, because our block features are extracted by the SIFT algorithm, which is well known for its scale and rotation invariance. We have also conducted experiments on copy-move forgery regions with large rotation angles; the results are similar and are very good.

IV. CONCLUSIONS

Digital forgery images created with copy-move operations are challenging to detect. In this paper, we have proposed a novel copy-move forgery detection scheme using adaptive over-segmentation and feature-point matching. The Adaptive Over-Segmentation algorithm is proposed to segment the host image into non-overlapping and irregular blocks adaptively according to the given host image; using this approach, for each image we can determine an appropriate initial block size to enhance the accuracy of the forgery detection results and, at the same time, reduce the computational expense. Then, in each block, the feature points are extracted as block features, and the Block Feature Matching algorithm is proposed, with which the block features are matched with one another to locate the labeled feature points; this procedure can approximately indicate the suspected forgery regions. Subsequently, to detect more accurate forgery regions, we propose the Forgery Region Extraction algorithm, in which the labeled feature points are replaced with small superpixels as feature blocks, and the neighboring blocks with local color features similar to those of the feature blocks are merged to generate the merged regions. Finally, a morphological operation is applied to the merged regions to generate the detected forgery regions.

We demonstrate the effectiveness of the proposed scheme with a large number of experiments. The experimental results show that the proposed scheme can achieve much better detection results for copy-move forgery images under various challenging conditions, such as geometric transforms, JPEG compression, and down-sampling, compared with the existing state-of-the-art copy-move forgery detection schemes. Future work could focus on applying the proposed forgery detection scheme based on adaptive over-segmentation and feature-point matching to other types of forgery, such as splicing, or to other types of media, for example, video and audio.

ACKNOWLEDGMENT

The authors would like to thank the reviewers for their valuable comments.


REFERENCES

[1] J. Fridrich, D. Soukal, and J. Lukáš, "Detection of copy–move forgery in digital images," in Proc. Digit. Forensic Res. Workshop, Cleveland, OH, Aug. 2003.
[2] A. C. Popescu and H. Farid, "Exposing digital forgeries by detecting duplicated image regions," Dept. Comput. Sci., Dartmouth College, Hanover, NH, USA, Tech. Rep. TR2004-515, 2004.
[3] W. Luo, J. Huang, and G. Qiu, "Robust detection of region-duplication forgery in digital image," in Proc. 18th Int. Conf. Pattern Recognit. (ICPR), Aug. 2006, pp. 746–749.
[4] G. Li, Q. Wu, D. Tu, and S. Sun, "A sorted neighborhood approach for detecting duplicated regions in image forgeries based on DWT and SVD," in Proc. IEEE Int. Conf. Multimedia Expo, Jul. 2007, pp. 1750–1753.
[5] B. Mahdian and S. Saic, "Detection of copy–move forgery using a method based on blur moment invariants," Forensic Sci. Int., vol. 171, nos. 2–3, pp. 180–189, 2007.
[6] X. B. Kang and S. M. Wei, "Identifying tampered regions using singular value decomposition in digital image forensics," in Proc. Int. Conf. Comput. Sci. Softw. Eng., Dec. 2008, pp. 926–930.
[7] S. Bayram, H. T. Sencar, and N. Memon, "An efficient and robust method for detecting copy–move forgery," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Apr. 2009, pp. 1053–1056.
[8] J. Wang, G. Liu, H. Li, Y. Dai, and Z. Wang, "Detection of image region duplication forgery using model with circle block," in Proc. Int. Conf. Multimedia Inf. Netw. Secur. (MINES), Nov. 2009, pp. 25–29.
[9] J. W. Wang, G. J. Liu, Z. Zhang, Y. W. Dai, and Z. Q. Wang, "Fast and robust forensics for image region-duplication forgery," Acta Automat. Sinica, vol. 35, no. 12, pp. 1488–1495, 2009.
[10] H. J. Lin, C. W. Wang, and Y. T. Kao, "Fast copy–move forgery detection," WSEAS Trans. Signal Process., vol. 5, no. 5, pp. 188–197, 2009.
[11] S. J. Ryu, M. J. Lee, and H. K. Lee, "Detection of copy-rotate-move forgery using Zernike moments," in Information Hiding. Berlin, Germany: Springer-Verlag, 2010, pp. 51–65.
[12] S. J. Ryu, M. Kirchner, M. J. Lee, and H. K. Lee, "Rotation invariant localization of duplicated image regions based on Zernike moments," IEEE Trans. Inf. Forensics Security, vol. 8, no. 8, pp. 1355–1370, Aug. 2013.
[13] S. Bravo-Solorio and A. K. Nandi, "Exposing duplicated regions affected by reflection, rotation and scaling," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), May 2011, pp. 1880–1883.
[14] H. Huang, W. Guo, and Y. Zhang, "Detection of copy–move forgery in digital images using SIFT algorithm," in Proc. Pacific-Asia Workshop Comput. Intell. Ind. Appl. (PACIIA), Dec. 2008, pp. 272–276.
[15] X. Pan and S. Lyu, "Region duplication detection using image feature matching," IEEE Trans. Inf. Forensics Security, vol. 5, no. 4, pp. 857–867, Dec. 2010.
[16] I. Amerini, L. Ballan, R. Caldelli, A. Del Bimbo, and G. Serra, "A SIFT-based forensic method for copy–move attack detection and transformation recovery," IEEE Trans. Inf. Forensics Security, vol. 6, no. 3, pp. 1099–1110, Sep. 2011.
[17] X. Bo, W. Junwen, L. Guangjie, and D. Yuewei, "Image copy–move forgery detection based on SURF," in Proc. Int. Conf. Multimedia Inf. Netw. Secur. (MINES), Nov. 2010, pp. 889–892.
[18] P. Kakar and N. Sudha, "Exposing postprocessed copy–paste forgeries through transform-invariant features," IEEE Trans. Inf. Forensics Security, vol. 7, no. 3, pp. 1018–1028, Jun. 2012.
[19] B. L. Shivakumar and S. S. Baboo, "Detection of region duplication forgery in digital images using SURF," IJCSI Int. J. Comput. Sci. Issues, vol. 8, issue 4, no. 1, pp. 199–205, 2011.
[20] D. G. Lowe, "Object recognition from local scale-invariant features," in Proc. 7th IEEE Int. Conf. Comput. Vis., Sep. 1999, pp. 1150–1157.
[21] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Computer Vision. Berlin, Germany: Springer-Verlag, 2006, pp. 404–417.
[22] V. Christlein, C. Riess, J. Jordan, C. Riess, and E. Angelopoulou, "An evaluation of popular copy–move forgery detection approaches," IEEE Trans. Inf. Forensics Security, vol. 7, no. 6, pp. 1841–1854, Dec. 2012.
[23] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 11, pp. 2274–2282, Nov. 2012.

Chi-Man Pun (SM'11) received the B.Sc. and M.Sc. degrees in software engineering from the University of Macau, in 1995 and 1998, respectively, and the Ph.D. degree in computer science and engineering from The Chinese University of Hong Kong, Hong Kong, in 2002. He is currently an Associate Professor and the Head of the Department of Computer and Information Science with the University of Macau. He has investigated several funded research projects and authored over 150 refereed scientific papers in international journals, books, and conference proceedings. His research interests include digital image processing, digital watermarking, pattern recognition, computer vision, and intelligent systems and applications. He has served as the General Chair of the 10th and 11th International Conference on Computer Graphics, Imaging and Visualization (CGIV2013, CGIV2014), and as a Program/Session Chair of several other international conferences. He has also served as an Editorial Member/Referee for many international journals, such as the IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, the IEEE TRANSACTIONS ON IMAGE PROCESSING, and Pattern Recognition. He is also a Professional Member of the Association for Computing Machinery.

Xiao-Chen Yuan (M'11) received the B.Sc. degree in electronic information technology from the Macau University of Science and Technology, in 2008, and the M.Sc. degree in e-commerce technology and the Ph.D. degree in software engineering from the University of Macau, in 2010 and 2013, respectively. She is currently a Post-Doctoral Fellow with the Department of Computer and Information Science, University of Macau. Her research interests include digital image processing, digital audio/video processing, digital watermarking, and multimedia security.

Xiu-Li Bi received the B.Sc. degree in application of electronic technology and the M.Sc. degree in signal and information processing from Shaanxi Normal University, China, in 2004 and 2007, respectively. She is currently pursuing the Ph.D. degree in software engineering with the University of Macau. From 2007 to 2013, she was a Lecturer with the School of Computer Science, Beijing Institute of Technology, China. From 2013 to 2014, she was a Research Assistant with the University of Macau. Her research interests include image processing, multimedia security, and image forensics.