
International Journal of Applied Engineering Research ISSN 0973-4562 Volume 13, Number 1 (2018) pp. 490-504

© Research India Publications. http://www.ripublication.com


Source Camera Identification using Image Features

A. Jeyalakshmi1 and Dr. D. Ramya Chitra2

1Associate Professor, Department of Computer Science,

Sri Ramakrishna College of Art and Science, Coimbatore, Tamil Nadu, India. 1Orcid Id: 0000-0002-9587-8320

2Assistant Professor, Department of Computer Science, Bharathiar University,

Coimbatore, Tamil Nadu, India.

Abstract

With the advances in technology, a large number of digital cameras capable of capturing excellent images are available in the market. Identifying the camera used to capture an image is essential in applications such as image forgery detection. Researchers have explored methods that identify the source camera of a given image using lens distortion, the color filter array and demosaicing artifacts. This work proposes a method for identifying the source camera based on a set of image features. Identification is performed with a supervised learning technique, using a Support Vector Machine (SVM) for classification. Experimental results show good classification accuracy, which demonstrates the efficiency of the method.

Keywords: Source Camera Identification, Image Features,

GLCM, SVM.

INTRODUCTION

In today’s digital age, digital cameras are widely used for image acquisition. Digital images can be easily edited and modified with low-cost hardware devices and software tools without leaving obvious evidence. Image forensics attempts to establish the integrity and authenticity of digital images, which is needed in intelligence systems and law enforcement. It is therefore important to identify the camera that has been used to capture an image [I] [II], or whether the image has been generated by a computer [III]. Various methods have been proposed for the source camera identification problem, including extraction of image features [IV], color filter array (CFA) interpolation [V], the presence of lens radial distortion [I], extraction of photo-response non-uniformity (PRNU) noise to identify the sensor fingerprint [VI], demosaicing artifacts [VIII], PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras [IX], and a joint demosaicing and zooming scheme that detects color differences between images [X].

Most digital cameras share the same architecture and general processing steps. The general structure of the image formation process is illustrated in Figure 1. After light enters the digital camera through the lens system, it passes through a set of filters, of which the anti-aliasing filter is one of the most important. A CCD detector measures the intensity of light at each pixel location on the detector surface. Sophisticated cameras use a separate CCD for each of the three color (RGB) channels; most manufacturers, however, use a single CCD detector and partition its surface with different spectral filters, one per pixel. Such filters are called Color Filter Arrays (CFA). Figures 2(a) and 2(b) show CFA patterns using the RGB and YMCG color spaces, respectively, for a small pixel block. Looking at the values in the CFA pattern, it is evident that the missing color values must be interpolated at each pixel. A number of different interpolation algorithms could be used, and different manufacturers use different interpolation techniques.
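As a concrete illustration of this interpolation step, the sketch below performs simple bilinear demosaicing of an RGGB Bayer mosaic. It is an illustrative assumption only (using NumPy and SciPy, with a hypothetical demosaic_bilinear helper), not the algorithm of any particular manufacturer.

```python
# A simple bilinear demosaicing sketch (illustrative only, not any camera's algorithm).
# `cfa` is a 2-D NumPy array holding an RGGB Bayer mosaic as in Figure 2(a).
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(cfa):
    h, w = cfa.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0   # R on even rows/cols
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0   # B on odd rows/cols
    g_mask = 1.0 - r_mask - b_mask                        # G on the remaining sites

    # Averaging kernels: green from its 4 axial neighbours, red/blue from the nearest
    # samples of the same colour; dividing by the mask response keeps measured samples
    # unchanged and normalises border pixels.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

    def interp(mask, kernel):
        num = convolve(cfa * mask, kernel, mode='mirror')
        den = convolve(mask, kernel, mode='mirror')
        return num / np.maximum(den, 1e-12)

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])
```

Real cameras replace the simple averaging kernels above with their own, typically proprietary, interpolation schemes, and it is precisely these differences that source camera identification methods exploit.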

After color decomposition by the CFA, a detector obtains a digital representation of the light intensity in each color band. The digital camera then performs a number of operations, including interpolation, gamma correction, color processing, white-point correction and image compression. Although these operations and stages are common to all digital cameras, the exact processing details of each stage vary from one manufacturer to another, and even between camera models manufactured by the same company. Hence, knowing the source camera that has been used to capture a given image is important for detecting image forgeries.

Figure 1. Major Steps of image formation process in camera

pipeline.[IV]

R G R G        G M G M
G B G B        C Y C Y
R G R G        M G M G
G B G B        C Y C Y

(a) RGB        (b) YMCG

Figure 2. CFA patterns in (a) the RGB and (b) the YMCG color spaces

This paper is organized as follows: Section 2 gives an overview of different methods for detecting camera models; Section 3 describes the proposed method and the different features extracted for camera model detection; experimental results and analysis for the different feature sets are provided in Section 4; and Section 5 concludes the paper.


RESEARCH METHOD- FEATURE EXTRACTION

TECHNIQUES

Feature extraction derives useful information from an input image, which is then used to perform the classification for source camera identification. The digital image is processed in order to find some structure, e.g. color, which is then used as the criterion on which the categorization is done.

Demosaicing Artifact based method

Bayram et al. [VIII] propose a method to identify demosaicing artifacts associated with different camera models. They describe two sets of image characteristics, which are used as features in designing classifiers that distinguish between digital camera models. The work concentrates on identifying, detecting and classifying traces of demosaicing operations. The method first estimates differences in the image formation pipeline, such as processing techniques and device technologies. It then finds the unique characteristics of a camera model using the Expectation Maximization (EM) algorithm [VII], which is applied only on the red channel. Using the EM algorithm, two sets of features are obtained for classification: the weighting (interpolation) coefficients of the images and the peak locations and magnitudes in the frequency spectrum of the probability maps. The authors also apply the sequential forward floating search (SFFS) algorithm to reduce the dimensionality of the feature vector by selecting the most distinguishing features.
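The weighting (interpolation) coefficients referred to above can be thought of as the linear weights with which each pixel is predicted from its neighbours. The sketch below is a simplified, non-iterative stand-in for the EM estimation of [VIII]: it assumes NumPy and fits one global set of eight neighbour weights to a single color channel by ordinary least squares.

```python
# A simplified, non-iterative stand-in for the EM-based coefficient estimation:
# fit one global set of eight neighbour weights for a single colour channel by
# ordinary least squares (assumes NumPy; `channel` is a 2-D array).
import numpy as np

def interpolation_weights(channel):
    c = channel.astype(np.float64)
    h, w = c.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    # Each row of X holds the 8 neighbours of one interior pixel; y holds the pixel itself.
    X = np.stack([c[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].ravel()
                  for dy, dx in offsets], axis=1)
    y = c[1:h - 1, 1:w - 1].ravel()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef        # one weight per neighbour direction
```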

Sensor Imperfection based method:

Sensor Pattern Noise (PNU) based: Lukas et al. [I] proposed a sensor pattern noise based method for camera identification. Pixel non-uniformity (PNU), in which different pixels have different light sensitivities owing to imperfections in the sensor manufacturing process, is a major source of pattern noise. This makes PNU a unique feature for identifying sensors. Photo-response non-uniformity (PRNU) casts a unique pattern onto every image the camera captures. This “camera fingerprint” is unique for each camera [II] and can be estimated from images known to have been taken with the camera.

I = I_0 + I_0 K + \Theta \qquad (1)

In equation (1), I is the camera output image, I_0 is the “true scene” image that would be captured in the absence of any imperfections, K is the PRNU factor (sensor fingerprint), and \Theta includes all other noise components, such as dark current, shot noise, readout noise and quantization noise. The fingerprint K can be estimated from N images I^{(1)}, I^{(2)}, \ldots, I^{(N)} taken by the camera. Let W^{(1)}, W^{(2)}, \ldots, W^{(N)} be their noise residuals obtained using a denoising filter F:

W^{(i)} = I^{(i)} - F\big(I^{(i)}\big), \quad i = 1, \ldots, N \qquad (2)

The PRNU factor K is then derived as

\hat{K} = \frac{\sum_{i=1}^{N} W^{(i)} I^{(i)}}{\sum_{i=1}^{N} \big(I^{(i)}\big)^{2}} \qquad (3)
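A minimal sketch of the fingerprint estimation in equations (2) and (3) is given below. It assumes NumPy and SciPy, and uses a Gaussian filter as a stand-in for the denoising filter F (the cited work uses a wavelet-based denoiser); the smoothing strength sigma is an arbitrary illustrative choice.

```python
# A sketch of Eqs. (2)-(3): estimate the PRNU factor K from N images of one camera.
# Assumptions: NumPy/SciPy available; a Gaussian filter stands in for the denoiser F.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_prnu(images, sigma=2.0):
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros(images[0].shape, dtype=np.float64)
    for img in images:
        I = img.astype(np.float64)
        W = I - gaussian_filter(I, sigma)   # noise residual W(i) = I(i) - F(I(i)), Eq. (2)
        num += W * I                        # accumulate W(i) * I(i)
        den += I * I                        # accumulate I(i)^2
    return num / np.maximum(den, 1e-12)     # K ~ sum(W I) / sum(I^2), Eq. (3)
```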

With both patterns, the authors have tested nine camera models, where two of them have similar CCDs and two are exactly the same model. The camera identification is accurate even for cameras of the same model, and the results are also good for identifying compressed images. One problem with the conducted experiments is that the authors use the same image set to calculate both the camera reference pattern and the correlations for the images.

Color Filter Array (CFA) Interpolation methods

CFA Interpolation using the Expectation Maximization (EM) algorithm: Bayram et al. [V] suggest a method to identify the camera model using CFA interpolation, in which image classification is determined by the correlation structure present in each color band. Each manufacturer uses different interpolation algorithms and somewhat different CFA patterns. Using the iterative Expectation Maximization (EM) algorithm, two sets of features are obtained for classification: the interpolation coefficients of the images and the peak locations and magnitudes in the frequency spectrum of the probability maps.
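The peak locations and magnitudes mentioned above come from the frequency spectrum of the probability maps. The following sketch, assuming NumPy and a precomputed probability map, is only a generic illustration of how such peaks could be read off, not the authors' exact procedure.

```python
# A generic sketch: magnitude spectrum of a probability map and the locations and
# magnitudes of its strongest peaks (assumes NumPy; not the authors' exact method).
import numpy as np

def spectrum_peaks(prob_map, num_peaks=4):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(prob_map - prob_map.mean())))
    order = np.argsort(spec, axis=None)[::-1][:num_peaks]    # strongest bins first
    rows, cols = np.unravel_index(order, spec.shape)
    return [(int(r), int(c), float(spec[r, c])) for r, c in zip(rows, cols)]
```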

CFA Interpolation using Alternate Projection: In the prior method, interpolation is performed iteratively using one of the single-channel interpolation operations such as nearest-neighbor replication, bilinear interpolation or cubic spline interpolation. Although these single-channel algorithms can provide satisfactory results in smooth regions of an image, they usually fail in high-frequency regions, especially along edges. The alternate projection algorithm [XII] exploits inter-channel correlation and has given better performance.

Using Image Feature

The camera model identification problem can also be solved by extracting a set of features from images. Mehdi Kharrazi et al. [XI] proposed a method that extracts 34 features from images taken with different camera models, categorized into three groups: color features, image quality metrics and wavelet-domain statistics. Features are extracted from images of different cameras and are then used to train and test the classifier. The method gives good results for uncompressed images and also for JPEG images, but the accuracy drops as the number of camera models increases. Choi et al. [IV] proposed a stepwise discriminant analysis method for selecting features from digital images, which is more advanced than analysis of variance (ANOVA).

Compared with the approaches of [I] [II] [IV] [V] [XI] [XII], feature extraction simplifies the amount of resources required to describe a large set of data accurately, because the same features can be computed for all images, although their values differ depending on how the images were acquired. Hence, it is observed that feature extraction gives reasonably better performance than the other approaches.


PROPOSED METHOD- IMAGE FEATURES

Features of digital images fall into two levels: global and local features. Global properties of an image include the intensity histogram, frequency-domain descriptors, the covariance matrix and higher-order statistics. Local features are defined on local regions with spatial properties, including edges, corners, lines, curves, etc. The features used to recognize the camera model through various classification approaches are described here. In this work, 50 features, including invariant moments, statistical features, GLCM features and color moments, have been extracted from each color channel of a number of images from different camera models. These features reflect differences in the CFA (color filter array), the demosaicing algorithm and the sensor signal transfer.

Invariant Moments

Image moments are useful for describing objects after segmentation. An image moment is a particular weighted average (moment) of the image pixels' intensities. Simple properties of the image found via image moments include its area (or total intensity), its centroid, and information about its orientation. Hu's moments are invariant under translation, changes in scale, and rotation, so they describe the image regardless of its location, size and rotation.
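A minimal sketch of computing Hu's seven invariant moments for one grayscale channel is shown below, assuming OpenCV (cv2) is available. The signed log scaling is a common convention added here to compress the dynamic range; it is an assumption for illustration, not part of Hu's original definition.

```python
# A sketch of Hu's seven invariant moments for one grayscale channel, assuming OpenCV.
import cv2
import numpy as np

def hu_moments(gray):
    m = cv2.moments(gray.astype(np.float32))              # raw, central and normalised moments
    hu = cv2.HuMoments(m).flatten()                       # seven invariant moments
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)    # signed log10 to compress the range
```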

Color moments

Color moments [XIII] are measures that characterize color

distribution in an image in the same way that central moments

uniquely describe a probability distribution. These are very

effective for color-based image analysis. The lower-order

moments generally provide enough information for image

classification.

The first color moment can be interpreted as the average color in the image, and it can be calculated using equation (4):

E_i = \frac{1}{N} \sum_{j=1}^{N} p_{ij} \qquad (4)

where N is the number of pixels in the image and p_{ij} is the value of the j-th pixel of the image at the i-th color channel. The second color moment is the standard deviation, which is obtained by taking the square root of the variance of the color distribution:

\sigma_i = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \big(p_{ij} - E_i\big)^{2}} \qquad (5)

where E_i is the mean value, or first color moment, for the i-th color channel of the image. The third color moment is the skewness. It measures how asymmetric the color distribution is, and thus gives information about the shape of the color distribution. Skewness can be computed with equation (6):

s_i = \sqrt[3]{\frac{1}{N} \sum_{j=1}^{N} \big(p_{ij} - E_i\big)^{3}} \qquad (6)

Kurtosis is the fourth color moment and, similar to skewness, provides information about the shape of the color distribution. More specifically, kurtosis is a measure of how flat or tall the distribution is in comparison to the normal distribution. It can be computed with equation (7):

k_i = \sqrt[4]{\frac{1}{N} \sum_{j=1}^{N} \big(p_{ij} - E_i\big)^{4}} \qquad (7)
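A minimal sketch of equations (4)-(7) is given below, assuming the image is held as an H x W x 3 NumPy array; the per-channel values are concatenated into one feature vector. The fourth-root form of kurtosis mirrors equation (7) as reconstructed above.

```python
# A sketch of Eqs. (4)-(7), assuming an H x W x 3 NumPy array.
import numpy as np

def color_moments(img):
    feats = []
    for ch in range(img.shape[2]):
        p = img[:, :, ch].astype(np.float64).ravel()
        mean = p.mean()                              # Eq. (4): first moment (average colour)
        std = np.sqrt(np.mean((p - mean) ** 2))      # Eq. (5): standard deviation
        skew = np.cbrt(np.mean((p - mean) ** 3))     # Eq. (6): skewness (cube-root form)
        kurt = np.mean((p - mean) ** 4) ** 0.25      # Eq. (7): kurtosis (fourth-root form)
        feats.extend([mean, std, skew, kurt])
    return np.array(feats)                           # 12 colour-moment features
```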

Gray Level Co-Occurrence Matrix (GLCM)

Gray level co-occurrence matrix has proven to be a powerful

basis for use in image classification [IV]. The common

statistics applied to co-occurrence probabilities are discussed

below.

Energy: This is also called Uniformity or Angular Second Moment. It measures uniformity, that is, pixel-pair repetitions, and it detects disorder in images. Energy reaches a maximum value equal to one; high energy values occur when the gray level distribution has a constant or periodic form, and energy has a normalized range. The GLCM of a less homogeneous image will have a large number of small entries.

Entropy: This measures the disorder or complexity of an

image. The entropy is large when the image is not

texturally uniform and many GLCM elements have

very small values. Complex textures tend to have

high entropy. Entropy is strongly, but inversely

correlated to energy.

Contrast: This measures the spatial frequency of an

image and is the difference moment of GLCM. It is

the difference between the highest and the lowest

values of a contiguous set of pixels. It measures the

amount of local variation present in the image. A low-contrast image yields a GLCM concentrated around the principal diagonal and features low spatial frequencies.

Variance: This is a measure of heterogeneity and is

strongly correlated to first order statistical variable

such as standard deviation. Variance increases when

the gray level values differ from their mean.

Homogeneity: This is also called as Inverse Difference

Moment. It measures image homogeneity, as it

assumes larger values for smaller gray tone

differences in pair elements. It is more sensitive to

the presence of near diagonal elements in the GLCM.

It has its maximum value when all elements in the image are the same. GLCM contrast and homogeneity

are strongly, but inversely, correlated in terms of

equivalent distribution in the pixel pairs population.

It means homogeneity decreases if contrast increases

while energy is kept constant.

Correlation: This feature is a measure of gray tone linear

dependencies in the image. The rest of the textural

features are secondary and derived from those listed

above. They are Sum Average, Sum Entropy, Sum

Variance, Difference Variance, Difference Entropy,

Maximum Correlation Coefficient, and Information

Measures of correlation.
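A minimal sketch of computing a GLCM and a few of the statistics listed above is shown below, assuming a recent scikit-image (graycomatrix/graycoprops) and an 8-bit grayscale channel; the distance and angle are arbitrary illustrative choices, and entropy is computed directly from the matrix since graycoprops does not provide it.

```python
# A sketch assuming scikit-image: GLCM of an 8-bit grayscale channel plus a few of the
# statistics discussed above; distance/angle are illustrative choices.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8, distance=1, angle=0.0):
    glcm = graycomatrix(gray_u8, distances=[distance], angles=[angle],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'dissimilarity', 'homogeneity', 'energy', 'correlation']
    feats = {p: float(graycoprops(glcm, p)[0, 0]) for p in props}
    p = glcm[:, :, 0, 0]                                   # normalised co-occurrence matrix
    feats['entropy'] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return feats
```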

In the proposed work, 50 such features have been identified and extracted based on the color, texture and statistical properties of the image. Source camera identification has been performed using a supervised classifier, the Support Vector Machine (SVM). In the first phase, training has been performed and the data have been classified into a set of predefined classes. During the testing phase, feature extraction is done on a given image and the features are then used to identify the nearest class to which the given image may belong. Identifying the best features for source camera identification has thus been the focus of this study.

SVM classifier:

The aim of any classification algorithm is to use the best classifier in order to achieve good accuracy. The SVM [XIV] is one such tool that has been widely used in supervised learning. SVMs are based on the idea of minimizing the training set error by constructing a hyperplane as the decision surface in such a way that the margin of separation between the different classes is maximized. Consider a two-class classification problem with linearly separable data and training feature sets [m_i, y_i] (i = 1, \ldots, K), where y_i is the label of the feature vector m_i with a value of either +1 or -1. The separating hyperplane is given by w^T m + b = 0, where w is the normal to the hyperplane. A set of feature vectors is said to be optimally separated if no errors occur and the distance between the closest vectors and the hyperplane is maximized. The distance d(w, b; m) of a feature vector m from the hyperplane (w, b) is

d(w, b; m) = \frac{\lvert w^{T} m + b \rvert}{\lVert w \rVert}

The optimal hyperplane is obtained by maximizing this margin. Multiclass SVMs can be implemented by combining several two-class SVMs. In this work, both one-class and multiclass SVMs have been used.
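A minimal sketch of the multiclass training and testing phases is shown below, assuming scikit-learn (whose SVC is built on LIBSVM [XIV]) and that the feature matrix X and the camera labels y have already been produced by the feature extraction described above; the kernel, C value and train/test split are illustrative assumptions, not the settings used in this paper.

```python
# A sketch of multiclass SVM training/testing, assuming scikit-learn.
# X: n_images x n_features array of image features; y: camera-model labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_camera_classifier(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10, gamma='scale'))
    clf.fit(X_tr, y_tr)                       # training phase
    return clf, clf.score(X_te, y_te)         # fitted model and held-out accuracy
```

Scaling the features before the SVM is a common practice here because the 50 features span very different numeric ranges; train_camera_classifier is a hypothetical helper name used only for this sketch.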

RESULTS AND ANALYSIS

Three cameras have been used in this experimentation. Table 1 lists the digital still camera models, the maximum image size and the image format. Each scene has been captured with each camera at three different times of the day: 9 am, 12 noon and 3 pm. For example, Figures 3(a), (b) and (c) have been captured using each of the camera models at the three different times, which amounts to nine images of the same scene. In all, 20 images of the same scene have been captured at various times by each of the cameras. The camera-specific parameters have not been altered.

Table1. Digital cameras used in this experimentation

S.No Camera Model Max. Image Size Image Format
C1 Canon PowerShot A495 3648x2048 JPEG
C2 Samsung PL120 4320x2432 JPEG
C3 Sony DSC-W330 4320x3240 JPEG

Figure 3(a.1), (a.2) and (a.3): acquired by camera model C1 at three different times: 9 am, 12 noon and 3 pm.

Figure 3(b.1), (b.2) and (b.3): acquired by camera model C2 at three different times: 9 am, 12 noon and 3 pm.

Figure 3(c.1), (c.2) and (c.3): acquired by camera model C3 at three different times: 9 am, 12 noon and 3 pm.


In the training phase, 180 images have been used for extracting nearly 50 features and training the SVM classifier. During the testing phase, nearly 50 images (trained and untrained) have been used to identify the nearest matching class using the SVM. Experimental results show 83.5% accuracy for the single-class SVM and 98.4% accuracy for the multiclass SVM. The image dataset includes a broad range of images, from natural scenes to buildings, with different backgrounds, light intensities, etc. From these 180 images, 50 features have been extracted by applying the six techniques described in Section 2. These results are discussed in detail below.

Table 2. Features of the same scene on different camera models based on Demosaicing Artifact

C1 C2 C3

Features List 9 am 12 noon 3 pm 9 am 12noon 3 pm 9 am 12noon 3 pm

CSNR

R 59.2 59.1 59.3 58.0 58.0 58.3 43.0 58.4 58.9

G 59.2 58.8 59.3 58.2 58.2 58.9 43.7 58.0 58.5

B 61.9 61.1 61.5 59.9 59.9 60.9 43.8 59.1 60.1

INVARIANT MOMENTS

M1 0.9 1.0 1.0 0.9 0.9 1.9 1.1 1.1 1.2

M2 4.2 4.4 3.7 3.5 3.5 3.6 4.9 5.2 5.6

M3 5.6 5.7 5.0 5.7 5.7 6.4 12.4 6.0 6.9

M4 6.6 6.9 6.6 6.3 6.3 6.9 11.4 7.2 8.1

M5 13.4 14.0 13.0 13.0 13.0 13.1 24.2 14.5 16.6

M6 9.7 10.0 9.4 8.8 8.8 9.1 14.9 10.7 11.7

M7 13.7 14.2 13.4 13.9 13.9 14.0 23.8 15.0 16.1

COLOR MOMENTS

MEAN

R 0.4 0.4 0.4 0.4 0.4 0.8 1.8 0.5 0.5

G 0.4 0.4 0.3 0.5 0.5 0.6 1.7 0.5 0.6

B 0.3 0.3 0.3 0.4 0.4 1.3 1.7 0.5 0.4

VARIANCE

R 0.2 0.2 0.2 0.2 0.2 0.3 11.3 0.2 0.2

G 0.2 0.2 0.2 0.2 0.2 0.5 10.2 0.2 0.2

B 0.2 0.2 0.2 0.2 0.2 0.4 10.2 0.2 0.2

STD

R 0.5 0.5 0.5 0.5 0.5 0.7 3.4 0.5 0.5

G 0.5 0.5 0.5 0.5 0.5 0.8 3.2 0.5 0.5

B 0.4 0.4 0.4 0.5 0.5 0.8 3.2 0.5 0.5

STATISTICAL FEATURES(2D)

Contrast 16803 10755.2 10609 10157 11147.6 10158 11186 10012 10891

Correlation 0.0 0.0 0.0 0.1 0.1 1.0 0.0 0.0 0.0

Energy 0.0 0.0 0.0 0.0 0.0 0.9 0.0 0.0 0.0

Homogenit 0.0 0.0 0.0 0.0 0.0 0.1 0.0 0.0 0.0

GLCM

Autocorrel 21.8 24.2 20.5 24.6 24.6 24.9 24.6 28.1 32.4

Contrast 2.2 2.3 2.0 3.0 3.0 3.8 17.7 2.9 2.5

Correl1 0.9 0.9 0.9 0.8 1.0 0.9 0.1 0.9 0.9

Correl[1,2] 0.9 0.9 0.9 0.8 1.0 1.3 0.1 0.9 0.9

Clust.Promi 2212.3 2090.8 2233.4 1949.7 1949.8 1950.0 1042.9 2452.4 2045.


Clust.Shade 105.9 85.3 120.5 54.8 54.9 54.8 -16.1 35.8 -30.3

Dissimilarit 0.4 0.5 0.4 0.6 0.7 1.3 3.0 0.4 0.5

Energy:matl 0.4 0.4 0.4 0.3 0.5 0.7 0.1 0.5 0.3

Entropy 1.3 1.4 1.3 1.6 1.7 2.2 2.7 0.9 1.5

Homo: matl 0.9 0.9 0.9 0.9 1.0 1.4 0.5 0.9 0.9

Homogeneit 0.9 0.9 0.9 0.9 1.0 1.3 0.5 0.9 0.9

Max.probab 0.6 0.5 0.6 0.5 0.6 1.3 0.2 0.6 0.5

Sum.squ.var 22.9 25.3 21.5 26.0 26.1 26.7 33.3 29.5 33.6

Sum avg 7.1 7.7 6.8 7.9 8.0 8.8 9.6 8.4 9.5

Sum vari 72.8 80.1 68.7 80.7 80.8 80.9 79.2 102.0 107.6

Sum entro 1.2 1.3 1.2 1.4 1.5 2.3 2.2 0.8 1.4

Diffe. vari 2.2 2.3 2.0 3.0 3.2 3.4 17.7 2.9 2.5

Differ.entro 0.5 0.5 0.5 0.6 0.7 1.3 1.7 0.2 0.6

Infor.corre1 -0.6 -0.6 -0.6 -0.5 -0.4 -0.1 0.0 -0.6 -0.6

Info.Corre2 0.8 0.8 0.8 0.8 0.9 1.6 0.2 0.8 0.8

INN 1.0 1.0 1.0 1.0 1.1 1.8 0.8 1.0 1.0

IDMN 1.0 1.0 1.0 1.0 1.1 1.2 0.8 1.0 1.0

The performance of the invariant moment features is shown in Figure 4; except for moment 1, the remaining moments produce good accuracy.

Figure 4. Invariant moments of the same scene on different camera models based on demosaicing artifact.

Figure 5. GLCM features of the same scene on different camera models based on demosaicing artifact.

Among the color moments, the variance produces accuracy comparable to that of the other moments. In Figure 5, the GLCM features autocorrelation, dissimilarity, energy, entropy and variance provide fine results.

Table 3. Features of the same scene on different camera models based on Joint Zooming &Demosaicing Artifact

C1 C2 C3

Features List 9 am 12 noon 3 pm 9 am 12noon 3 pm 9 am 12 noon 3 pm

CSNR

R 58.94 58.87 59.09 57.81 57.95 57.81 58.57 58.37 58.71

G 58.55 58.20 58.65 57.42 57.56 57.42 58.04 58.01 58.01

B 58.95 58.50 59.21 57.75 57.89 57.75 58.71 59.15 58.70

INVARIANT MOMENTS

M1 0.99 1.08 0.97 0.82 0.96 0.82 1.17 1.13 1.19

M2 4.10 4.66 3.79 3.28 3.42 3.28 4.93 5.15 5.45

M3 5.68 5.84 5.28 5.29 5.43 5.29 6.59 6.00 6.72


M4 7.03 7.17 6.73 5.46 5.60 5.46 7.54 7.25 8.02

M5 14.55 14.59 13.79 11.26 11.40 11.26 15.13 14.52 16.47

M6 10.10 10.03 9.34 7.61 7.75 7.61 10.90 10.72 11.65

M7 13.84 14.62 13.46 12.41 12.55 12.41 15.57 15.01 16.07

COLOR MOMENTS

MEAN

R 0.40 0.44 0.37 0.35 0.49 0.35 0.48 0.45 0.53

G 0.40 0.44 0.37 0.35 0.49 0.35 0.48 0.45 0.53

B 0.40 0.44 0.37 0.35 0.49 0.35 0.48 0.45 0.53

VARIANCE

R 0.22 0.23 0.21 0.22 0.36 0.22 0.23 0.23 0.23

G 0.22 0.23 0.21 0.22 0.36 0.22 0.23 0.23 0.23

B 0.22 0.23 0.21 0.22 0.36 0.22 0.23 0.23 0.23

STD

R 0.47 0.48 0.46 0.47 0.61 0.47 0.48 0.47 0.48

G 0.47 0.48 0.46 0.47 0.61 0.47 0.48 0.47 0.48

B 0.47 0.48 0.46 0.47 0.61 0.47 0.48 0.47 0.48

STATISTICAL FEATURES(2D)

Contrast 16355 10487 10600 9067 9067 9067 10687 10012 10754

Correlation 0.03 0.02 -0.01 0.14 0.28 0.14 -0.01 0.01 0.03

Energy 0.00 0.00 0.00 0.00 0.14 0.00 0.00 0.00 0.00

Homogenit 0.03 0.04 0.04 0.04 0.18 0.04 0.04 0.04 0.04

GLCM

Autocorrel 25.00 27.59 23.04 21.68 21.82 21.68 29.85 28.13 32.90

Contrast 3.07 2.92 2.55 3.42 3.56 3.42 2.54 2.89 2.72

Correl1 0.86 0.87 0.88 0.83 0.97 0.83 0.89 0.86 0.88

Correl[1,2] 0.86 0.87 0.88 0.83 0.97 0.83 0.89 0.86 0.88

Clust.Promi 2525.3 2456.60 2589.9 2462.9 2463.13 2462.9 2456.5 2452.4 2424.0

Clust.Shade 86.20 62.28 112.83 134.83 134.97 134.83 26.32 35.82 -11.78

Dissimilarit 0.44 0.42 0.37 0.49 0.63 0.49 0.36 0.41 0.39

Energy:matl 0.50 0.48 0.53 0.49 0.63 0.49 0.48 0.49 0.49

Entropy 0.88 0.90 0.83 0.90 1.04 0.90 0.87 0.87 0.87

Homo: matl 0.94 0.95 0.95 0.94 1.08 0.94 0.95 0.95 0.95

Homogeneit 0.94 0.94 0.95 0.93 1.07 0.93 0.95 0.94 0.94

Max.probab 0.62 0.58 0.63 0.61 0.75 0.61 0.57 0.60 0.59

sum.squ.var 26.43 28.93 24.22 23.28 23.42 23.28 31.00 29.46 34.12

Sum avg 7.68 8.24 7.18 6.98 7.12 6.98 8.69 8.35 9.39

Sum vari 90.78 99.97 83.54 78.79 78.93 78.79 108.06 101.99 119.79

Sum entro 0.83 0.85 0.79 0.85 0.99 0.85 0.84 0.83 0.83

Diffe. Vari 3.07 2.92 2.55 3.42 3.56 3.42 2.54 2.89 2.72

Differ.entro 0.25 0.24 0.22 0.28 0.42 0.28 0.21 0.24 0.23

Info.Corre1 -0.63 -0.66 -0.68 -0.59 -0.45 -0.59 -0.69 -0.65 -0.67

Info.Corre2 0.74 0.76 0.75 0.73 0.87 0.73 0.77 0.75 0.76

INN 0.97 0.97 0.98 0.97 1.11 0.97 0.98 0.97 0.97

IDMN 0.97 0.97 0.98 0.97 1.11 0.97 0.98 0.97 0.98


In the above table, the feature values based on the joint zooming and demosaicing artifact are tabulated. Figures 6 and 7 display the performance of the invariant moments and the GLCM features, in which the GLCM features autocorrelation, cluster shade and dissimilarity provide good results.

Figure 6. Invariant moments of the same scene on different camera models based on joint zooming & demosaicing artifact.

Figure 7. GLCM features of the same scene on different camera models based on joint zooming & demosaicing artifact.

Table 4. Features of the same scene on different camera models based on sensor imperfection method

C1 C2 C3

Features List 9 am 12 noon 3 pm 9 am 12noon 3 pm 9 am 12noon 3 pm

CSNR

R 63.03 61.04 61.58 59.61 59.61 59.61 59.54 60.31 59.96

G 61.57 60.70 61.92 59.01 59.01 59.01 59.08 59.44 59.77

B 67.17 64.39 63.64 61.00 61.00 61.00 60.62 63.38 61.88

INVARIANT MOMENTS

M1 1.51 1.52 1.42 1.48 1.48 1.48 1.59 1.51 1.63

M2 6.57 7.62 5.13 5.38 5.38 5.38 8.26 10.41 8.58

M3 9.93 9.03 10.62 12.16 12.16 12.16 9.92 7.70 10.43

M4 9.36 10.27 11.39 7.05 7.05 7.05 9.90 11.93 11.68

M5 19.63 18.94 21.08 16.32 16.32 16.32 20.82 25.52 23.83

M6 12.90 13.66 14.75 10.14 10.14 10.14 14.15 14.41 17.66

M7 18.87 22.83 19.15 16.14 16.14 16.14 18.93 21.07 23.05

COLOR MOMENTS

MEAN

R 0.66 0.78 0.64 0.56 0.56 0.56 0.78 0.71 0.86

G 0.66 0.78 0.64 0.56 0.56 0.56 0.78 0.71 0.86

B 0.66 0.78 0.64 0.56 0.56 0.56 0.78 0.71 0.86

VARIANCE

R 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25

G 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25

B 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25

STD

R 0.50 0.50 0.50 0.50 0.50 0.50 0.50 0.50 0.50


G 0.50 0.50 0.50 0.50 0.50 0.50 0.50 0.50 0.50

B 0.50 0.50 0.50 0.50 0.50 0.50 0.50 0.50 0.50

STATISTICAL FEATURES(2D)

Contrast 40261. 14930 16010 13495 13495 13495 14209 13801 13640

Correlation 0.29 0.31 0.58 0.60 0.60 0.60 0.41 0.20 0.35

Energy 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Homogenit 0.05 0.05 0.06 0.07 0.07 0.07 0.05 0.06 0.04

GLCM

Autocorrel 40.83 48.44 39.18 34.75 34.75 34.75 48.55 44.26 54.17

Contrast 5.78 5.39 4.34 5.16 5.16 5.16 3.84 4.37 4.27

Correl1 0.94 0.97 0.96 0.92 0.92 0.92 0.96 0.93 0.93

Correl[1,2] 0.94 0.97 0.96 0.92 0.92 0.92 0.96 0.93 0.93

Clust.Promi 2995.4 2960.34 3109.7 2911.3 2911.31 2911.3 2801.6 2740.7 2586.3

Clust.Shade 245.96 237.78 256.23 231.87 231.87 231.87 217.14 210.43 144.50

Dissimilarit 0.83 0.78 0.62 0.74 0.74 0.74 0.55 0.63 0.62

Energy:matl 0.63 0.59 0.69 0.61 0.61 0.61 0.60 0.67 0.72

Entropy 1.08 1.11 1.10 1.02 1.02 1.02 1.00 1.01 1.00

Homo: matl 0.98 0.99 0.99 0.97 0.97 0.97 0.98 0.97 0.97

Homogeneit 0.98 0.99 0.99 0.97 0.97 0.97 0.98 0.97 0.97

Max.probab 0.77 0.74 0.81 0.76 0.76 0.76 0.75 0.81 0.84

sum.squ.var 42.68 50.23 41.19 36.37 36.37 36.37 49.86 45.46 54.88

Sum avg 11.30 12.98 10.97 9.90 9.90 9.90 12.90 11.92 14.02

Sum vari 148.38 179.24 142.01 125.22 125.22 125.22 179.19 161.86 203.79

Sum entro 1.00 1.04 1.03 0.95 0.95 0.95 0.94 0.95 0.94

Diffe. Vari 5.78 5.39 4.34 5.16 5.16 5.16 3.84 4.37 4.27

Differ.entro 0.39 0.40 0.34 0.39 0.39 0.39 0.30 0.33 0.33

Info.Corre1 -0.46 -0.46 -0.53 -0.37 -0.37 -0.37 -0.58 -0.53 -0.54

Info.Corre2 0.80 0.84 0.81 0.80 0.80 0.80 0.82 0.81 0.81

INN 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.98 0.98

IDMN 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99

Figure 8. Invariant moments of the same scene on different camera models based on the sensor imperfection method.

Figure 9. GLCM features of the same scene on different camera models based on the sensor imperfection method.


Table 4, Figure 8 and Figure 9 show the performance of the sensor-imperfection-based feature extraction, in which the invariant moments and GLCM features produce finer results than the other features.

Table 5. Features of the same scene on different camera models based on CFA Interpolation using Expectation Maximization

C1 C2 C3

Features List 9 am 12noon 3 pm 9 am 12noon 3 pm 9 am 12noon 3 pm

CSNR

R 79.41 80.08 80.76 79.50 79.50 79.50 82.01 81.44 82.10

G 82.59 83.19 84.08 83.04 83.04 83.04 85.41 84.81 85.42

B 78.79 79.41 80.17 78.74 78.74 78.74 81.45 80.70 81.43

INVARIANT MOMENTS

M1 0.94 1.01 0.93 0.91 0.91 0.91 1.16 1.11 1.18

M2 4.63 5.21 4.85 3.32 3.32 3.32 5.33 5.20 5.64

M3 6.90 7.14 6.82 6.86 6.86 6.86 8.16 8.15 8.32

M4 6.81 7.23 6.72 7.24 7.24 7.24 7.74 7.85 8.01

M5 14.27 15.38 14.31 14.90 14.90 14.90 16.67 16.52 17.14

M6 9.81 10.76 9.91 9.36 9.36 9.36 11.49 11.46 11.85

M7 14.78 15.07 14.27 15.64 15.64 15.64 16.39 16.86 16.87

COLOR MOMENTS

MEAN

R 0.45 0.48 0.44 0.43 0.43 0.43 0.50 0.47 0.53

G 0.43 0.46 0.42 0.48 0.48 0.48 0.53 0.50 0.55

B 0.33 0.35 0.33 0.43 0.43 0.43 0.46 0.44 0.49

VARIANCE

R 0.07 0.08 0.08 0.05 0.05 0.05 0.08 0.07 0.07

G 0.07 0.07 0.07 0.06 0.06 0.06 0.07 0.07 0.07

B 0.09 0.09 0.09 0.08 0.08 0.08 0.11 0.10 0.10

STD

R 0.27 0.28 0.27 0.23 0.23 0.23 0.27 0.25 0.26

G 0.27 0.27 0.27 0.24 0.24 0.24 0.27 0.25 0.26

B 0.30 0.30 0.30 0.28 0.28 0.28 0.33 0.31 0.32

STATISTICAL FEATURES(2D)

Contrast 16784 10710 10770 9596 9596 9596 10826 10233 10878

Correlation -0.01 0.00 -0.03 0.15 0.15 0.15 -0.02 0.01 0.01

Energy 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Homogenit 0.03 0.04 0.04 0.04 0.04 0.04 0.04 0.04 0.04

GLCM

Autocorrel 19.30 21.06 18.96 20.51 20.51 20.51 25.00 22.87 26.48

Contrast 0.59 0.56 0.50 0.59 0.59 0.59 0.40 0.45 0.42

Correl1 0.92 0.92 0.93 0.89 0.89 0.89 0.95 0.93 0.94

Correl[1,2] 0.92 0.92 0.93 0.89 0.89 0.89 0.95 0.93 0.94

Clust.Promi 559.4 479.13 568.8 308.2 308.29 308.2 456.8 395.2 422.04

Clust.Shade 36.36 24.54 39.31 14.96 14.96 14.96 18.34 16.49 8.39

Dissimilarit 0.35 0.35 0.32 0.37 0.37 0.37 0.28 0.31 0.29

Energy:mat 0.13 0.12 0.13 0.12 0.12 0.12 0.12 0.13 0.13


Entropy 2.59 2.62 2.51 2.60 2.60 2.60 2.58 2.54 2.53

Homo: matl 0.85 0.85 0.87 0.85 0.85 0.85 0.87 0.87 0.87

Homogenei 0.84 0.84 0.86 0.84 0.84 0.84 0.87 0.86 0.87

Max.probab 0.24 0.23 0.25 0.23 0.23 0.23 0.22 0.24 0.25

sum.squ.var 19.53 21.26 19.15 20.70 20.70 20.70 25.13 23.01 26.59

Sum avg 7.92 8.36 7.83 8.43 8.43 8.43 9.21 8.80 9.56

Sum vari 47.62 52.20 46.88 50.43 50.43 50.43 63.67 57.90 68.97

Sum entro 2.23 2.26 2.18 2.20 2.20 2.20 2.28 2.22 2.23

Diffe. Vari 0.59 0.56 0.50 0.59 0.59 0.59 0.40 0.45 0.42

Differ.entro 0.76 0.74 0.70 0.77 0.77 0.77 0.66 0.69 0.66

Info.Corre1 -0.53 -0.54 -0.55 -0.50 -0.50 -0.50 -0.60 -0.56 -0.59

Info.Corre2 0.91 0.92 0.92 0.90 0.90 0.90 0.94 0.92 0.93

INN 0.96 0.96 0.97 0.96 0.96 0.96 0.97 0.97 0.97

IDMN 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99

Figure 10. Invariant moments of the same scene on different camera models based on CFA Interpolation using Expectation Maximization.

Figure 11. GLCM features of the same scene on different camera models based on CFA Interpolation using Expectation Maximization.

In the CFA Interpolation using Expectation Maximization (EM) method, the blue channel of the color moments and the GLCM features provide acceptable results.

Table 6. Features of the same scene on different camera models based on CFA Interpolation using Alternate Projection method

C1 C2 C3

Features List 9 am 12noon 3 pm 9 am 12noon 3 pm 9 am 12noon 3 pm

CSNR

R 81.01 81.42 82.36 81.34 81.34 81.34 83.81 83.16 83.94

G 84.27 84.57 85.73 85.15 85.15 85.15 87.62 86.87 87.59

B 80.25 80.53 81.56 80.66 80.66 80.66 83.41 82.44 83.27

INVARIANT MOMENTS

M1 0.94 1.01 0.90 0.91 0.91 0.91 1.15 1.11 1.18

M2 4.64 5.13 4.86 3.32 3.32 3.32 5.34 5.20 5.65

M3 6.90 7.11 6.71 6.86 6.86 6.86 8.08 8.15 8.21

M4 6.81 7.17 6.56 7.24 7.24 7.24 7.78 7.85 7.89


M5 14.27 15.24 14.03 14.95 14.95 14.95 16.80 16.53 16.92

M6 9.81 10.68 9.77 9.36 9.36 9.36 11.73 11.46 11.83

M7 14.83 14.97 13.97 15.59 15.59 15.59 16.47 16.86 16.63

COLOR MOMENTS

MEAN

R 0.45 0.47 0.43 0.43 0.43 0.43 0.51 0.47 0.52

G 0.43 0.46 0.41 0.48 0.48 0.48 0.53 0.50 0.54

B 0.33 0.35 0.32 0.43 0.43 0.43 0.47 0.44 0.49

VARIANCE

R 0.07 0.08 0.08 0.05 0.05 0.05 0.08 0.07 0.07

G 0.07 0.07 0.07 0.06 0.06 0.06 0.07 0.07 0.07

B 0.09 0.09 0.09 0.08 0.08 0.08 0.10 0.10 0.10

STD

R 0.27 0.27 0.28 0.23 0.23 0.23 0.27 0.24 0.27

G 0.27 0.27 0.27 0.24 0.24 0.24 0.27 0.24 0.27

B 0.30 0.30 0.30 0.28 0.28 0.28 0.32 0.30 0.32

STATISTICAL FEATURES(2D)

Contrast 16785 10673 11008 9589 9589 9589 10919 10232 10814

Correlation -0.01 0.01 -0.05 0.15 0.15 0.15 -0.01 0.01 0.01

Energy 0.00 0.00 0.00 0.00 0.00 0.00 -0.01 0.00 0.00

Homogenit 0.03 0.04 0.04 0.04 0.04 0.04 0.04 0.04 0.04

GLCM

Autocorrel 19.30 20.92 18.45 20.51 20.51 20.51 25.17 22.87 26.44

Contrast 0.59 0.59 0.51 0.59 0.59 0.59 0.40 0.45 0.42

Correl1 0.92 0.91 0.93 0.89 0.89 0.89 0.95 0.93 0.94

Correl[1,2] 0.92 0.91 0.93 0.89 0.89 0.89 0.95 0.93 0.94

Clust.Promi 559.2 473.08 595.7 308.2 308.2 308.2 455.4 394.3 440.41

Clust.Shade 36.44 26.01 43.15 15.05 15.05 15.05 17.81 16.50 8.43

Dissimilarit 0.35 0.36 0.32 0.36 0.36 0.36 0.28 0.30 0.28

Energy:mat 0.13 0.12 0.13 0.12 0.12 0.12 0.12 0.13 0.13

Entropy 2.59 2.64 2.51 2.59 2.59 2.59 2.56 2.54 2.54

Homo: matl 0.85 0.85 0.86 0.85 0.85 0.85 0.88 0.87 0.87

Homogenei 0.85 0.84 0.86 0.84 0.84 0.84 0.87 0.86 0.87

Max.probab 0.25 0.23 0.25 0.23 0.23 0.23 0.22 0.24 0.25

sum.squ.var 19.53 21.14 18.65 20.70 20.70 20.70 25.30 23.01 26.55

Sum avg 7.92 8.35 7.70 8.43 8.43 8.43 9.25 8.80 9.52

Sum vari 47.62 51.75 45.39 50.44 50.44 50.44 64.27 57.90 68.69

Sum entro 2.22 2.26 2.18 2.20 2.20 2.20 2.27 2.22 2.24

Diffe. Vari 0.59 0.59 0.51 0.59 0.59 0.59 0.40 0.45 0.42

Differ.entro 0.76 0.76 0.71 0.77 0.77 0.77 0.65 0.69 0.66

Info.Corre1 -0.53 -0.52 -0.55 -0.50 -0.50 -0.50 -0.60 -0.56 -0.60

Info.Corre2 0.91 0.91 0.92 0.90 0.90 0.90 0.94 0.92 0.94

INN 0.96 0.96 0.97 0.96 0.96 0.96 0.97 0.97 0.97

IDMN 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99


Figure 12. Invariant moments of same scene of different

camera models based on CFA Interpolation using Alternate

projection method

Figure 13. GLCM features of same scene of different camera

models based on CFA Interpolation using Alternate projection

method

From the above results, the features extracted from the different camera models based on CFA Interpolation using the Alternate Projection method provide excellent performance, because many features, such as the invariant moments, color moments, statistical features and GLCM features, give fine results.

Table 7. Features of the same scene on different camera models based on PCA

C1 C2 C3

Features List 9 am 12noon 3 pm 9 am 12noon 3 pm 9 am 12noon 3 pm

CSNR

R 43.06 43.06 43.07 44.99 44.99 44.95 43.00 43.05 43.04

G 43.82 43.85 43.82 45.87 45.88 45.85 43.73 43.74 43.77

B 43.95 43.92 43.90 45.82 45.80 45.81 43.80 43.82 43.83

INVARIANT MOMENTS

M1 1.08 1.11 1.10 0.98 0.99 0.99 1.14 1.13 1.14

M2 4.51 4.85 4.82 3.31 3.31 3.31 4.91 4.87 4.90

M3 11.32 11.33 11.88 10.81 10.71 10.61 12.36 11.65 11.65

M4 11.11 11.58 11.30 10.62 10.33 10.64 11.41 11.24 11.28

M5 23.19 23.54 23.27 21.61 21.62 21.62 24.18 23.50 23.15

M6 14.62 15.24 14.42 12.82 12.82 12.84 14.87 14.40 14.60

M7 22.99 23.96 24.16 21.85 21.85 21.84 23.83 23.67 23.93

COLOR MOMENTS

MEAN

R 1.80 1.78 1.76 1.78 1.78 1.78 1.81 1.79 1.80

G 1.68 1.65 1.64 1.70 1.70 1.70 1.71 1.69 1.71

B 1.63 1.61 1.60 1.67 1.67 1.67 1.68 1.67 1.69

VARIANCE

R 11.62 11.10 11.10 11.85 11.85 11.85 11.26 11.17 11.18

G 10.52 10.04 10.01 10.74 10.80 10.80 10.15 10.08 10.15

B 10.44 9.99 9.98 10.84 10.85 10.85 10.18 10.10 10.15

STD

R 3.41 3.33 3.33 3.36 3.37 3.36 3.35 3.34 3.34


G 3.24 3.17 3.16 3.22 3.21 3.21 3.19 3.18 3.19

B 3.23 3.16 3.16 3.22 3.34 3.22 3.19 3.18 3.19

STATISTICAL FEATURES(2D)

Contrast 15851 11183 11251 10961 10962 10960 11186 11126 11142

Correlation 0.00 0.00 0.00 0.03 0.15 0.17 0.00 0.00 0.00

Energy 0.00 0.00 0.00 0.00 0.12 0.14 0.00 0.00 0.00

Homogenit 0.03 0.04 0.04 0.04 0.16 0.18 0.04 0.04 0.04

GLCM

Autocorrel 23.54 23.71 23.46 23.73 23.85 23.87 24.59 24.32 24.65

Contrast 17.81 17.74 17.75 17.06 17.18 17.20 17.70 17.70 17.68

Correl1 0.14 0.14 0.14 0.18 0.30 0.32 0.14 0.14 0.14

Correl[1,2] 0.14 0.14 0.14 0.18 0.30 0.32 0.14 0.14 0.14

Clust.Promi 1045 1044 1044 1007 1007 1007 1042 1042 1039

Clust.Shade -10.14 -11.09 -9.58 -10.22 -10.10 -10.08 -16.10 -14.48 -16.38

Dissimilarit 2.98 2.97 2.97 2.86 2.98 3.00 2.96 2.97 2.96

Energy:mat 0.12 0.12 0.12 0.12 0.24 0.26 0.12 0.12 0.12

Entropy 2.74 2.75 2.75 2.73 2.85 2.87 2.74 2.75 2.74

Homo: matl 0.53 0.53 0.53 0.54 0.66 0.68 0.53 0.53 0.53

Homogenei 0.46 0.46 0.46 0.48 0.60 0.62 0.47 0.47 0.47

Max.probab 0.22 0.22 0.22 0.22 0.34 0.36 0.23 0.23 0.23

sum.squ.var 32.30 32.43 32.19 32.11 32.23 32.25 33.30 33.02 33.34

Sum avg 9.39 9.43 9.37 9.43 9.55 9.57 9.61 9.55 9.63

Sum vari 76.03 76.40 75.61 75.83 75.95 75.97 79.16 78.23 79.24

Sum entro 2.16 2.17 2.17 2.17 2.29 2.31 2.16 2.17 2.17

Diffe. Vari 17.81 17.74 17.75 17.06 17.18 17.20 17.70 17.70 17.68

Differ.entro 1.66 1.66 1.67 1.61 1.73 1.75 1.66 1.67 1.67

Info.Corre1 -0.01 -0.01 -0.01 -0.03 0.09 0.11 -0.01 -0.01 -0.01

Info.Corre2 0.18 0.18 0.18 0.21 0.33 0.35 0.18 0.18 0.18

INN 0.78 0.78 0.78 0.79 0.91 0.93 0.78 0.78 0.78

IDMN 0.83 0.83 0.83 0.84 0.96 0.98 0.83 0.83 0.83

Figure 14. Invariant moments of the same scene on different camera models based on PCA.

Figure 15. GLCM features of the same scene on different camera models based on PCA.

Various features have been computed on images of the same scene captured using the three different camera models at three different times of the day: 9 am, 12 noon and 3 pm. In this manner, the six different techniques described in Section 2 have been applied, and the feature values obtained for the three camera models at the three different times are tabulated in Tables 2 to 7. From these results, CFA interpolation using the Alternate Projection technique gives much better performance than the other techniques, with many features giving excellent performance of up to 98%.

CONCLUSION

This work is a study for identifying the best features of an image in order to perform source camera identification. The identified features have been extracted using six different techniques, and a comparison has been made to identify the best method for source camera identification and classification. This information is essential for identifying forgeries in a given image. As an extension of this work, image forgery detection will be performed. The experimental results corroborate that these features are well suited to performing such image analysis.

REFERENCES

[1] J. Lukas, J. Fridrich, and M. Goljan, “Digital camera identification from sensor pattern noise,” IEEE Transactions on Information Forensics and Security, vol. 1, pp. 205-214, 2006.

[2] A. Swaminathan, M. Wu, and K. J. R. Liu, “Digital

image forensics via intrinsic fingerprints,” IEEE

Transactions on Information Forensics and Security,

vol. 3, pp. 101–117, 2008.

[3] Sintayehu Dehnie, Husrev T. Sencar and Nasir

Memon, “Digital Image Forensics for Identifying

Computer Generated and Digital Camera Images,”

IEEE International Conference on Image Processing,

Atlanta, GA, pp. 2313-2316, 2006.

[4] Kai San Choi, Edmund Y.Lam, and Kenneth

K.Y.Wong, “Feature selection in source camera

identification,” IEEE Conference on Systems, Man,

and Cybernetics, Taipei, Taiwan, pp. 3176-3180, 2006.

[5] Sevinc Bayram, Husrev T. Sencar, and Nasir Memon, “Source Camera Identification Based on CFA Interpolation,” IEEE International Conference on Image Processing, Genova, Italy, pp. 2413-2416, 2005.

[6] M. Chen, J. Fridrich, M. Goljan, and J. Lukas,

“Determining image origin and integrity using sensor

noise,” IEEE Transactions on Information Forensics

and Security, vol. 3, pp. 74–90, 2008.

[7] Todd Moon,”The Expectation Maximization

Algorithm,” IEEE Signal Processing Magazine,

November, 1996.

[8] Sevinc Bayram, Husrev T. Sencar, and Nasir Memon, “Classification of digital camera-models based on demosaicing artifacts,” Digital Investigation, vol. 5, pp. 49-59, 2008.

[9] Lei Zhang, Rastislav Lukac, Xiaolin Wu, and David

Zhang,“PCA-Based Spatially Adaptive Denoising of

CFA Images for Single-Sensor Digital Cameras,” IEEE

Transactions on Image Processing, vol. 18, pp.797-

810, April, 2009.

[10] Lei Zhang, David Zhang, “A joint demosaicking–

zooming scheme for single chip digital color cameras,”

Journal of Computer Vision and Image Understanding,

vol.107, pp.14–25, July–August 2007.

[11] Mehdi Kharrazi, Husrev T. Sencar, and Nasir Memon,

“Blind source camera identification,” IEEE

International Conference on Image Processing,

Singapore, pp.709-712, 2004.

[12] Bahadir K. Gunturk, Yucel Altunbasak, “Color Plane

Interpolation Using Alternating Projections,” IEEE

Transactions on Image Processing, vol.11, pp. 997-

1013, 2002.

[13] M. Stricker and M. Orengo, “Similarity of Color Images,” Proceedings of SPIE Storage and Retrieval for Still Image and Video Databases III, San Jose, CA, USA, pp. 381-392, 1995.

[14] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, pp. 27:1-27:27, 2011.