Australian Journal of Forensic Sciences
ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/tajf20
Salient keypoint-based copy–move image forgery
detection
Nitish Kumar & Toshanlal Meenpal
To cite this article: Nitish Kumar & Toshanlal Meenpal (2023) Salient keypoint-based
copy–move image forgery detection, Australian Journal of Forensic Sciences, 55:3, 331-354,
DOI: 10.1080/00450618.2021.2016964
To link to this article: https://doi.org/10.1080/00450618.2021.2016964
Published online: 12 Jan 2022.
authentication of the image. In passive forgery detection, prior information regarding the input image is not required. Passive techniques analyse the image content and utilize its intrinsic information for forgery detection. Hence, passive detection approaches are more practical for validating the content of an image taken from any internet source. Passive detection approaches are mostly used for copy–move forgery detection, image splicing detection, image in-painting detection, re-touching detection, etc.4.
Copy–move forgery is one of the most common and simple approaches in image
forgery. In copy–move forgery, some part of the image is copied and duplicated within
the same image. This can be achieved in such a way that tampering clues will be visually
imperceptible. Figure 1 shows an example of copy–move forgery, where the duplicated object is indicated by an arrow in the forged image. This kind of forgery is performed to remove an object or to duplicate an object, in order to convey misleading image information. Copy–move forgery detection methods are divided into two categories: (1) the
block-based method and (2) the keypoint-based method. In the block-based method, images are divided into a number of overlapping blocks for feature extraction. The major drawbacks of this method are that the computation cost increases with the image size and that geometrical transformations of forged areas cannot be detected. In the keypoint-based method, pixel extrema are extracted from high-entropy regions of the image. This method overcomes the shortcomings of the block-based method. However, when the forged region is multi-cloned, many keypoint-based methods fail to extract sufficient keypoints. In addition, the keypoint-based method faces the issue of matching a large number of keypoints to find the duplicated region.
To tackle the issues mentioned above, a new salient keypoint selection-based approach for copy–move forgery detection has been designed. Selection of salient keypoints from SIFT and KAZE features is proposed for the detection of the duplicated region.
Figure 1. Example of copy–move image forgery.
The SIFT keypoint feature is selected because it is robust to geometrical transformations and noise addition. However, SIFT is based on Gaussian scale space, which smooths sharp edges and transitions in an image. Hence, a KAZE feature, which extracts most of its information from edges, is used alongside SIFT. Salient keypoint selection is
introduced to reduce the number of keypoints and select the robust and most efficient
keypoints. Selected salient keypoints help in the reduction of computational time during
feature descriptor matching and result in efficient detection of the copied region. The
proposed approach introduces a selective search approach5 for generating bounding
boxes across objects in the image. The bounding box helps in matching the keypoint
descriptors between two bounding boxes. The inclusion of a bounding box in our
proposed method eliminates the use of a filtering operation used for outliers.
The major contributions of this paper are as follows:
(1) A salient keypoint selection scheme for SIFT and KAZE features is proposed to
reduce feature matching time.
(2) A selective search approach is introduced for the detection of duplicated objects in
forged images, which eliminates the use of filters.
(3) The proposed technique has been evaluated on two benchmark datasets and has achieved promising results. A comparative analysis with different state-of-the-art approaches is also presented.
The rest of the paper is arranged as follows: Section 2 explains the existing state-of-the-art
methods used for copy–move forgery detection. SIFT and KAZE features are briefly
described in Section 3. The proposed methodology with salient keypoint selection is
presented in Section 4. Relevant experiments have been performed to validate the
proposed methodology, as detailed in Section 5. Finally, the conclusion is given in
Section 6.
2. Related work
In image forgery, some content of the original image is replaced by new content. If the new content has been cropped from a region within the same image and stitched at the forged location, this kind of image forgery is a copy–move forgery6. In this case, the forgery can be detected by identifying region duplication within the given image. If any region duplication is found, the given image is considered a forged image; otherwise, it is an authentic image. Region duplication can be found either by dividing the given image into small blocks or by extracting interest points throughout the image. Hence, copy–move forgery detection techniques can be divided into two approaches, a block-based approach and a keypoint-based approach7.
In the block-based approach, the input image is divided into overlapping or non-overlapping blocks, which are square or circular in shape. Features relevant to block matching are extracted from these blocks, and a block matching technique1 is applied to find similar blocks. If matching blocks are found in two or more different regions, copy–move forgery is detected; otherwise, the input image is authentic. Many state-of-the-art
techniques have been proposed based on various feature extraction techniques for copy–move forgery detection6. The idea of splitting an image into equal-size blocks was first proposed for copy–move forgery detection by Fridrich et al.8. On each block of 8 × 8 pixels, a discrete cosine transform (DCT) was applied as a feature transform. Later, various feature transforms were applied in the block-based method, such as DCT with circular blocks9, DCT with singular value decomposition10, Fourier Transform11, Discrete Wavelet Transform (DWT)12, Fast Walsh-Hadamard Transform (FWHT)13, Dyadic Wavelet Transform (DyWT)14, Wiener Filter15, Histogram of Oriented Gradients (HOG)16, Zernike Moments17, Fourier-Mellin Transform (FMT)18, and so on. A detailed analysis of these methods has been given in the survey on copy–move forgery detection presented by Abd Warif et al.1.
Recently, some hybrid feature extraction techniques have been proposed to enhance detection accuracy. A combination of DWT and DCT features was proposed by Hayat and Qazi19. Mahmood et al.20 introduced the combined features of Stationary Wavelet Transform (SWT) and DCT. SWT was utilized since its approximation subband retains most of the image information, and DCT was used to reduce feature dimensions. An enhanced block-based approach was presented by Soni et al.21, where a Speeded Up Robust Feature (SURF) was computed from each block. A new adaptive Tetrolet transform, which is a special case of the Haar wavelet transform22, was introduced to enhance detection accuracy even in the presence of post-processing operations. Some of the common feature matching approaches used in block-based techniques are patch-match23, lexicographical sorting7, coherency sensitive hashing24, KD-Tree7, etc.
In the keypoint-based approach, features are extracted from high-entropy image regions. Distinctive local features from the edges, corners or blobs of an image are extracted as feature vectors. Each keypoint feature consists of a set of descriptors, which helps to increase the reliability of the feature. Features and their descriptors are utilized for classification and matching of a duplicated region. Several keypoint-based approaches have been introduced for copy–move forgery detection, such as Scale Invariant Feature Transform (SIFT)25, SURF26, Harris corner points27, Binary Robust Invariant Scalable Keypoints (BRISK)28, Oriented FAST and Rotated BRIEF (ORB)29, KAZE keypoints30 and so on. Dhivya et al.31 introduced SURF features for forgery detection and trained for object recognition using a Support Vector Machine (SVM). Meena and Tyagi32 proposed a hybrid approach where a block-based approach was applied to the smooth region and a keypoint-based approach was applied to the textured region of an image. From the smooth region, FMT features were extracted, and SIFT features were extracted from the textured region. Uma and Sathya33 presented a method using SIFT and DWT and introduced a football-game-based meta-heuristic optimization as a clustering technique. Selection of better feature matching techniques also helps to improve performance. Some of the common feature matching steps followed in keypoint-based approaches are 2-nearest neighbour (2NN)28, generalized 2-nearest neighbour (g2NN)32, etc. Apart from the feature extraction and matching phases, there are some post-processing operations which play an important role in filtering outliers. The most frequently used operations are morphological operations34, RANdom SAmple Consensus (RANSAC)32 and Same Affine Transformation Selection (SATS)7. Yang et al.35 introduced two-stage filtering algorithms, clustering-based filtering and grid-based filtering, to filter out false matches.
Both of the aforementioned approaches have pros and cons. Block-based techniques achieve more robustness and accuracy in forgery detection, but the feature matching step in this approach uses exhaustive searching, which is computationally expensive. Another limitation of the block-based approach is deciding the appropriate size of each block. If the block size is large, small forged objects cannot be detected, or uniform areas may be detected as a duplicated region. However, a small block size does not extract better features, and the feature matching step increases the computational cost. The keypoint-based approach exhibits better performance in the case of geometric transformations due to its scale and rotation invariant properties. The major limitations of the keypoint-based approach are matching a huge number of keypoints and the use of a filtering algorithm to remove outliers. To overcome these drawbacks, this paper proposes salient keypoint-based feature extraction. SIFT and KAZE keypoint features are extracted from the input image, and salient keypoints are selected for feature matching. In recent years, selection of salient keypoints and descriptor effectiveness have been widely used for object detection and matching36. Salient keypoints make the features robust and reduce the computational cost of copy–move forgery detection.
3. Preliminary
This section describes the Scale-Invariant Feature Transform (SIFT) and KAZE features in detail. These feature extraction techniques are used in the proposed methodology for forgery detection.
3.1. SIFT keypoint feature extraction
The SIFT keypoint was first proposed by David Lowe37 and is based on a linear multiscale Gaussian pyramid decomposition. The SIFT algorithm is applied to a pre-processed input image to obtain a 128-dimensional feature vector called the SIFT descriptor. SIFT keypoint feature extraction is divided into four stages, as discussed below.
(1) Scale-space extrema detection: Interest points can be calculated by finding local maxima in the scale space of the Laplacian of Gaussian (LoG) by varying the scale (σ) values. Scale space can be represented in the form L(x, y, σ). The LoG can be computed by applying the convolution of the input image I(x, y) with the Gaussian function G(x, y, σ), as illustrated in equations (1) and (2).

L(x, y, σ) = G(x, y, σ) * I(x, y)   (1)

G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))   (2)
Since the computation of the LoG is costly, the Difference of Gaussians (DoG) is used in the SIFT algorithm. The DoG is an approximation of the LoG, calculated as the difference of two Gaussians with scale values σ and kσ:

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)   (3)

In equation (3), D(x, y, σ) is the DoG of the given image and L(x, y, kσ) is the Gaussian-smoothed image at scale kσ. Once the DoG of an image is obtained, local extrema are searched over scale and space. A point is taken as a potential keypoint if it is a local extremum.
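The DoG approximation of equation (3) is simple to sketch. The following minimal illustration (not the paper's code) assumes SciPy's `gaussian_filter` for the Gaussian convolution of equations (1) and (2):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussian(image, sigma, k=np.sqrt(2)):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma), as in eq. (3)."""
    L_sigma = gaussian_filter(image.astype(float), sigma)       # L(x, y, sigma)
    L_ksigma = gaussian_filter(image.astype(float), k * sigma)  # L(x, y, k*sigma)
    return L_ksigma - L_sigma

# A constant image has no structure, so its DoG response is (near) zero,
# while an isolated bright spot produces a strong response.
```

Extrema of this response over neighbouring positions and scales are then the candidate keypoints.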
(2) Keypoint localization: Each potential keypoint is refined to obtain a more precise result. A Taylor series expansion of the scale space is used to obtain the accurate location of the extremum. If the intensity value at an extremum is less than a threshold value, that extremum is rejected. In this way, low-contrast keypoints are eliminated and only strong keypoints remain.
(3) Keypoint orientation: To make keypoints rotation invariant, an orientation is assigned to each keypoint. Based on the scale σ, a neighbourhood is taken around the keypoint, and the gradient magnitude M(x, y) and direction ϕ(x, y) are evaluated as given in equations (4) and (5). A complete 360° rotation is divided into 36 bins and an orientation histogram is created. The highest peak of the histogram is taken as the orientation, and additional keypoints are created with the same scale and location but different orientations.

M(x, y) = ((L(x + 1, y) − L(x − 1, y))² + (L(x, y + 1) − L(x, y − 1))²)^(1/2)   (4)

ϕ(x, y) = tan⁻¹((L(x, y + 1) − L(x, y − 1)) / (L(x + 1, y) − L(x − 1, y)))   (5)
(4) Keypoint descriptor: A 16 × 16 neighbourhood around each keypoint is divided into 16 sub-blocks of size 4 × 4. For each sub-block, an eight-bin orientation histogram is created. Thus, for each keypoint, 128 bin values are generated in the form of a vector called the keypoint descriptor.
3.2. KAZE keypoint feature extraction
The KAZE feature was introduced by Alcantarilla et al.38 and is based on non-linear scale space. KAZE is a Japanese word meaning wind. The KAZE feature preserves most of the information from object boundaries or edges compared with other regions39. To construct the non-linear scale space, non-linear diffusion filtering is performed as explained in equation (6), where the partial derivative of the image luminance IL is taken with respect to time t, which acts as a scale parameter.

∂IL/∂t = div(c(x, y, t) · ∇IL)   (6)

c(x, y, t) = g(|∇Lσ(x, y, t)|)   (7)

In equation (6), div is the divergence operator, ∇ is the gradient operator and c(x, y, t) is the flow function. The function c can be a tensor or a scalar, and is defined in equation (7). A conductivity function g is introduced to make the diffusion adaptive to the image structure. Here, ∇Lσ is the gradient of the Gaussian-smoothed version of the image IL. The conductivity function makes KAZE more suitable for edge representation. Various conductivity functions can be used to promote high-contrast edges or smoothing across edges.
4. Proposed method
To improve the robustness of the keypoint-based approach, and to enhance computational efficiency, salient keypoint selection is introduced for the extracted keypoints. For a given input image, SIFT and KAZE keypoints (KP) are first extracted, and salient keypoints are selected from them. Then, bounding boxes (BB) are drawn on the input image using a region proposal approach, and unwanted boxes are suppressed. The keypoints are localized inside the BBs, and keypoint matching is applied among the keypoints of the corresponding BBs. If the required number of matching pairs is found within two BBs, the forgery can be localized in the detection result. The complete framework of the proposed copy–move forgery detection is shown in Figure 2.
An overview of the step-by-step implementation of the proposed method has been
visualized in Figure 3. In this section, each step of the proposed methodology is discussed
in detail.
Figure 2. Framework of the proposed salient keypoint-based copy–move forgery detection.
Figure 3. Overview of step-by-step implementation of proposed methodology.
4.1. Keypoint-based feature extraction
Region duplication in copy–move forgery may be performed with geometric transformations (translation, rotation or scaling), which are difficult to detect using a block-based detection approach. In such cases, a keypoint-based approach has proved robust, extracting keypoints from the regions of interest. In the proposed method, the input colour image is first converted into a grayscale image as a pre-processing operation. Then, two different keypoint-based features are selected for feature extraction. First, the SIFT keypoint is selected, which is robust to illumination variations, geometrical transformations, noise and affine transformations. SIFT is based on Gaussian scale space, which performs a smoothing operation on the image. As a result, sharp edges or transitions in the image become smooth and crucial information related to forgery detection can be wiped out. To overcome this issue, the KAZE feature, a multi-scale 2D feature detector, is used alongside SIFT. It has been observed that SIFT keypoints gather information mostly from salient regions, whereas KAZE features gather information mostly from object boundaries.
From the input image, a set of SIFT keypoint feature vectors P_S = (p1, p2, p3, ..., pm), representing scale, location and orientation, is generated with m keypoints, and each keypoint has a 128-dimensional descriptor D_S = (x1, x2, x3, ..., x128). Similarly, a set of KAZE keypoint feature vectors P_K = (k1, k2, k3, ..., kn), representing scale, location and orientation, is generated with n keypoints, and each keypoint has a 64-dimensional descriptor D_K = (v1, v2, v3, ..., v64).
4.2. Salient keypoint evaluation
A high-definition image produces a large number of keypoints, so keypoint matching in such cases is very time consuming for forgery detection. To reduce the number of keypoints while retaining the robust ones, a salient keypoint selection scheme is introduced. Salient keypoint selection is defined based on keypoint detectability, descriptor distinctiveness and repeatability.
Keypoint selection for SIFT and KAZE is performed by ranking the keypoints according to a saliency score. Geometric transformations are applied to the input image to measure invariance among the set of keypoints. A transformation function T^p is applied as a geometric transformation of the input image I to obtain the transformed image I^p, as shown in equation (8). Here, p indexes the pth transformation, and the geometric transformations applied to I are scaling with factors 2, 1.5 and 0.5, and rotation with angles π/12 and π/6. Salient keypoints are selected depending upon the saliency score S(Pi) of each keypoint. The saliency score is computed from the distinctiveness (Di), repeatability (R) and detectability (D). Each keypoint of SIFT or KAZE is represented by Pi, as shown in equation (9).

I^p = T^p(I)   (8)

Pi = ((ui, vi), si),  i = 1, ..., n   (9)
where (ui, vi) is the location of the ith keypoint, si is the strength of the corresponding keypoint and n is the total number of keypoints. The saliency score for each keypoint can be calculated as:

S(Pi) = Di(Pi) + R(Pi) + D(Pi)   (10)

where Di(Pi), R(Pi) and D(Pi) in equation (10) are the distinctiveness, repeatability and detectability of the ith keypoint, respectively.
(1) Distinctiveness: the diversity of a keypoint descriptor relative to the other keypoints in the image. It is measured as the normalized sum of the Euclidean distances L between each pair of keypoint descriptors, as explained in equation (11).

Di(Pi) = (1 / (n − 1)) Σ_{j ≠ i} L(di, dj)   (11)
(2) Repeatability: an estimate of the invariance of the keypoint descriptor across the various transformations. It is measured as the average Euclidean distance between the keypoint descriptor in the input image and the corresponding keypoint descriptor in each transformed image. In equation (12), t is the number of geometric transformations and d_i^p is the descriptor of the ith keypoint in the pth transformed image.

R(Pi) = (1 / t) Σ_{p = 1}^{t} L(di, d_i^p)   (12)
(3) Detectability: the ability of a keypoint to be detected under different lighting conditions or viewpoints. It is calculated as the average strength s of the keypoint over its corresponding transforms, as discussed in equation (13).

D(Pi) = (1 / t) Σ_{p = 1}^{t} s_i^p   (13)
The saliency score of each keypoint is calculated using equation (10) after normalizing the scores Di, R and D to the range [0, 1], and the mean saliency score μS is calculated as in equation (14). A salient keypoint PS for SIFT and KAZE is selected, as per equation (15), if S(Pi) is greater than or equal to μS. Here, μS serves as the saliency score threshold: only keypoints whose saliency score is at least μS are selected as salient keypoints. The procedure for the selection of salient SIFT and KAZE keypoints is given in Algorithm 1.

μS = (1 / n) Σ_{i = 1}^{n} S(Pi)   (14)

PS = {Pi : S(Pi) ≥ μS},  i = 1, ..., n   (15)
Algorithm 1: Selection of salient SIFT and KAZE keypoints from a given image

Input: input image I and transformed images I^p
Output: salient keypoints PS
1  Compute the SIFT and KAZE keypoints P from I and I^p; let n be the number of SIFT or KAZE keypoints.
2  P' = []
3  for i = 1 to n do
4      if keypoint Pi of I has a matching keypoint in I^p then
5          append Pi to P'
6      end
7  end
8  S = []
9  for j = 1 to m do            (m = number of keypoints in P')
10     S(P'j) = Di(P'j) + R(P'j) + D(P'j), using equations (11)-(13)
11 end
12 PS = []
13 for j = 1 to m do
14     if S(P'j) ≥ μS then
15         append P'j to PS
16     end
17 end
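The scoring and selection steps of Algorithm 1 can be sketched in vectorized form. The function below is a hypothetical illustration of equations (10)-(15): the array shapes and the min-max normalization are assumptions, not taken from the paper's code:

```python
import numpy as np

def select_salient(desc, desc_t, strength_t):
    """Sketch of the scoring in Algorithm 1 for n matched keypoints.
    desc:       (n, d)    descriptors in the input image I
    desc_t:     (t, n, d) descriptors of the same keypoints in the t
                          transformed images I^p
    strength_t: (t, n)    keypoint strengths in the transformed images
    Returns a boolean mask of the salient keypoints (eq. 15).
    """
    n = desc.shape[0]
    # Distinctiveness, eq. (11): mean distance to all other descriptors.
    pair = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=-1)
    Di = pair.sum(axis=1) / (n - 1)
    # Repeatability, eq. (12): mean distance to the transformed descriptors.
    R = np.linalg.norm(desc_t - desc[None], axis=-1).mean(axis=0)
    # Detectability, eq. (13): mean strength over the transforms.
    D = strength_t.mean(axis=0)

    def norm01(x):  # normalize each term to [0, 1] before summing, eq. (10)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    S = norm01(Di) + norm01(R) + norm01(D)
    return S >= S.mean()      # keep keypoints at or above mu_S, eq. (15)
```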
SIFT and KAZE keypoints and their corresponding selected salient keypoints are visualized in Figure 4. The salient SIFT and KAZE keypoints are plotted together in the third column, and a considerable reduction in the number of keypoints can be observed. This reduction in n is what motivates salient keypoint selection: it reduces the time required for feature matching, which is the decisive cost in copy–move forgery detection.
4.3. Region proposal using selective search
In copy–move forgery, it is mostly objects that are duplicated. To improve detection accuracy, the different objects present in the image should first be detected and then checked for duplication. Motivated by this idea, a selective search-based region proposal is used for detecting objects in the image, as discussed in Uijlings et al.5. A region proposal finds prospective objects in an image using segmentation, and is executed by combining pixels into smaller segments. The generated region proposals are of different scales and varying aspect ratios, as explained in Verma et al.40. The region proposal approach used here is much faster and more efficient than the sliding window approach for object detection. Selective search uses segmentation based on image structure to generate class-independent object locations. Four different image
similarity measures (colour similarity, size similarity, shape similarity and texture similarity) are considered for evaluating object regions. Based on the ranking of the regions, lower-ranked duplicates are filtered out and bounding boxes with high rank are considered as object regions.
Non-maximum suppression is used to reduce the number of bounding boxes. Two thresholds, N1 and N2, are defined to achieve an optimal number of bounding boxes. The N1 threshold limits the area of the bounding boxes inside an image: only bounding boxes whose ratio of area to the entire image area is less than or equal to N1 are kept. The N2 threshold sets the permitted intersection over union (IoU) between bounding boxes, pruning the smaller of two overlapping bounding boxes and keeping the larger one. The overlap of two bounding boxes is computed from the ratio of their areas, and if the overlap between bounding boxes is greater than N2, one of the bounding boxes is suppressed. Thus, defining N2 further reduces the number of bounding boxes.
4.4. Feature descriptor matching
In a copy–move forgery, feature matching is the most decisive step for finding region
duplication. Similar keypoint descriptors have been found in the extracted keypoints of
the original and duplicated regions. Hence, feature matching is used to decide whether
the given image has any duplicated region or not.
Figure 4. Keypoint visualization: first column represents SIFT and KAZE keypoints; second column
represents corresponding salient SIFT and salient KAZE keypoints and the final image shows the
combined salient SIFT and KAZE keypoints for an image of size 512 × 512.
In the proposed method, the same feature matching technique is used for the SIFT and KAZE feature descriptors. Feature matching is done by finding similar keypoints present in one bounding box and another. Keypoints outside the bounding boxes are not considered, and keypoints inside the same bounding box are not used for matching. The Hamming distance is calculated between the keypoint descriptors of two bounding boxes, and 2-nearest neighbour (2-NN) matching is used to find matches. A minimum of three matching pairs is required for finding a duplicated object in the image. Because keypoint matching is not performed inside the same bounding box, the false positives generated by similar-intensity pixels in nearby areas are reduced. Hence, the 2-nearest neighbour technique gives better results in feature matching and does not require filtering techniques such as g2NN or RANSAC.
To summarize the proposed methodology: for a given input image I, compute the transformed images I^p and extract SIFT and KAZE features from I and I^p. From the extracted keypoints, select the salient keypoints and draw the bounding boxes on the input image. Then, match the keypoint descriptors present in one bounding box with those of another using 2-NN. If a sufficient number of matches is found between two boxes, the image is a forged image; otherwise, the input image is an original image.
5. Experiments and results
In this section, experiments conducted to evaluate the performance of the proposed
forgery detection method have been discussed. Different benchmark image datasets and
performance evaluation metrics have been explained. Test results have been analysed on
different geometric transformations, and the results are compared with existing state-of-the-art methods of copy–move forgery detection.
5.1. Image dataset and evaluation metrics
For the evaluation of the proposed detection method, two benchmark image datasets, CoMoFoD41 and MICC-F22042, have been used. The CoMoFoD dataset contains 200 different original images of size 512 × 512 pixels. Different geometric transformations have been considered, grouped into five categories: scaling, rotation, translation, distortion and combined. Each category has 40 examples of forged images, and six post-processing operations are performed on each image. The post-processing operations are noise addition, image blurring, JPEG compression,
colour reduction, brightness change and contrast adjustment. The CoMoFoD dataset
has a total of 10,400 images and the area of the copied region varies from 0.14% to
14.3% of image area. The MICC-F220 dataset has a total of 220 images, where 110 are
original images and 110 are forged images. Images in this dataset are of different sizes,
varying from 722 × 480 to 800 × 600 pixels. In the forged image, the copied region is
rectangular with the average forged area 1.2% of the complete image area. Forged
images have been composed by randomly selecting different geometric transformations,
such as rotation, translation, scaling or combined transformation.
The metrics used for performance evaluation of the proposed copy–move forgery detection are true positive rate (TPR), false positive rate (FPR), accuracy (A), precision (P), recall (R) and F1 score. True positive (TP) is the number of correctly detected forged images.
False positive (FP) is the number of images wrongly detected as forged. False negative
(FN) is the number of forged images which are undetected and marked as genuine. True
negative (TN) is the number of original images detected correctly. The total number of
forged images is denoted as NF and the total number of original images is denoted as NO.
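From these counts, the evaluation metrics follow their standard definitions. The sketch below (not from the paper) computes them as percentages:

```python
def detection_metrics(tp, fp, fn, tn):
    """Image-level evaluation metrics, all expressed as percentages."""
    tpr = 100.0 * tp / (tp + fn)                  # true positive rate (= recall)
    fpr = 100.0 * fp / (fp + tn)                  # false positive rate
    accuracy = 100.0 * (tp + tn) / (tp + fp + fn + tn)
    precision = 100.0 * tp / (tp + fp)
    recall = tpr
    f1 = 2.0 * precision * recall / (precision + recall)
    return {"TPR": tpr, "FPR": fpr, "accuracy": accuracy,
            "precision": precision, "recall": recall, "F1": f1}
```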
5.2. Analysis of the reduction in the number of keypoints
A comparison of the number of SIFT, KAZE and salient SIFT + KAZE keypoints on four different images of the CoMoFoD dataset is presented in Figure 5. Selecting salient SIFT and KAZE keypoints achieves approximately an 80% reduction in the number of keypoints compared with the individual numbers of SIFT or KAZE keypoints. This reduction in the number of keypoints plays a significant role in reducing feature matching time.
5.3. Discussion on parameter estimation
In the proposed methodology, the N1 and N2 thresholds are defined to reduce the number of bounding boxes. Extensive experiments were performed to reach the optimal threshold values. Table 1 demonstrates the fluctuation in the number of bounding boxes when varying N1 and N2 on the forged image shown in Figure 1. During the experiments, it was observed that if too many bounding boxes remain after applying the N1 and N2 thresholds, the computation time is affected, while if too few remain, the remaining bounding boxes do not cover all the objects in the image properly. Hence, based on several experiments on a number of images, the optimal value of N1 is set to 0.05 and N2 to 0.25, shown in bold in Table 1.
Figure 5. Comparison of number of SIFT, KAZE and salient SIFT + KAZE keypoints for three different
images of size 512 × 512.
5.4. Evaluation of the proposed approach under plain copy–move forgery
Various experiments have been performed for the evaluation of the proposed technique
for image level forgery detection. Image level experiments have been performed to
distinguish whether the input image is genuine or forged. Plain copy–move forgery
means a translated copied region only, and has been evaluated on both the CoMoFoD
and MICC-F220 datasets. Four-hundred test images have been taken from the CoMoFoD
dataset where 300 images are forged and 100 images are original. Detection results on
various metrics have been reported in the first row of Table 2. The proposed technique
achieves a precision of 97.9%, which shows the proposed detection technique is very
efficient. The detection result on images under translation only is shown in the first row of
Figure 6. From the MICC dataset, 220 images have been tested for evaluation where 110
are forged and 110 are original and the performance of the proposed technique is
reported in the second row of Table 2. The proposed technique also performed well on
this dataset, with a precision of 96.22%. The detection result on the MICC dataset under
plain copy–move forgery is shown in the first row of Figure 7.
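The image-level figures in Table 2 follow the standard confusion-matrix definitions. A minimal sketch, where the CoMoFoD counts (281 detected forgeries, 6 false alarms) are inferred here from the reported rates and the 300 forged / 100 original split, so they are illustrative rather than taken from the paper:

```python
def detection_metrics(tp, fp, fn, tn):
    """Image-level detection metrics from confusion-matrix counts, in per cent."""
    tpr = 100 * tp / (tp + fn)                 # true-positive rate = recall
    fpr = 100 * fp / (fp + tn)                 # false-positive rate
    accuracy = 100 * (tp + tn) / (tp + fp + fn + tn)
    precision = 100 * tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return {"TPR": tpr, "FPR": fpr, "Accuracy": accuracy,
            "Precision": precision, "Recall": tpr, "F1": f1}

# Counts inferred for the CoMoFoD experiment (300 forged, 100 original):
# 281 forged images detected, 6 originals falsely flagged.
m = detection_metrics(tp=281, fp=6, fn=19, tn=94)
```

These counts reproduce the first row of Table 2 (accuracy 93.75, precision 97.9) to within rounding.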
5.5. Evaluation of the proposed approach under different geometric
transformations
To evaluate the effectiveness of the proposed approach, a series of experiments has been
performed under scaling, rotation and rotation with scaling on the MICC-F220 dataset. Forty-
four images with 33 forged and 11 original images have been tested for evaluation under
the scaling operation. Fifty-five images with 44 forged and 11 original images have been
tested under rotation operation. Thirty-three images with 22 forged and 11 original images
have been tested under rotation with the scaling operation. The detection result based on
precision, recall and F1-score is reported in Table 3. The detection of the proposed approach
under the scaling operation on the MICC dataset is shown in the second row of Figure 7. The last
row of Figure 7 shows forgery detection under the rotation operation. The proposed approach
Table 1. Evaluation of threshold N1 and N2 based on number of bounding boxes (BB).
S. No. N1 N2 Initial no. of BB BB after N1 BB after N2
1 0.02 0.2 2099 1310 112
2 0.02 0.25 2099 1310 117
3 0.02 0.3 2099 1310 131
4 0.05 0.2 2099 1644 61
5 0.05 0.25 2099 1644 64
6 0.05 0.3 2099 1644 68
7 0.1 0.2 2099 1853 39
8 0.1 0.25 2099 1853 35
9 0.1 0.3 2099 1853 42
10 0.15 0.2 2099 1947 28
11 0.15 0.25 2099 1947 21
Table 2. Evaluation of proposed detection approach under plain copy–move forgery.
Dataset TPR FPR Accuracy Precision Recall F1-score
CoMoFoD 93.66 6 93.75 97.9 93.66 95.73
MICC-F220 92.7 3.6 95 96.22 93.57 94.87
achieves a precision of 96.87%, a recall of 93.93% and an F1-score of 95.37% under the scaling
operation. Detection of forgery under rotation operation achieves a precision of 97.61%. In
the case of the combined geometric transformation of rotation with scaling, the proposed
technique still performs well, with a precision of 95.2%. This experimental analysis demonstrates the
robustness of the proposed detection approach across different geometric transformations.
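Keypoint-based detectors handle rotation and scaling because matched source/copy coordinates are related by a similarity transform. The estimator below is an illustrative least-squares sketch of that relation, not the paper's exact procedure:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform dst ≈ s·R(θ)·src + t,
    parameterized as [a, b, tx, ty] with a = s·cosθ, b = s·sinθ."""
    A, y = [], []
    for (x, yy), (u, v) in zip(src, dst):
        A.append([x, -yy, 1, 0]); y.append(u)
        A.append([yy,  x, 0, 1]); y.append(v)
    a, b, tx, ty = np.linalg.lstsq(np.array(A, float), np.array(y, float),
                                   rcond=None)[0]
    return float(np.hypot(a, b)), float(np.degrees(np.arctan2(b, a))), (float(tx), float(ty))

# Synthetic matches: region copied with 30° rotation, 1.2× scaling, (100, 40) shift
theta, sc = np.radians(30), 1.2
R = sc * np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[10, 20], [50, 20], [30, 80], [70, 60]], float)
dst = src @ R.T + np.array([100.0, 40.0])
scale, angle, shift = fit_similarity(src, dst)
```

On these noise-free synthetic matches the fit recovers the 1.2× scale and 30° rotation exactly; in practice such a fit is wrapped in RANSAC to reject mismatches.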
5.6. Evaluation of the proposed approach under different post-processing operations
A series of experiments has also been performed to evaluate the proposed copy–move
forgery detection under different post-processing operations of the CoMoFoD
dataset. The post-processing operations considered for testing are blurring,
Figure 6. Copy–move forgery detection result on the CoMoFoD dataset under post-processing
operations (top row: forgery without any post-processing, middle row: forgery under image blurring,
bottom row: forgery under brightness change): (a) original image, (b) forged image and (c) detection
result of the proposed technique.
brightness change, colour reduction, noise addition and contrast adjustments. Under the
blurring operation, the averaging filters with varying filter sizes 3 × 3, 5 × 5 and 7 × 7 have
been used for forgery. In this case, 120 forged and 120 original images have been taken
from the CoMoFoD dataset. With averaging filter size 3 × 3, the proposed approach
achieves a precision of 94.87%, and with filter size 7 × 7 it still achieves a precision of
91.89%, as shown in Figure 8. In the case of forgery with brightness change, the
brightness range has been varied from (0.01, 0.95) to (0.01, 0.8). Brightness change
with variation of (0.01, 0.95) does not create a significant visual difference, but variation
with (0.01, 0.8) shows a quite visible difference in brightness. The performance evaluation
of the proposed algorithm is shown in Figure 9 with a precision of 97.43% under alteration
of (0.01, 0.95) and a precision of 92.1% under alteration of (0.01, 0.8).
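The two post-processing attacks discussed above can be reproduced with a few lines of NumPy. This is a hedged sketch of how such test images are typically generated (images assumed scaled to [0, 1]), not the CoMoFoD toolchain itself:

```python
import numpy as np

def average_blur(img, k):
    """k×k mean filter with edge-replicated borders (output size = input size)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def change_brightness(img, lo, hi):
    """Compress a [0, 1] image into the range (lo, hi), e.g. (0.01, 0.8)."""
    return lo + img * (hi - lo)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
blurred = average_blur(img, 7)           # 7 × 7 averaging filter, as in Figure 8
dimmed = change_brightness(img, 0.01, 0.8)
```

A constant image is unchanged by the mean filter, and the brightness mapping keeps every pixel inside the target range.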
Figure 7. Copy–move forgery detection result on the MICC-F220 dataset under geometric transformations (top row: forgery with translation only, middle row: forgery with scaling, bottom row: forgery
with rotation): (a) original image, (b) forged image and (c) detection result of proposed technique.
Table 3. Detection result on the MICC-F220 dataset under geometric transformation: scaling, rotation and rotation + scaling.
Transformations Precision Recall F1-score
Scaling 96.87 93.93 95.37
Rotation 97.61 93.18 95.34
Rotation + Scaling 95.2 90.9 93
An experimental analysis is also performed to justify the robustness of the proposed
method under noise addition. In this case, detection of 120 forged images with white
Gaussian noise having zero mean and variance (σ²) values of [0.0005, 0.005, 0.009] is
performed. The detection result under noise addition is illustrated in Figure 10, which
shows that the proposed approach is not much affected by noise addition. Another
experiment is also performed under colour reduction with a varying number of colours
per channel: [128, 64, 32]. The detection result under colour reduction is illustrated in
Figure 11, which shows that the proposed approach performed well even after changing
the number of colours per channel. Contrast adjustment is another post-processing
operation on which the robustness of the proposed method is tested. The detection
result on different contrast ranges is shown in Figure 12, which shows the effectiveness of
the proposed method under different contrast changes. Overall, this evaluation shows that
the proposed approach is largely unaffected by such operations, demonstrating its
robustness against common post-processing.
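Likewise, the noise-addition and colour-reduction attacks can be sketched as follows (illustrative NumPy implementations, again assuming images scaled to [0, 1]):

```python
import numpy as np

def add_gaussian_noise(img, variance, seed=0):
    """Zero-mean white Gaussian noise with the given variance, clipped to [0, 1]."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, np.sqrt(variance), img.shape), 0.0, 1.0)

def reduce_colours(img, levels):
    """Uniform quantization of a [0, 1] image to `levels` values per channel."""
    q = np.minimum(np.floor(img * levels), levels - 1)
    return q / (levels - 1)

img = np.random.default_rng(1).random((16, 16))
noisy = add_gaussian_noise(img, 0.005)      # one of the tested σ² values
quantized = reduce_colours(img, 32)         # 32 colours per channel
```

After quantization each channel holds at most 32 distinct values, matching the colour-reduction settings [128, 64, 32] used in the experiments.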
5.7. Comparative analysis of the proposed approach with existing detection
approaches
The experimental results presented above show that the proposed approach efficiently
detects copy–move forgery under different geometric transformations and
post-processing operations. A comparative analysis based on TPR, FPR, precision and F1-score
has been performed against existing detection approaches21,43–47 on the MICC-F220
dataset, and the performance of the proposed method is highlighted in bold in Table 4. It is
observed that the proposed method performs better than all compared methods except47
Figure 8. Detection result under blurring operation with different averaging filter size.
in the cases of FPR and precision. However, Soni et al.47 achieved an FPR of 3.16, which is
quite close to the FPR of the proposed method. Hence, the performance of
the proposed scheme is quite satisfactory.
The performance comparison of the proposed technique with the methods of Niyishaka
and Bhagvati28, El Biach et al.48, Mahmood et al.49, Malviya and Ladhake50 and Soni et al.51
on the CoMoFoD dataset is presented in Table 5. It can be observed that the proposed
Figure 9. Detection result under brightness change with different alteration ranges.
Figure 10. Detection result with addition of Gaussian noise having zero mean and different values of variance (σ²).
method reported superior performance in terms of FPR and precision among all the
methods compared here. The proposed method achieved the highest precision of 97.90%
and lowest FPR of 6%. Hence, the performance comparison shows that the proposed
method achieves a significant improvement on benchmark datasets.
5.8. Comparative analysis of the proposed approach based on running time
The computational time complexity of the forgery detection approach plays a very
important role when the image size is large. Here, the computation time of the
proposed approach has been compared with existing state-of-the-art copy–move
Figure 11. Detection result under colour reduction with varying number of colours per channel.
Figure 12. Detection result under contrast change with different alteration range.
forgery detection techniques. The computational cost analysis has been carried out
based on CPU running-time. The complete experiment has been performed on
Python 3.7.1 and OpenCV 3.4.1 running on Windows 8. The timing was recorded
for the proposed approach on an Intel i7 processor with 8 GB RAM. Comparative
analysis of computational time has been carried out on images of size 512 × 512 from the
CoMoFoD dataset, presented in Figure 13. The proposed method is computationally
efficient, taking 7.14 seconds, as compared with the methods of Zandi et al.17,
Cozzolino et al.18, Meena and Tyagi22, Soni et al.51 and Zandi et al.52.
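For reproducible timing comparisons of this kind, wall-clock measurement around the detector call is the usual approach. A minimal sketch with a hypothetical stand-in detector (not the paper's pipeline):

```python
import time

def median_runtime(detector, image, repeats=5):
    """Median wall-clock time (seconds) of a detector callable; the median
    damps one-off OS scheduling spikes in the measurement."""
    timings = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        detector(image)
        timings.append(time.perf_counter() - t0)
    timings.sort()
    return timings[len(timings) // 2]

# Hypothetical stand-in for a CMFD pipeline: any callable accepting an image
toy_detector = lambda img: sum(sum(row) for row in img)
t = median_runtime(toy_detector, [[1] * 128 for _ in range(128)])
```

Repeating the run and taking the median gives a more stable figure than a single measurement on a shared machine.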
Table 4. Comparison of proposed method with existing techniques on the MICC-F220 dataset.
Author Methods TPR FPR Precision F1-score
Hashmi et al.43 DyWT and SIFT 80.00 10.00 88.89 85.00
Ojeniyi et al.46 DCT and SURF 97.27 6.36 93.86 95.45
Niyishaka and Bhagvati45 DoG and ORB – – 90.09 86.24
Liu et al.44 CKN 99.09 7.27 93.16 96.03
Soni et al.47 SIFT and DBSCAN 99.15 3.16 – –
Soni et al.21 SURF and 2NN 97.55 8.4 – –
Proposed Salient SIFT and KAZE 92.7 3.6 96.22 94.87
Table 5. Comparison of proposed method with existing techniques on the CoMoFoD dataset.
Author Methods TPR FPR Precision F1-score
Malviya and Ladhake50 Auto Colour Correlogram 91.67 16 95.65 93.62
Soni et al.51 LBP-HF 98.40 7.40 – –
Mahmood et al.49 SWT 96.60 – 95.76 96.05
Niyishaka and Bhagvati28 Blob + BRISK 92.00 9 96.84 94.35
El Biach et al.48 Encoder-decoder – – – 81.56
Proposed Salient SIFT and KAZE 93.66 6 97.90 95.73
Figure 13. Computational cost comparison based on CPU time (in seconds) on the CoMoFoD dataset.
The main reason behind the reduced computation time is the salient feature selection,
which has reduced the number of keypoints. Since the number of keypoints is
reduced, it takes less time for feature descriptor matching to find the duplicated
region. Hence, the proposed approach is the fastest among all the compared copy–
move detection approaches.
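The saving can be made concrete by counting candidate pairs: restricting matching to keypoints in different bounding boxes removes all within-box comparisons. A small illustrative sketch, where the (x, y, box_id) keypoint layout is an assumption for the example, not the paper's data structure:

```python
from itertools import combinations

def matching_pairs(keypoints, cross_box_only=True):
    """Enumerate keypoint pairs for descriptor matching. Each keypoint is
    (x, y, box_id); with cross_box_only=True, pairs falling inside the same
    bounding box are skipped, shrinking the matching workload."""
    return [(a, b) for a, b in combinations(keypoints, 2)
            if not (cross_box_only and a[2] == b[2])]

# 3 keypoints in box 0 plus 3 in box 1: 15 unordered pairs in total,
# but only the 9 cross-box pairs need a descriptor comparison
kps = [(i, i, 0) for i in range(3)] + [(i, i + 10, 1) for i in range(3)]
all_pairs = matching_pairs(kps, cross_box_only=False)
cross_pairs = matching_pairs(kps)
```

Even in this tiny example the workload drops from 15 to 9 comparisons; with thousands of keypoints per image the quadratic saving becomes substantial.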
6. Conclusion
In this article, an improved keypoint-based copy–move forgery detection method has been
proposed using SIFT and KAZE features. A salient keypoint feature selection has been
proposed to select robust features and for a reduction in the number of keypoints. To
enhance the detection of a duplicated object, a selective search-based region proposal
has been employed to create a bounding box on the input image. Keypoints within the
bounding box have been considered for feature descriptor matching to find duplicated
objects. Keypoints between two bounding boxes have been matched for duplicated region
detection, and keypoints within a bounding box have not been considered for matching,
which reduces the time complexity of the proposed algorithm. The proposed detection
approach has been evaluated on CoMoFoD and MICC-F220 datasets and gives promising
results under geometric transformations and common post-processing operations. The
experimental results reveal that the proposed detection approach outperforms the
state-of-the-art in terms of precision. Results also show that the proposed approach is faster than
other existing copy–move forgery detection techniques. The proposed detection
approach can detect forgeries under different post-processing operations. Therefore,
this approach can play a vital role in various proof-based image forensic applications.
Disclosure statement
No potential conflict of interest was reported by the author(s).
ORCID
Nitish Kumar http://orcid.org/0000-0001-9977-1347
Toshanlal Meenpal http://orcid.org/0000-0003-2809-075X
References
1. Abd Warif NB, Wahab AWA, Idris MYI, Ramli R, Salleh R, Shamshirband S, Choo K-KR. Copy-move forgery detection: survey, challenges and future directions. J Network Comput Appl. 2016;75:259–278.
2. Meenpal T. DWT-based blind and robust watermarking using SPIHT algorithm with applications in tele-medicine. Sādhanā. 2018;43(1):4.
3. Subhedar MS, Mankar VH. Current status and key issues in image steganography: a survey.
Comput Sci Rev. 2014;13:95–113. doi:10.1016/j.cosrev.2014.09.001
4. Asghar K, Habib Z, Hussain M. Copy-move and splicing image forgery detection and localization techniques: a review. Aust J Forensic Sci. 2017;49(3):281–307. doi:10.1080/00450618.2016.1153711
5. Uijlings JR, Van De Sande KE, Gevers T, Smeulders AW. Selective search for object recognition.
Int J Comput Vis. 2013;104(2):154–171. doi:10.1007/s11263-013-0620-5
6. Zheng L, Zhang Y, Thing VL. A survey on image tampering and its detection in real-world
photos. J Visual Commun Image Represent. 2019;58:380–399. doi:10.1016/j.jvcir.2018.12.022
7. Christlein V, Riess C, Jordan J, Riess C, Angelopoulou E. An evaluation of popular copy-move
forgery detection approaches. IEEE Trans Inf Forensics Secur. 2012;7(6):1841–1854.
8. Fridrich AJ, Soukal BD, Lukáš AJ. Detection of copy-move forgery in digital images.
Proceedings of Digital Forensic Research Workshop; 2003.
9. Cao Y, Gao T, Fan L, Yang Q. A robust detection algorithm for copy-move forgery in digital
images. Forensic Sci Int. 2012;214(1–3):33–43. doi:10.1016/j.forsciint.2011.07.015
10. Zhao J, Guo J. Passive forensics for copy-move image forgery using a method based on DCT
and SVD. Forensic Sci Int. 2013;233(1–3):158–166. doi:10.1016/j.forsciint.2013.09.013
11. Shao H, Yu T, Xu M, Cui W. Image region duplication detection based on circular window
expansion and phase correlation. Forensic Sci Int. 2012;222(1–3):71–82. doi:10.1016/j.
forsciint.2012.05.002
12. Zhang J, Feng Z, Su Y. A new approach for detecting copy-move forgery in digital images.
2008 11th IEEE Singapore International Conference on Communication Systems; 2008. p.
362–366.
13. Bin Y, Xingming S, Xianyi C, Zhang J, Xu L. An efficient forensic method for copy–move
forgery detection based on DWT-FWHT. Radioengineering. 2013;22(4).
14. Muhammad G, Hussain M, Bebis G. Passive copy move image forgery detection using
undecimated dyadic wavelet transform. Digital Invest. 2012;9(1):49–57. doi:10.1016/j.
diin.2012.04.004
15. Peng F, Nie Y-Y, Long M. A complete passive blind image copy-move forensics scheme based
on compound statistics features. Forensic Sci Int. 2011;212(1–3):e21–e25. doi:10.1016/j.
forsciint.2011.06.011
16. Lee J-C, Chang C-P, Chen W-K. Detection of copy–move image forgery using histogram of
orientated gradients. Inf Sci. 2015;321:250–262. doi:10.1016/j.ins.2015.03.009
17. Ryu S-J, Kirchner M, Lee M-J, Lee H-K. Rotation invariant localization of duplicated image
regions based on Zernike moments. IEEE Trans Inf Forensics Secur. 2013;8(8):1355–1370.
doi:10.1109/TIFS.2013.2272377
18. Cozzolino D, Poggi G, Verdoliva L. Efficient dense-field copy–move forgery detection. IEEE
Trans Inf Forensics Secur. 2015;10(11):2284–2297. doi:10.1109/TIFS.2015.2455334
19. Hayat K, Qazi T. Forgery detection in digital images via discrete wavelet and discrete cosine
transforms. Comput Electr Eng. 2017;62:448–458. doi:10.1016/j.compeleceng.2017.03.013
20. Mahmood T, Mehmood Z, Shah M, Saba T. A robust technique for copy-move forgery
detection and localization in digital images via stationary wavelet and discrete cosine
transform. J Visual Commun Image Represent. 2018;53:202–214. doi:10.1016/j.
jvcir.2018.03.015
21. Soni B, Das PK, Thounaojam DM. Geometric transformation invariant block based copy-move
forgery detection using fast and efficient hybrid local features. J Inf Secur Appl.
2019;45:44–51. doi:10.1016/j.jisa.2019.01.007
22. Meena KB, Tyagi V. A copy-move image forgery detection technique based on tetrolet
transform. J Inf Secur Appl. 2020;52:102481. doi:10.1016/j.jisa.2020.102481
23. Cozzolino D, Poggi G, Verdoliva L. Copy-move forgery detection based on patchmatch. 2014
IEEE International Conference on Image Processing (ICIP); 2014. p. 5312–5316.
24. Wang X-Y, Jiao L-X, Wang X-B, Yang H-Y, Niu P-P. Copy-move forgery detection based on
compact color content descriptor and Delaunay triangle matching. Multimedia Tools Appl.
2019;78(2):2311–2344.
25. Pan X, Lyu S. Region duplication detection using image feature matching. IEEE Trans Inf
Forensics Secur. 2010;5(4):857–867. doi:10.1109/TIFS.2010.2078506
26. Ardizzone E, Bruno A, Mazzola G. Copy–move forgery detection by matching triangles of
keypoints. IEEE Trans Inf Forensics Secur. 2015;10(10):2084–2094. doi:10.1109/
TIFS.2015.2445742
27. Isaac MM, Wilscy M. Copy-move forgery detection based on Harris corner points and BRISK.
Proceedings of the Third International Symposium on Women in Computing and Informatics;
2015. p. 394–399.
28. Niyishaka P, Bhagvati C. Copy-move forgery detection using image blobs and BRISK feature.
Multimedia Tools Appl. 2020;79(35):26045–26059. doi:10.1007/s11042-020-09225-6
29. Zhu Y, Shen X, Chen H. Copy-move forgery detection based on scaled ORB. Multimedia Tools
Appl. 2016;75(6):3221–3233. doi:10.1007/s11042-014-2431-2
30. Yang F, Li J, Lu W, Weng J. Copy-move forgery detection based on hybrid features. Eng Appl
Artif Intell. 2017;59:73–83. doi:10.1016/j.engappai.2016.12.022
31. Dhivya S, Sangeetha J, Sudhakar B. Copy-move forgery detection using SURF feature extraction and SVM supervised learning technique. Soft Comput. 2020:1–12.
32. Meena KB, Tyagi V. A hybrid copy-move image forgery detection technique based on
Fourier-Mellin and scale invariant feature transforms. Multimedia Tools Appl. 2020b;79
(11):8197–8212. doi:10.1007/s11042-019-08343-0
33. Uma S, Sathya PD. Copy-move forgery detection of digital images using football game optimization. Aust J Forensic Sci. 2020:1–22.
34. Bi X, Pun C-M. Fast copy-move forgery detection using local bidirectional coherency error
refinement. Pattern Recognit. 2018;81:161–175. doi:10.1016/j.patcog.2018.03.028
35. Yang J, Liang Z, Gan Y, Zhong J. A novel copy-move forgery detection algorithm via two-stage
filtering. Digital Signal Process. 2021;113:103032. doi:10.1016/j.dsp.2021.103032
36. Buoncompagni S, Maio D, Maltoni D, Papi S. Saliency-based keypoint selection for fast object
detection and matching. Pattern Recognit Lett. 2015;62:32–40. doi:10.1016/j.
patrec.2015.04.019
37. Lowe DG. Object recognition from local scale-invariant features. Proceedings of the Seventh
IEEE International Conference on Computer Vision; 1999. Vol. 2, p. 1150–1157.
38. Alcantarilla PF, Bartoli A, Davison AJ. KAZE features. In: Fitzgibbon A, Lazebnik S, Perona P, Sato Y, Schmid C, editors. Computer vision – ECCV 2012. Berlin (Heidelberg): Springer; 2012. p. 214–227.
39. Mukherjee P, Lall B. Saliency and KAZE features assisted object segmentation. Image Vis
Comput. 2017;61:82–97. Available from: https://www.sciencedirect.com/science/article/pii/
S0262885617300537
40. Verma A, Meenpal T, Acharya B. Object proposals based on variance measure. In: Das AK, Nayak J, Naik B, Pati SK, Pelusi D, editors. Computational intelligence in pattern recognition. Singapore: Springer Singapore; 2020. p. 307–320.
41. Tralic D, Zupancic I, Grgic S, Grgic M. CoMoFoD—new database for copy-move forgery detection. Proceedings ELMAR-2013; 2013. p. 49–54.
42. Amerini I, Ballan L, Caldelli R, Del Bimbo A, Serra G. A SIFT-based forensic method for copy–
move attack detection and transformation recovery. IEEE Trans Inf Forensics Secur. 2011;6
(3):1099–1110. doi:10.1109/TIFS.2011.2129512
43. Hashmi MF, Anand V, Keskar AG. Copy-move image forgery detection using an efficient and
robust method combining un-decimated wavelet transform and scale invariant feature
transform. Aasri Procedia. 2014;9:84–91.
44. Liu Y, Guan Q, Zhao X. Copy-move forgery detection based on convolutional kernel network.
Multimedia Tools Appl. 2018;77(14):18269–18293. doi:10.1007/s11042-017-5374-6
45. Niyishaka P, Bhagvati C. Digital image forensics technique for copy-move forgery detection using DoG and ORB. International Conference on Computer Vision and Graphics; 2018. p. 472–483.
46. Ojeniyi JA, Adedayo BO, Ismaila I, Shafi’i AM. Hybridized technique for copy-move forgery
detection using discrete cosine transform and speeded-up robust feature techniques.
Int J Image Graphics Signal Process. 2018;11(4):22. doi:10.5815/ijigsp.2018.04.03
47. Soni B, Das PK, Thounaojam DM. Keypoints based enhanced multiple copy-move forgeries detection system using density-based spatial clustering of application with noise clustering algorithm. IET Image Process. 2018;12(11):2092–2099. doi:10.1049/iet-ipr.2018.5576
48. El Biach FZ, Iala I, Laanaya H, Minaoui K. Encoder-decoder based convolutional neural networks for image forgery detection. Multimedia Tools Appl. 2021:1–18.
49. Mahmood T, Mehmood Z, Shah M, Khan Z. An efficient forensic technique for exposing region
duplication forgery in digital images. Appl Intell. 2018;48(7):1791–1801. doi:10.1007/s10489-
017-1038-5
50. Malviya AV, Ladhake SA. Pixel based image forensic technique for copy-move forgery detection using auto color correlogram. Procedia Comput Sci. 2016;79:383–390. doi:10.1016/j.procs.2016.03.050
51. Soni B, Das PK, Thounaojam DM. Copy-move tampering detection based on local binary
pattern histogram Fourier feature. Proceedings of the 7th International Conference on
Computer and Communication Technology; 2017. p. 78–83.
52. Zandi M, Mahmoudi-Aznaveh A, Talebpour A. Iterative copy-move forgery detection based
on a new interest point detector. IEEE Trans Inf Forensics Secur. 2016;11(11):2499–2512.
doi:10.1109/TIFS.2016.2585118