International Journal of Advanced Research in Engineering and Technology (IJARET), ISSN 0976 – 6480 (Print), ISSN 0976 – 6499 (Online), Volume 5, Issue 2, February (2014), pp. 101-108, © IAEME
A REVIEW ON DIFFERENT TECHNIQUES ON IMAGE FORGERY
DETECTION AND REMOVAL OF TAMPERED REGION
Aliya M. Salim 1, Dimple Shajahan 2
1 (Computer Science and Engineering, TKM College of Engineering, Kollam, India)
2 (Computer Science and Engineering, TKM College of Engineering, Kollam, India)
ABSTRACT
Today, the manipulation of digital images has become easy due to powerful computers, advanced photo-editing software, the easy availability of high-resolution capture devices, and free access to the internet. For the same reason, verifying the authenticity and integrity of digital data, and detecting traces of tampering without any prior knowledge of the image content, has become an important research area in the field of image processing. To develop and improve techniques for detecting forgeries, a separate discipline, called image forensic science, has been formed to deal with such techniques. This paper discusses a few techniques used for the detection of forged images, and for the removal of the forged region if the image is found to be forged. The present status of the proposed method is also discussed in this paper.
Keywords: Image Forensic Science, Image tampering detection, Blind methods, Active methods,
Copy-move, Image Splicing.
I. INTRODUCTION
Due to the widespread availability of digital cameras and the rise of the internet as a means of communication, digital images have become an important medium for conveying visual information. But the ease with which digital images can be manipulated by photo-editing software has created an environment where the authenticity of digital information is often unreliable. To prevent such manipulations (forgeries) from being passed off as unaltered originals, detection techniques have been developed.
Image forensics has emerged as a separate area in order to study, develop and improve techniques to detect such forgeries. Image forensic science is a multidisciplinary area that aims at acquiring important information on the history of a digital image. Verifying the integrity of the
digital images, and detecting traces of tampering without using any pre-extracted or pre-embedded protective information, have become the main objectives of tampering-detection techniques in image forensic science, which mainly uses image-processing methods. The trustworthiness of digital photographs has become an inevitable part of human life. They are used in many areas such as forensic and criminal investigation, surveillance systems, medical imaging, journalism, etc. Image forgery has its origins a long way back in history, but in today's digital age it is possible to change the information represented by an image very easily, without leaving any obvious traces of tampering. Examples of forgeries that have taken place over the years are shown in the figure (Fig 1).
Despite this, no system exists which accomplishes the image tampering detection task effectively and accurately. The objective of this paper is to discuss the main approaches involved in detecting tampering traces in a given image, depending upon the type of manipulation (forgery) performed on it.
Fig 1: Example of a forged image. The first two images are merged (image splicing) to obtain the third, faked image
Section II discusses the different methods and techniques by which a forged image can be detected, and the different ways in which an image can be manipulated. Section III discusses a few papers that use blind methods, through which the proposed system has been reached. Section IV gives a brief explanation of the proposed system, Section V discusses the current state of the field, and Section VI concludes the paper.
II. TECHNIQUES INVOLVED
The digital information revolution and concerns over multimedia security have generated several approaches to digital forensics and tampering detection. Image forgery detection tasks can be divided into active and passive (or blind) approaches. The active forgery detection techniques can be divided into the data-hiding approach (e.g., watermarks) and the digital-signature approach. The data-hiding approach uses methods utilising secondary data embedded into the
image, such as digital watermarking and digital signatures. Digital watermarking techniques assume that a digital watermark (a known authentication code) has been inserted into the image at the source side, and this code is then used to verify the authenticity of the digital information at detection time. Most often these watermarks are inseparable from the image content they are embedded in, and they undergo the same manipulations as the image. The drawback of techniques involving this approach is that the code must be inserted into the image at the time of recording, or later by an authorized person, which usually requires a specialized camera and/or subsequent processing of the image. Another limitation is that they might degrade the image. The digital-signature approach mainly involves extracting unique features from the image at the source and encoding them to form digital signatures, which are later used to verify the images. Techniques involving this approach suffer the same limitations as the watermarking techniques.
The passive approaches are also known as blind approaches, since they do not have any prior information about the image features. These methods use only the image itself to assess its authenticity or integrity. Image forensic techniques using the passive approach work on the assumption that although forgeries leave no visible traces of tampering, they may alter the statistical properties, often referred to as the digital fingerprints, of the image, which characterize its life cycle from acquisition to processing. Alterations to the image distort these fingerprints, thereby introducing inconsistencies into the image. Passive techniques verify these inconsistencies in order to detect the tampered regions, if any. A more sophisticated method would produce a map indicating the trustworthiness of each pixel, in which case no manual selection of the suspected region of the image is necessary. These passive forgery detection techniques work in the absence of any protecting techniques and without using any prior information about the image. To detect traces of tampering, the blind methods use the image function and the fact that forgeries can introduce specific detectable changes into the image (e.g., statistical changes).
There are several different types of forgeries applied to images. Those commonly used include copy-move forgery, image splicing, image retouching, and forgeries exploiting JPEG compression properties, lighting inconsistencies, projective geometry and transformations, chromatic aberration, the color pixel array and inter-pixel correlations, noise variations, sharpening and blurring, etc.
Copy-move forgery, also known as region duplication forgery, refers to the type of manipulation in which a part of an image is copied and pasted into another part of the same image [6]. Image splicing refers to the forgery technique in which two or more images are combined by copying and pasting parts of one image into another [7]. Another type of forgery is image retouching, in which an image is enhanced or de-blurred to improve its visual quality. Forgery using compression is another method, in which the image, after manipulation, is JPEG (DCT) compressed to make the faked image seem real [9].
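The copy-move case can be illustrated with a deliberately simplified sketch (our own illustration, not any of the surveyed algorithms): exact matching of sliding blocks flags duplicated regions. Practical detectors match robust block features instead (e.g., DCT or polar cosine transform coefficients, as in [6]) so that recompression and noise do not break the match.

```python
def detect_copy_move(img, b=4):
    # Slide a b-by-b window over a 2-D grayscale image (list of lists);
    # identical blocks whose positions differ by more than b in either
    # direction are flagged as candidate copy-move pairs.
    h, w = len(img), len(img[0])
    seen = {}
    pairs = []
    for i in range(h - b + 1):
        for j in range(w - b + 1):
            key = tuple(tuple(img[i + r][j:j + b]) for r in range(b))
            for (pi, pj) in seen.get(key, []):
                if abs(pi - i) > b or abs(pj - j) > b:
                    pairs.append(((pi, pj), (i, j)))
            seen.setdefault(key, []).append((i, j))
    return pairs
```

Because matching is exact, this toy only finds bit-identical duplicates; it is the feature-based matching in the real methods that buys robustness.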
Sharpening and blurring, noise variations, the color pixel array and inter-pixel correlation, chromatic aberration, projective geometry and transformations, etc., are cues exploited by the passive (blind) approach to detect a tampered image.
III. LITERATURE REVIEW
Cao et al. [3] propose a novel framework for accurate detection of demosaicing regularity from different source images. The paper discusses reverse classification of the demosaiced samples into several categories, and then estimates the underlying demosaicing formulas for each category based
on partial second-order derivative correlation models, which detect both the intra-channel and the cross-channel demosaicing correlation. An expectation-maximization reverse classification scheme is used to iteratively resolve the ambiguous demosaicing axes in order to best reveal the implicit grouping adopted by the underlying demosaicing algorithm. The drawback of this technique is that noise-variation detection still needs to be incorporated.
Dirik and Memon [4] propose a detection method that uses the artifacts produced by color filter array (CFA) processing in most digital cameras. Two CFA features are extracted and techniques are developed based on them, each computing a single feature and using a simple threshold-based classifier. The limitation of this technique is that it is sensitive to strong JPEG re-compression and resizing.
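The flavour of a CFA-based feature can be conveyed with a toy model (our own simplification, not Dirik and Memon's exact feature): in a bilinearly demosaiced green channel, pixels at interpolated Bayer sites are predicted almost perfectly by their four neighbours, while pixels at sensor sites are not, so the ratio of the two prediction errors acts as a single thresholdable feature.

```python
def demosaic_green(raw):
    # Toy bilinear demosaicer: green is sampled where (i + j) is even;
    # interior missing sites get the mean of their four neighbours.
    h, w = len(raw), len(raw[0])
    g = [row[:] for row in raw]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if (i + j) % 2 == 1:
                g[i][j] = (raw[i-1][j] + raw[i+1][j]
                           + raw[i][j-1] + raw[i][j+1]) / 4.0
    return g

def cfa_feature(img):
    # Mean absolute 4-neighbour prediction error on the interpolated
    # lattice divided by the error on the sensor lattice: near 0 when the
    # CFA interpolation trace is intact, near 1 when it has been destroyed.
    h, w = len(img), len(img[0])
    err, cnt = [0.0, 0.0], [0, 0]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            pred = (img[i-1][j] + img[i+1][j] + img[i][j-1] + img[i][j+1]) / 4.0
            k = (i + j) % 2  # k = 1 on the interpolated lattice
            err[k] += abs(img[i][j] - pred)
            cnt[k] += 1
    return (err[1] / cnt[1]) / (err[0] / cnt[0] + 1e-9)
```

Tampering that resamples or repaints a region destroys this lattice asymmetry there, which is why a per-region version of such a feature can localize forgeries.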
Mahdian and Saic [5] proposed a forgery detection technique in which image noise inconsistencies are used to detect traces of tampering, together with a segmentation method that detects changes in noise level. A commonly used tool to conceal the traces of tampering is the addition of locally random noise to the altered image regions; such noise degradation is a main cause of failure of many active and passive image forgery detection methods. Typically, the amount of noise is uniform across an authentic image, so adding locally random noise introduces inconsistencies in the image's noise, and the detection of various noise levels in an image may therefore signify tampering. The proposed technique divides an investigated image into partitions with homogeneous noise levels. The local noise estimation is based on tiling the high-pass wavelet coefficients at the highest resolution with non-overlapping blocks; the noise standard deviation of each block is estimated using the widely used median-based method. Once estimated, the standard deviation of noise is used as the homogeneity condition to segment the investigated image into several homogeneous sub-regions. This method can be used as a supplement to other blind forgery detection tasks, but it fails whenever the noise degradation is very small.
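The median-based noise estimator at the heart of this method is simple enough to sketch. The following toy version (our own simplification: one level of Haar HH coefficients and fixed non-overlapping tiles) mirrors the idea of mapping local noise standard deviation across the image:

```python
def haar_hh(img):
    # Diagonal (HH) Haar wavelet coefficients at the finest scale.
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] - img[2*i][2*j+1]
              - img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 2.0
             for j in range(w // 2)]
            for i in range(h // 2)]

def estimate_sigma(coeffs):
    # Median absolute deviation estimator: sigma ~ median(|HH|) / 0.6745.
    flat = sorted(abs(c) for row in coeffs for c in row)
    n = len(flat)
    med = flat[n // 2] if n % 2 else (flat[n//2 - 1] + flat[n//2]) / 2.0
    return med / 0.6745

def noise_map(img, tile=2):
    # Per-tile noise estimates; inconsistent tiles are tampering candidates.
    hh = haar_hh(img)
    sigmas = {}
    for i in range(0, len(hh), tile):
        for j in range(0, len(hh[0]), tile):
            block = [r[j:j + tile] for r in hh[i:i + tile]]
            sigmas[(i, j)] = estimate_sigma(block)
    return sigmas
```

In the paper the per-block estimates then drive a segmentation into homogeneous-noise regions; here a simple threshold on the map would play that role.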
Amerini et al. [8] propose a copy-move detection algorithm based on SIFT features. A novel approach is presented for copy-move forgery detection and localization based on the J-Linkage algorithm, which performs robust clustering in the space of the geometric transformation. In this kind of method, SIFT matching is often followed by a clustering procedure to group key-points that are spatially close. The main novelty of the work is a clustering procedure that operates in the domain of the geometric transformation, properly extended to deal with multiple cloning as well. The disadvantage of the method is that the time and space complexity of SIFT feature extraction is very high, and human interpretation of the result is mandatory.
Gallagher and Chen [10] introduce a concept based on demosaicing features. Rather than focusing on statistical differences between image textures, they recognize that images from digital cameras contain traces of resampling as a result of using a color filter array with demosaicing algorithms. Estimation of the actual demosaicing parameters is not necessarily considered; rather, the presence of demosaicing is detected. It is the in-camera processing (rather than the image content) that distinguishes digital camera photographs from computer graphics, and the presence of demosaicing is the check used by this detection algorithm. The drawback is that a malicious computer animator wishing to add an element of realism to her computer-graphics images could simply insert a software module to simulate the effect of color filter array sampling and then apply demosaicing. The algorithm would fail in that case, and is therefore not an effective way to deal with such attacks.
Mahdian, Saic and Nedbal [11] introduce a statistical approach in which fingerprints of true acquisition devices are extracted to form reference data sets, and the originality of the image is assessed using the header information of the image file. The method fails when more than one camera model produces the same fingerprint.
In [12], Hany Farid proposes a technique to detect whether part of an image was initially compressed at a lower quality than the rest of the image; it is applicable to images of high and low quality as well as resolution. The concept rests on the following basis: when creating a digital forgery, it is often necessary to combine several images, for example when compositing one person's head onto another person's body. If these images were originally of different JPEG compression quality, then the digital composite may contain a trace of the original compression qualities, which this approach exploits. The drawback of this method is that the complexity of the analysis is very high. Another limitation is that it is only effective when the tampered region is of lower quality than the image into which it was inserted. The advantage of this approach is that it is effective on low-quality images and can detect relatively small regions that have been altered.
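The idea behind this "JPEG ghost" can be reproduced with a toy one-dimensional model (ours; scalar re-quantization stands in for JPEG's per-coefficient quantization, and the function names are our own): re-saving a block at a series of candidate steps makes the difference dip to zero at the step the block was originally quantized with (and, in this toy, at that step's divisors), which is the ghost the detector looks for.

```python
def quantize(block, step):
    # Stand-in for JPEG coefficient quantization: round to multiples of step.
    return [round(v / step) * step for v in block]

def ghost_curve(block, steps):
    # Re-quantize at each candidate step and record the mean squared
    # difference; a dip toward zero reveals a previous quantization step.
    curve = {}
    for s in steps:
        requant = quantize(block, s)
        curve[s] = sum((a - b) ** 2 for a, b in zip(block, requant)) / len(block)
    return curve
```

In the real method the curve is computed per spatial block over JPEG qualities, so a pasted region that was previously saved at a lower quality shows a localized dip the rest of the image lacks.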
He et al. [7] proposed a spliced-image detection technique based on Markov features in the DCT and DWT domains. The proposed feature vector consists of two kinds of Markov features generated from transition probability matrices: expanded Markov features in the DCT domain, developed to capture the correlation between DCT coefficients, and Markov features in the DWT domain, constructed to characterize three kinds of dependency among wavelet coefficients across positions, scales and orientations. The feature-selection method SVM-RFE is then used to perform feature reduction, making the computational cost more manageable. Finally, the dimensionality-reduced feature vector is used for image splicing detection with an SVM as the classifier.
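The transition probability matrices underlying such Markov features are easy to sketch. The following toy version (our own, horizontal direction only, on integer-rounded coefficients) clips the difference array to [-T, T] in the spirit of that construction:

```python
def markov_tpm(coeffs, T=3):
    # Horizontal difference array of a 2-D integer coefficient array,
    # clipped to [-T, T].
    diff = [[max(-T, min(T, row[j] - row[j + 1])) for j in range(len(row) - 1)]
            for row in coeffs]
    size = 2 * T + 1
    counts = [[0] * size for _ in range(size)]
    for row in diff:
        for j in range(len(row) - 1):
            counts[row[j] + T][row[j + 1] + T] += 1
    # Normalize each row into transition probabilities P(next | current).
    tpm = []
    for r in counts:
        tot = sum(r)
        tpm.append([c / tot if tot else 0.0 for c in r])
    return tpm
```

The (2T+1)^2 entries of such matrices, computed in the DCT and DWT domains, are what get stacked into the feature vector fed to the SVM.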
Liu et al. [15] discuss spliced-image detection with artificially blurred boundaries, based on image edge analysis and blur detection. The image edges are divided into three types based on the coefficients of the non-subsampled contourlet transform, and a six-dimensional feature is extracted for each edge point, composed of two non-subsampled contourlet coefficients and four statistics based on phase congruency. Three support vector machines, one per edge type, are then trained and used to detect blurred edge points, and a local feature is defined to distinguish artificially blurred edge points from defocused ones. The proposed method can be used to detect either image blur or image splicing with an artificially blurred boundary. Its drawback is that its accuracy cannot be guaranteed when a high percentage of human participation is involved.
Another concept, introduced by Mahdian and Saic [13], deals with cyclostationary analysis of geometric transformations. It makes it possible to employ the well-developed theory and efficient methods of cyclostationarity for blind analysis of an image's history with respect to geometric transformations. A cyclostationarity detection method is also proposed, showing that the traces of geometric transformations in an image can be detected and the specific parameters of the transformation estimated. The method is based on the fact that a cyclostationary signal has a frequency spectrum correlated with a shifted version of itself. The visibility of the spectral peaks depends upon the characteristics of the image, and detection of down-sampled images may not be accurate.
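The trace that makes resampled images cyclostationary can be made concrete with a one-dimensional toy (our own illustration, not the paper's estimator): after 2x linear upsampling, every interpolated sample is exactly the mean of its neighbours, so a simple prediction residual vanishes with a fixed period, and it is this periodicity that appears as correlated peaks in the frequency spectrum.

```python
def upsample2x(x):
    # Linear 2x interpolation: even outputs copy samples,
    # odd outputs average neighbouring samples.
    y = []
    for i in range(len(x) - 1):
        y.append(float(x[i]))
        y.append((x[i] + x[i + 1]) / 2.0)
    y.append(float(x[-1]))
    return y

def interp_residual(y):
    # Residual of the "each sample is the mean of its neighbours" predictor;
    # entry m corresponds to y[m + 1].
    return [y[i] - (y[i - 1] + y[i + 1]) / 2.0 for i in range(1, len(y) - 1)]
```

A DFT of the absolute residual would show a strong peak at the interpolation period, which is the kind of spectral signature the cyclostationarity detector estimates transformation parameters from.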
He and Rhemann et al. [14] introduce a concept based on alpha matting, the problem of softly extracting the foreground from an image. Given a trimap (specifying known foreground/background and unknown pixels), a straightforward way to compute the alpha value is to sample some known foreground and background colors for each unknown pixel. Existing sampling-based matting methods often collect samples near the unknown pixels only, and fail if good samples cannot be found nearby. In this paper, a global sampling method is proposed that uses all samples available in the image. A simple but effective cost function is defined to tackle the ambiguity in the sample-selection process. To handle the computational complexity introduced by the large number of samples, the sampling task is posed as a correspondence problem. The
correspondence search is efficiently achieved by generalizing a randomized algorithm previously designed for patch matching. The method fails when the sample-selection criteria are not sufficient to resolve the color ambiguity, which may happen when an unknown pixel can be well explained as a linear combination of two false foreground/background color clusters.
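The per-pixel computation in a sampling-based matte can be sketched as follows (a standard projection formula used in this family of methods, not necessarily He et al.'s exact cost): each candidate foreground/background pair yields an alpha by projecting the observed colour onto the F-B line, plus a fitting cost that a global sampler minimizes over all pairs.

```python
def alpha_from_samples(c, f, b):
    # Project colour c onto the segment between foreground f and background b:
    # alpha = (c - b) . (f - b) / |f - b|^2, clamped to [0, 1].
    num = sum((ci - bi) * (fi - bi) for ci, fi, bi in zip(c, f, b))
    den = sum((fi - bi) ** 2 for fi, bi in zip(f, b))
    return max(0.0, min(1.0, num / den)) if den else 0.0

def sample_cost(c, f, b):
    # Distance from c to its best composite a*f + (1-a)*b; a sampling-based
    # matting method scores every candidate (f, b) pair with a cost like this.
    a = alpha_from_samples(c, f, b)
    return sum((ci - (a * fi + (1 - a) * bi)) ** 2
               for ci, fi, bi in zip(c, f, b)) ** 0.5
```

The color-ambiguity failure mode described above corresponds to a wrong (f, b) pair achieving a cost just as low as the true one.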
Sun et al. [2] proposed a method to remove a particular region of interest. The problem of natural image matting is formulated as one of solving Poisson equations with the matte gradient field. The approach, called Poisson matting, has the following advantages. First, the matte is directly reconstructed from a continuous matte gradient field by solving Poisson equations using boundary information from a user-supplied trimap. Second, by interactively manipulating the matte gradient field with a number of filtering tools, the user can further improve the Poisson matting results locally until the desired output is obtained; the modified local result is seamlessly integrated into the final result. The first limitation is that when the foreground and background colors are very similar, the matting equation becomes ill-conditioned, in which case the underlying structure of the matte cannot easily be distinguished from noise, background or foreground. The second difficulty arises when the matte gradient estimated in global Poisson matting is largely biased from the true values, so that small regions need to be processed for local refinement in local Poisson matting, which increases user interaction. Last, the matte gradients may be highly interwoven with the gradients of the foreground and background within a very small region; effective user interaction is an issue in this difficult situation.
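The reconstruction step of Poisson matting can be sketched with a minimal Gauss-Seidel solver (our own toy: one channel, Dirichlet boundary values standing in for the trimap's known region; real implementations use much faster solvers):

```python
def poisson_reconstruct(div, boundary, iters=500):
    # Gauss-Seidel solve of the discrete Poisson equation lap(alpha) = div
    # on a square grid; the outer ring of `boundary` supplies fixed Dirichlet
    # values, and the interior of `boundary` is the initial guess.
    n = len(boundary)
    a = [row[:] for row in boundary]
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                a[i][j] = (a[i - 1][j] + a[i + 1][j]
                           + a[i][j - 1] + a[i][j + 1] - div[i][j]) / 4.0
    return a
```

Given the divergence of a (possibly user-edited) matte gradient field and known boundary alphas, the solver fills in the interior matte; editing `div` locally is what the paper's interactive filtering tools amount to.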
IV. PROPOSED SYSTEM
Each type of image manipulation is unique, and so are the techniques to uncover its traces. The proposed method consists of two phases. The system makes use of the artifacts produced by color filter arrays to detect tampered regions, and is effective for both copy-move (region duplication) and image-splicing manipulations. The proposed algorithm takes the CFA artifacts as the digital fingerprint used to check the authenticity of the image. The second phase removes the forged region from the tampered image by applying the Poisson matting technique, which utilizes the probability map obtained from the first phase; this map shows the areas of possible manipulation in the image. The expected final output is an image from which the tampered region has been removed.
V. DISCUSSION
As forgeries have become common, the importance of forgery detection has greatly increased. Although many image forgery detection techniques have been proposed and have shown significant promise, robust forgery detection remains difficult. There are at least three major challenges: tampered images with compression, tampered images with noise, and tampered images with rotation. Some of the relevant papers have been discussed here. Forgery detection may be an easy task if the original version is available; otherwise, with blind detection, the task may be very challenging. Detection of objects blurred by manipulation is a common problem across different types of forgeries. Two types of modification are applied when manipulating images, namely local and global modifications. Local modifications are usually used in copy-move forgery or in splicing. Global modifications are predominantly used for illumination and contrast changes, and are usually investigated to detect contrast enhancements that perceptually impact the image. Fig 2 shows the number of publications in IEEE and Elsevier on image forgery detection techniques over the past 13 years.
Figure 2: Number of publications over the last 13 years. Results obtained by submitting the query "Image tampering detection" to the IEEE (http://ieeexplore.ieee.org) and Elsevier (http://www.sciencedirect.com) websites
VI. CONCLUSION
Sophisticated tools and advanced manipulation techniques have made forgery detection challenging. Digital image forensics is still a growing area, and much research remains to be done. There are techniques exhibiting improved detection accuracy but having high computational complexity. Moreover, most of the methods may not be responsive to geometric transformations, such as rotation and scaling. The factor of human perception is also not counted during the development of these techniques. Therefore, there is a need to develop techniques that are automatic, HVS-motivated and effective against geometric transformations. Most of the techniques discussed are handicapped by one or more factors, including limited accuracy, low reliability and high complexity, in addition to sensitivity to various transformations and non-responsiveness to noise.
Current research in passive-blind forgery detection is mainly limited to image tampering detection and can be extended to audio and video. Understanding the perception of visual semantics could also be important in identifying the maliciousness of a forgery.
VII. REFERENCES
[1] Ferrara, Rosa, Piva, 'Image Forgery Localization via Fine-Grained Analysis of CFA Artifacts', IEEE, 2012.
[2] Jian Sun, Jiaya Jia, Tang, Shum, 'Poisson Matting', unpublished.
[3] Hong Cao, 'Accurate Detection of Demosaicing Regularity for Digital Image Forensics', IEEE Transactions on Information Forensics and Security, Vol. 4, No. 4, December 2009, pp. 899-910.
[4] Ahmet Emir Dirik, Nasir Memon, 'Image Tamper Detection Based on Demosaicing Artifacts', IEEE, 2010.
[5] Babak Mahdian, Stanislav Saic, 'Using Noise Inconsistencies for Blind Image Forensics', Elsevier, Image and Vision Computing 27 (2009), pp. 1497-1503.
[6] Yuenan Li, 'Image Copy-Move Forgery Detection Based on Polar Cosine Transform and Approximate Nearest Neighbor Searching', Elsevier, 2013, pp. 59-67.
[7] Zhongwei He, Wei Lu, Wei Sun, Jiwu Huang, 'Digital Image Splicing Detection Based on Markov Features in DCT and DWT Domain', Elsevier, 2012, pp. 4292-4299.
[8] Irene Amerini, Lamberto Ballan, Roberto Caldelli, Alberto Del Bimbo, Luca Del Tongo, Giuseppe Serra, 'Copy-Move Forgery Detection and Localization by Means of Robust Clustering with J-Linkage', Signal Processing: Image Communication 28 (2013), pp. 659-669.
[9] Yi-Lei Chen, Chiou-Ting Hsu, 'Detecting Recompression of JPEG Images via Periodicity Analysis of Compression Artifacts for Tampering Detection', IEEE Transactions on Information Forensics and Security, Vol. 6, No. 2, June 2011.
[10] Andrew C. Gallagher, Tsuhan Chen, 'Image Authentication by Detecting Traces of Demosaicing', unpublished.
[11] Babak Mahdian, Radim Nedbal, Stanislav Saic, 'Blind Verification of Digital Image Originality: A Statistical Approach', IEEE Transactions on Information Forensics and Security, Vol. 8, No. 9, September 2013, pp. 1531-1540.
[12] Hany Farid, 'Exposing Digital Forgeries from JPEG Ghosts', IEEE Transactions on Information Forensics and Security, Vol. 4, No. 1, March 2009, pp. 154-160.
[13] Babak Mahdian, Stanislav Saic, 'Cyclostationary Analysis Applied to Image Forensics', IEEE, 2009, pp. 279-284.
[14] Kaiming He, Christoph Rhemann, Carsten Rother, Xiao Tang, Jian Sun, 'A Global Sampling Method for Alpha Matting', pp. 2049-2056.
[15] Guangjie Liu, Junwen Wang, Shiguo Lian, Yuewei Dai, 'Detect Image Splicing with Artificial Blurred Boundary', Elsevier, 2013, pp. 2647-2659.
[16] Prabakaran G., Bhavani R. and Kanimozhi K., 'Two Secret Image Hiding Method Using SVD and DWT Techniques', International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 2, 2013, pp. 102-107, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[17] Sheetal Kusal and Jyoti Rao, 'Robust Image Alignment Using SVM Clustering for Tampering Detection', International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 5, 2013, pp. 147-154, ISSN Print: 0976-6367, ISSN Online: 0976-6375.