This paper proposes a new algorithm for removing large objects from digital images and filling the removed region in a visually plausible manner. It combines exemplar-based texture synthesis with the structure propagation of image inpainting. The key insights are that exemplar-based texture synthesis can replicate both texture and structure if the filling order is determined by propagating confidence in pixel values, as in inpainting, and that a single algorithm can synthesize both pure and composite textures. The proposed algorithm efficiently synthesizes texture and propagates structure into the removed region by filling patches in order of a priority that combines a confidence term with a data term computed from local image gradients. This lets it generate natural-looking results while avoiding the blurring of previous approaches.
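The priority-driven filling order summarized above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the function name, the fixed patch half-width, and the normalization constant `alpha` are assumptions for the sketch.

```python
import numpy as np

def patch_priority(confidence, isophote, normal, p, half=4, alpha=255.0):
    """Illustrative priority P(p) = C(p) * D(p) for a boundary pixel p.

    confidence: 2-D array, 1 for known pixels, decaying inside the fill.
    isophote:   gradient rotated 90 degrees at p (structure direction).
    normal:     unit normal to the fill-region boundary at p.
    """
    r, c = p
    patch = confidence[r - half:r + half + 1, c - half:c + half + 1]
    C = patch.sum() / patch.size                    # confidence term
    D = abs(float(np.dot(isophote, normal))) / alpha  # data (structure) term
    return C * D
```

Patches on the boundary whose isophotes flow into the hole get a large data term, so strong edges are continued first; that is the mechanism behind the structure propagation described above.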
Image Enhancement and Restoration by Image Inpainting (IJERA Editor)
Inpainting is the process of reconstructing lost or deteriorated parts of an image from background information, i.e., it fills a missing or damaged region using spatial information from its neighborhood. Inpainting algorithms have numerous applications; they are widely used for the restoration of old films and for object removal in digital photographs. The main goal is to modify the damaged region so that the inpainted result is undetectable to ordinary observers who are unfamiliar with the original image. The proposed work presents an image inpainting process for image enhancement and restoration using structural, textural, and exemplar techniques, and presents an efficient algorithm that combines the advantages of these approaches. We first note that exemplar-based texture synthesis contains the essential process required to replicate both texture and structure; the success of structure propagation, however, is highly dependent on the order in which the filling proceeds. We propose a best-first algorithm in which confidence in the synthesized pixel values is propagated in a manner similar to the propagation of information in inpainting. The actual color values are computed using exemplar-based synthesis. Computational efficiency is achieved by a block-based sampling process.
Inpainting refers to the art of restoring lost parts of an image and reconstructing them from background information; that is, image inpainting reconstructs lost or deteriorated parts of an image using information from the surrounding areas. In fine-art museums, inpainting of degraded paintings is traditionally carried out by professional artists and is usually very time consuming. The purpose of inpainting is to reconstruct missing regions in a visually plausible manner, so that the result seems reasonable to the human eye. Several approaches have been proposed to this end.
This paper gives an overview of different techniques of image inpainting. The proposed work covers PDE-based inpainting algorithms and texture-synthesis-based inpainting algorithms, and presents a brief comparative survey of these two techniques.
A CONCERT EVALUATION OF EXEMPLAR BASED IMAGE INPAINTING ALGORITHMS FOR NATURA... (cscpconf)
Image inpainting derives from the restoration of artworks and has been applied to repair ancient art works. Inpainting is a technique for restoring a partially damaged or occluded image in an undetectable way. It fills the damaged part of an image by employing information from the undamaged part, according to some rules, to make it look "reasonable" to human eyes. Digital image inpainting is a relatively new area of research, but numerous and varied approaches to the inpainting problem have been proposed since the concept was first introduced. This paper analyzes and compares the recent exemplar-based inpainting algorithms by Minqin Wang and by Hao Guo et al. A number of examples on real images are demonstrated to evaluate the results of the algorithms using the Peak Signal-to-Noise Ratio (PSNR).
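PSNR, the metric used for the evaluation above, has a standard definition that can be stated briefly; the sketch below assumes 8-bit images (peak value 255).

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between an original image and its
    inpainted result; higher means the fill is closer to ground truth."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A lower MSE (hence higher PSNR) over the filled region is what the comparisons in these papers report.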
Region filling and object removal by exemplar based image inpainting (Woonghee Lee)
To remove objects from a picture, or to restore a picture with scratches or holes, Criminisi et al. proposed an algorithm that combines texture synthesis and inpainting. I made this slide deck to introduce the algorithm in a class presentation, referring to a slide at http://bit.ly/1Ng7DNt. I hope it helps you understand the algorithm. Thank you.
OBIA on Coastal Landform Based on Structure Tensor (csandit)
This paper presents an OBIA (object-based image analysis) method based on the structure tensor to identify complex coastal landforms. It first builds a Hessian matrix by Gabor filtering and computes a multiscale structure tensor, extracts edge information from the trace of the structure tensor, and performs a watershed segmentation of the image. It then builds textons and creates a texton histogram. Finally, it obtains the results by maximum-likelihood classification, with KL divergence as the similarity measure. The findings show that the structure tensor can capture multiscale, all-direction information with little data redundancy, and that the described method achieves high classification accuracy.
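The edge-from-trace idea above can be illustrated with the plain structure tensor; this sketch deliberately omits the paper's Gabor filtering, Hessian construction, and multiscale smoothing, and uses only per-pixel gradients.

```python
import numpy as np

def structure_tensor_trace(img):
    """Edge strength as the trace of the (unsmoothed) 2x2 structure tensor
    J = [[Ix*Ix, Ix*Iy], [Ix*Iy, Iy*Iy]]; tr(J) = Ix^2 + Iy^2, which is
    large along intensity edges and zero in flat regions."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)        # gradients along rows, then columns
    return Ix * Ix + Iy * Iy         # trace of J at every pixel
```

A watershed transform run on this trace map (after smoothing) is one common way to get the edge-driven segmentation the paper describes.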
A combined method of fractal and glcm features for mri and ct scan images cla... (sipij)
Fractal analysis has been shown to be useful in image processing for characterizing shape and gray-scale complexity. The fractal feature is a compact descriptor that gives a numerical measure of the degree of irregularity of medical images; on its own, however, it does not capture local image structure. In this paper, we combine this box-counting-based parameter with GLCM features. The combination has produced good results, especially in classifying medical textures from MRI and CT scan images of trabecular bone, and has the potential to improve clinical diagnostic tests for osteoporosis pathologies.
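The box-counting estimate mentioned above can be sketched in a few lines; this is a textbook version for a square binary mask with power-of-two side length, not the paper's exact feature pipeline.

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting fractal dimension estimate for a binary mask:
    count occupied boxes N(s) at dyadic box sizes s, then fit the slope
    of log N(s) against log(1/s). Assumes a square, power-of-two side."""
    mask = np.asarray(mask, dtype=bool)
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # tile the mask into (n/s) x (n/s) boxes of side s
        boxes = mask.reshape(n // s, s, n // s, s)
        occupied = boxes.any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(int(occupied), 1))   # avoid log(0)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

For a completely filled region the estimate approaches 2 (a plane-filling set); rougher trabecular textures yield fractional values, which is what makes the descriptor useful as a feature.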
International Journal of Computational Engineering Research (IJCER) (ijceronline)
The International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research that contributes significantly to scientific knowledge in engineering and technology.
A HYBRID MORPHOLOGICAL ACTIVE CONTOUR FOR NATURAL IMAGES (IJCSEA Journal)
Morphological active contours for image segmentation have become popular due to their low computational complexity coupled with their accurate approximation of the partial differential equations involved in the energy minimization of the segmentation process. In this paper, a morphological active contour which mimics the energy minimization of the popular Chan-Vese Active Contour without Edges is coupled with a morphological edge-driven segmentation term to accurately segment natural images. By using morphological approximations of the energy minimization steps, the algorithm has a low computational complexity. Additionally, the coupling of the edge-based and region-based segmentation techniques allows the proposed method to be robust and accurate. We will demonstrate the accuracy and robustness of the algorithm using images from the Weizmann Segmentation Evaluation Database and report on the segmentation results using the Sorensen-Dice similarity coefficient.
Comparative analysis and evaluation of image imprinting algorithms (Alexander Decker)
This document compares and evaluates two different types of image inpainting algorithms: Marcelo Bertalmio's PDE-based algorithm and Zhaolin Lu et al's exemplar-based algorithm. Both algorithms are tested on images with variable occlusion sizes. The PDE-based algorithm is better at preserving linear structures for small regions but cannot reconnect structures or restore texture in large regions. The exemplar-based algorithm can find proper textures to fill large regions while preserving linear structures. Quantitative evaluation using PSNR shows that the exemplar-based algorithm achieves lower MSE values, especially for larger occlusion sizes. Therefore, the exemplar-based algorithm produces better results overall, particularly for filling in large missing regions of an image.
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESIS (IJCSEA Journal)
The document presents a method for removing large occlusions from images using sparse processing and texture synthesis. It involves decomposing the image into structure and texture images using sparse representations. The occluded regions in the structure image are filled in using sparse reconstruction, which retains image structures. Texture synthesis is then performed on the texture image to fill in the occluded texture. Finally, the reconstructed structure and texture images are combined to produce the occlusion-free output image. The method is shown to effectively remove large occlusions while avoiding blurring and retaining both structures and textures. It outperforms other inpainting methods in terms of visual quality.
An efficient image segmentation approach through enhanced watershed algorithm (Alexander Decker)
This document proposes an efficient image segmentation approach combining an enhanced watershed algorithm and color histogram analysis. The watershed algorithm is applied to preprocessed images after merging the results with an enhanced edge detection. Over-segmentation issues are addressed through a post-processing step applying color histogram analysis to each segmented region, improving overall performance. The document provides background on image segmentation techniques, reviews related work applying watershed algorithms, and discusses challenges like over-segmentation that watershed approaches can face.
11.comparative analysis and evaluation of image imprinting algorithms (Alexander Decker)
This document compares and evaluates two image inpainting algorithms: Marcelo Bertalmio's PDE-based algorithm and Zhaolin Lu et al's exemplar-based algorithm. Through experiments on images with different sized occluded regions, it finds that the PDE-based algorithm cannot reconnect structures or restore textures in large regions, while the exemplar-based algorithm can find patches to fill regions while preserving structures. Quantitative evaluation using PSNR shows the exemplar-based algorithm achieves lower MSE (error) for occlusion sizes from 10 to 40 pixels. The document provides examples comparing output of the two algorithms and discusses parameters needed for each.
A Survey on Exemplar-Based Image Inpainting Techniques (ijsrd.com)
Preceding papers on exemplar-based image inpainting describe several ways to inpaint a destroyed region, such as the Criminisi algorithm, the patch-shifting scheme, and the search-region-prior method. Criminisi's algorithm and Sarawut's patch-shifting scheme need more time to inpaint a damaged region; the proposed method reduces the time complexity by searching only in the region related to the missing portion of the image.
This document summarizes a research paper that proposes a novel approach for enhancing digital images using morphological operators. The approach aims to improve contrast in images with poor lighting conditions. It uses two morphological operators based on Weber's law - the first employs blocked analysis while the second uses opening by reconstruction to define a multi-background. The performance of the proposed operators is evaluated on images with various backgrounds and lighting conditions. Key steps include dividing images into blocks, estimating minimum/maximum intensities in each block to determine background criteria, and applying contrast enhancement transformations based on the criteria. Opening by reconstruction is also used to approximate image background without modifying structures. Experimental results demonstrate the approach enhances images with poor lighting.
Multiple Ant Colony Optimizations for Stereo Matching (CSCJournals)
The stereo matching problem, which finds the correspondence between the left and right images, can be cast as a search problem. Matching all candidates along a scan-line forms a two-dimensional (2D) optimization task, and 2D optimization is NP-hard. Stereo matching has two characteristics: first, the local optimization along each scan-line can be done concurrently; second, relationships among adjacent scan-lines can be exploited to promote matching correctness. Many methods, such as GCPs and GGCPs, have been proposed, but these so-called ground control points may not actually be reliable, and the relationship among adjacent scan-lines is a posteriori, meaning it can only be discovered after each optimization has finished. Multiple Ant Colony Optimization (MACO) is efficient for large-scale problems, and it is a natural fit for stereo matching: in the constructed MACO, a master layer evaluates the sub-solutions and propagates reliability after every local optimization finishes. The paper also discusses whether the ordering and uniqueness constraints should be enforced during optimization, and proves that the proposed algorithm converges to the optimal matched pairs.
Image in Painting Techniques: A survey (IOSR Journals)
This document provides a survey of different image inpainting techniques. It discusses approaches such as texture synthesis based inpainting, PDE (partial differential equation) based inpainting, exemplar based inpainting, hybrid inpainting, and semi-automatic inpainting. Texture synthesis approaches recreate textures within missing regions by sampling from surrounding textures. PDE based methods diffuse image information into missing areas. Exemplar based techniques iteratively copy patches from surrounding regions. Hybrid methods combine approaches. The document analyzes strengths and limitations of each technique.
This document discusses object removal from digital photographs through exemplar-based inpainting. It describes Criminisi's algorithm which combines texture synthesis and inpainting to remove objects while preserving linear structures and avoiding blurring. The algorithm works by assigning priority values to pixels based on proximity to linear structures, and then propagates texture patterns from surrounding regions into the removed object area. Experimental results show Criminisi's approach produces better outcomes than either texture synthesis or inpainting alone. Future work areas include improving curved structure propagation and applying the method to video.
In this project we have implemented a tool to inpaint selected regions of an image. Inpainting refers to the art of restoring lost parts of an image and reconstructing them from background information. The tool provides a user interface in which the user can open an image for inpainting and select the parts of the image to be reconstructed. The tool then automatically inpaints the selected area using the background information, and the image can be saved. The inpainting is based on the exemplar-based approach, whose basic aim is to find examples (i.e., patches) in the image and replace the lost data with them. Applications of this technique include the restoration of old photographs and damaged film; the removal of superimposed text such as dates and subtitles; and the removal of entire objects from the image, such as microphones or wires in special effects.
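The patch-matching step at the heart of the exemplar-based approach can be sketched as a brute-force search; the function name, patch half-width, and sum-of-squared-differences cost below are illustrative simplifications, not the tool's actual code.

```python
import numpy as np

def best_patch(image, target, mask, half=1):
    """Find the center of the fully-known source patch most similar
    (SSD over known pixels) to the patch centered at `target`.
    mask is True where pixels are missing and must be filled."""
    r0, c0 = target
    tpatch = image[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1].astype(np.float64)
    known = ~mask[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1]
    h, w = image.shape
    best, best_cost = None, np.inf
    for r in range(half, h - half):
        for c in range(half, w - half):
            if mask[r - half:r + half + 1, c - half:c + half + 1].any():
                continue                      # source must contain no holes
            spatch = image[r - half:r + half + 1, c - half:c + half + 1].astype(np.float64)
            cost = ((spatch - tpatch)[known] ** 2).sum()
            if cost < best_cost:
                best, best_cost = (r, c), cost
    return best
```

The missing pixels of the target patch are then copied from the winning source patch; repeating this along the fill front, in priority order, is the whole exemplar-based loop.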
Image Restoration Using Joint Statistical Modeling in a Space-Transform Domain (john236zaq)
This document summarizes a research paper that presents a novel strategy for high-fidelity image restoration. It establishes a joint statistical model in an adaptive hybrid space-transform domain to characterize both local smoothness and nonlocal self-similarity of natural images. A new minimization functional is formulated using this joint statistical model within a regularization framework. A Split Bregman-based algorithm is developed to efficiently solve the severely underdetermined inverse problem and recover images from degradation while preserving details. Experiments on image inpainting, deblurring and denoising demonstrate the effectiveness of the proposed approach.
COMPOSITE TEXTURE SHAPE CLASSIFICATION BASED ON MORPHOLOGICAL SKELETON AND RE... (sipij)
After several decades of research, the development of an effective feature-extraction method for texture classification is still an ongoing effort, and several techniques have been proposed to resolve the remaining problems. In this paper, a novel composite texture classification method is proposed, based on innovative pre-processing techniques, skeletonization, and regional moments (RM). The approach takes into account the ambiguity introduced by noise and by different capture and digitization processes. To achieve a better classification rate, innovative pre-processing methods are first applied to the texture images; these mechanisms cover various ways of converting a grey-level image into a binary image with minimal sensitivity to the noise model. Shape features are then evaluated with RM on the proposed morphological skeleton (MS), using suitable numerical characterization measures for precise classification. This texture classification study using MS and RM has shown good performance: a good classification result is achieved from the single regional moment RM10, while the others fail to classify correctly.
The development of multimedia technology has made content-based image retrieval (CBIR) one of the outstanding areas for retrieving images from a large database. The feature vectors of a query image are compared with the feature vectors of the database images to obtain matching images. It is widely observed that no single algorithm is effective at extracting all the different kinds of natural images, so an intensive analysis of certain color, texture, and shape extraction techniques is carried out to identify an efficient CBIR technique suited to a particular type of image. Image extraction includes feature description and feature extraction. In this paper, we propose a feature-extraction technique combining the Color Layout Descriptor (CLD), the Grey Level Co-occurrence Matrix (GLCM), and marker-controlled watershed segmentation, which retrieves matching images based on the similarity of color, texture, and shape within the database. For performance analysis, the image-retrieval timing of the proposed technique is measured and compared with that of each individual feature.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
A novel method is proposed for image segmentation based on probabilistic field theory. The model assumes that all the pixels of an image, together with some unknown parameters, form a field, and that the pixel labels are generated by a compound function of that field. The main novelty of the model is that it considers both the features of the pixels and the interdependence among them. The parameters are generated by a novel spatially variant mixture model and estimated by an expectation-maximization (EM)-based algorithm, so spatial smoothness is simultaneously imposed as prior knowledge. Numerical experiments are presented in which the proposed method and other mixture-model-based methods were tested on synthetic and real-world images; the results demonstrate that the algorithm achieves competitive performance compared to the other methods.
This document discusses using super-resolution-based in-painting for object removal in images. It begins with an overview of in-painting and exemplar-based in-painting methods. It then proposes a new framework that combines exemplar-based in-painting with a single-image super-resolution method. This approach improves image quality by producing high-resolution outputs with less noise compared to exemplar-based in-painting alone. The document concludes the proposed method increases robustness for applications like satellite imaging and medical imaging by providing high quality images with damaged objects removed.
Abstract: Many applications, such as robot navigation, defense, medical imaging, and remote sensing, perform various processing tasks that become easier when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the large-scale and small-scale intensity variations from the source images, using guided filtering for this extraction. A Gaussian and Laplacian pyramid approach is then used to fuse the resulting layers. Experimental results demonstrate that the proposed method obtains better fusion performance on all sets of images; the results clearly indicate the feasibility of the proposed approach.
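The base/detail decomposition described above can be sketched with a plain box filter standing in for the guided filter and pyramid machinery; the function names and the max-absolute-detail fusion rule are illustrative assumptions, not the paper's method.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_mean(img, k=3):
    """Local mean over k x k windows: a cheap smoothing stand-in for
    the guided filter used to extract the large-scale layer."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    return sliding_window_view(p, (k, k)).mean(axis=(2, 3))

def fuse_two_scale(a, b, k=3):
    """Split each source into base (large-scale) and detail (small-scale)
    layers, average the bases, keep the stronger detail, recombine."""
    base_a, base_b = box_mean(a, k), box_mean(b, k)
    det_a, det_b = a - base_a, b - base_b
    base = 0.5 * (base_a + base_b)                 # fuse large-scale layer
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + detail
```

Replacing the box filter with a guided filter and fusing a Gaussian/Laplacian pyramid per layer recovers the structure of the method the abstract describes.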
The document summarizes research on mesh representations in computer graphics. It discusses Greg Turk's introduction of "mutual tessellation" to represent objects at different levels of detail. It also covers Hugues Hoppe's work on "mesh optimization" to minimize triangles in dense meshes and "progressive meshes" to preserve overall appearance while simplifying. The document outlines challenges of mesh simplification, level-of-detail approximations, and compression. It describes representing meshes as sets of vertices, connectivity, and attributes. An energy function is defined to optimize meshes by minimizing distance and spring energies while preserving scalar attributes. Applications in medical imaging and reduced manual work in graphics are mentioned.
A HYBRID MORPHOLOGICAL ACTIVE CONTOUR FOR NATURAL IMAGESIJCSEA Journal
Morphological active contours for image segmentation have become popular due to their low computational complexity coupled with their accurate approximation of the partial differential equations
involved in the energy minimization of the segmentation process. In this paper, a morphological active contour which mimics the energy minimization of the popular Chan-Vese Active Contour without Edges is coupled with a morphological edge-driven segmentation term to accurately segment natural images. By using morphological approximations of the energy minimization steps, the algorithm has a low computational complexity. Additionally, the coupling of the edge-based and region-based segmentation techniques allows the proposed method to be robust and accurate. We will demonstrate the accuracy and robustness of the algorithm using images from the Weizmann Segmentation Evaluation Database and report on the segmentation results using the Sorensen-Dice similarity coefficient.
Comparative analysis and evaluation of image imprinting algorithmsAlexander Decker
This document compares and evaluates two different types of image inpainting algorithms: Marcelo Bertalmio's PDE-based algorithm and Zhaolin Lu et al's exemplar-based algorithm. Both algorithms are tested on images with variable occlusion sizes. The PDE-based algorithm is better at preserving linear structures for small regions but cannot reconnect structures or restore texture in large regions. The exemplar-based algorithm can find proper textures to fill large regions while preserving linear structures. Quantitative evaluation using PSNR shows that the exemplar-based algorithm achieves lower MSE values, especially for larger occlusion sizes. Therefore, the exemplar-based algorithm produces better results overall, particularly for filling in large missing regions of an image.
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESIS (IJCSEA Journal)
The document presents a method for removing large occlusions from images using sparse processing and texture synthesis. It involves decomposing the image into structure and texture images using sparse representations. The occluded regions in the structure image are filled in using sparse reconstruction, which retains image structures. Texture synthesis is then performed on the texture image to fill in the occluded texture. Finally, the reconstructed structure and texture images are combined to produce the occlusion-free output image. The method is shown to effectively remove large occlusions while avoiding blurring and retaining both structures and textures. It outperforms other inpainting methods in terms of visual quality.
An efficient image segmentation approach through enhanced watershed algorithm (Alexander Decker)
This document proposes an efficient image segmentation approach combining an enhanced watershed algorithm and color histogram analysis. The watershed algorithm is applied to preprocessed images after merging the results with an enhanced edge detection. Over-segmentation issues are addressed through a post-processing step applying color histogram analysis to each segmented region, improving overall performance. The document provides background on image segmentation techniques, reviews related work applying watershed algorithms, and discusses challenges like over-segmentation that watershed approaches can face.
A Survey on Exemplar-Based Image Inpainting Techniques (ijsrd.com)
Preceding papers on exemplar-based image inpainting, such as the Criminisi algorithm, the patch shifting scheme, and the search-region prior method, show how to inpaint a destroyed region. Criminisi's algorithm and Sarawut's patch shifting scheme need more time to inpaint a damaged region, but the proposed method reduces time complexity by searching only in the region related to the missing portion of the image.
This document summarizes a research paper that proposes a novel approach for enhancing digital images using morphological operators. The approach aims to improve contrast in images with poor lighting conditions. It uses two morphological operators based on Weber's law - the first employs blocked analysis while the second uses opening by reconstruction to define a multi-background. The performance of the proposed operators is evaluated on images with various backgrounds and lighting conditions. Key steps include dividing images into blocks, estimating minimum/maximum intensities in each block to determine background criteria, and applying contrast enhancement transformations based on the criteria. Opening by reconstruction is also used to approximate image background without modifying structures. Experimental results demonstrate the approach enhances images with poor lighting.
Multiple Ant Colony Optimizations for Stereo Matching (CSCJournals)
The stereo matching problem, which obtains the correspondence between right and left images, can be cast as a search problem. The matching of all candidates in the same line forms a 2D optimization task, and two-dimensional (2D) optimization is an NP-hard problem. Stereo matching has two characteristics. Firstly, the local optimization process along each scan-line can be done concurrently; secondly, relationships among adjacent scan-lines can be exploited to improve matching correctness. Although many methods such as GCPs and GGCPs have been proposed, these so-called ground control points may not actually be reliable. The relationship among adjacent scan-lines is a posteriori; that is, it can only be discovered after every optimization is finished. Multiple Ant Colony Optimization (MACO) is efficient for solving large-scale problems, which makes it a natural fit for the stereo matching task: in the constructed MACO, the master layer evaluates the sub-solutions and propagates their reliability after every local optimization finishes. In addition, whether the ordering and uniqueness constraints should be considered during the optimization is discussed, and the proposed algorithm is proved to converge to the optimal matched pairs.
Image Inpainting Techniques: A Survey (IOSR Journals)
This document provides a survey of different image inpainting techniques. It discusses approaches such as texture synthesis based inpainting, PDE (partial differential equation) based inpainting, exemplar based inpainting, hybrid inpainting, and semi-automatic inpainting. Texture synthesis approaches recreate textures within missing regions by sampling from surrounding textures. PDE based methods diffuse image information into missing areas. Exemplar based techniques iteratively copy patches from surrounding regions. Hybrid methods combine approaches. The document analyzes strengths and limitations of each technique.
This document discusses object removal from digital photographs through exemplar-based inpainting. It describes Criminisi's algorithm which combines texture synthesis and inpainting to remove objects while preserving linear structures and avoiding blurring. The algorithm works by assigning priority values to pixels based on proximity to linear structures, and then propagates texture patterns from surrounding regions into the removed object area. Experimental results show Criminisi's approach produces better outcomes than either texture synthesis or inpainting alone. Future work areas include improving curved structure propagation and applying the method to video.
In this project we have implemented a tool to inpaint selected regions of an image. Inpainting refers to the art of restoring lost parts of an image and reconstructing them based on the background information. The tool provides a user interface in which the user can open an image for inpainting and select the parts of the image to reconstruct. The tool then automatically inpaints the selected area according to the background information, and the image can be saved. The inpainting is based on the exemplar-based approach, whose basic aim is to find examples (i.e. patches) in the image and replace the lost data with them. Applications of this technique include the restoration of old photographs and damaged film; the removal of superimposed text such as dates and subtitles; and the removal of entire objects from the image, such as microphones or wires, in special effects.
Image Restoration Using Joint Statistical Modeling in a Space-Transform Domain (john236zaq)
This document summarizes a research paper that presents a novel strategy for high-fidelity image restoration. It establishes a joint statistical model in an adaptive hybrid space-transform domain to characterize both local smoothness and nonlocal self-similarity of natural images. A new minimization functional is formulated using this joint statistical model within a regularization framework. A Split Bregman-based algorithm is developed to efficiently solve the severely underdetermined inverse problem and recover images from degradation while preserving details. Experiments on image inpainting, deblurring and denoising demonstrate the effectiveness of the proposed approach.
COMPOSITE TEXTURE SHAPE CLASSIFICATION BASED ON MORPHOLOGICAL SKELETON AND RE... (sipij)
After several decades of research, the development of an effective feature extraction method for texture classification is still an ongoing effort, and several techniques have been proposed to resolve such problems. In this paper a novel composite texture classification method based on innovative pre-processing techniques, skeletonization, and Regional Moments (RM) is proposed. The proposed texture classification approach takes into account the ambiguity introduced by noise and by the different capture and digitization processes. To obtain a better classification rate, innovative pre-processing methods are first applied to various texture images; these pre-processing mechanisms describe various ways of converting a grey-level image into a binary image with minimal consideration of the noise model. Shape features are then evaluated using RM on the proposed Morphological Skeleton (MS), with suitable numerical characterization measures for precise classification. This texture classification study using MS and RM gives good performance: a good classification result is achieved from a single regional moment, RM10, while the others fail in classification.
With the development of multimedia technology, Content-Based Image Retrieval (CBIR) has become one of the prominent areas for retrieving images from a large database. The feature vectors of the query image are compared with the feature vectors of the database images to find matching images. It is widely observed that no single algorithm is effective at extracting features from all the different kinds of natural images. Therefore, an intensive analysis of certain color, texture, and shape extraction techniques is carried out to identify an efficient CBIR technique suited to a particular type of image. Image extraction includes feature description and feature extraction. In this paper, we propose Color Layout Descriptor (CLD), Grey Level Co-occurrence Matrix (GLCM), and marker-controlled watershed segmentation feature extraction techniques that retrieve matching images based on the similarity of color, texture, and shape in the database. For performance analysis, the image retrieval timing of the proposed technique is measured and compared with that of each individual feature.
A novel method is proposed for image segmentation based on probabilistic field theory. This model assumes that all the pixels of an image, together with some unknown parameters, form a field. According to this model, the pixel labels are generated by a compound function of the field. The main novelty of this model is that it considers both the features of the pixels and the interdependence among them. The parameters are generated by a novel spatially variant mixture model and estimated by an expectation-maximization (EM)-based algorithm. Thus, we simultaneously impose spatial smoothness as prior knowledge. Numerical experiments are presented in which the proposed method and other mixture-model-based methods are tested on synthetic and real-world images. The experimental results demonstrate that our algorithm achieves competitive performance compared to other methods.
This document discusses using super-resolution-based in-painting for object removal in images. It begins with an overview of in-painting and exemplar-based in-painting methods. It then proposes a new framework that combines exemplar-based in-painting with a single-image super-resolution method. This approach improves image quality by producing high-resolution outputs with less noise compared to exemplar-based in-painting alone. The document concludes the proposed method increases robustness for applications like satellite imaging and medical imaging by providing high quality images with damaged objects removed.
Abstract: Many applications such as robot navigation, defense, medical imaging, and remote sensing perform various processing tasks that become easier when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, both large and small scale, from the source images; guided filtering is employed for this extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the different layers obtained. Experimental results demonstrate that the proposed method obtains better fusion performance across all sets of images, and the results clearly indicate the feasibility of the proposed approach.
The document summarizes research on mesh representations in computer graphics. It discusses Greg Turk's introduction of "mutual tessellation" to represent objects at different levels of detail. It also covers Hugues Hoppe's work on "mesh optimization" to minimize triangles in dense meshes and "progressive meshes" to preserve overall appearance while simplifying. The document outlines challenges of mesh simplification, level-of-detail approximations, and compression. It describes representing meshes as sets of vertices, connectivity, and attributes. An energy function is defined to optimize meshes by minimizing distance and spring energies while preserving scalar attributes. Applications in medical imaging and reduced manual work in graphics are mentioned.
EXTENDED WAVELET TRANSFORM BASED IMAGE INPAINTING ALGORITHM FOR NATURAL SCENE... (cscpconf)
This paper proposes exemplar-based image inpainting using an extended wavelet transform. Image inpainting modifies an image, in an undetectable way, using the information available outside the region to be inpainted. The extended wavelet transform is two-dimensional: a Laplacian pyramid is first used to capture point discontinuities, followed by a directional filter bank that links the point discontinuities into linear structures. The proposed model effectively captures the edges and contours of natural scene images.
This document summarizes several existing techniques for image inpainting, including texture synthesis, geometric partial differential equations (PDEs), and coherence-based methods. It then proposes a combined model that incorporates elements of these different approaches into a variational framework. Specifically, it suggests combining texture synthesis, PDE-based diffusion, and enforcing coherence among neighboring pixels and across frames for video inpainting. The goal is to approximate the minimum of the proposed energy functional to better fill in missing or corrupted regions of images and video frames.
Comparative Study and Analysis of Image Inpainting Techniques (IOSR Journals)
Abstract: Image inpainting is a technique to fill in a missing region or reconstruct a damaged area of an image. It removes an undesirable object from an image in a visually plausible way, using information from the neighboring area to fill in the affected part. In this dissertation work, we present an exemplar-based method for filling in the missing information in an image, which combines structure synthesis and texture synthesis. The exemplar-based approach uses local information from the image for patch propagation. We have also implemented a nonlocal-means approach to exemplar-based image inpainting, which finds multiple samples of the best exemplar patches for patch propagation and weights their contributions according to their similarity to the neighborhood under evaluation. We have further extended this algorithm with a collaborative filtering method to synthesize and propagate multiple samples of the best exemplar patches. We have performed experiments on many images and found that our algorithm successfully inpaints the target region. We have tested the accuracy of our algorithm by computing the PSNR and comparing its value across all three approaches.
Keywords: Texture Synthesis, Structure Synthesis, Patch Propagation, Image Inpainting, Nonlocal Approach, Collaborative Filtering.
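The nonlocal-means weighting described in this abstract can be sketched as follows. The Gaussian kernel and the bandwidth `h` are standard nonlocal-means choices, not values taken from the paper; patches are represented as flat lists of pixel values:

```python
import math

def nonlocal_weights(candidates, reference, h=0.5):
    """Weight candidate exemplar patches by similarity to the reference
    neighbourhood, nonlocal-means style: w_i proportional to exp(-d_i^2 / h^2)."""
    # Mean squared distance of each candidate patch to the reference patch.
    d2 = [sum((c - r) ** 2 for c, r in zip(cand, reference)) / len(reference)
          for cand in candidates]
    w = [math.exp(-d / (h * h)) for d in d2]
    total = sum(w)
    return [x / total for x in w]  # normalised so the weights sum to 1
```

Filling a target pixel with the weighted combination of several good exemplars, rather than the single best one, is what distinguishes the nonlocal variant from plain exemplar copying.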
Sinusoidal Function for Population Size in Quantum Evolutionary Algorithm and... (sipij)
Fractal image compression is a well-known NP-hard problem. The Quantum Evolutionary Algorithm (QEA) is a novel optimization algorithm that uses a probabilistic representation of solutions and is highly suitable for combinatorial problems such as the knapsack problem. Genetic algorithms are widely used for fractal image compression, but QEA has not yet been applied to this kind of problem. This paper improves QEA with a changing population size and applies it to fractal image compression. Utilizing the self-similarity property of a natural image, the partitioned iterated function system (PIFS) is found to encode an image through the QEA method. Experimental results show that our method performs better than GA-based and conventional fractal image compression algorithms.
Automatic Synthesis Of Isotropic Textures On Surfaces From Sample Images (Dustin Pytko)
This document summarizes a research paper on automatic texture synthesis on surfaces from sample images. It discusses:
1) A method for synthesizing isotropic textures on subdivision surfaces in a fully automatic way using Gaussian and Laplacian pyramids to construct coarse-to-fine texture synthesis that preserves first-order statistics.
2) Background on texture synthesis, including sample-based vs procedural methods, first and second order statistics, quilting methods, and neighborhood search methods.
3) Previous related work on texture synthesis methods by Efros and Leung using pixel-by-pixel growth, and by Wei and Levoy using multi-resolution neighborhood search.
Semi-Supervised Method of Multiple Object Segmentation with a Region Labeling... (sipij)
Efficient and effective multiple-object segmentation is an important task in computer vision and object recognition. In this work, we address a method to effectively discover a user's concept when multiple objects of interest are involved in content-based image retrieval. The proposed method incorporates a framework for multiple-object retrieval using a semi-supervised method of similar-region merging and flood fill, which models the spatial and appearance relations among image pixels. To improve the effectiveness of similarity-based region merging, we propose a new similarity-based object retrieval: users only need to give a rough indication, after which the desired object contour is obtained during the automatic merging of similar regions. A novel similarity-based region merging mechanism is proposed to guide the merging process with the help of the mean shift technique and object detection using region labeling and flood fill. A region R is merged with an adjacent region Q if Q has the highest similarity with R (using the Bhattacharyya descriptor) among all of R's adjacent regions. The proposed method automatically merges the regions that are initially segmented through the mean shift technique and then effectively extracts the object contour by merging all similar regions. Extensive experiments performed on 12 object classes (224 images in total) show promising results.
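The Bhattacharyya descriptor used above to rank adjacent regions reduces to the Bhattacharyya coefficient between two normalised histograms (e.g. of region colors); a minimal sketch:

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalised histograms:
    1.0 for identical distributions, 0.0 for disjoint ones."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))
```

A region R would be merged with the adjacent region Q whose histogram maximises this coefficient against R's histogram.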
Combined structure and texture image inpainting algorithm for natural... (Alexander Decker)
This document summarizes a research paper that proposes a new algorithm for completing images of natural scenes where part of the image is missing or damaged. The algorithm first decomposes the original image into a structure component and a texture component. It then uses different techniques to reconstruct each component separately. The structure component is reconstructed using an algorithm that fills in missing information, while the texture component is repaired using an exemplar-based texture synthesis technique. The researchers believe this combined approach performs better than existing methods by taking advantage of both structure inpainting and texture synthesis. They provide results comparing their method to other approaches on various natural images.
The document describes a new method called Cluster Sensing Superpixel (CSS) for generating superpixels. CSS identifies ideal cluster centers using pixel density, which reflects pixel concentration. CSS searches for local optimal cluster centers that have high pixel density (representativeness) and are isolated from other high density pixels. It is approximately 5 times faster than state-of-the-art methods while maintaining comparable performance. When applied to microscopy image segmentation, CSS benefits from its efficient implementation.
This document describes a testbed for image synthesis developed at Cornell University. The testbed was designed to facilitate research on new light reflection models, global illumination algorithms, and rendering of complex scenes. It uses a modular structure with hierarchical levels of functionality. The lowest level contains utility modules, the middle level contains object modules that work across primitive types, and the highest level contains image synthesis modules. The testbed uses a modeler-independent description format to represent environments independently of modeling programs. Renderers can then generate images from this common description.
This document discusses surface reconstruction from point cloud data. It describes how surface reconstruction involves three phases: initial surface estimation, mesh optimization, and smooth surface optimization. The crust algorithm and umbrella filtering are discussed as methods for surface reconstruction from point clouds. Challenges in surface reconstruction like noise, lack of normal vectors, non-uniform sampling, and holes are also outlined. The document reviews several related studies on surface reconstruction techniques.
A Hybrid Technique for the Automated Segmentation of Corpus Callosum in Midsa... (IJERA Editor)
The corpus callosum (CC) is the largest white-matter structure in the human brain. In this paper, we apply two techniques and compare their corpus callosum segmentation results: the first is the mean shift algorithm with morphological operations, and the second is k-means clustering. The work proceeds in three steps. The first step finds the corpus callosum area using the adaptive mean shift algorithm or k-means clustering. In the second step, the boundary of the detected CC area is used as the initial contour in the Geometric Active Contour (GAC) model. In the final step, unknown noise is removed using morphological operations and the contour is evolved to obtain the final segmentation result. The experimental results demonstrate that both the mean shift algorithm and k-means clustering provide reliable segmentation performance.
This document describes an interactive multi-label image segmentation algorithm called "GrowCut" based on cellular automata. The algorithm can segment N-dimensional images with multiple labels. With modest user input of labeled pixels, GrowCut automatically segments the rest of the image in an iterative process. It requires less user effort than other techniques for moderately difficult images. The algorithm has advantages such as efficiency, parallelizability, and extensibility to generate new segmentation methods.
Surface generation from point cloud.pdf (ssuserd3982b1)
This document summarizes research on generating 3D surface models from point clouds acquired using a vision-based laser scanning sensor. It discusses using adaptive filters to reduce noise in the point clouds and generating triangular meshes from the point clouds to create an initial surface model. It then covers using NURBS (Non-Uniform Rational B-Splines) to optimize the surface model for accuracy by fitting parametric surfaces to the triangular mesh. The goal is to develop accurate 3D surface models of turbine blades for robotic welding applications to repair flaws.
Garbage Classification Using Deep Learning Techniques (IRJET Journal)
The document discusses using deep learning techniques for garbage classification. It compares the performance of different models, including support vector machines with HOG features, simple convolutional neural networks (CNNs), CNNs with residual blocks, and a hybrid model combining CNN features with HOG features. The CNN models generally performed best, with the simple CNN achieving over 93% accuracy on test data. Residual blocks did not significantly improve performance over simple CNNs. Combining CNN and HOG features was also considered but did not clearly outperform CNNs alone. Overall, CNN models were shown to effectively classify garbage using these image datasets.
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 13, NO. 9, SEP 2004
Region Filling and Object Removal by
Exemplar-Based Image Inpainting
A. Criminisi*, P. Pérez and K. Toyama
Microsoft Research, Cambridge (UK) and Redmond (US)
antcrim@microsoft.com
Abstract—A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way.

In the past, this problem has been addressed by two classes of algorithms: (i) “texture synthesis” algorithms for generating large image regions from sample textures, and (ii) “inpainting” techniques for filling in small image gaps. The former has been demonstrated for “textures” – repeating two-dimensional patterns with some stochasticity; the latter focus on linear “structures” which can be thought of as one-dimensional patterns, such as lines and object contours.

This paper presents a novel and efficient algorithm that combines the advantages of these two approaches. We first note that exemplar-based texture synthesis contains the essential process required to replicate both texture and structure; the success of structure propagation, however, is highly dependent on the order in which the filling proceeds. We propose a best-first algorithm in which the confidence in the synthesized pixel values is propagated in a manner similar to the propagation of information in inpainting. The actual colour values are computed using exemplar-based synthesis. In this paper the simultaneous propagation of texture and structure information is achieved by a single, efficient algorithm. Computational efficiency is achieved by a block-based sampling process.

A number of examples on real and synthetic images demonstrate the effectiveness of our algorithm in removing large occluding objects as well as thin scratches. Robustness with respect to the shape of the manually selected target region is also demonstrated. Our results compare favorably to those obtained by existing techniques.
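The best-first filling order sketched in the abstract can be illustrated as follows. The confidence term and fill-front selection follow the description above; the data term is taken as an input array here, since its exact gradient-based form is defined later in the paper, and the `fill_front` and `next_target` names are illustrative, not the paper's:

```python
import numpy as np

def fill_front(mask):
    """Pixels of the target region (mask == True) bordering known pixels.
    np.roll wraps at image borders, which is fine for holes in the interior."""
    known = ~mask
    neighbour_known = (np.roll(known, 1, 0) | np.roll(known, -1, 0) |
                       np.roll(known, 1, 1) | np.roll(known, -1, 1))
    return mask & neighbour_known

def confidence_term(conf, p, half=1):
    """C(p): mean confidence over the square patch centred at p."""
    r, c = p
    win = conf[r - half:r + half + 1, c - half:c + half + 1]
    return win.sum() / win.size

def next_target(mask, conf, data, half=1):
    """Best-first choice: the fill-front pixel maximising C(p) * D(p)."""
    front = np.argwhere(fill_front(mask))
    priorities = [confidence_term(conf, tuple(p), half) * data[tuple(p)]
                  for p in front]
    return tuple(front[int(np.argmax(priorities))])
```

After each patch is copied, the confidences of the newly filled pixels would be set from the chosen patch's priority, which is how confidence propagates inward with the fill.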
Keywords— Object Removal, Image Inpainting, Texture Synthesis, Si-
multaneous Texture and Structure Propagation.
I. INTRODUCTION
This paper presents a novel algorithm for removing large ob-
jects from digital photographs and replacing them with visually
plausible backgrounds. Figure 1 shows an example of this task,
where the foreground person (manually selected as the target
region) is automatically replaced by data sampled from the re-
mainder of the image. The algorithm effectively hallucinates
new colour values for the target region in a way that looks “rea-
sonable” to the human eye. This paper builds upon and extends
the work in [8], with a more detailed description of the algorithm
and extensive comparisons with the state of the art.
In previous work, several researchers have considered texture
synthesis as a way to fill large image regions with “pure” tex-
tures – repetitive two-dimensional textural patterns with mod-
erate stochasticity. This is based on a large body of texture-
synthesis research, which seeks to replicate texture ad infinitum,
given a small source sample of pure texture [1], [9], [11], [12],
[13], [14], [16], [17], [18], [22], [25]. Of particular interest are
exemplar-based techniques which cheaply and effectively gen-
erate new texture by sampling and copying colour values from
the source [1], [11], [12], [13], [17].
As effective as these techniques are in replicating consistent
texture, they have difficulty filling holes in photographs of real-
world scenes, which often consist of linear structures and com-
Fig. 1. Removing large objects from images. (a) Original photograph. (b) The
region corresponding to the foreground person (covering about 19% of the
image) has been manually selected and then automatically removed. Notice
that the horizontal structures of the fountain have been synthesized in the
occluded area together with the water, grass and rock textures.
posite textures – multiple textures interacting spatially [26]. The
main problem is that boundaries between image regions are a
complex product of mutual influences between different textures. In contrast to the two-dimensional nature of pure textures, these boundaries form what might be considered more one-dimensional, or linear, image structures.
A number of algorithms specifically address the image fill-
ing issue for the task of image restoration, where speckles,
scratches, and overlaid text are removed [2], [3], [4], [7], [23].
These image inpainting techniques fill holes in images by prop-
agating linear structures (called isophotes in the inpainting lit-
erature) into the target region via diffusion. They are inspired
by the partial differential equations of physical heat flow, and
work convincingly as restoration algorithms. Their drawback is
that the diffusion process introduces some blur, which becomes
noticeable when filling larger regions.
The technique presented here combines the strengths of both
approaches into a single, efficient algorithm. As with inpainting,
we pay special attention to linear structures. But, linear struc-
tures abutting the target region only influence the fill order of
what is at core an exemplar-based texture synthesis algorithm.
The result is an algorithm that has the efficiency and qualita-
tive performance of exemplar-based texture synthesis, but which
also respects the image constraints imposed by surrounding lin-
ear structures.
The algorithm we propose in this paper builds on very recent
research along similar lines. The work in [5] decomposes the
original image into two components; one of which is processed
by inpainting and the other by texture synthesis. The output im-
age is the sum of the two processed components. This approach
still remains limited to the removal of small image gaps, how-
ever, as the diffusion process continues to blur the filled region
(cf. [5], fig.5 top right). The automatic switching between “pure
texture-” and “pure structure-mode” of [24] is also avoided.
Similar to [5] is the work in [10], where the authors de-
scribe an algorithm that interleaves a smooth approximation
with example-based detail synthesis for image completion. Like the algorithm in [5], the algorithm in [10] is extremely slow (reported processing may take between 83 and 158 minutes on a 384 × 256 image) and it may introduce blur artefacts (cf. fig. 8b, last row of fig. 13 and fig. 16c in [10]). In this paper we present a
simpler and faster region filling algorithm which does not suffer
from blur artefacts.
One of the first attempts to use exemplar-based synthesis
specifically for object removal was by Harrison [15]. There, the
order in which a pixel in the target region is filled was dictated
by the level of “texturedness” of the pixel’s neighborhood¹. Although the intuition is sound, strong linear structures were often
overruled by nearby noise, minimizing the value of the extra
computation. A related technique drove the fill order by the
local shape of the target region, but did not seek to explicitly
propagate linear structures [6].
Recently Jia et al. [19] have presented a technique for fill-
ing image regions based on a texture-segmentation step and
a tensor-voting algorithm for the smooth linking of structures
across holes. Their approach has a clear advantage in that it is
designed to connect curved structures by the explicit generation
of subjective contours, over which textural structures are propa-
gated. On the other hand, their algorithm requires (i) an expen-
sive segmentation step, and (ii) a hard decision about what con-
stitutes a boundary between two textures. Our approach avoids
both issues through the use of a continuous parameter based
on local gradient strength only. A careful fusion of these ap-
proaches may result in a superior algorithm, but results suggest
that both approaches already achieve a reasonable measure of
visual credibility in filling holes.
Finally, Zalesny et al. [26] describe an algorithm for the par-
allel synthesis of composite textures. They devise a special-
purpose solution for synthesizing the interface between two
“knitted” textures. In this paper we show that, in fact, only one
mechanism is sufficient for the synthesis of both pure and com-
posite textures.
Paper outline. Section II presents the two key observations
which form the basis of our algorithm. The details of the pro-
posed algorithm are described in sect. III. Finally, a large gallery
of results on both synthetic images and real-scene photographs
is presented in sect. IV. Whenever possible our results are com-
pared to those obtained by state of the art techniques.
II. KEY OBSERVATIONS
A. Exemplar-based synthesis suffices
The core of our algorithm is an isophote-driven image-
sampling process. It is well-understood that exemplar-based ap-
¹An implementation of Harrison’s algorithm is available from www.csse.monash.edu.au/~pfh/resynthesizer/
Fig. 2. Structure propagation by exemplar-based texture synthesis. (a)
Original image, with the target region Ω, its contour δΩ, and the source
region Φ clearly marked. (b) We want to synthesize the area delimited by
the patch Ψp centred on the point p ∈ δΩ. (c) The most likely candidate matches for Ψp lie along the boundary between the two textures in the source region, e.g., Ψq′ and Ψq″. (d) The best matching patch in the candidates set has been copied into the position occupied by Ψp, thus achieving partial filling of Ω. Notice that both texture and structure (the separating line) have been propagated inside the target region. The target region Ω has now shrunk and its front δΩ has assumed a different shape.
proaches perform well for two-dimensional textures [1], [11],
[17]. But, we note in addition that exemplar-based texture syn-
thesis is sufficient for propagating extended linear image struc-
tures, as well; i.e., a separate synthesis mechanism is not re-
quired for handling isophotes.
Figure 2 illustrates this point. For ease of comparison, we
adopt notation similar to that used in the inpainting literature.
The region to be filled, i.e., the target region is indicated by Ω,
and its contour is denoted δΩ. The contour evolves inward as
the algorithm progresses, and so we also refer to it as the “fill
front”. The source region, Φ, which remains fixed throughout
the algorithm, provides samples used in the filling process.
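These regions map naturally onto boolean masks. Below is a minimal sketch (the array names are our own, not the paper's): Ω as a boolean array `target`, Φ as its complement, and the fill front δΩ extracted as the target pixels with at least one 4-neighbour outside Ω.

```python
import numpy as np

def fill_front(target: np.ndarray) -> np.ndarray:
    """Return a boolean mask of the fill front (delta-Omega) of `target`."""
    t = target.astype(bool)
    padded = np.pad(t, 1, constant_values=False)
    # a target pixel is on the front if any 4-neighbour lies outside Omega
    outside_neighbour = (
        ~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
        ~padded[1:-1, :-2] | ~padded[1:-1, 2:]
    )
    return t & outside_neighbour

target = np.zeros((5, 5), dtype=bool)
target[1:4, 1:4] = True          # a 3x3 hole Omega in the middle
front = fill_front(target)       # the 8 border pixels of the hole
```

The source region Φ is then simply `~target`, which matches the default choice Φ = I − Ω discussed later in the text.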
We now focus on a single iteration of the algorithm to show
how structure and texture are adequately handled by exemplar-
based synthesis. Suppose that the square template Ψp ∈ Ω centred at the point p (fig. 2b) is to be filled. The best-match sample from the source region comes from the patch Ψˆq ∈ Φ, which is most similar to those parts that are already filled in Ψp. In the
example in fig. 2b, we see that if Ψp lies on the continuation
of an image edge, the most likely best matches will lie along
the same (or a similarly coloured) edge (e.g., Ψq′ and Ψq″ in fig. 2c).
All that is required to propagate the isophote inwards is a
simple transfer of the pattern from the best-match source patch
Fig. 3. The importance of the filling order when dealing with concave target regions. (a) A diagram showing an image and a selected target region (in
white). The remainder of the image is the source. (b,c,d) Different stages in the concentric-layer filling of the target region. (d) The onion-peel approach
produces artefacts in the synthesized horizontal structure. (b’,c’,d’) Filling the target region by an edge-driven filling order achieves the desired artefact-free
reconstruction. (d’) The final edge-driven reconstruction, where the boundary between the two background image regions has been reconstructed correctly.
(fig. 2d). Notice that isophote orientation is automatically pre-
served. In the figure, despite the fact that the original edge is not
orthogonal to the target contour δΩ, the propagated structure has
maintained the same orientation as in the source region.
In this work we focus on a patch-based filling approach (as
opposed to pixel-based ones as in [11]) because, as noted in [22],
this improves execution speed. Furthermore, we note that patch-
based filling improves the accuracy of the propagated structures.
B. Filling order is critical
The previous section has shown how careful exemplar-based
filling may be capable of propagating both texture and struc-
ture information. This section demonstrates that the quality of
the output image synthesis is highly influenced by the order in
which the filling process proceeds. Furthermore, we list a num-
ber of desired properties of the “ideal” filling algorithm.
A comparison between the standard concentric layer filling
(onion-peel) and the desired filling behaviour is illustrated in
fig. 3. Figures 3b,c,d show the progressive filling of a concave
target region via an anti-clockwise onion-peel strategy. As can be observed, this ordering of the filled patches causes the horizontal boundary between the background image regions to be unexpectedly reconstructed as a curve.
A better filling algorithm would be one that gives higher pri-
ority of synthesis to those regions of the target area which lie on
the continuation of image structures, as shown in figs. 3b’,c’,d’.
Together with the property of correct propagation of linear struc-
tures, the latter algorithm would also be more robust towards
variations in the shape of the target regions.
A concentric-layer ordering, coupled with a patch-based fill-
ing may produce further artefacts (cf. fig. 4).
Therefore, filling order is crucial to non-parametric texture
synthesis [1], [6], [12], [15]. To our knowledge, however, de-
signing a fill order which explicitly encourages propagation of
linear structure (together with texture) has never been explored,
and thus far, the default favourite has been the “onion peel” strat-
egy.
Another desired property of a good filling algorithm is that of
avoiding “over-shooting” artefacts that occur when image edges
are allowed to grow indefinitely. The goal here is finding a good
balance between the propagation of structured regions and that
of textured regions (fig. 3b’,c’,d’), without employing two ad-
hoc strategies. As demonstrated in the next section, the algo-
rithm we propose achieves such a balance by combining the
structure “push” with a confidence term that tends to reduce
sharp in-shooting appendices in the contour of the target region.
As will be demonstrated, the filling algorithm proposed in
this paper overcomes the issues that characterize the traditional
concentric-layers filling approach and achieves the desired prop-
erties of: (i) correct propagation of linear structures, (ii) robust-
ness to changes in shape of the target region, (iii) balanced si-
multaneous structure and texture propagation, all in a single, ef-
ficient algorithm. We now proceed with the details of our algo-
rithm.
III. OUR REGION-FILLING ALGORITHM
First, given an input image, the user selects a target region, Ω,
to be removed and filled. The source region, Φ, may be defined
as the entire image minus the target region (Φ = I − Ω), as
a dilated band around the target region, or it may be manually
specified by the user.
Next, as with all exemplar-based texture synthesis [12], the
Fig. 4. The importance of the filling order in patch-based filling. (a) The
target region is shown in white. (b) Part of the outmost layer in the target
region has been synthesized by an onion-peel algorithm proceeding in clock-
wise order. (c) At a further stage of the filling process, the remainder of
the outmost layer has been filled by this clock-wise onion-peel filling. The
concentric-layer filling has produced artefacts in the reconstruction of the
diagonal image edge. A filling algorithm whose priority is guided by the
image edges would fix this problem.
size of the template window Ψ must be specified. We provide a
default window size of 9 × 9 pixels, but in practice require the
user to set it to be slightly larger than the largest distinguishable
texture element, or “texel”, in the source region.
Once these parameters are determined, the region-filling pro-
ceeds automatically.
In our algorithm, each pixel maintains a colour value (or
“empty”, if the pixel is unfilled) and a confidence value, which
reflects our confidence in the pixel value, and which is frozen
once a pixel has been filled. During the course of the algorithm,
patches along the fill front are also given a temporary priority
value, which determines the order in which they are filled. Then,
our algorithm iterates the following three steps until all pixels
have been filled:
1. Computing patch priorities. Our algorithm performs the
synthesis task through a best-first filling strategy that depends
entirely on the priority values that are assigned to each patch on
the fill front. The priority computation is biased toward those
patches which: (i) are on the continuation of strong edges and
(ii) are surrounded by high-confidence pixels.
Given a patch Ψp centred at the point p for some p ∈ δΩ (see
fig. 5), we define its priority P(p) as the product of two terms:
P(p) = C(p)D(p). (1)
We call C(p) the confidence term and D(p) the data term, and
Fig. 5. Notation diagram. Given the patch Ψp, np is the normal to the contour
δΩ of the target region Ω and ∇I⊥p is the isophote (direction and intensity) at point p. The entire image is denoted with I.
they are defined as follows:
C(p) = ( Σq∈Ψp∩(I−Ω) C(q) ) / |Ψp| ,    D(p) = |∇I⊥p · np| / α

where |Ψp| is the area of Ψp, α is a normalization factor (e.g., α = 255 for a typical grey-level image), np is a unit vector orthogonal to the front δΩ at the point p and ⊥ denotes the orthogonal operator. The priority P(p) is computed for every border patch, with distinct patches for each pixel on the boundary of the target region.
During initialization, the function C(p) is set to C(p) = 0
∀p ∈ Ω, and C(p) = 1 ∀p ∈ I − Ω.
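The two terms can be sketched directly from these definitions. The following is an illustrative NumPy version; the array names, the 9 × 9 half-window default and the gradient convention are our own choices, not from the paper.

```python
import numpy as np

def confidence_term(conf, target, p, half=4):
    """C(p): summed confidence over Psi_p intersected with (I - Omega),
    divided by the patch area |Psi_p| (a 9x9 patch for half=4)."""
    r, c = p
    patch_conf = conf[r - half:r + half + 1, c - half:c + half + 1]
    in_source = ~target[r - half:r + half + 1, c - half:c + half + 1]
    return (patch_conf * in_source).sum() / patch_conf.size

def data_term(grad_x, grad_y, n, p, alpha=255.0):
    """D(p) = |isophote . n_p| / alpha. The isophote direction is the
    image gradient rotated by 90 degrees: (-grad_y, grad_x)."""
    r, c = p
    isophote = np.array([-grad_y[r, c], grad_x[r, c]])
    return abs(isophote @ n) / alpha

def priority(conf, target, grad_x, grad_y, n, p):
    """P(p) = C(p) * D(p), eq. (1)."""
    return confidence_term(conf, target, p) * data_term(grad_x, grad_y, n, p)
```

With `conf` initialized to 1 in Φ and 0 in Ω as described above, the `in_source` mask is strictly redundant; it is kept here to make the restriction to Ψp ∩ (I − Ω) explicit.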
The confidence term C(p) may be thought of as a measure of
the amount of reliable information surrounding the pixel p. The
intention is to fill first those patches which have more of their
pixels already filled, with additional preference given to pixels
that were filled early on (or that were never part of the target
region).
As will be illustrated in fig. 6a, this automatically incorporates preference towards certain shapes of the fill front. For
example, patches that include corners and thin tendrils of the
target region will tend to be filled first, as they are surrounded
by more pixels from the original image. These patches provide
more reliable information against which to match. Conversely,
patches at the tip of “peninsulas” of filled pixels jutting into the
target region will tend to be set aside until more of the surround-
ing pixels are filled in.
At a coarse level, the term C(p) of (1) approximately en-
forces the desirable concentric fill order. As filling proceeds,
pixels in the outer layers of the target region will tend to be char-
acterized by greater confidence values, and therefore be filled
earlier; pixels in the centre of the target region will have lesser
confidence values.
The data term D(p) is a function of the strength of isophotes
hitting the front δΩ at each iteration. This term boosts the pri-
ority of a patch that an isophote “flows” into. This factor is of
fundamental importance in our algorithm because it encourages
linear structures to be synthesized first, and, therefore propa-
gated securely into the target region. Broken lines tend to con-
nect, thus realizing the “Connectivity Principle” of vision psy-
chology [7], [20] (cf. fig. 7, fig. 11f’, fig. 13b and fig. 20f’).
2. Propagating texture and structure information. Once all
priorities on the fill front have been computed, the patch Ψˆp
with highest priority is found. We then fill it with data extracted
from the source region Φ.
In traditional inpainting techniques, pixel-value information
is propagated via diffusion. As noted previously, diffusion nec-
essarily leads to image smoothing, which results in blurry fill-in,
especially of large regions (see fig. 15f).
On the contrary, we propagate image texture by direct sampling of the source region. Similar to [12], we search in the source region for that patch which is most similar to Ψˆp.² Formally,

Ψˆq = arg minΨq∈Φ d(Ψˆp, Ψq)    (2)
where the distance d(Ψa, Ψb) between two generic patches Ψa
and Ψb is simply defined as the sum of squared differences
(SSD) of the already filled pixels in the two patches. Pixel
colours are represented in the CIE Lab colour space [21] because of its property of perceptual uniformity³.
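The distance d of eq. (2) reduces to a few lines of NumPy. This is a sketch with our own names; per the text, the patches would be converted to CIE Lab before comparison.

```python
import numpy as np

def patch_ssd(a: np.ndarray, b: np.ndarray, filled_mask: np.ndarray) -> float:
    """Sum of squared differences between patches `a` and `b`, restricted
    to the already-filled pixels marked True in `filled_mask`. Works for
    both grayscale (H, W) and colour (H, W, 3) patches."""
    diff = (a.astype(float) - b.astype(float))[filled_mask]
    return float((diff ** 2).sum())
```

Boolean indexing with `filled_mask` selects whole pixels, so for colour patches the squared differences are summed over all channels of each filled pixel.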
Having found the source exemplar Ψˆq, the value of each
pixel-to-be-filled, p |p ∈ Ψˆp∩Ω, is copied from its corre-
sponding position inside Ψˆq.
This suffices to achieve the propagation of both structure and
texture information from the source Φ to the target region Ω,
one patch at a time (cf., fig. 2d). In fact, we note that any further
manipulation of the pixel values (e.g., adding noise, smoothing
etc.) that does not explicitly depend upon statistics of the source
region, is more likely to degrade visual similarity between the
filled region and the source region, than to improve it.
3. Updating confidence values. After the patch Ψˆp has been
filled with new pixel values, the confidence C(p) is updated in
the area delimited by Ψˆp as follows:
C(p) = C(ˆp) ∀p ∈ Ψˆp ∩ Ω.
This simple update rule allows us to measure the relative confi-
dence of patches on the fill front, without image-specific param-
eters. As filling proceeds, confidence values decay, indicating
that we are less sure of the colour values of pixels near the cen-
tre of the target region.
A pseudo-code description of the algorithmic steps is shown
in table I. The superscript t indicates the current iteration.
Some properties of our region-filling algorithm. As illustrated
in fig. 6a, the effect of the confidence term is that of smoothing
the contour of the target region by removing sharp appendices
and making the target contour close to circular. Also, in fig. 6a it
can be noticed that inwards-pointing appendices are discouraged
by the confidence term (red corresponds to low priority pixels).
Unlike previous approaches, the presence of the data term in
the priority function (1) tends to favour inwards-growing appen-
dices in the places where structures hit the contour (green pix-
els in fig. 6b), thus achieving the desired structure propagation.
²Valid patches must be entirely contained in Φ.
³Euclidean distances in Lab colour space are more meaningful than in RGB space.
Fig. 6. Effects of data and confidence terms. (a) The confidence term assigns
high filling priority to out-pointing appendices (in green) and low priority to
in-pointing ones (in red), thus trying to achieve a smooth and roughly cir-
cular target boundary. (b) The data term gives high priority to pixels on the
continuation of image structures (in green) and has the effect of favouring
in-pointing appendices in the direction of incoming structures. The com-
bination of the two terms in Eq. (1) produces the desired organic balance
between the two effects, where the inwards growth of image structures is
enforced with moderation.
• Extract the manually selected initial front δΩ⁰.
• Repeat until done:
  1a. Identify the fill front δΩᵗ. If Ωᵗ = ∅, exit.
  1b. Compute priorities P(p) ∀p ∈ δΩᵗ.
  2a. Find the patch Ψˆp with the maximum priority, i.e., ˆp = arg maxp∈δΩᵗ P(p).
  2b. Find the exemplar Ψˆq ∈ Φ that minimizes d(Ψˆp, Ψˆq).
  2c. Copy image data from Ψˆq to Ψˆp ∀p ∈ Ψˆp ∩ Ω.
  3. Update C(p) ∀p ∈ Ψˆp ∩ Ω.
TABLE I
Region-filling algorithm.
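The steps of Table I can be turned into a compact, self-contained toy implementation. The sketch below is our own simplification, not the paper's code: grayscale images only, exhaustive exemplar search, the priority reduced to the confidence term alone (the data term of eq. (1) is omitted), and the hole is assumed to lie away from the image border. It illustrates the control flow rather than the full algorithm.

```python
import numpy as np

def simple_fill(image, target, half=1):
    """Toy region filling: loop of Table I with priority = confidence only."""
    image = image.astype(float).copy()
    target = target.astype(bool).copy()
    conf = (~target).astype(float)          # C = 1 in Phi, 0 in Omega
    size = 2 * half + 1
    H, W = image.shape

    def patch(a, r, c):
        return a[r - half:r + half + 1, c - half:c + half + 1]

    while target.any():
        # 1a. fill front: target pixels with a non-target 4-neighbour
        pad = np.pad(target, 1, constant_values=False)
        front = target & (~pad[:-2, 1:-1] | ~pad[2:, 1:-1] |
                          ~pad[1:-1, :-2] | ~pad[1:-1, 2:])
        # 1b/2a. pick the front pixel with highest mean patch confidence
        best_p, best_pr = None, -1.0
        for r, c in np.argwhere(front):
            if half <= r < H - half and half <= c < W - half:
                pr = patch(conf, r, c).mean()
                if pr > best_pr:
                    best_pr, best_p = pr, (int(r), int(c))
        r, c = best_p
        tmask = patch(target, r, c).copy()   # pixels of Psi_p still in Omega
        known = ~tmask
        tp = patch(image, r, c)
        # 2b. exhaustive SSD search over source-only candidates (eq. 2)
        best_q, best_d = None, np.inf
        for i in range(H - size + 1):
            for j in range(W - size + 1):
                if target[i:i + size, j:j + size].any():
                    continue                 # candidate must lie in Phi
                d = ((image[i:i + size, j:j + size] - tp)[known] ** 2).sum()
                if d < best_d:
                    best_d, best_q = d, (i, j)
        i, j = best_q
        # 2c/3. copy colours, propagate confidence, shrink the target
        tp[tmask] = image[i:i + size, j:j + size][tmask]
        patch(conf, r, c)[tmask] = best_pr
        patch(target, r, c)[tmask] = False
    return image
```

On a periodic test pattern (e.g. vertical stripes) with a small hole, this loop recovers the original pixels exactly, because a zero-SSD exemplar always exists in the source region.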
But, as mentioned, the pixels of the target region in the proxim-
ity of those appendices are surrounded by little confidence (most
neighbouring pixels are un-filled), and therefore, the “push” due
to image edges is mitigated by the confidence term. As pre-
sented in the results section, this achieves a graceful and auto-
matic balance of effects and an organic synthesis of the target
region via the mechanism of a single priority computation for
all patches on the fill front. Notice that (1) only dictates the or-
der in which filling happens. The use of image patches for the
actual filling achieves texture synthesis [11].
Furthermore, since the fill order of the target region is dic-
tated solely by the priority function P(p), we avoid having to
predefine an arbitrary fill order as done in existing patch-based
approaches [11], [22]. Our fill order is a function of image properties, resulting in an organic synthesis process that eliminates the
risk of “broken-structure” artefacts (as in fig. 11f). Furthermore,
since the gradient-based guidance tends to propagate strong
edges, blocky and mis-alignment artefacts are reduced (though
not completely eliminated), without a patch-cutting (quilting)
step [11] or a blur-inducing blending step [22].
It must be stressed that our algorithm uses neither explicit nor implicit segmentation at any stage. For instance, the gradient operator in (1) is never thresholded and real-valued numbers are employed.
Implementation details. In our implementation the contour δΩ
of the target region is modelled as a dense list of image point
locations. These points are interactively selected by the user via
a simple drawing interface. Given a point p ∈ δΩ, the nor-
mal direction np is computed as follows: i) the positions of the
“control” points of δΩ are filtered via a bi-dimensional Gaussian
kernel and, ii) np is estimated as the unit vector orthogonal to
the line through the preceding and the successive points in the
list. Alternative implementations may make use of curve model fitting. The gradient ∇Ip is computed as the maximum value of the image gradient in Ψp ∩ I. Robust filtering techniques may
also be employed here. Finally, pixels are classified as belong-
ing to the target region Ω, the source region Φ or the remainder
of the image by assigning different values to their alpha compo-
nent. The image alpha channel is, therefore, updated (locally) at
each iteration of the filling algorithm.
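The two-step normal estimation described above might be sketched as follows. The kernel width and the closed-contour assumption are our own illustrative choices.

```python
import numpy as np

def contour_normals(points: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """points: (N, 2) array of (row, col) locations ordered along a closed
    contour. Returns (N, 2) unit normals: (i) smooth the control points
    with a 1-D Gaussian, (ii) take the unit vector orthogonal to the line
    through each point's predecessor and successor."""
    n = len(points)
    radius = int(3 * sigma)
    offsets = np.arange(-radius, radius + 1)
    kernel = np.exp(-offsets ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # Gaussian smoothing along the (wrapped) contour index
    idx = (np.arange(n)[:, None] + offsets[None, :]) % n
    smooth = (points[idx] * kernel[None, :, None]).sum(axis=1)
    # tangent via central differences, normal = tangent rotated 90 degrees
    tangent = np.roll(smooth, -1, axis=0) - np.roll(smooth, 1, axis=0)
    normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)
    norm = np.linalg.norm(normal, axis=1, keepdims=True)
    return normal / np.where(norm == 0, 1, norm)
```

For a circular contour the returned normals are radial, which is an easy sanity check of the construction.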
IV. RESULTS AND COMPARISONS
Here we apply our algorithm to a variety of images, ranging
from purely synthetic images to full-colour photographs that in-
clude complex textures. Where possible, we make side-by-side
comparisons to previously proposed methods. In other cases,
we hope the reader will refer to the original source of our test
images (many are taken from previous literature on inpainting
and texture synthesis) and compare these results with the results
of earlier work.
In all of the experiments, the patch size was set to be greater
than the largest texel or the thickest structure (e.g., edges) in the
source region. Furthermore, unless otherwise stated the source
region has been set to be Φ = I − Ω. All experiments were run
on a 2.5GHz Pentium IV with 1GB of RAM.
The Kanizsa triangle and the Connectivity Principle. We
perform our first experiment on the well-known Kanizsa trian-
gle [20] to show how the algorithm works on a structure-rich
synthetic image.
As shown in fig. 7, our algorithm deforms the fill front δΩ
under the action of two forces: isophote continuation (the data
term, D(p)) and the “pressure” from surrounding filled pixels
(the confidence term, C(p)).
The sharp linear structures of the incomplete green triangle
are grown into the target region. But also, no single struc-
tural element dominates all of the others. This balance among
competing isophotes is achieved through the naturally decaying
confidence values (fig 10 will illustrate the large-scale artefacts
which arise when this balance is missing). Figures 7e,f also
show the effect of the confidence term in smoothing sharp ap-
pendices such as the vertices of the target region.
Fig. 7. Realization of the “Connectivity Principle” on a synthetic example.
(a) Original image, the “Kanizsa triangle” with random noise added. (b) The
occluding white triangle in the original image has been manually selected as
the target region (24% of total image area) and marked with a red boundary.
(c. . .f) Different stages of the filling process. (d) Notice that strong edges
are pushed inside the target region first and that sharp appendices (e.g., the
vertices of the selected triangle) are rapidly smoothed. (f) When no struc-
tures hit the front δΩ the target region evolves in a roughly circular shape.
(g) The output image where the target region has been filled, i.e., the oc-
cluding triangle removed. Little imperfections are present in the curvature
of the circles in the reconstructed areas, while the sides of the internal trian-
gle have been correctly connected. The blur typical of diffusion techniques
is completely avoided. See figs. 11, 13, 20 for further examples of structural
continuation.
As described above, the confidence is propagated in a man-
ner similar to the front-propagation algorithms used in inpaint-
ing. We stress, however, that unlike inpainting, it is the con-
fidence values that are propagated along the front (and which
determine fill order), not colour values themselves, which are
sampled from the source region.
Finally, we note that despite the large size of the removed
region, edges and lines in the filled region are as sharp as any
found in the source region; i.e., there is no diffusion-related blur.
This is a property of exemplar-based texture synthesis.
Comparing different filling orders. Figures 8, 9 and 11
demonstrate the effect of different filling strategies.
Fig. 8. Effect of filling order on a synthetic image. (a) The original image;
(b) The target region has been selected and marked with a red boundary;
(c) Filling the target region in raster-scan order; (d) Filling by concentric
layers; (e) The result of applying Harrison’s technique which took 2 45 ;
(f) Filling with our algorithm which took 5 . Notice that even though the
triangle upper vertex is not complete our technique performs better than the
others.
Figure 8f shows how our filling algorithm achieves the best
structural continuation in a simple, synthetic image. Our syn-
thesis algorithm has been compared with three other existing
techniques.
Also, as stated previously, if only the edge (data) term is used, then the overshoot artefact may arise, as demonstrated in fig. 10.
Figure 9 further demonstrates the validity of our algorithm
on an aerial photograph. The 40 × 40-pixel target region has
been selected to straddle two different textures (fig. 9b). The
remainder of the 200 × 200 image in fig. 9a was used as source
for all the experiments in fig. 9.
With raster-scan synthesis (fig. 9c) not only does the top re-
gion (the river) grow into the bottom one (the city area), but
visible seams also appear at the bottom of the target region.
This problem is only partially addressed by a concentric filling
(fig 9d). Similarly, in fig. 9e the sophisticated ordering proposed
by Harrison [15] only moderately succeeds in preventing this
phenomenon.
In all of these cases, the primary difficulty is that since the
(eventual) texture boundary is the most constrained part of the
target region, it should be filled first. But, unless this is explic-
itly addressed in determining the fill order, the texture boundary
is often the last part to be filled. The algorithm proposed in this
paper is designed to address this problem, and thus more natu-
rally extends the contour between the two textures as well as the
vertical grey road in the figure.
In the example in fig. 9, our algorithm synthesizes the target
region in only 2 seconds. Harrison’s resynthesizer [15], which
is the nearest in quality, requires approximately 45 seconds.
Fig. 9. Effect of filling order on an aerial photograph. (a) The original
image, an aerial view of London. (b) The target region has been selected and
marked with a red boundary; Notice that it straddles two different textures;
(c) Filling with raster-scan order; (d) Filling by concentric layers; (e) The
result of applying Harrison’s technique (performed in 45″); (f) Filling with our algorithm (performed in 2″). See text for details.
Fig. 10. The “overshoot” artefact. The use of the data term only in the priority
function may lead to undesired edge “over-shoot” artefacts. This is due to
the fact that some edges may grow indiscriminately. A balance between
structure and texture synthesis is highly desirable and achieved in this paper.
cf. fig 8f.
Figure 11 shows yet another comparison between the con-
centric filling strategy and the proposed algorithm. In the pres-
ence of concave target regions, the “onion peel” filling may lead
to visible artefacts such as unrealistically broken structures (see
the pole in fig. 11f). Conversely, the presence of the data term
of (1) encourages the edges of the pole to grow “first” inside the
target region and thus correctly reconstruct the complete pole
(fig. 11f’). This example demonstrates the robustness of the pro-
posed algorithm with respect to the shape of the selected target
region.
Figure 12 shows a visualization of the priority function re-
lated to the example in fig 11c’,...,f’. Notice that due to the
data term of (1) the pixels belonging to the reconstructed pole
are characterized by larger (brighter) values of P(p). Similarly,
pixels on thin appendices (larger confidence) of the target region
Fig. 11. Onion peel vs. structure-guided filling. (a) Original image. (b) The
target region has been selected and marked with a red boundary. (c,d,e,f)
Results of filling by concentric layers. (c’,d’,e’,f’) Results of filling with
our algorithm. Thanks to the data term in (1) the sign pole is reconstructed
correctly by our algorithm.
tend to have large priority values associated with them.
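For readers who want to experiment with the filling order, the priority computation can be sketched in a few lines. This is an illustrative re-implementation, not the authors' code: the 9×9 patch and the normalization factor α = 255 follow the paper's stated defaults for grey-level images, while the array names and the front-detection step are our own assumptions.

```python
import numpy as np

def priorities(confidence, filled, isophote, normal, alpha=255.0, patch=9):
    """P(p) = C(p) * D(p) for every pixel p on the fill front.

    confidence : float array, 1.0 in the source region, 0.0 in the target
    filled     : bool array, True where pixel values are known
    isophote   : (H, W, 2) array, image gradient rotated by 90 degrees
    normal     : (H, W, 2) array, unit normal to the fill front
    """
    h, w = confidence.shape
    half = patch // 2
    P = np.zeros((h, w))
    # Fill front: unfilled pixels with at least one filled 4-neighbour.
    front = ~filled & (np.roll(filled, 1, 0) | np.roll(filled, -1, 0) |
                       np.roll(filled, 1, 1) | np.roll(filled, -1, 1))
    for y, x in zip(*np.nonzero(front)):
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        # Confidence term: fraction of reliable information in the patch.
        C = confidence[y0:y1, x0:x1].sum() / ((y1 - y0) * (x1 - x0))
        # Data term: strength of the isophote flowing across the front.
        D = abs(float(np.dot(isophote[y, x], normal[y, x]))) / alpha
        P[y, x] = C * D
    return P
```

With this definition, front pixels on thin appendices (patches overlapping many filled pixels) and front pixels hit by strong isophotes both receive large priorities, which is the behaviour visualized in fig. 12.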
Comparisons with diffusion-based inpainting. We now turn
to some examples from the inpainting literature. The first two
examples show that our approach works at least as well as in-
painting.
The first (fig. 13) is a synthetic image of two ellipses [4]. The
occluding white torus is removed from the input image and the
two dark background ellipses reconstructed via our algorithm
(fig. 13b). This example was chosen by the authors of the original
work on inpainting to illustrate the structure propagation capabilities
of their algorithm. Our results are visually identical to those obtained
by inpainting (cf. fig. 4 in [4]).
Fig. 12. Priority function for the example in fig. 11f’. (a) The priorities
associated to the target region in fig. 11 are initialized as 0 inside (dark
pixels) and 1 outside (bright pixels). (b) The final priorities at the end of
the filling process. Notice that larger values of the priority function P(p)
are associated with pixels on the continuation of strong edges (i.e., the pole)
and on thin outward-pointing appendices.
Fig. 13. Comparison with traditional structure inpainting. (a) Original
image from [4]. The target region is the white ellipse in the centre. (b)
Object removal and structure recovery via our algorithm.
We now compare results of the restoration of a hand-drawn
image. In fig. 14 the aim is to remove the foreground text. Our
results (fig. 14b) are mostly indistinguishable from those obtained
by traditional inpainting (see footnote 4). This example demonstrates
the effectiveness of both techniques in image restoration appli-
cations.
It is in real photographs with large objects to remove, how-
ever, that the real advantages of our approach become apparent.
Figure 15 shows an example on a real photograph, of a bungee
jumper in mid-jump (from [4], fig.8). In the original work, the
thin bungee cord is removed from the image via inpainting. In
order to prove the capabilities of our algorithm we removed the
entire person (fig. 15e). Structures such as the shore line and the
edge of the house have been automatically propagated into the
target region along with plausible textures of shrubbery, water
and roof tiles; and all this with no a priori model of anything
specific to this image.
For comparison, figure 15f shows the result of filling the same
target region (fig. 15b) by image inpainting (see footnote 5). Considerable blur
is introduced into the target region because of inpainting’s use of
diffusion to propagate colour values. Moreover, high-frequency
textural information is entirely absent.
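The source of this blur is easy to demonstrate: diffusion-based methods propagate colour by repeatedly averaging neighbouring values, i.e. by iterating a discrete heat equation. The sketch below is a deliberately naive stand-in of our own, not Bertalmio et al.'s actual algorithm (which transports information along isophotes), but it exhibits the same low-pass behaviour: boundary structure leaks inwards while all texture is smoothed away.

```python
import numpy as np

def diffuse_fill(image, mask, iters=500):
    """Naive diffusion inpainting: repeatedly replace each unknown pixel
    with the average of its four neighbours (a heat-equation step).
    `mask` is True inside the target region; only those pixels change.
    """
    out = image.astype(float).copy()
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]  # diffuse only into the target region
    return out
```

Because every update is an average, the filled region converges to a smooth (harmonic) interpolation of the boundary values: adequate for thin scratches, hopeless for the shrubbery and water of fig. 15.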
Figure 16 compares our algorithm to the recent “texture and
structure inpainting” technique described in [5]. The images in
fig. 16a and fig. 16b are reproduced from [5]. Figure 16c shows the
result of our filling algorithm and demonstrates that our
4. www.ece.umn.edu/users/marcelo/restoration4.html
5. 120,000 iterations were run using the implementation in www.bantha.org/~aj/inpainting/
A. CRIMINISI, P. PÉREZ AND K. TOYAMA: OBJECT REMOVAL BY EXEMPLAR-BASED INPAINTING
Fig. 14. Image restoration example. (a) Original image. The text occupies
9% of the total image area. (b) Result of text removal via our algorithm. (c)
Detail of (a). (d) Result of filling the “S” via our algorithm: we also achieve
structure propagation. (e) Result of filling the “S” via traditional
image-inpainting.
technique accomplishes the simultaneous propagation of struc-
ture and texture inside the selected target region. Moreover,
the absence of a diffusion step in our method avoids blurring
propagated structures (see the vertical edge in the encircled region)
and makes the algorithm more computationally efficient.
Comparison with Drori et al. “Fragment-based Image Com-
pletion”. Figure 17 shows the results of our region-filling al-
gorithm on one of the examples used in [10]. As can be noticed
by comparing fig. 17d and the last image in fig. 13 of [10],
our algorithm does not introduce the edge blur that character-
izes the latter figure. In fig. 17d the sharpness of the table edge
is retained since no smoothing is introduced at any stage of our
filling algorithm.
Comparison with Jia et al. “Image Repairing”. Figure 18
compares the results of our region-filling algorithm with those
obtained by Jia et al. in [19]. The image in fig. 18c has been ob-
tained by the region-filling algorithm in [19], and fig. 18d shows
the result of our algorithm. Notice that our algorithm succeeds
in filling the target region without implicit or explicit
segmentation.
Synthesizing composite textures. Fig. 19 demonstrates that our
algorithm also behaves well at the boundary between two different
textures, such as the ones analyzed in [26]. The original
image in fig. 19a is one of the examples used in [26]. The tar-
get region selected in fig. 19c straddles two different textures.
The quality of the “knitting” in the contour reconstructed via
our approach (fig. 19d) is similar to the original image and to
the results obtained in the original work (fig. 19b), but again,
this has been accomplished without complicated texture models
or a separate boundary-specific texture synthesis algorithm.
It must be stressed that in cases such as that of fig. 19c, where
the band around the target region does not present dominant gradient
directions, the filling process proceeds in a roughly uniform
way, similar to the concentric-layer approach. This is achieved
automatically through the priority equation (1).
Fig. 15. Removing large objects from photographs. (a) Original image from
[4], 205 × 307 pix. (b) The target region (in white with red boundary)
covers 12% of the total image area. (c,d) Different stages of the filling
process. Notice how the isophotes hitting the boundary of the target region
are propagated inwards, while thin appendices (e.g., the arms) in the target
region tend to disappear quickly. (e) The final image, where the bungee
jumper has been completely removed and the occluded region reconstructed
by our automatic algorithm (performed in 18 seconds, to be compared with
the 10 minutes required by Harrison’s resynthesizer). (f) The result of
region filling by traditional image inpainting. Notice the blur introduced
by the diffusion process and the complete lack of texture in the synthesized
area.
Further examples on photographs. We show more examples
on photographs of real scenes.
Figure 20 demonstrates, again, the advantage of the proposed
approach in preventing structural artefacts. While the onion-
peel approach produces a deformed horizon (fig. 20f), our algo-
rithm reconstructs the boundary between sky and sea as a con-
vincing straight line (fig. 20f’). During the filling process the
Fig. 16. Comparison with “texture and structure inpainting” [5]. (a) Orig-
inal image. The target regions are marked in white. (b) Region filling via
“Simultaneous Structure and Texture Propagation”. Notice the blur of the
edge in the circled region. (c) The result of our algorithm. Both structure
and texture have been nicely propagated inside the target region. The edge
in the circled region is noticeably sharper.
Fig. 17. Comparison with “Fragment-Based Image Completion” [10]. (a)
A photo of the oil painting “Still Life with Apples”, P. Cézanne, c. 1890,
The Hermitage, St. Petersburg. (b) The manually selected target region. (c)
The result of our automatic region-filling algorithm. (d) The data which has
been used to fill the target region. Notice the sharply reconstructed table
edge; see text for details.
topological changes of the target region are handled effortlessly.
In fig. 21, the foreground person has been manually se-
lected and the corresponding region filled in automatically. The
synthesized region in the output image convincingly mimics
the complex background texture with no prominent artefacts
(fig. 21f).
Finally, figs. 22–27 present a gallery of further examples of
object removal and region filling from real photographs. These
results demonstrate the power and versatility of our algorithm.
V. CONCLUSION AND FUTURE WORK
This paper has presented a novel algorithm for removing large
objects from digital photographs. The result is an image in
which the selected object has been replaced by a visually plausi-
Fig. 18. Comparison with Jia et al. “Image Repairing” [19]. (a) Moor,
original input image. (b) The manually selected target region. (c) The
resulting region-filling achieved by Jia et al. (d) The result of our
region-filling algorithm. The missing portion of rainbow is reconstructed
convincingly. Figures (c) and (d) are of comparable quality, but our
algorithm avoids the image segmentation step, with a considerable increase
in speed.
Fig. 19. Comparison with “Parallel Composite Texture Synthesis” [26]. (a)
Original image, the fur of a zebra. (b) The result of the synthesis algorithm
described in “Parallel Composite Texture Synthesis”. (c) Original image
with the target region marked with a red boundary (22% of total image
size). (d) The target region has been filled via our algorithm. The “knitting”
effect along the boundary between the two textures is correctly reproduced
by our technique as well.
ble background that mimics the appearance of the source region.
Our approach employs an exemplar-based texture synthesis
technique modulated by a unified scheme for determining the fill
order of the target region. Pixels maintain a confidence value which,
together with image isophotes, influences their fill priority.
The technique is capable of propagating both linear structure
and two-dimensional texture into the target region with a single,
simple algorithm. Comparative experiments show that a simple
selection of the fill order is necessary and sufficient to handle
this task.
Fig. 20. Concentric-layer filling vs. the proposed guided filling algorithm.
(a) Original image. (b) The manually selected target region (20% of the
total image area) has been marked in white with a red boundary. (c,d,e,f)
Intermediate stages in the concentric-layer filling. The deformation of the
horizon is caused by the fact that sky and sea grow inwards at uniform
speed, so the reconstructed sky-sea boundary tends to follow the skeleton
of the selected target region. (c’,d’,e’,f’) Intermediate stages in the filling
by the proposed algorithm, where the horizon is correctly reconstructed as
a straight line.
Our method performs at least as well as previous techniques
designed for the restoration of small scratches, and, in instances
in which larger objects are removed, it dramatically outperforms
earlier work in terms of both perceptual quality and computa-
tional efficiency.
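The best-first loop summarized above can be sketched compactly. The grayscale implementation below is ours, for illustration only: it keeps the confidence propagation and an exhaustive SSD exemplar search, but as simplifications it drops the isophote data term of (1), works on a single channel rather than in a perceptual colour space, and searches the whole image instead of a band around the target region.

```python
import numpy as np

def exemplar_fill(image, target, patch=7):
    """Greedy best-first exemplar fill (grayscale sketch).

    image  : 2-D float array; values inside `target` are ignored
    target : bool array, True for the pixels to be synthesized
    """
    img = image.astype(float).copy()
    filled = ~target
    conf = filled.astype(float)          # confidence: 1 outside, 0 inside
    half = patch // 2
    h, w = img.shape
    while not filled.all():
        # 1. Fill front: unfilled pixels touching a filled 4-neighbour.
        front = ~filled & (np.roll(filled, 1, 0) | np.roll(filled, -1, 0) |
                           np.roll(filled, 1, 1) | np.roll(filled, -1, 1))
        # 2. Pick the front pixel with the highest patch confidence.
        best_p, best_c = None, -1.0
        for y, x in zip(*np.nonzero(front)):
            y0, y1 = max(0, y - half), min(h, y + half + 1)
            x0, x1 = max(0, x - half), min(w, x + half + 1)
            c = conf[y0:y1, x0:x1].mean()
            if c > best_c:
                best_p, best_c = (y, x), c
        y, x = best_p
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        known = filled[y0:y1, x0:x1]
        ph, pw = y1 - y0, x1 - x0
        # 3. Best exemplar: minimal SSD over the known patch pixels,
        #    sampled only from fully-known source patches.
        best_ssd, src = np.inf, None
        for sy in range(h - ph + 1):
            for sx in range(w - pw + 1):
                if not filled[sy:sy + ph, sx:sx + pw].all():
                    continue
                cand = img[sy:sy + ph, sx:sx + pw]
                ssd = ((cand - img[y0:y1, x0:x1])[known] ** 2).sum()
                if ssd < best_ssd:
                    best_ssd, src = ssd, cand
        # 4. Copy the missing pixels and propagate confidence.
        hole = ~known
        img[y0:y1, x0:x1][hole] = src[hole]
        conf[y0:y1, x0:x1][hole] = best_c
        filled[y0:y1, x0:x1] = True
    return img
```

The full algorithm replaces the score in step 2 with P(p) = C(p)D(p) and accelerates step 3 with block-based sampling; both changes slot into the same loop.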
Moreover, robustness towards changes in shape and topology of the
target region has been demonstrated, together with other advantageous
properties such as: (i) preservation of edge sharpness, (ii) no
dependency on image segmentation, and (iii) balanced region filling to
avoid over-shooting artefacts.
Also, patch-based filling helps achieve: (i) speed efficiency, (ii)
accuracy in the synthesis of texture (less garbage growing), and finally
(iii) accurate propagation of linear structures.
Limitations of our technique are: (i) the synthesis of regions for which
similar patches do not exist does not produce reasonable results (a
problem common to [10], [19]); (ii) the algorithm is not designed to
handle curved structures; and (iii) as in [10], [19], our algorithm does
not handle depth ambiguities (i.e., what is in front of what in the
occluded area?).
Fig. 21. Removing large objects from photographs. (a) Original image.
(b) The target region (10% of the total image area) has been blanked out.
(c...e) Intermediate stages of the filling process. (f) The target region has
been completely filled and the selected object removed. The source region
has been automatically selected as a band around the target region. The
edges of the stones have been nicely propagated inside the target region
together with the water texture.
Fig. 22. Removing an object on a highly textured background. (a) Original
photograph. (b) One of the two people has been removed. This demonstrates
that our algorithm works correctly also for the (simpler) case of “pure”
texture.
Fig. 23. Region-filling on an image of a text. (a) Original photo of
typewritten text, a favourite example of texture synthesis researchers. (b) A
portion of the image (marked with a red boundary) has been removed. (c)
The result of our filling algorithm. (d) As in (c), with just the text
synthesized in the target region highlighted. Even though the generated text
does not make much sense, it still looks plausible. This example further
demonstrates the correct behaviour of our algorithm in the case of pure
texture synthesis.
Fig. 24. Removing several objects from a photograph. (a) Original image, a
photograph from Ghana. Courtesy of P. Anandan. (b,c) The crowd of people
and other objects are gradually removed by our algorithm. The source
regions have been selected as dilated bands around the target regions.
Background texture and structure have seamlessly replaced the original
characters.
Fig. 25. Removing multiple objects from photographs. (a) Original
photograph of Kirkland (WA). (b,c,d) Several objects are sequentially
removed. Notice how well the shore line and other structures have been
reconstructed.
Fig. 26. Completing panoramas. (a) A panorama of Keble College in Oxford.
Courtesy of D. Capel and A. Zisserman [7]. (b) The “bow-tie” effect which
characterizes image stitching has been removed here by automatic region
in-filling.
Fig. 27. Final examples of object removal from photographs. (a,c) Original
images. (b,d) After object removal. Objects and people are removed by
realistic edge and texture synthesis in the selected target regions.
Currently, we are investigating the possibility of constructing
ground-truth data and designing evaluation tests that would al-
low us to quantify the performance of each algorithm. This turns
out to be a non-trivial task. Furthermore, we are investigating
extensions of the current algorithm to handle accurate propagation
of curved structures in still photographs, as well as removing
objects from video, which promises to pose an entirely new set of
challenges.
Acknowledgements. The authors would like to thank
M. Gangnet, A. Blake, P. Anandan and R. Bornard for inspir-
ing discussions; and G. Sapiro, M. Bertalmio, D. Capel, A. Zis-
serman, L. van Gool and A. Zalesny for making some of their
images available.
REFERENCES
[1] M. Ashikhmin. Synthesizing natural textures. In Proc. ACM Symposium
on Interactive 3D Graphics, pages 217–226, Research Triangle Park, NC,
March 2001.
[2] C. Ballester, V. Caselles, J. Verdera, M. Bertalmio, and G. Sapiro. A vari-
ational model for filling-in gray level and color images. In Proc. Int. Conf.
Computer Vision, pages I: 10–16, Vancouver, Canada, June 2001.
[3] M. Bertalmio, A.L. Bertozzi, and G. Sapiro. Navier-stokes, fluid dynam-
ics, and image and video inpainting. In Proc. Conf. Comp. Vision Pattern
Rec., pages I:355–362, Hawaii, December 2001.
[4] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. Image inpaint-
ing. In Proc. ACM Conf. Comp. Graphics (SIGGRAPH), pages 417–424,
New Orleans, LA, July 2000. http://mountains.ece.umn.edu/~guille/inpainting.htm.
[5] M. Bertalmio, L. Vese, G. Sapiro, and S. Osher. Simultaneous struc-
ture and texture image inpainting. In Proc. Conf. Comp. Vision Pattern
Rec., Madison, WI, 2003. http://mountains.ece.umn.edu/~guille/inpainting.htm.
[6] R. Bornard, E. Lecan, L. Laborelli, and J-H. Chenot. Missing data cor-
rection in still images and image sequences. In ACM Multimedia, France,
December 2002.
[7] T. F. Chan and J. Shen. Non-texture inpainting by curvature-driven diffu-
sions (CDD). J. Visual Comm. Image Rep., 4(12):436–449, 2001.
[8] A. Criminisi, P. Perez, and K. Toyama. Object removal by exemplar-based
inpainting. In Proc. Conf. Comp. Vision Pattern Rec., Madison, WI, Jun
2003.
[9] J.S. de Bonet. Multiresolution sampling procedure for analysis and synthe-
sis of texture images. In Proc. ACM Conf. Comp. Graphics (SIGGRAPH),
volume 31, pages 361–368, 1997.
[10] I. Drori, D. Cohen-Or, and H. Yeshurun. Fragment-based image comple-
tion. In ACM Trans. on Graphics (SIGGRAPH 2003 issue), 22(3), vol-
ume 22, pages 303–312, San Diego, US, 2003.
[11] A. Efros and W.T. Freeman. Image quilting for texture synthesis and trans-
fer. In Proc. ACM Conf. Comp. Graphics (SIGGRAPH), pages 341–346,
Eugene Fiume, August 2001.
[12] A. Efros and T. Leung. Texture synthesis by non-parametric sampling.
In Proc. Int. Conf. Computer Vision, pages 1033–1038, Kerkyra, Greece,
September 1999.
[13] W.T. Freeman, E.C. Pasztor, and O.T. Carmichael. Learning low-level
vision. Int. J. Computer Vision, 40(1):25–47, 2000.
[14] D. Garber. Computational Models for Texture Analysis and Texture Syn-
thesis. PhD thesis, University of Southern California, USA, 1981.
[15] P. Harrison. A non-hierarchical procedure for re-synthesis of complex
texture. In Proc. Int. Conf. Central Europe Comp. Graphics, Visua. and
Comp. Vision, Plzen, Czech Republic, February 2001.
[16] D.J. Heeger and J.R. Bergen. Pyramid-based texture analysis/synthesis. In
Proc. ACM Conf. Comp. Graphics (SIGGRAPH), volume 29, pages 229–
233, Los Angeles, CA, 1995.
[17] A. Hertzmann, C. Jacobs, N. Oliver, B. Curless, and D. Salesin. Image
analogies. In Proc. ACM Conf. Comp. Graphics (SIGGRAPH), Eugene
Fiume, August 2001.
[18] H. Igehy and L. Pereira. Image replacement through texture synthesis. In
Proc. Int. Conf. Image Processing, pages III:186–190, 1997.
[19] J. Jia and C.-K. Tang. Image repairing: Robust image synthesis by adaptive
nd tensor voting. In Proc. Conf. Comp. Vision Pattern Rec., Madison, WI,
2003.
[20] G. Kanizsa. Organization in Vision. Praeger, New York, 1979.
[21] J. M. Kasson and W. Plouffe. An analysis of selected computer interchange
color spaces. In ACM Transactions on Graphics, volume 11, pages 373–
405, October 1992.
[22] L. Liang, C. Liu, Y.-Q. Xu, B. Guo, and H.-Y. Shum. Real-time texture
synthesis by patch-based sampling. ACM Transactions on Graphics, 2001.
[23] S. Masnou and J.-M. Morel. Level lines based disocclusion. In Int. Conf.
Image Processing, Chicago, 1998.
[24] S. Rane, G. Sapiro, and M. Bertalmio. Structure and texture filling-in of
missing image blocks in wireless transmission and compression applica-
tions. In IEEE. Trans. Image Processing, 2002. to appear.
[25] L.-Y. Wei and M. Levoy. Fast texture synthesis using tree-structured
vector quantization. In Proc. ACM Conf. Comp. Graphics (SIGGRAPH),
2000.
[26] A. Zalesny, V. Ferrari, G. Caenen, and L. van Gool. Parallel composite tex-
ture synthesis. In Texture 2002 workshop - ECCV, Copenhagen, Denmark,
June 2002.