Image inpainting



In this project we have implemented a tool to inpaint selected regions of an image. Inpainting refers to the art of restoring lost parts of an image and reconstructing them from the background information. The tool provides a user interface in which the user can open an image and select the parts of the image to be reconstructed. The tool then automatically inpaints the selected area from the background information, and the result can be saved. The inpainting is based on the exemplar-based approach, whose basic aim is to find examples (i.e. patches) in the image and replace the lost data with them. Applications of this technique include the restoration of old photographs and damaged film; the removal of superimposed text such as dates and subtitles; and the removal of entire objects from the image, such as microphones or wires, in special effects.


Image inpainting

  2. IMAGE INPAINTING <ul><li>Project Supervisor: Dr. Anupam Agrawal </li></ul><ul><li>Project Members: Pulkit Goyal (RIT 2007029), Sapan Diwakar (RIT 2007043) </li></ul>
  3. IMAGE INPAINTING <ul><li>Introduction </li></ul><ul><ul><li>Inpainting is the art of reconstructing the missing portions of an image in order to make it more legible and to restore its unity [1]. </li></ul></ul><ul><ul><li>The aim is to create software that can remove selected portions from an image and fill the hole left behind in a visually plausible way using the background information. </li></ul></ul><ul><ul><li>The field is still in its early stages, and a great deal of research is being carried out to explore this area. </li></ul></ul>
  4. IMAGE INPAINTING <ul><li>The Problem </li></ul><ul><ul><li>With age, photographs often get damaged. To revert this deterioration, we need software that can remove the damaged/scratched regions in an undetectable way. </li></ul></ul><ul><ul><li>When we take a snapshot, an unwanted object may come in between. There is a need for software that can efficiently remove a marked object from the image. </li></ul></ul>Figure 1: Removing unwanted objects from the image
  5. IMAGE INPAINTING <ul><li>The Problem (continued) </li></ul><ul><li>Large areas where much information is lost are harder to reconstruct, because the information in the rest of the image is not enough to give an impression of what is missing. If the human brain cannot imagine what is missing, equations will not manage it either. </li></ul><ul><li>Details that are completely hidden/occluded by the object to be removed cannot be recovered by any mathematical method. </li></ul><ul><li>Therefore the objective of image inpainting is not to recover the original image, but to create an image that closely resembles the original. </li></ul>
  6. IMAGE INPAINTING <ul><li>Image Inpainting Methods </li></ul><ul><li>Image inpainting methods can be classified broadly into: </li></ul><ul><li>Texture synthesis algorithms: These sample texture from the region outside the region to be inpainted. They have been demonstrated for textures: repeating two-dimensional patterns with some randomness. </li></ul><ul><li>Structure recreation: These algorithms try to recreate structures such as lines and object contours, which can be thought of as one-dimensional patterns. They are generally used when the region to be inpainted is small. </li></ul>
  7. IMAGE INPAINTING <ul><li>Our choice </li></ul><ul><li>We have chosen a combination of the above two methods, which combines their advantages: </li></ul><ul><ul><li>It is capable of inpainting large regions. </li></ul></ul><ul><ul><li>Most of the texture of the image can be rebuilt. </li></ul></ul><ul><ul><li>Most of the structure of the image can be rebuilt. </li></ul></ul><ul><ul><li>Structure recreation alone introduces some blur. </li></ul></ul><ul><ul><li>If efficiently designed, it may be faster than other algorithms. </li></ul></ul>
  8. IMAGE INPAINTING <ul><li>Constraints/Assumptions </li></ul><ul><li>Large images may take a long time to inpaint. We have tried to address this in our software: we provide a fast-inpainting option that trades off quality for speed. </li></ul><ul><li>Due to memory limitations, the JVM may not be able to allocate enough space to store the image matrix for very large images. Solving this problem is our foremost consideration for future work. </li></ul><ul><li>As previously mentioned, since information is lost, there is no way to recover the original content of the region to be inpainted. The result we provide is only a close resemblance to the original image and may sometimes not be visually plausible. We intend to improve the algorithm in the future. </li></ul><ul><li>We use pure green (R = 0, G = 255, B = 0) to represent the target region (i.e. the region to be inpainted). We chose green because it is the colour most used for special effects in film, and exact green is unlikely to occur in an image. If it does occur in the image to be inpainted, it will be removed as well. </li></ul>
  9. IMAGE INPAINTING <ul><li>Modules </li></ul><ul><ul><li>The project is divided into two modules: </li></ul></ul><ul><li>User Interface: We provide the user with an interface for selecting the region to be inpainted by plotting points on its boundary. These points are then automatically interpolated to form an enclosing polygon. </li></ul><ul><li>Inpainting: This module removes the selected portion from the image so that the result looks &ldquo;reasonable&rdquo; to the human eye. It receives from the user the image with the region to be inpainted marked in green (R = 0, G = 255, B = 0). </li></ul>
  10. IMAGE INPAINTING <ul><li>User Interface Module </li></ul><ul><ul><ul><li>This module is responsible for providing an interactive UI in which the user selects the region to be inpainted by placing points on the image, which are automatically interpolated. </li></ul></ul></ul><ul><ul><ul><li>It also keeps the user informed of progress by showing intermediate results of the inpainting process. </li></ul></ul></ul><ul><ul><ul><li>Other features: Undo, Redo, Save, Save As, Fast Inpaint, Slow Inpaint, Pause, Help. </li></ul></ul></ul>
  11. IMAGE INPAINTING <ul><li>User Interface Module </li></ul>Figure 2: Selecting the target region using our tool.
  12. IMAGE INPAINTING <ul><li>Image Inpainting Module </li></ul><ul><ul><li>Terms used in the literature: </li></ul></ul><ul><ul><li>Image: I </li></ul></ul><ul><ul><li>Region to be inpainted (target region): Ω </li></ul></ul><ul><ul><li>Source region (I − Ω): Φ </li></ul></ul><ul><ul><li>Boundary of the target region: δΩ </li></ul></ul>Figure 3: Terminology used in inpainting [2]
  13. IMAGE INPAINTING <ul><li>Image Inpainting </li></ul><ul><ul><li>The function of this module is to reconstruct the image based on the best-exemplar approach [2]. </li></ul></ul><ul><li>Computing filling priorities: a predefined priority function is used to compute the filling order for all unfilled pixels p ∈ δΩ at the beginning of each filling iteration. </li></ul><ul><li>Searching for an exemplar and compositing: the most similar exemplar is searched for in the source region Φ to complete the patch Ψ (of size N × N pixels) centered at the given pixel p. </li></ul><ul><li>Updating image information: the boundary δΩ of the target region Ω and the information required for computing filling priorities are updated. </li></ul>
  14. IMAGE INPAINTING <ul><li>Flow chart of the inpainting algorithm </li></ul>
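The flow-chart image itself did not survive extraction, but the loop it depicts (find the boundary, pick a boundary patch, search the source region for the closest exemplar, copy, update) can be sketched in code. This is a minimal grayscale sketch, not the tool's actual (Java) implementation: it omits the priority term and the search-window optimization discussed later, picks boundary pixels in arbitrary order, and assumes at least one fully known source patch exists. The name `inpaint_greedy` and its parameters are ours, for illustration only.

```python
import numpy as np

def inpaint_greedy(img, fill, half=1):
    """Simplified exemplar-based fill: grayscale, no priority term,
    exhaustive source search. `fill` marks the target region."""
    img = img.astype(float).copy()
    fill = fill.astype(bool).copy()
    h, w = img.shape
    while fill.any():
        # Boundary: fill pixels with at least one known 4-neighbour.
        known = ~fill
        nb = np.zeros_like(fill)
        nb[1:, :] |= known[:-1, :]
        nb[:-1, :] |= known[1:, :]
        nb[:, 1:] |= known[:, :-1]
        nb[:, :-1] |= known[:, 1:]
        ys, xs = np.nonzero(fill & nb)
        py, px = int(ys[0]), int(xs[0])   # no priority ordering here
        # Target patch, clipped to the image.
        y0, y1 = max(py - half, 0), min(py + half + 1, h)
        x0, x1 = max(px - half, 0), min(px + half + 1, w)
        tgt = img[y0:y1, x0:x1]           # view into img
        tmask = fill[y0:y1, x0:x1]
        ph, pw = tgt.shape
        best, bscore = None, np.inf
        # Exhaustive search over fully known source patches.
        for sy in range(h - ph + 1):
            for sx in range(w - pw + 1):
                if fill[sy:sy + ph, sx:sx + pw].any():
                    continue
                src = img[sy:sy + ph, sx:sx + pw]
                d = float(((src - tgt) ** 2)[~tmask].mean())
                if d < bscore:
                    bscore, best = d, src.copy()
        tgt[tmask] = best[tmask]          # composite the exemplar
        fill[y0:y1, x0:x1] = False        # the whole patch is now known
    return img
```

On a tiny constant image with a single damaged pixel, the exemplar search finds a zero-error source patch and restores the pixel exactly.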
  15. IMAGE INPAINTING <ul><li>Input to the inpainting module </li></ul><ul><li>The input to the inpainting module is the image with the region to be inpainted marked in a special colour (in this case green). </li></ul><ul><li>From this image, we construct a Boolean matrix that stores &lsquo;1&rsquo; for every pixel marked to be inpainted and &lsquo;0&rsquo; for the other pixels. Let us call this matrix fillRegion. </li></ul>Figure 4: Input image to the inpainting module
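The fillRegion construction above is a simple per-pixel test against pure green. A sketch (assuming an RGB array with channels in R, G, B order; the function name is ours):

```python
import numpy as np

def make_fill_region(rgb):
    """Boolean mask of the target region: pixels marked pure green
    (R = 0, G = 255, B = 0), as the slides specify."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r == 0) & (g == 255) & (b == 0)
```

Only exact green matches are selected; a near-green pixel such as (10, 255, 0) is left in the source region.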
  16. IMAGE INPAINTING <ul><li>Initialize Confidence Values </li></ul><ul><li>Confidence value: In this algorithm, each pixel maintains a confidence value that represents our confidence in the reliability of that pixel&rsquo;s information. </li></ul><ul><li>This confidence value does not change once the pixel has been filled. </li></ul><ul><li>We initialize the confidence value of all pixels in the source region (Φ) to 1 and of all pixels in the target region (Ω) to 0. </li></ul>
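The initialization above maps directly onto the fillRegion mask (function name ours, for illustration):

```python
import numpy as np

def init_confidence(fill_region):
    """Source pixels start fully trusted (C = 1.0);
    target pixels start with no confidence (C = 0.0)."""
    return np.where(fill_region, 0.0, 1.0)
```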
  17. IMAGE INPAINTING <ul><li>Finding the boundary of the target region </li></ul><ul><li>To find the boundary of the target region, we can convolve the fillRegion matrix with a Laplacian filter: [[1, 1, 1], [1, -8, 1], [1, 1, 1]]. </li></ul>Figure 5: Boundary of the target region
  18. IMAGE INPAINTING <ul><li>Working with patches </li></ul><ul><li>As with all other exemplar-based algorithms, this algorithm replaces the target region patch by patch. </li></ul><ul><li>The patch is generally called the template window, Ψ. </li></ul><ul><li>The size of Ψ must be defined for the algorithm. </li></ul><ul><li>Without loss of generality, let us consider a patch size of 9 × 9. </li></ul><ul><li>Patches (of size 9 × 9) are defined around every pixel on δΩ. </li></ul>
  19. IMAGE INPAINTING <ul><li>Why patch priorities? </li></ul><ul><li>The result of the inpainting algorithm depends on the order in which the target region is filled [2]. </li></ul><ul><li>Earlier approaches used the &ldquo;onion peel&rdquo; method, in which the target region is synthesized from the outside inward in concentric layers. </li></ul><ul><li>In [2], however, a different method for choosing the filling order is defined: it takes into account the structural features of the image and yields a best-first filling algorithm. </li></ul><ul><li>This order depends on the priorities assigned to the patches on the boundary of the target region (δΩ). </li></ul>
  20. IMAGE INPAINTING <ul><li>Onion Peel vs. Our Approach </li></ul><ul><li>Figure 6: Comparison with the onion peel algorithm. (a) The input image [2]. (b) The image with the board selected for removal. (c) The result of inpainting using the onion peel algorithm [2]. (d) The image with one board removed using our algorithm. </li></ul>
  21. IMAGE INPAINTING <ul><li>Finding Patch Priorities </li></ul>
  22. IMAGE INPAINTING <ul><li>Confidence and Data Terms </li></ul>Figure 7: Notation diagram [2].
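The equations on this slide were lost in extraction. For reference, the priority, confidence, and data terms as defined in [2] are:

```latex
P(p) = C(p)\,D(p), \qquad
C(p) = \frac{\sum_{q \in \Psi_p \cap (I-\Omega)} C(q)}{|\Psi_p|}, \qquad
D(p) = \frac{\lvert \nabla I_p^{\perp} \cdot n_p \rvert}{\alpha}
```

where |Ψ_p| is the area of the patch, n_p is the unit vector normal to δΩ at p, ∇I_p^⊥ is the isophote direction at p, and α is a normalization factor (255 for a typical grey-level image).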
  23. IMAGE INPAINTING <ul><li>Problem with this priority term </li></ul><ul><li>The problem with this way of calculating the priority term is that the confidence term tends to approach very small values quickly. </li></ul><ul><li>This is evident from the experiments performed by the authors of [13]. </li></ul><ul><li>It makes the computed priorities indistinguishable and thus leads to an incorrect filling order. They call this phenomenon the &ldquo;dropping effect&rdquo;. </li></ul>Figure 8: Dropping effect. (a) Data term. (b) Confidence term. (c) Priority function.
  24. IMAGE INPAINTING <ul><li>Modifying the priority term </li></ul>
  25. IMAGE INPAINTING <ul><li>Modifying the priority term (contd.) </li></ul>
  26. IMAGE INPAINTING <ul><li>Modifying the priority term (contd.) </li></ul>
  27. IMAGE INPAINTING <ul><li>Finding the best exemplar for the chosen patch </li></ul><ul><li>The next step is to find the exemplar that best matches the information contained in the patch. We call this the best exemplar; it is the patch with the minimum mean square error against the existing information in the selected patch. </li></ul><ul><li>The mean square error between two patches P and Q is defined as the mean of the squared differences between their corresponding pixels. </li></ul>
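Since part of the target patch is still missing, the error can only be computed over the pixels that are already known. A sketch (the function name and the explicit `known` mask argument are ours):

```python
import numpy as np

def patch_mse(P, Q, known):
    """Mean square error between a target patch P and a candidate
    exemplar Q, restricted to the already-known pixels of P."""
    diff = (P.astype(float) - Q.astype(float)) ** 2
    return float(diff[known].mean())
```

For P = [[1, 2], [3, 4]] and Q = [[1, 0], [3, 9]] with the bottom-right pixel unknown, the squared differences over the known pixels are 0, 4, 0, giving an MSE of 4/3.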
  28. IMAGE INPAINTING <ul><li>Handling patches with the same MSE </li></ul>Figure 9: (a) Input image. (b) Output using our implementation of Criminisi&rsquo;s approach. (c) Our approach.
  29. IMAGE INPAINTING <ul><li>Improving Time Complexity </li></ul><ul><li>Earlier approaches searched the complete image to find the best exemplar. </li></ul><ul><li>We search only the surrounding portion of the image to find the best exemplar. </li></ul><ul><li>The diameter of the surrounding region to search is calculated at run time, taking into account the region to be inpainted. </li></ul><ul><li>We search for the best exemplar within a rectangle defined by (startX, startY) and (endX, endY), where m = number of rows in the patch, n = number of columns in the patch, c_r = maximum number of continuous green pixels in one row, c_c = maximum number of continuous green pixels in one column, and D_x and D_y are constants. </li></ul>
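The slide's exact formulas for the rectangle did not survive extraction, so the radius used below (patch size plus the longest green run plus a constant margin, clipped to the image) is a hypothetical reconstruction from the listed variables, not the authors' definition; the function name and default constants are ours:

```python
def search_window(cx, cy, m, n, c_r, c_c, w, h, Dx=20, Dy=20):
    """Hypothetical search rectangle around the patch centre (cx, cy):
    half-widths grow with the patch size (m x n), the longest runs of
    green pixels (c_r, c_c), and constants Dx, Dy; clipped to the
    w x h image."""
    rx = n + c_r + Dx
    ry = m + c_c + Dy
    startX, startY = max(cx - rx, 0), max(cy - ry, 0)
    endX, endY = min(cx + rx, w - 1), min(cy + ry, h - 1)
    return startX, startY, endX, endY
```

The clipping keeps the rectangle inside the image even when the patch centre sits near a border.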
  30. IMAGE INPAINTING <ul><li>Comparison with the earlier approach </li></ul>Figure 10: Comparison with Criminisi&rsquo;s approach. (a) The input image of size 416 × 316 [2]. (b) The image with the island to be removed marked in green. (c) The output using our implementation of Criminisi&rsquo;s approach; time taken: 2 minutes 35 seconds. (d) The output using our algorithm; time taken: 2 minutes 5 seconds.
  31. IMAGE INPAINTING <ul><li>Update the patch </li></ul><ul><li>In this step, we replace the target-region pixels belonging to the patch with the corresponding pixels of the best exemplar found in the previous step. </li></ul><ul><li>Update the confidence values: </li></ul><ul><ul><li>C(p) = C(q) </li></ul></ul><ul><ul><li>where C(q) is the confidence term of the patch with maximum priority. As filling proceeds, the confidence values decay, indicating that we are less sure about the values of those pixels. </li></ul></ul>
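The copy-and-update step above can be sketched as follows; the function name and argument layout are ours, and `c_q` stands for the confidence C(q) of the highest-priority patch, as on the slide:

```python
import numpy as np

def update_patch(img, conf, fill, patch_slice, exemplar, c_q):
    """Composite the best exemplar into the patch and propagate
    confidence: newly filled pixels all receive C(q)."""
    ys, xs = patch_slice
    region, cregion, tmask = img[ys, xs], conf[ys, xs], fill[ys, xs]
    region[tmask] = exemplar[tmask]   # fill only the missing pixels
    cregion[tmask] = c_q              # C(p) = C(q) for new pixels
    fill[ys, xs] = False              # the whole patch is now known
```

Because `img[ys, xs]` with slices is a view, assigning through the Boolean mask updates the full image in place; already-known pixels inside the patch and their confidences are left untouched.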
  32. IMAGE INPAINTING <ul><li>Applications of Image Inpainting </li></ul><ul><li>Repairing photographs: With age, photographs often get damaged or scratched. We can revert this deterioration using inpainting. </li></ul><ul><li>Removing unwanted objects: Using inpainting, we can remove unwanted objects, text, etc. from an image. </li></ul><ul><li>Special effects: Inpainting may be used in producing special effects. </li></ul><ul><li>Video inpainting: If extended to video, inpainting would provide a great tool for creating special effects. </li></ul>
  33. IMAGE INPAINTING <ul><li>Example: Remove Unwanted Objects </li></ul>Couldn’t get a clear shot of the scene.
  34. IMAGE INPAINTING <ul><li>Example: Remove Unwanted Objects </li></ul>
  35. IMAGE INPAINTING <ul><li>Example: Remove Unwanted Objects </li></ul>
  36. IMAGE INPAINTING <ul><li>Example: Remove Unwanted Objects </li></ul>
  37. IMAGE INPAINTING <ul><li>Another Example: Special Effects </li></ul>
  38. IMAGE INPAINTING <ul><li>Example: Repairing Photographs </li></ul>
  39. IMAGE INPAINTING <ul><li>Example: Repairing Photographs </li></ul>
  40. IMAGE INPAINTING <ul><li>References </li></ul>[1] Marcelo Bertalmio et al., “Image Inpainting”, in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 2000. [2] A. Criminisi, P. Perez and K. Toyama, “Region Filling and Object Removal by Exemplar-Based Image Inpainting”, in IEEE Transactions on Image Processing, Vol. 13, No. 9, September 2004. [3] Manuel M. Oliveira, Brian Bowen, Richard McKenna and Yu-Sung Chang, “Fast Digital Image Inpainting”, in Proceedings of the International Conference on Visualization, Imaging and Image Processing (VIIP 2001), Marbella, Spain, September 3-5, 2001. [4] Timothy K. Shih et al., “Video Inpainting and Implant via Diversified Temporal Continuations”, in Proceedings of the 14th Annual ACM International Conference on Multimedia, 2006. [5] A.C. Kokaram, R.D. Morris, W.J. Fitzgerald and P.J.W. Rayner, “Interpolation of Missing Data in Image Sequences”, in IEEE Transactions on Image Processing 11(4), pp. 1509-1519, 1995. [6] P. Elango and K. Murugesan, “Digital Image Inpainting Using Cellular Neural Network”, in Int. J. Open Problems Compt. Math., Vol. 2, No. 3, pp. 439-450, September 2009.
  41. IMAGE INPAINTING <ul><li>References </li></ul>[7] Carola-Bibiane Schonlieb, Andrea Bertozzi, Martin Burger and Lin He, “Image Inpainting Using a Fourth-Order Total Variation Flow”, in SAMPTA’09, Marseille, France, 2009. [8] Martin Burger, Lin He and Carola-Bibiane Schonlieb, “Cahn-Hilliard Inpainting and a Generalization for Grayvalue Images”, UCLA CAM Report 08-41, June 2008. [9] M. Elad, J.-L. Starck, P. Querre and D.L. Donoho, “Simultaneous Cartoon and Texture Image Inpainting Using Morphological Component Analysis (MCA)”, in Journal on Applied and Computational Harmonic Analysis, August 2005. [10] Guillaume Forbin, Bernard Besserer, Jiri Boldys and David Tschumperle, “Temporal Extension to Exemplar-Based Inpainting Applied to Scratch Correction in Damaged Image Sequences”, in Visualization, Imaging and Image Processing (VIIP 2005), Benidorm, Spain, 2005. [11] R.C. Gonzalez and R.E. Woods, Digital Image Processing, 2nd ed., Pearson Education, 2002. [12] M.J. Fadili, J.-L. Starck and F. Murtagh, “Inpainting and Zooming Using Sparse Representations”, in The Computer Journal, 2009. [13] Wen-Huang Cheng, Chun-Wei Hsieh, Sheng-Kai Lin, Chia-Wei Wang and Ja-Ling Wu, “Robust Algorithm for Exemplar-Based Image Inpainting”, in The International Conference on Computer Graphics, Imaging and Vision (CGIV 2005), Beijing, China, 26-29 July 2005, pp. 64-69.
  42. IMAGE INPAINTING Thanks.