This document provides details on course work completed as part of a Computer Vision course. It includes source images and summaries of edge detection algorithms applied to the images. Edge detection was performed using Roberts, Sobel, Prewitt and Robinson operators, as well as Laplacian of Gaussian. Thresholding techniques are discussed for binarizing the edge detection outputs. The effects of mask size and sigma values on Laplacian of Gaussian are demonstrated. Pseudocode is provided for the convolution operations.
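The convolution operation underlying these edge detectors can be sketched in Python. This is an illustrative naive implementation with the Sobel masks as an example, not the course's original pseudocode; the function name `convolve2d` and the toy step-edge image are assumptions for demonstration:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution with zero padding (true convolution flips the kernel)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    flipped = kernel[::-1, ::-1]
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# Sobel operators: horizontal and vertical gradient masks
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

img = np.zeros((8, 8))
img[:, 4:] = 1.0            # a vertical step edge
gx = convolve2d(img, sobel_x)
gy = convolve2d(img, sobel_y)
magnitude = np.hypot(gx, gy)  # gradient magnitude; threshold this to binarize
```

Thresholding `magnitude` at some value then yields the binary edge map discussed above; the other operators (Roberts, Prewitt, Robinson) differ only in the mask coefficients.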
In this project we have implemented a tool to inpaint selected regions of an image. Inpainting refers to the art of restoring lost parts of an image and reconstructing them from the surrounding background information. The tool provides a user interface in which the user can open an image, select the parts of the image they want to reconstruct, and have the tool automatically inpaint the selected area from the background information; the result can then be saved. The inpainting is based on the exemplar-based approach, whose basic aim is to find examples (i.e. patches) elsewhere in the image and replace the lost data with them. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text such as dates and subtitles; and the removal of entire objects, such as microphones or wires, for special effects.
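The core of the exemplar-based approach described above, searching the known part of the image for the patch that best matches the region around a hole, can be sketched as follows. This is a minimal illustration, not the tool's actual code; the function name `best_source_patch` and the SSD matching criterion over known pixels are assumptions (the full algorithm also orders target patches by a priority term):

```python
import numpy as np

def best_source_patch(image, mask, target_tl, psize=3):
    """Find the fully-known source patch most similar to the known part
    of the target patch at top-left `target_tl`, by sum of squared
    differences (SSD). `mask` is True where pixels are missing."""
    ti, tj = target_tl
    target = image[ti:ti + psize, tj:tj + psize]
    known = ~mask[ti:ti + psize, tj:tj + psize]
    best, best_ssd = None, np.inf
    h, w = image.shape
    for i in range(h - psize + 1):
        for j in range(w - psize + 1):
            if mask[i:i + psize, j:j + psize].any():
                continue  # source patches must contain no missing pixels
            cand = image[i:i + psize, j:j + psize]
            ssd = np.sum(((cand - target) ** 2)[known])
            if ssd < best_ssd:
                best, best_ssd = (i, j), ssd
    return best
```

The missing pixels of the target patch are then copied from the winning source patch, and the process repeats until the hole is filled.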
This document summarizes the author's work on a traffic sign recognition project using deep learning. The author explores preprocessing techniques like grayscale conversion, histogram equalization, and data augmentation. Two neural network architectures are developed - K-Net and K-Net-vgg, based on LeNet and VGG respectively. K-Net-vgg achieves 99.14% accuracy on the validation set and 97.07% on the test set. The model is also tested on 10 unlabeled internet images, producing top-5 predictions for each.
Image panorama is a technique for stitching multiple images together to create a wider view, closer to the wide field of view of the human eye than the restricted view captured by a camera.
Improved Alpha-Tested Magnification for Vector Textures and Special Effects — Nam Nguyễn
This document presents a technique for improving the rendering of vector textures at high magnifications using distance fields. A distance field is generated from a high-resolution image and stored in a low-resolution texture. This allows the texture to be rendered using alpha testing on all hardware, producing crisp edges. Programmable shaders can apply effects like soft edges, outlines, and drop shadows by manipulating the distance field. The technique was integrated into the Source game engine to improve text and UI rendering with minimal performance impact.
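The distance-field generation step described above can be illustrated in Python. This is a brute-force sketch for clarity (real implementations use a fast exact or approximate distance transform on a much higher-resolution input); the function name, the `spread` parameter, and the toy square glyph are assumptions:

```python
import numpy as np

def signed_distance_field(binary, spread=4.0):
    """Brute-force signed distance from each pixel to the nearest pixel of
    the opposite value, clamped to [-spread, spread] and remapped to [0, 1]
    so it can be stored in a low-resolution 8-bit alpha texture."""
    h, w = binary.shape
    ins = np.argwhere(binary == 1)
    outs = np.argwhere(binary == 0)
    sdf = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            if binary[i, j]:
                sdf[i, j] = np.min(np.hypot(outs[:, 0] - i, outs[:, 1] - j))
            else:
                sdf[i, j] = -np.min(np.hypot(ins[:, 0] - i, ins[:, 1] - j))
    sdf = np.clip(sdf, -spread, spread)
    return 0.5 + 0.5 * sdf / spread  # 0.5 marks the shape boundary

glyph = np.zeros((16, 16), dtype=np.uint8)
glyph[4:12, 4:12] = 1                # stand-in for a high-resolution glyph
field = signed_distance_field(glyph)
recovered = field > 0.5              # alpha test at 0.5 recovers a crisp edge
```

Thresholding the field at values other than 0.5, or blending across a narrow band around it, is what gives the outline and soft-edge effects mentioned above.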
It works well when you want to edit an image or repair old photographs. It also gives good results on occluded images and can be used for censorship purposes. Plausible reconstruction is one of its features. One of its main and most effective uses is to complete images that have been corrupted over time on storage media such as SSDs, or during data transfer over a transmission line or between devices such as laptops and cellphones. We hope you enjoy it and find it useful as a reference.
Comparative between global threshold and adaptative threshold concepts in ima... — AssiaHAMZA
A digital image can be considered a discrete representation of data possessing both spatial (layout) and intensity (colour) information. Pixel intensities form a gateway between human perception and digital image processing.
Image thresholding is a simple form of image segmentation. It is a way to create a binary image from a grayscale or full-colour image, typically in order to separate "object" or foreground pixels from background pixels and so aid further image processing.
In this paper we present a small and modest comparison between two kinds of image thresholding. The global and adaptive concepts may not give the same results at the end of a process, and we aim to demonstrate which of the two performs better.
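The two kinds of thresholding being compared can be sketched side by side. This is a minimal illustration under assumed conventions (a fixed global threshold versus a mean-of-neighborhood adaptive threshold; the function names and parameters `block` and `c` are hypothetical), not the paper's implementation:

```python
import numpy as np

def global_threshold(img, t=128):
    """One threshold for the whole image."""
    return (img > t).astype(np.uint8)

def adaptive_threshold(img, block=5, c=0):
    """Threshold each pixel against the mean of its (block x block)
    neighborhood minus a constant c (mean-based adaptive thresholding),
    which tolerates uneven illumination that defeats a global threshold."""
    p = block // 2
    padded = np.pad(img.astype(float), p, mode="edge")
    out = np.zeros_like(img, dtype=np.uint8)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = 1 if img[i, j] > local_mean - c else 0
    return out
```

On an evenly lit image the two agree; the difference appears when the background brightness varies across the image, which is the case the paper's comparison targets.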
Inpainting refers to the art of restoring lost parts of an image and reconstructing them from background information; that is, image inpainting is the process of reconstructing lost or deteriorated parts of images using information from the surrounding areas. In fine art museums, inpainting of degraded paintings is traditionally carried out by professional artists and is usually very time consuming. The purpose of inpainting is to reconstruct missing regions in a visually plausible manner, so that the result looks reasonable to the human eye. Several approaches have been proposed for this task.
This paper gives an overview of different techniques of image inpainting. The work covers the PDE-based inpainting algorithm and the texture-synthesis-based inpainting algorithm, and presents a brief comparative survey of these two techniques.
Digital Composition of Mosaics using Edge Priority Tile Assignment — Bill Kromydas
This document proposes a novel algorithm for composing digital mosaics using edge priority tile assignment. It begins by detecting edges in the target image and pruning small edges. Candidate tiles with similar edge structures are identified through template matching. A baseline mosaic is generated using mean square error criteria for tile assignment. Then, a second pass assigns tiles to edge regions, preferring candidate edge tiles if their MSE is within a threshold of the baseline. Optionally, edge tiles can be enhanced to further draw out the target image form. Experimental results on Apollo mission photos show the edge priority mosaic better reveals form through a supporting edge structure.
This document discusses various methods for contrast enhancement of images, including:
- Local color correction, which enhances contrast locally rather than globally.
- Simplest color balance, which clips a percentage of dark and light pixels before normalization.
- Screened Poisson equation, which acts as a high-pass filter using a single contrast parameter. Implementations of these methods in various color spaces like RGB, HSI, HSV, and HSL are provided. Local color correction is shown to perform better than global gamma correction by handling both dark and bright areas simultaneously.
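The "simplest color balance" step described above can be sketched directly from its description: clip a percentage of the darkest and brightest pixels, then stretch the rest to the full range. This is an illustrative per-channel implementation, not the document's code; the function name and default 1% clip fractions are assumptions:

```python
import numpy as np

def simplest_color_balance(channel, s_low=1.0, s_high=1.0):
    """Clip the s_low% darkest and s_high% brightest pixels, then
    stretch the remaining range affinely to [0, 255]."""
    lo = np.percentile(channel, s_low)
    hi = np.percentile(channel, 100.0 - s_high)
    clipped = np.clip(channel.astype(float), lo, hi)
    if hi > lo:
        clipped = (clipped - lo) * 255.0 / (hi - lo)
    return clipped.astype(np.uint8)
```

Applied to each of the R, G, and B channels independently, this normalizes a low-contrast image; applying it to only the value channel of HSV preserves hue, which is the kind of color-space choice the document compares.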
The document discusses techniques for contrast enhancement of digital images through histogram processing. It describes histogram equalization, which increases contrast by spreading out the most frequent intensity values. Limitations include changes to image brightness. Bi-histogram and multi-histogram equalization partition histograms to minimize brightness changes. Brightness preserving dynamic fuzzy histogram equalization further improves brightness preservation through fuzzy histogram computation, dynamic equalization of histogram partitions, and normalization of image brightness. It provides objective metrics to evaluate contrast enhancement and brightness preservation capabilities of these techniques.
This document compares and analyzes several histogram equalization techniques for image enhancement:
1) Contrast Limited Adaptive Histogram Equalization (CLAHE) divides an image into contextual regions and applies histogram equalization to each region separately, limiting contrast.
2) Dualistic Sub-image Histogram Equalization (DSIHE) decomposes an image into two equal-area sub-images based on the probability density function, equalizes each sub-image, and combines the results.
3) Dynamic Histogram Equalization (DHE) partitions an image histogram based on local minima, allocates a gray scale range to each partition, and applies histogram equalization to each partition within its allocated range.
Region filling and object removal by exemplar based image inpainting — Woonghee Lee
To remove objects from a picture, or to restore a picture from scratches or holes, Criminisi et al. suggested an algorithm that combines texture synthesis and inpainting. I made this slide to introduce the algorithm in a class presentation, drawing on the slide at http://bit.ly/1Ng7DNt. I hope it helps you understand the algorithm. Thank you.
This document summarizes the key steps in a digital signal processing project on simulating a basic digital camera model. It discusses face detection using the Viola-Jones algorithm and Haar-like features. It then covers adding noise to images, designing mean and median filters to reduce noise, and optimizing filter performance. The document also discusses histogram equalization for color enhancement and a technique for contrast enhancement that considers color shifting and the human visual system.
This document discusses different morphological image processing techniques including dilation, erosion, opening, closing, and top-hat transformation. It provides examples of applying these techniques to grayscale images with different structuring element radii. The document indicates that the student's implementations of the top-hat transformation with radii of 2 and 10 were unsuccessful at enhancing light objects, likely due to errors in the program: the results had inconsistent features, and it is unclear what happened with the larger radius. Figures showing the GUI results are included for comparison.
A Review on Image Inpainting to Restore Image — IOSR Journals
This document reviews various techniques for image inpainting to restore damaged images. It discusses diffusion-based inpainting and texture synthesis approaches. Specific techniques covered include:
1. PDE-based inpainting using isophote lines from surrounding areas.
2. Multiresolution inpainting dividing images into blocks and considering variance, percentage of damaged pixels.
3. Exemplar-based completion using image fragments from global examples.
4. Inpainting of natural scenes limiting search horizontally using Fourier transforms.
The document compares the advantages and disadvantages of each approach for efficiently and accurately restoring images. Wavelet transforms and morphological component analysis are also reviewed for inpainting the texture and cartoon layers.
This document proposes and tests a fractional-pixel averaging method for de-screening images that were scanned from halftoned prints. It presents the formulation of fractional-pixel averaging, which involves dividing pixels into sub-pixels, assigning values to sub-pixels, and using resolution conversion to average sub-pixel values. The method was tested on an IS&T NIP16 test target scanned at 600 dpi, which exhibited moiré patterns due to interaction between the offset print screen and scanner resolution. Results showed that fractional-pixel averaging was effective at removing moiré patterns by smoothing textures while retaining intensity.
COLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATION — ecij
Histogram equalization is a nonlinear technique for adjusting the contrast of an image using its histogram. It produces an output image whose mean brightness generally differs from that of the original. There are various histogram equalization techniques, such as Histogram Equalization, Contrast Limited Adaptive Histogram Equalization, Brightness Preserving Bi-Histogram Equalization, Dualistic Sub-Image Histogram Equalization, Minimum Mean Brightness Error Bi-Histogram Equalization, Recursive Mean Separate Histogram Equalization, and Recursive Sub-Image Histogram Equalization. In this paper, the histogram equalization approach for gray-level images is extended to colour images. The acquired image is converted into HSV (Hue, Saturation, Value) and decomposed into two parts using an exposure threshold, and the parts are equalized independently. Over-enhancement is also controlled by a clipping threshold. To measure the performance of the enhanced image, entropy and contrast are calculated.
Contrast enhancement techniques are used to increase visual distinction between features in remote sensing images. This is done by manipulating the spectral properties as opposed to spatial properties. The main techniques discussed are contrast stretching and level slicing. Contrast stretching involves linear, histogram equalization, and Gaussian transforms to map the original pixel values to a new range to take advantage of the full display range. Level slicing segments values into discrete slices that are assigned a single display value and color. These techniques help enhance features that are difficult to distinguish due to narrow brightness ranges.
A Survey on Exemplar-Based Image Inpainting Techniques — ijsrd.com
Preceding papers on exemplar-based image inpainting, covering techniques such as the Criminisi algorithm, the patch shifting scheme, and the search-region-prior method, describe how to inpaint a destroyed region. Criminisi's algorithm and Sarawut's patch shifting scheme need more time to inpaint a damaged region, but the proposed method decreases time complexity by searching only the region related to the missing portion of the image.
Morphology fundamentals consist of erosion and dilation, which are basic morphological operations. Erosion removes pixels from object boundaries, shrinking object sizes and enlarging holes. Dilation adds pixels to boundaries, enlarging object sizes and shrinking holes. Both operations use a structuring element to determine how many pixels are added or removed. Erosion compares the structuring element to the image, removing pixels where it is not contained. Dilation compares overlaps, adding pixels where the structuring element and image overlap by at least one element.
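The fit/overlap rules for erosion and dilation described above translate almost directly into code. This is a binary-image sketch for illustration (assuming a symmetric structuring element, so the reflection step of formal dilation can be omitted), not any particular library's implementation:

```python
import numpy as np

def erode(img, se):
    """Keep a pixel only if the structuring element, centered on it,
    fits entirely inside the foreground (all required pixels are 1)."""
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + sh, j:j + sw]
            out[i, j] = 1 if np.all(window[se == 1] == 1) else 0
    return out

def dilate(img, se):
    """Set a pixel if the structuring element overlaps the foreground
    by at least one element."""
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + sh, j:j + sw]
            out[i, j] = 1 if np.any(window[se == 1] == 1) else 0
    return out
```

Opening is erosion followed by dilation, and closing is the reverse, so both compose from these two functions.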
Image enhancement techniques can be divided into spatial and frequency domain methods. Spatial domain methods operate directly on pixel values using techniques like basic gray level transformations, contrast stretching and thresholding. These manipulations are used to accentuate image features, improve display quality or aid machine analysis by modifying pixel intensities within an image.
Contrast Limited Adaptive Histogram Equalization (CLAHE) is a contrast enhancement technique that differs from ordinary adaptive histogram equalization by limiting contrast in localized sections of images called tiles. CLAHE operates on tiles rather than the whole image, enhancing contrast in each tile while limiting amplification of noise. This technique can be used to improve contrast in applications like mammogram analysis and medical imaging while avoiding unwanted artifacts.
This document discusses image histogram equalization. It begins by defining an image histogram as a graphical representation of the number of pixels at each intensity value. Histogram equalization automatically determines a transformation function to produce a new image with a uniform histogram and increased contrast. This technique works by mapping the intensity values of the input image to a new range of values such that the histogram of the output image is uniform. The document provides an example of performing histogram equalization on an image and assigns related homework on digital image processing applications.
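The transformation function described above, mapping each intensity through the normalized cumulative histogram, can be sketched as follows. This is a standard illustrative formulation for 8-bit images, not the document's own code:

```python
import numpy as np

def histogram_equalization(img):
    """Remap intensities of an 8-bit image through the normalized
    cumulative histogram (CDF) so the output histogram is
    approximately uniform, increasing global contrast."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    scale = 255.0 / (img.size - cdf_min)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return lut[img]  # apply the lookup table per pixel
```

A low-contrast image whose intensities cluster in a narrow band comes out spread across the full [0, 255] range, which is exactly the uniform-histogram effect the document describes.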
The document discusses image tampering detection. It aims to develop an algorithm to detect and classify possible image tampering such as cropping, rotation, blurring, sharpening, and insertion. The motivation is for security, law enforcement, and intelligence purposes. The document discusses different types of tampering like copy-paste, copy-move, cut-paste, and cut. It also discusses techniques like double JPEG compression detection and estimating quantization tables to detect tampering. The document presents some sample results and discusses the overall process, which includes preprocessing, edge map generation, frequency transformation, and detection of resampling effects. It concludes that different algorithms work for different types of forgeries and that a comprehensive program could be developed with proper knowledge of them.
The document discusses different types of images in Matlab including binary, grayscale, indexed, and RGB images. It also summarizes commands to convert between image types such as converting grayscale to indexed or truecolor to binary. Finally, it provides examples of how to view images, measure pixel values and distances, and crop images using the imtool command.
Digital image processing Tool presentation — dikshabehl5392
The development of this image processing software will help the editing process be done effectively. It requires less space on the hard disk, since it emphasizes only the crucial image processing functions, and the executable program takes up little space.
This document presents research on detecting image splicing in images shared on the web. The researchers collected a dataset of over 13,000 real-world forged images from 82 verified cases found online. They evaluated existing splicing detection algorithms on this dataset and found that most algorithms could only detect a small fraction of forgeries. The noise-based method of Mahdian et al. was most successful but also prone to false positives. The researchers conclude that forensic traces are often lost due to the many alterations images undergo when shared online, posing challenges for splicing detection in real-world web images.
The document provides details about Jonathan Westlake's coursework for a Computer Vision course. It includes two image processing projects. The first project involves using mathematical morphology operations like dilation and erosion to separate touching coins in an image. Connected component labeling is then used to count the coins. The second project involves line and corner detection on an image of an airplane. Source code for the projects is available upon request.
Jawdat Mini Hackaton 2016 — Jumroh Arrasid
The document is a report on developing a network topology using SDN and network automation to connect two offices at different locations through an IPsec tunnel and to manage communication between the two offices.
This document presents research on detecting image splicing in images shared on the web. The researchers collected a dataset of over 13,000 real-world forged images from 82 verified cases found online. They evaluated existing splicing detection algorithms on this dataset and found that most algorithms could only detect a small fraction of forgeries. The noise-based method of Mahdian et al. was most successful but also prone to false positives. The researchers conclude that forensic traces are often lost due to the many alterations images undergo when shared online, posing challenges for splicing detection in real-world web images.
The document provides details about Jonathan Westlake's coursework for a Computer Vision course. It includes two image processing projects. The first project involves using mathematical morphology operations like dilation and erosion to separate touching coins in an image. Connected component labeling is then used to count the coins. The second project involves line and corner detection on an image of an airplane. Source code for the projects is available upon request.
Jawdat Mini Hackaton 2016 by Jumroh ArrasidJumroh Arrasid
Dokumen tersebut merupakan laporan tentang pengembangan topologi jaringan menggunakan SDN dan network automation untuk menghubungkan dua kantor yang berbeda lokasi melalui tunnel IPsec dan mengatur komunikasi antara kedua kantor tersebut.
The document discusses several methods for migrating from IPv4 to IPv6 including native dual stack, DS-Lite, NAT64, and 6RD. Native dual stack allows simultaneous use of IPv4 and IPv6 but is the most complex to deploy. DS-Lite tunnels IPv4 packets over IPv6 to allow an IPv6-only access network. NAT64 provides IPv4-IPv6 translation to allow access to IPv4 servers from an IPv6 network. 6RD allows lightweight IPv6 deployment without upgrades by encapsulating IPv6 in IPv4. Each method has different impacts on the access network, subscriber edge, and home network domains.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise boosts blood flow, releases endorphins, and promotes changes in the brain which help enhance one's emotional well-being and mental clarity.
Leroy TR, a Sales and Distribution Co.with a 25 years history, a former salmon and Norwegian fish importer in an attempt to perform into an international seafood operator in the turkish seafood/retail and Horeca Industries Introduction by 11/2016
Este documento proporciona consejos sobre cómo ser un buen vendedor, incluyendo la importancia de la apariencia física, la postura, la mirada, los saludos al cliente, conocer los productos a la venta, responder preguntas del cliente, y despedirse agradeciendo la compra.
Home remedies for hemorrhoids are often used as an option because it is more secure, have virtually no side effects even if it takes longer treatment. Hemorrhoids are one type of disease that is very disturbing. Hemorrhoids appeared in the anus or rectum of patients, resulting in pain, bleeding and difficulty of patients to sit or defecate
Lerøy Turkey is a subsidiary of Lerøy Seafood Group, the world's second largest producer of Atlantic salmon. Lerøy Turkey started farming salmon in Turkey in 1991 and established a 3000m2 processing facility in 2000 with 5000 tons of annual processing capacity. It is a leading seafood company in Turkey with experience distributing Norwegian seafood for 15 years and potential for growth. The document provides information on Lerøy Turkey's certifications, facilities, products including salmon, trout, seabass, and local shrimp packaged using MAP packaging. Product information includes nutritional values, shelf life and packaging details.
The document discusses the CCNA certification, which is Cisco's entry-level networking certification and a prerequisite for other Cisco certifications. It covers topics like static and dynamic routing, router configuration modes, switching, campus area networks (CANs), and provides examples of CAN scenarios for different blocks at a campus including the engineering block, admin block, boys hostel block, and WAN block.
Monitoring Jaringan Komputer dan Server di GNS3Jumroh Arrasid
Dokumen ini membahas tentang monitoring jaringan perusahaan fiktif bernama Infinite Integrator menggunakan Cacti dan Kiwi Syslog Server. Topologi jaringan menerapkan VLAN, NAT, HSRP, dan SNMP. Software yang digunakan antara lain Cacti, Kiwi Syslog Server, GNS3, VirtualBox, dan Damn Small Linux. Proses monitoring dilakukan selama 6 hari dengan membuat lalu lintas dari dalam dan luar jaringan. Hasil monitoring menunjukkan tingginya lalu lintas
This document discusses lossless Huffman coding image compression using different block sizes and codebook sizes. It begins by introducing Huffman coding and image compression techniques. It then describes the methodology used, which involves reading an image in MATLAB, converting it to grayscale, extracting blocks from the image, quantizing the blocks using Huffman coding, and reconstructing the compressed image. 8 different compression scenarios are tested using various block and codebook sizes. The best scenario used a block size of 16 and codebook size of 50. Performance metrics like compression ratio, bit rate, PSNR, MSE and SNR are calculated. Enhancement techniques like Laplacian of Gaussian filtering and pseudo-coloring are then applied to the reconstructed image from the best
This document is a mini project report on digital image processing using MATLAB. It discusses various image processing techniques and applications implemented in MATLAB, including image formats, operations, and tools. Applications demonstrated include text recognition, color tracking, solving an engineering problem using image processing, creating a virtual slate using laser tracking, face detection, and distance estimation. The report provides examples of MATLAB functions used for tasks like importing, displaying, converting and cropping images, as well as analyzing and manipulating them.
This document describes an algorithm to identify cigarette butts in images. The algorithm uses color segmentation, edge detection, and enhancement techniques in Matlab. It turns the original image into a binary image segmented by the color of cigarette butts. Color and edge detection are used to create a binary mask. Enhancement techniques like dilation and hole filling are applied to smooth edges before labeling objects with random colors for visualization. While the algorithm identifies most cigarette butts, it does not fully eliminate background noise.
Image processing involves algorithms that take images as input and output other images. It is used to prepare digital images for viewing or analysis by enhancing structures within images. Common applications of image processing include adjusting properties like brightness, contrast, and gamma; detecting edges; blurring or sharpening; and performing operations like erosion and dilation. Principal component analysis (PCA) is a technique used to reduce the dimensionality of image data for analysis and recognition. Face recognition systems use PCA to extract feature vectors from images, then compare new images to the training set to identify faces.
IRJET- 3D Vision System using Calibrated Stereo CameraIRJET Journal
This document describes a 3D vision system that uses calibrated stereo cameras to estimate the depth of objects. It discusses using two digital cameras placed at different positions to capture images of the same object. Feature matching and disparity calculation algorithms are used to calculate depth based on the difference between images. The cameras are calibrated using camera parameters derived from images of a checkerboard pattern. Trigonometry formulas are then used to calculate depth based on the camera positions and disparity. A servo system is used to independently and synchronously move the cameras along the x and y axes to capture views of objects from different angles.
IRJET- Coloring Greyscale Images using Deep LearningIRJET Journal
1) The document proposes an automated approach to color grayscale images using deep learning and convolutional neural networks (CNNs).
2) A CNN model is trained on an image dataset containing 1300 colored images to predict color values for pixels in grayscale images.
3) The trained model is tested on 300 grayscale images and the predicted colored images are compared to the originals by calculating pixel deviations.
4) Evaluation shows that while some pixels have high errors, the average and median pixel deviations indicate the overall predicted images are acceptably close to the original colored images.
This document discusses image enhancement techniques in the spatial domain. It defines spatial domain processing as the direct manipulation of pixel values, as opposed to frequency domain processing which modifies the Fourier transform. The key techniques discussed are:
- Linear and non-linear transformations which map input pixel values to new output values.
- Spatial filters which operate on neighborhoods of pixels, including smoothing filters to reduce noise and sharpening filters to enhance edges.
- Histogram processing techniques like equalization to improve contrast in low contrast images.
The document provides examples of each technique and discusses their applications in image enhancement.
The document describes an algorithm for detecting text in camera-captured images. It begins with preprocessing steps like converting the color image to grayscale, applying edge detection and morphological operations like dilation and erosion. This gives initial bounding boxes containing candidate text regions. Further processing includes applying geometrical constraints to filter boxes, performing multiresolution analysis, connected component analysis and filtering by area to get the final text regions. Inversion and addition steps are used to handle text against different backgrounds.
This document describes a summer internship project on digital image processing and analysis conducted by Rajarshi Roy at the Indian Institute of Engineering Science and Technology under the guidance of Dr. Samit Biswas from May to June 2016. It includes an acknowledgment, table of contents, abstract, and analysis of various digital image processing techniques applied to images, including reading and writing images, applying filters like negative, sharpening, edge detection, transposing the image matrix, stretching images, and applying mean filtering. The document provides details on the code developed in C++ to perform these image processing functions and analyze the results.
A binarization technique for extraction of devanagari text from camera based ...sipij
This paper presents a binarization method for camera based natural scene (NS) images based on edge
analysis and morphological dilation. Image is converted to grey scale image and edge detection is carried
out using canny edge detection. The edge image is dilated using morphological dilation and analyzed to
remove edges corresponding to non-text regions. The image is binarized using mean and standard
deviation of edge pixels. Post processing of resulting images is done to fill gaps and to smooth text strokes.
The algorithm is tested on a variety of NS images captured using a digital camera under variable
resolutions, lightening conditions having text of different fonts, styles and backgrounds. The results are
compared with other standard techniques. The method is fast and works well for camera based natural
scene images.
The document provides an overview of image processing in MATLAB. It discusses the basic data structures used to represent images as matrices and different image types (binary, indexed, grayscale, truecolor). It provides examples of reading and displaying images, enhancing contrast, and calculating basic image statistics. Functions covered include imread, imshow, imhist, histeq, imwrite, imopen, imadjust, im2bw, bwlabel, and regionprops.
This document outlines an assignment for a computer vision course. Students are asked to implement 4 vision algorithms: 2 using OpenCV and 2 using MATLAB. The algorithms are the log-polar transform, background subtraction, histogram equalization, and contrast stretching. Students must also answer 3 short questions about orthographic vs perspective projection, efficient filtering, and sensors beyond cameras for computer vision.
A PROJECT REPORT ON REMOVAL OF UNNECESSARY OBJECTS FROM PHOTOS USING MASKINGIRJET Journal
This document presents a project report on removing unnecessary objects from photos using masking techniques. It discusses using algorithms like Fast Marching and Navier-Stokes to fill in missing image data and maintain continuity across boundaries. The Fast Marching method begins at region boundaries and works inward, prioritizing completion of boundary pixels first. Navier-Stokes uses fluid dynamics equations to continue intensity value functions and ensure they remain continuous at boundaries. Color filtering can also be used to segment specific colored objects or regions. The project aims to implement these techniques to remove unwanted objects from images and fill the resulting gaps seamlessly.
A Biometric Approach to Encrypt a File with the Help of Session KeySougata Das
The main objective of this work is to provide a two layer authentication system through biometric (face) and conventional session based password authentication. The encryption key for this authentication will be generated with the combination of the biometric key and session based password.
Estrazione automatica delle linee in un'immagine digitalefrancescapadoin
This document discusses techniques for automatically extracting lines from a digital image. It introduces two common edge detection operators - Sobel and Canny - to identify edges in an image that can then be input to the Hough transform for line detection. It also explores using different filtering masks prior to edge detection and adaptive thresholding to improve results. The identified lines from the Hough transform are then intersected and clustered using k-means to estimate vanishing points for perspective calculation. Experimental validation of the results is also discussed.
This document describes an algorithm to solve Wordoku puzzles by processing image elements. The key steps are: 1) Separating the puzzle grid and keyword, 2) Extracting characters from each, 3) Matching characters using classifiers like cross-correlation and support vector regression, 4) Solving the puzzle by filling values in a matrix, 5) Printing the solved puzzle by pasting characters. The algorithm was tested on various puzzles and fonts, achieving a 90% accuracy rate. Extensions to handle real images and optimize character extraction are proposed for future work.
Developing 3D Viewing Model from 2D Stereo Pair with its Occlusion RatioCSCJournals
We intend to make a 3D model using a stereo pair of images by using a novel method of local matching in pixel domain for calculating horizontal disparities. We also find the occlusion ratio using the stereo pair followed by the use of The Edge Detection and Image SegmentatiON (EDISON) system, on one the images, which provides a complete toolbox for discontinuity preserving filtering, segmentation and edge detection. Instead of assigning a disparity value to each pixel, a disparity plane is assigned to each segment. We then warp the segment disparities to the original image to get our final 3D viewing Model.
Similar to JonathanWestlake_ComputerVision_Project1 (20)
Developing 3D Viewing Model from 2D Stereo Pair with its Occlusion Ratio
JonathanWestlake_ComputerVision_Project1
Jonathan Westlake
Software Engineer
Course Work Computer Vision
8/15/2015
The following is course work from my Computer Vision course during my Computer Science master's
program. I have included it to demonstrate my understanding of 2D image processing. In practice many
of the techniques and data analysis are very similar to signal processing. An image is represented by data
values, and I demonstrate several different ways of analyzing that data to produce a desired result.
The following images and end results are 100% developed and tested by myself in C/C++. This is
everything except the initial starter code to read and write bitmaps. I can provide the source code if more
detail on how a particular algorithm was implemented is needed. However, it was never intended as work
representing the best practices of software development.
Thanks,
Jonathan
CALIFORNIA STATE UNIVERSITY, LONG BEACH
College of Engineering
Department of Computer Engineering and Computer Science
Dr. Thinh V. Nguyen
Spring 2013
CECS-553/653: Machine Vision
PROJECT 1
Name: Westlake, Jonathan
Last, First
Dates: Date assigned: Wednesday February 6, 2013. Date due: Wednesday March 27, 2013. Late
submissions will receive penalty at 10% per day. This project is worth 40% of the project grade.
Objectives: The objectives of this project include: (1) to familiarize students with the bitmap image file
format, (2) to perform edge detections.
Project Description:
Write a computer program to read in an image named “image.bmp” where image is the input to the
program and is selected from the image database. If the image is color, save the image in grey level as
“image_grey.bmp”. Note that the word “image” should be replaced with the appropriate image file name.
Perform the following operations:
1) Edge detection using Roberts, Sobel, Prewitt, and Robinson operators: (40 points) Apply the Roberts,
Sobel, Prewitt, and Robinson operators on the image_grey.bmp. Show the resulting output. Then,
threshold the output images to obtain the edge images. The edge pixels are black and the non-edge pixels
are white. Discuss how you select the threshold to obtain the edge images.
2) Edge detection using Laplacian of Gaussian: (60 points) Apply the Laplacian of Gaussian with mask sizes
of 11x11 and 21x21 on the above images. Show the values of the masks. For each mask size, select
various values of σ. Use zero crossings to detect the edges. Discuss the effects of mask size and σ on the
resulting edge images.
3) Results: Use the following images: actress.bmp, pattern2.bmp, coins.bmp. Use any additional images
and/or values of NxN as appropriate.
Project Report: Follow the required format. Attach this sheet as the cover sheet for the report. Attach
printouts of the above images as embedded pictures in the text. Scale the images to fit about 1/3 to 1/4 of the
page. Discuss the results and specific implementations.
END OF DOCUMENT
Part 1.) Edge detection using Roberts, Sobel, Prewitt and Robinson
This project uses the Roberts, Sobel, Prewitt and Robinson operators, convolving them with the images to
detect edges. The Roberts operator is 2x2, and the other three are 3x3 (given in the pseudocode). In all
of the resulting images it can be seen that the Roberts results have duller edge values in comparison to
the other operators. When thresholding is applied, the Roberts operator does not necessarily lose pixel
information about what is a line and what isn't; however, it does introduce a great deal more noise than
the other three. This is assumed to follow from the small number of calculations that go into producing
one pixel. The Sobel operator gives the greatest contrast from dark to white pixels; many of its images
stand out due to the thick white lines present throughout. The Prewitt and Robinson results appear very
similar, with only slight differences in pixel locations when comparing images.
Thresholding is useful in image processing for splitting the pixel values of an image into two groups. In
this project thresholding is used to give a strong contrast between black and white pixels. This is very
useful when an image contains too many dark pixels and there needs to be a way to differentiate between
them. A pixel greater than the threshold is changed to white and a pixel lower than the threshold is
changed to black; in this project the stored values are inverted (white as 0, black as 255), since the
operator outputs highlight lines in white and we wish to change those colors to black for printing.
I decided in this project to use the average of all the pixels as the threshold value. This appears to be a
decent approach for images with uniform contrast among pixels, meaning there is no need for an adaptive
approach in areas that may be too dark or too light. The problem with using the average pixel value as a
threshold, however, is that too much extra noise goes into the calculation. This can be seen in the
actress.bmp thresholding example using averaging alone: there is added noise on the face that may not be
useful in image processing. Therefore, in this project I decided to double this value to help eliminate that
noise and focus on the lines. This depends highly on the goals one is trying to achieve. In this project the
goal is assumed to be clean lines in the photo. This may not be the best threshold value for coins.bmp,
because that level may remove too much of the lines that form the circles of the coins, and may not be
useful in a computer vision application that will fail at the discontinuities.
Above is actress.bmp's histogram after the Sobel operator has been performed. As discussed, using the
average value as the threshold leaves extra noise in the photos. In other photographs a histogram may
show several distinct peaks, and it would be more ideal to find the best valley between those peaks to use
as the threshold value. This project only doubles the average to find its threshold value; a threshold
computed as some number of standard deviations from the average could also be used.
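The doubled-average threshold described above can be sketched as follows. This is an illustrative fragment, not the project's actual source: the vector-of-rows image layout and the function name are my own assumptions, and the black/white assignment follows the pseudocode convention used in this report (below threshold becomes black).

```cpp
#include <numeric>
#include <vector>

// Binarize a gradient-magnitude image using twice its mean value as the
// threshold. Pixels below the threshold become black (0) and the rest
// become white (255), matching the report's pseudocode convention.
std::vector<std::vector<int>> thresholdByDoubleAverage(
        const std::vector<std::vector<int>>& magnitude) {
    long long sum = 0;
    long long count = 0;
    for (const auto& row : magnitude) {
        sum += std::accumulate(row.begin(), row.end(), 0LL);
        count += static_cast<long long>(row.size());
    }
    const double threshold =
        count ? 2.0 * static_cast<double>(sum) / static_cast<double>(count) : 0.0;

    std::vector<std::vector<int>> out = magnitude;
    for (auto& row : out)
        for (auto& px : row)
            px = (px < threshold) ? 0 : 255;  // below threshold -> black
    return out;
}
```

An adaptive or standard-deviation-based threshold would only change how `threshold` is computed; the binarization loop stays the same.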
Pseudocode Example:
Define Masks:
RobertsSx[2][2] = { { 0 ,1 }, {-1, 0} }
RobertsSy[2][2] = { {1 ,0}, {0, -1}}
SobelSx[3][3] = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1} }
SobelSy[3][3] = { { 1, 2, 1 }, { 0, 0, 0 }, { -1, -2, -1} }
PrewittSx[3][3] = { { -1, 0, 1 }, { -1, 0, 1 }, { -1, 0, 1} }
PrewittSy[3][3] = { { 1, 1, 1 }, { 0, 0, 0 }, { -1, -1, -1} }
RobinsonSx[3][3] = { { -1, 1, 1 }, { -1, -2, 1 }, { -1, 1, 1} }
RobinsonSy[3][3] = { { 1, 1, 1 }, { 1, -2, 1}, { -1, -1, -1} }
Define image – consists of a matrix that is loaded from one color of a 24-bit Bitmap Image. The
single color is used for 8-bit grey scale imaging.
For Every Mask Set {MaskNameRx, MaskNameRy}
MatrixRx = Convolution(MaskNameRx, image)
MatrixRy = Convolution(MaskNameRy, image)
For Every index in MatrixRx[0..x][0..y] and MatrixRy[0..x][0..y]
MatrixFinal[][] = sqrt(MatrixRx[][]^2 + MatrixRy[][]^2)
Threshold = average(MatrixFinal)
For Every index in MatrixFinal[0..n]
if MatrixFinal[][] less than threshold
Set MatrixFinal[][] to BLACK
else
Set MatrixFinal[][] to WHITE
Write MatrixFinal to Output
Define method Convolution – Assume the kernel has equal rows and columns so no matrix flip is required.
inputs: MatrixKernel, MatrixImage : Dimensions(MatrixKernel) < Dimensions(MatrixImage)
For Every index in MatrixImage[0..x][0..y] whose boundaries are >
length(MatrixKernel[][]) – so the offset doesn't access elements out of bounds.
For Every index in MatrixKernel[0..x][0..y]
{IoffsetX, IoffsetY} = Offset of the kernel window in relation to the image matrix
MatrixOutput[0..x][0..y] = Summation(MatrixKernel[KoffsetX][KoffsetY] *
MatrixImage[IoffsetX][IoffsetY])
return MatrixOutput
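The pseudocode above can be sketched as runnable C++. This is a minimal illustration, not the project's source code: the matrix layout (vector of rows) and function names are assumptions, and the kernel is taken to be odd-sized (3x3 and up), so the 2x2 Roberts operator would need a slightly different window offset.

```cpp
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Convolve an odd-sized square kernel with an image, skipping border pixels
// the kernel window cannot cover (they stay 0), as in the pseudocode above.
Matrix convolve(const Matrix& image, const Matrix& kernel) {
    const int h = static_cast<int>(image.size());
    const int w = static_cast<int>(image[0].size());
    const int k = static_cast<int>(kernel.size());
    const int r = k / 2;
    Matrix out(h, std::vector<double>(w, 0.0));
    for (int y = r; y < h - r; ++y)
        for (int x = r; x < w - r; ++x) {
            double sum = 0.0;
            for (int ky = 0; ky < k; ++ky)
                for (int kx = 0; kx < k; ++kx)
                    sum += kernel[ky][kx] * image[y + ky - r][x + kx - r];
            out[y][x] = sum;
        }
    return out;
}

// Combine the two directional responses into MatrixFinal = sqrt(Rx^2 + Ry^2).
Matrix gradientMagnitude(const Matrix& rx, const Matrix& ry) {
    Matrix out = rx;
    for (std::size_t y = 0; y < rx.size(); ++y)
        for (std::size_t x = 0; x < rx[y].size(); ++x)
            out[y][x] = std::sqrt(rx[y][x] * rx[y][x] + ry[y][x] * ry[y][x]);
    return out;
}
```

Running an image through `convolve` once with each mask of a pair and combining the results with `gradientMagnitude` gives the response that is then thresholded in the outer loop of the pseudocode.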
actress.bmp
Roberts Sobel Prewitt Robinson
Thresholding (Using Average of all pixels):
Roberts Sobel Prewitt Robinson
Roberts Sobel Prewitt Robinson
Thresholding (2 Times the Average)
Results show thick edge detection in areas where the contrast between light and dark elements is vivid.
This is expected, since areas like the shoulder should show a solid line, while marks around the face
shouldn't show up at all. The Roberts operator, with its smaller kernel size, does not pull in a large enough
window of information, and blocky-looking pixels are apparent in its thresholded photos. A threshold
value that blocks many dark pixels will remove some elements of the photo that give texture and structure
to the image.
Part 2.) Edge detection using Laplacian of Gaussian
When creating a mask for a given sigma value, the LoG(x,y) function spreads its values over a range
determined by sigma. In the examples below, a sigma value of 1.3 to 2.2 best fits an 11x11 mask. We
could test higher values of sigma at the lower mask size, but this will truncate important values that fall
outside the 11x11 bounds. The same applies to smaller values of sigma, where many of the elements
within the mask go unused since they have value 0. The larger 21x21 masks best fit sigma values of 2.5
and above. It is also noted that as the values within the mask get smaller, a larger scale value is used to
show them as integers.
The scale value is used only for presentation, to reduce the values to integers; the calculations themselves
can be done in floating point.
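Mask generation can be sketched as follows, assuming the standard Laplacian of Gaussian formula
LoG(x,y) = -1/(pi*sigma^4) * (1 - (x^2+y^2)/(2*sigma^2)) * exp(-(x^2+y^2)/(2*sigma^2));
the function name and the simple truncation to integers are my own choices for illustration. With size 11, sigma 1.4 and scale 3000 this reproduces the integer mask shown below.

```cpp
#include <cmath>
#include <vector>

// Build an N x N Laplacian of Gaussian mask for a given sigma, with the
// values multiplied by a display scale and truncated to integers. The
// floating-point values themselves would be used for the convolution.
std::vector<std::vector<int>> logMask(int size, double sigma, double scale) {
    const double pi = std::acos(-1.0);
    const double s2 = sigma * sigma;
    const int r = size / 2;
    std::vector<std::vector<int>> mask(size, std::vector<int>(size));
    for (int y = -r; y <= r; ++y)
        for (int x = -r; x <= r; ++x) {
            const double q = (x * x + y * y) / (2.0 * s2);
            const double v = -1.0 / (pi * s2 * s2) * (1.0 - q) * std::exp(-q);
            mask[y + r][x + r] = static_cast<int>(v * scale);  // truncate
        }
    return mask;
}
```

With this sign convention the center of the mask is strongly negative and the surrounding ring positive, which is what makes the response cross zero at edges.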
Zero-Crossing
After the LoG(x,y) mask has been applied to the image, the resulting image consists of negative and
positive values. To find the edges of the image, zero-crossing detection is performed both left to right and
top to bottom. When a cell adjacent to the right goes from positive to negative, or negative to positive, it
is marked as a zero-crossing pixel, and the resulting matrix will show that line. The idea is to show only
the locations where this happens.
Zero-Crossing Matrix Theory:
The system detects a zero crossing by checking for a change in sign as the scanner moves from left to
right, or top to bottom. This can easily be achieved by multiplying the two elements together and checking
the result. A negative product means the two elements have different signs and a zero crossing has
occurred, while a positive product means the pixels have the same sign and no zero crossing has occurred.
This is why the test (x,y) * (x+1,y) < 0 is used in the example below.
Pseudocode Example
Given SigmaValue, MaskSize
MatrixLoG = CalculateLaplacianOfGaussian(SigmaValue, MaskSize)
M = Convolution(MatrixLoG, Image)
MatrixX = From M[x0..xn][y0..yn]: if (x,y) * (x+1,y) < 0 store BLACK, else store WHITE
MatrixY = From M[x0..xn][y0..yn]: if (x,y) * (x,y+1) < 0 store BLACK, else store WHITE
MatrixComplete = For every element in MatrixX and MatrixY: if a zero crossing has occurred store
BLACK, else store WHITE.
The examples in this text show what the LoG(x,y) mask does to the image; the zero-crossings are then
computed from that processed image.
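The zero-crossing scan can be sketched in C++ as follows; the vector layout and function name are again illustrative assumptions. A pixel is marked black when its LoG response and its right or bottom neighbour's response have opposite signs, i.e. their product is negative, combining the horizontal and vertical scans of the pseudocode into one pass.

```cpp
#include <vector>

// Mark zero crossings in a signed LoG response: BLACK (0) where the sign
// changes between a pixel and its right or bottom neighbour, WHITE (255)
// elsewhere.
std::vector<std::vector<int>> zeroCrossings(
        const std::vector<std::vector<double>>& response) {
    const int h = static_cast<int>(response.size());
    const int w = static_cast<int>(response[0].size());
    const int BLACK = 0, WHITE = 255;
    std::vector<std::vector<int>> out(h, std::vector<int>(w, WHITE));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const bool horizontal =
                x + 1 < w && response[y][x] * response[y][x + 1] < 0;
            const bool vertical =
                y + 1 < h && response[y][x] * response[y + 1][x] < 0;
            if (horizontal || vertical) out[y][x] = BLACK;
        }
    return out;
}
```

A strict `< 0` test means a response of exactly zero never triggers a crossing on either side; a tolerance band around zero could be added to suppress crossings caused by near-zero noise.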
Project Results
LoG - Mask Size: 11x11 Sigma Value: 1.4
Scale Factor: 3000
0 0 0 0 1 2 1 0 0 0 0
0 0 2 6 10 12 10 6 2 0 0
0 2 9 20 30 32 30 20 9 2 0
0 6 20 33 19 1 19 33 20 6 0
1 10 30 19 -73 -143 -73 19 30 10 1
2 12 32 1 -143 -248 -143 1 32 12 2
1 10 30 19 -73 -143 -73 19 30 10 1
0 6 20 33 19 1 19 33 20 6 0
0 2 9 20 30 32 30 20 9 2 0
0 0 2 6 10 12 10 6 2 0 0
0 0 0 0 1 2 1 0 0 0 0
Below are the results showing the photograph before zero-crossing alongside the images after convolution
with the kernel above. The zero-crossing algorithm uses these images to trace where an element goes from
dark to light, or light to dark. These findings are recorded in a separate image as the zero-crossings.
Zero-Crossing: when adjacent pixel changes from light to dark, or dark to light:
LoG - Mask Size: 11x11 Sigma Value: 2.0
Scale Factor: 5000
1 2 4 6 8 9 8 6 4 2 1
2 5 9 12 13 13 13 12 9 5 2
4 9 13 12 7 4 7 12 13 9 4
6 12 12 0 -19 -30 -19 0 12 12 6
8 13 7 -19 -58 -76 -58 -19 7 13 8
9 13 4 -30 -76 -99 -76 -30 4 13 9
8 13 7 -19 -58 -76 -58 -19 7 13 8
6 12 12 0 -19 -30 -19 0 12 12 6
4 9 13 12 7 4 7 12 13 9 4
2 5 9 12 13 13 13 12 9 5 2
1 2 4 6 8 9 8 6 4 2 1
This sigma value appears to be too large for this mask's dimensions.
The zero-crossing images from this example remove more noise than the previous example. Detecting
the edges with this mask on the tomatoes image did not produce detailed lines. My assumption is that
using this sigma value without the right mask size may delete important lines within an image (as seen here).
Zero-Crossing:
LoG - Mask Size: 21x21 Sigma Value: 2.5
Scale Factor: 100000
0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0
0 0 0 0 0 1 2 3 5 6 6 6 5 3 2 1 0 0 0 0 0
0 0 0 0 1 4 7 11 15 18 20 18 15 11 7 4 1 0 0 0 0
0 0 0 2 5 10 18 28 38 44 47 44 38 28 18 10 5 2 0 0 0
0 0 1 5 12 24 40 57 73 82 85 82 73 57 40 24 12 5 1 0 0
0 1 4 10 24 44 69 92 105 109 110 109 105 92 69 44 24 10 4 1 0
0 2 7 18 40 69 98 110 98 75 63 75 98 110 98 69 40 18 7 2 0
1 3 11 28 57 92 110 84 11 -73 -111 -73 11 84 110 92 57 28 11 3 1
1 5 15 38 73 105 98 11 -154 -327 -402 -327 -154 11 98 105 73 38 15 5 1
1 6 18 44 82 109 75 -73 -327 -583 -692 -583 -327 -73 75 109 82 44 18 6 1
1 6 20 47 85 110 63 -111 -402 -692 -814 -692 -402 -111 63 110 85 47 20 6 1
1 6 18 44 82 109 75 -73 -327 -583 -692 -583 -327 -73 75 109 82 44 18 6 1
1 5 15 38 73 105 98 11 -154 -327 -402 -327 -154 11 98 105 73 38 15 5 1
1 3 11 28 57 92 110 84 11 -73 -111 -73 11 84 110 92 57 28 11 3 1
0 2 7 18 40 69 98 110 98 75 63 75 98 110 98 69 40 18 7 2 0
0 1 4 10 24 44 69 92 105 109 110 109 105 92 69 44 24 10 4 1 0
0 0 1 5 12 24 40 57 73 82 85 82 73 57 40 24 12 5 1 0 0
0 0 0 2 5 10 18 28 38 44 47 44 38 28 18 10 5 2 0 0 0
0 0 0 0 1 4 7 11 15 18 20 18 15 11 7 4 1 0 0 0 0
0 0 0 0 0 1 2 3 5 6 6 6 5 3 2 1 0 0 0 0 0
0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0
The mask above is a 21 by 21 mask generated with a sigma value of 2.5. This appears
to be the best choice for this mask size, because with any larger sigma value the outer ring would start to be
cut off by the mask dimensions.
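The integer masks printed above can be reproduced with a short sketch, assuming the standard LoG formula and truncation toward zero after multiplying by the scale factor (the function name is mine, not from the report):

```python
import math

def log_mask(size, sigma, scale):
    """Build an integer Laplacian-of-Gaussian mask like the ones shown above.

    Uses LoG(x, y) = -1/(pi*sigma^4) * (1 - r^2/(2*sigma^2)) * exp(-r^2/(2*sigma^2)),
    multiplies each entry by the scale factor, and truncates toward zero.
    """
    c = size // 2
    mask = []
    for y in range(-c, c + 1):
        row = []
        for x in range(-c, c + 1):
            t = (x * x + y * y) / (2.0 * sigma * sigma)
            v = -(1.0 / (math.pi * sigma ** 4)) * (1.0 - t) * math.exp(-t)
            row.append(int(v * scale))  # truncate, matching the printed masks
        mask.append(row)
    return mask

# The 21x21, sigma = 2.5 mask above, with scale factor 100000:
mask = log_mask(21, 2.5, 100000)
print(mask[10][10])  # center value: -814
```

With sigma 3.5 and a scale factor of 1000000, the same function reproduces the center value of -2121 in the second mask.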
Applying this mask to a binary picture gives the following result:
Here is what the picture looks like with the 21x21 mask from above applied to the image:
The larger the sigma value, the more the lines expand in these photos; the zero-crossings should
appear farther apart.
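A minimal sketch of how the zero-crossing images could be produced from a LoG-filtered image: mark a pixel wherever the response changes sign between horizontal or vertical neighbours. This is one common approach (an assumption; the report does not spell out its exact crossing test):

```python
def zero_crossings(log_img):
    """Mark sign changes between right and down neighbours in a LoG response.

    log_img is a 2D list of (signed) filter responses; the output is a
    binary image with 255 at detected zero-crossings.
    """
    h, w = len(log_img), len(log_img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            v = log_img[y][x]
            # A strict sign change with either neighbour marks a crossing.
            if v * log_img[y][x + 1] < 0 or v * log_img[y + 1][x] < 0:
                out[y][x] = 255
    return out

# A response that flips sign between the two columns:
print(zero_crossings([[-1, 1], [-1, 1]]))  # [[255, 0], [0, 0]]
```

With a larger sigma, the positive and negative lobes of the response widen, so these crossings land farther apart, consistent with the thicker lines observed above.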
Zero Crossings:
Larger Sigma produces larger line blocks:
LoG - Mask Size: 21x21 Sigma Value: 3.5
Scale Factor: 1000000
4 8 14 24 37 52 69 85 98 107 110 107 98 85 69 52 37 24 14 8 4
8 15 28 45 67 93 119 143 163 175 179 175 163 143 119 93 67 45 28 15 8
14 28 48 76 110 147 183 213 234 246 250 246 234 213 183 147 110 76 48 28 14
24 45 76 116 163 209 246 271 283 286 287 286 283 271 246 209 163 116 76 45 24
37 67 110 163 217 262 285 282 262 239 229 239 262 282 285 262 217 163 110 67 37
52 93 147 209 262 286 267 205 119 44 15 44 119 205 267 286 262 209 147 93 52
69 119 183 246 285 267 175 15 -172 -324 -383 -324 -172 15 175 267 285 246 183 119 69
85 143 213 271 282 205 15 -269 -585 -834 -929 -834 -585 -269 15 205 282 271 213 143 85
98 163 234 283 262 119 -172 -585 -1030 -1376 -1507 -1376 -1030 -585 -172 119 262 283 234 163 98
107 175 246 286 239 44 -324 -834 -1376 -1795 -1953 -1795 -1376 -834 -324 44 239 286 246 175 107
110 179 250 287 229 15 -383 -929 -1507 -1953 -2121 -1953 -1507 -929 -383 15 229 287 250 179 110
107 175 246 286 239 44 -324 -834 -1376 -1795 -1953 -1795 -1376 -834 -324 44 239 286 246 175 107
98 163 234 283 262 119 -172 -585 -1030 -1376 -1507 -1376 -1030 -585 -172 119 262 283 234 163 98
85 143 213 271 282 205 15 -269 -585 -834 -929 -834 -585 -269 15 205 282 271 213 143 85
69 119 183 246 285 267 175 15 -172 -324 -383 -324 -172 15 175 267 285 246 183 119 69
52 93 147 209 262 286 267 205 119 44 15 44 119 205 267 286 262 209 147 93 52
37 67 110 163 217 262 285 282 262 239 229 239 262 282 285 262 217 163 110 67 37
24 45 76 116 163 209 246 271 283 286 287 286 283 271 246 209 163 116 76 45 24
14 28 48 76 110 147 183 213 234 246 250 246 234 213 183 147 110 76 48 28 14
8 15 28 45 67 93 119 143 163 175 179 175 163 143 119 93 67 45 28 15 8
4 8 14 24 37 52 69 85 98 107 110 107 98 85 69 52 37 24 14 8 4
This final photograph shows many edge crossings throughout the face. As sigma has grown,
smaller elements have become more pronounced in the later photos. I have found
that selecting a sigma value that fits nicely within the mask size works best: the mask should capture
all the elements that LoG(x, y) produces without cutting off the outer edges of the Mexican-hat
shape it makes. The larger the sigma value, the thicker the edges appear when zero crossings are
performed on the image.