Digital image processing Tool presentation


The development of this image processing software will help editing process to be done effectively. It requires less space on hard disk; emphasizing only on the crucial image processing functions and the executable program will take less space.

Published in: Technology


  1. Digital Image Processing: Learning Kit. Submitted by: Diksha Behl (10104763), Sahil Handa (10104729), Siddharth Sharma (10104737).
  2. Enlargement. An image is enlarged when we want to see its finer details: the size of the image is increased without introducing blurring. In our software we provide enlargement to 1.2 times the actual size. Conclusion: enlargement is required when we want to inspect the finer details of an image. It is one of the basic functions used when we capture images of small size and want to view them at a larger size.
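The tool's enlargement code is not shown; a minimal nearest-neighbour sketch of the idea in numpy (the 1.2 factor matches the slide, everything else is an assumption):

```python
import numpy as np

def enlarge(img, factor=1.2):
    """Nearest-neighbour enlargement: map each output pixel back to a source pixel."""
    h, w = img.shape[:2]
    new_h, new_w = int(h * factor), int(w * factor)
    rows = (np.arange(new_h) / factor).astype(int)  # source row for each output row
    cols = (np.arange(new_w) / factor).astype(int)  # source column for each output column
    return img[rows[:, None], cols]

img = np.arange(25).reshape(5, 5)
big = enlarge(img)
print(big.shape)  # (6, 6)
```

Nearest-neighbour keeps hard pixel edges (no interpolation blur), which is consistent with the slide's "no blurring occurs" claim.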
  3. Flip. An image such as this represents a right hand; if we "flip" the image on a horizontal axis, we arrive at an inverted hand. Conclusion: flipping is required when we want to view an image upside down. It is one of the basic functions used when a camera captures an inverted picture.
  4. Mirror. An image such as this represents a right hand; if we "mirror" the image on a vertical axis, we arrive at a left hand. Conclusion: mirroring is required when one wants to see an image reversed left to right. It is one of the basic functions used when we capture an image in a mirror and want to recover the correct version of the image.
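Both operations reduce to reversing row or column order; a sketch with numpy (the tool's own code is not shown):

```python
import numpy as np

img = np.array([[1, 2, 3],
                [4, 5, 6]])

flipped  = np.flipud(img)   # flip on the horizontal axis (up-down)
mirrored = np.fliplr(img)   # mirror on the vertical axis (left-right)

print(flipped)   # [[4 5 6], [1 2 3]]
print(mirrored)  # [[3 2 1], [6 5 4]]
```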
  5. Gamma Correction. Gamma correction maps a narrow range of gray-scale values into a wider range of output values. For gamma > 1 it broadens the darker region while narrowing the brighter region, i.e. the image gets darker as gamma increases above 1. For gamma < 1 it broadens the brighter region while narrowing the dark region, i.e. the image gets brighter as gamma decreases. Reproducing colors accurately also requires some knowledge of gamma correction, because varying the gamma value changes not only the brightness but also the ratios of red to green to blue in a color image. Conclusion: gamma correction is used when we want to convert a dark image into a brighter one.
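The slide does not give the transform itself; the standard power-law form s = 255 * (r/255)^gamma for 8-bit images matches the behaviour described (gamma > 1 darkens, gamma < 1 brightens), so a sketch under that assumption:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Power-law gamma correction for an 8-bit image: s = 255 * (r/255)**gamma."""
    normalized = img / 255.0
    return np.uint8(255 * normalized ** gamma)

dark = np.array([[64]], dtype=np.uint8)
print(gamma_correct(dark, 0.5))  # gamma < 1 brightens: [[127]]
print(gamma_correct(dark, 2.0))  # gamma > 1 darkens:  [[16]]
```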
  6. Histogram Equalization. The PMF gives the probability of each pixel value in an image; the CDF gives the cumulative sum of these probabilities. The CDF is then multiplied by the number of gray levels to find the new pixel intensities, which are mapped onto the old values, and the histogram is equalized. Conclusion: histogram equalization is used to enhance the appearance of images. Suppose an image is predominantly dark: its histogram is skewed towards the lower end of the gray scale, and all the image detail is compressed into the dark end of the histogram. If we could 'stretch out' the gray levels at the dark end to produce a more uniformly distributed histogram, the image would become much clearer.
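The PMF → CDF → remapping pipeline described above can be sketched directly in numpy (a generic implementation, not the tool's code):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: PMF -> CDF -> scale by levels -> remap pixels."""
    hist = np.bincount(img.ravel(), minlength=levels)
    pmf = hist / img.size                 # probability of each gray level
    cdf = np.cumsum(pmf)                  # cumulative distribution
    mapping = np.round(cdf * (levels - 1)).astype(np.uint8)
    return mapping[img]                   # remap each old value to its new intensity

dark = np.array([[10, 10],
                 [12, 20]], dtype=np.uint8)
print(equalize(dark))  # dark values stretched across the full 0-255 range
```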
  7. Color Filter and Color to Gray. RGB images are composed of three independent channels for the red, green and blue primary color components; CMYK images have four channels for the cyan, magenta, yellow and black ink plates. The formula used for converting color images to gray is (11*R + 16*G + 5*B) / 32.
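The slide's integer-weighted formula can be applied per pixel; a sketch in numpy (the weights are from the slide, the surrounding code is an assumption):

```python
import numpy as np

def to_gray(rgb):
    """Gray conversion with the tool's integer weights: (11*R + 16*G + 5*B) / 32.
    The weights sum to 32, so pure white (255, 255, 255) maps to 255."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((11 * r + 16 * g + 5 * b) // 32).astype(np.uint8)

white = np.full((1, 1, 3), 255, dtype=np.uint8)
print(to_gray(white))  # [[255]]
```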
  8. Inversion. The negative of an image with gray levels in the range [0, L-1] is obtained with the negative transformation s = L - 1 - r. Conclusion: this type of processing is particularly suited for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
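The transformation s = L - 1 - r is a one-liner on an array; a sketch for 8-bit images (L = 256):

```python
import numpy as np

def negative(img, L=256):
    """Negative transformation s = L - 1 - r: dark detail becomes bright."""
    return (L - 1 - img.astype(int)).astype(np.uint8)

print(negative(np.array([0, 100, 255], dtype=np.uint8)))  # [255 155   0]
```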
  9. Filters. The average (mean) filter smoothes image data, thus eliminating noise. Consider a 3x3 filter window:

     a1 a2 a3
     a4 a5 a6
     a7 a8 a9

     The average filter computes the sum of all pixels in the filter window and divides by the number of pixels: filtered pixel = (a1 + a2 + a3 + a4 + a5 + a6 + a7 + a8 + a9) / 9. The median filter is also a sliding-window spatial filter, but it replaces the center value in the window with the median of all the pixel values in the window.
  10. Filters (conclusion).
     • The max filter is a non-linear digital filtering technique, often used to find the brightest points in an image.
     • Average filters are used for blurring (removal of small details from an image) and noise reduction.
     • The median filter is a non-linear digital filtering technique, often used to remove noise from images or other signals.
     • The min filter is a non-linear digital filtering technique, often used to find the darkest points in an image.
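All four filters differ only in how they reduce the 3x3 window; a generic sliding-window sketch in numpy (not the tool's implementation) that takes the reducer as a parameter:

```python
import numpy as np

def filter3x3(img, reducer):
    """Slide a 3x3 window over the image and replace each interior pixel
    with reducer() of its window (np.mean, np.median, np.min or np.max).
    Border pixels are left unchanged."""
    h, w = img.shape
    out = img.astype(float).copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = reducer(img[i-1:i+2, j-1:j+2])
    return out

noisy = np.zeros((5, 5))
noisy[2, 2] = 90                              # a single bright noise pixel
print(filter3x3(noisy, np.mean)[2, 2])        # 10.0 (90 spread over 9 pixels)
print(filter3x3(noisy, np.median)[2, 2])      # 0.0 (median rejects the outlier)
```

This shows the conclusion concretely: the mean filter only dilutes impulse noise, while the median filter removes it outright.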
  11. Smoothing and Sharpening. Smoothing can be performed using mean filters or order-statistic filters; it is used to reduce noise (at the cost of some blurring of fine detail). The filters available are: arithmetic mean filter, geometric mean filter, harmonic mean filter, min filter, max filter, and median filter. Sharpening is used to increase the contrast of an image; it can be achieved by contrast stretching and Laplacian operators.
  12. Point and Edge Detection. An edge is a jump in intensity. For point detection we use the standard high-pass mask, and to ensure that this mask detects only points and not lines we set a threshold: a point is detected at the location on which the mask is centered only if the response after applying the mask exceeds the threshold. Consider the problem of detecting edges in a one-dimensional signal: intuitively, there should be an edge between the 4th and 5th pixels. If the intensity difference between the 4th and 5th pixels is high, then we easily identify the edge.
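The one-dimensional case can be made concrete with a first-difference gradient and a threshold (the signal values here are illustrative, not from the slides):

```python
import numpy as np

# A 1-D signal with a jump between the 4th and 5th pixels (indices 3 and 4).
signal = np.array([10, 10, 12, 11, 80, 82, 81, 80])

gradient = np.abs(np.diff(signal))          # intensity jump between neighbours
threshold = 30                              # only strong jumps count as edges
edges = np.where(gradient > threshold)[0]   # index of the pixel before each jump
print(edges)  # [3]
```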
  13. Morphology.
     • Morphology is used to extract image structures that are helpful in representing regions and shapes.
     • Erosion allows thicker lines to become thinner and detects holes.
     • Dilation makes lightly drawn strokes in the image thicker.
     • Opening essentially removes tiny outer "hairline" leaks and restores the text in an image; it isolates objects that may be just touching one another.
     • Closing eliminates small holes and tends to fuse narrow breaks.
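For binary images, erosion and dilation are min and max over a structuring element; a sketch with a 3x3 element in numpy (not the tool's code), from which opening and closing follow by composition:

```python
import numpy as np

def erode(img):
    """Binary erosion, 3x3 structuring element: a pixel survives only if
    its entire 3x3 neighbourhood is foreground (thins lines, opens holes)."""
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = img[i-1:i+2, j-1:j+2].min()
    return out

def dilate(img):
    """Binary dilation: a pixel turns on if any 3x3 neighbour is on (thickens lines)."""
    h, w = img.shape
    out = img.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = img[i-1:i+2, j-1:j+2].max()
    return out

# Opening = erosion then dilation; closing = dilation then erosion.
square = np.zeros((7, 7), dtype=int)
square[2:5, 2:5] = 1                 # a 3x3 foreground block
print(erode(square).sum())           # 1  (only the center pixel survives)
print(dilate(square).sum())          # 25 (the block grows to 5x5)
```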
  14. Segmentation. Color-based segmentation reduces the color range and replaces each whole range with a specific representative color of that range. Seed-point-based segmentation uses seed points to segment the image into regions whose color is similar to the seed points.
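One way to read "reduces the color range" is per-channel quantization: each channel value is replaced by the representative (center) of the range it falls in. A sketch under that assumption, which may differ from the tool's actual scheme:

```python
import numpy as np

def quantize(img, levels=4):
    """Color-range reduction: split each channel's 0-255 range into `levels`
    bins and replace every value with its bin's representative (center)."""
    bin_size = 256 // levels
    bins = img.astype(int) // bin_size            # which range each value falls in
    return (bins * bin_size + bin_size // 2).astype(np.uint8)

px = np.array([[[10, 100, 200]]], dtype=np.uint8)
print(quantize(px))  # [[[ 32  96 224]]]
```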
  15. Fourier Transform. The Fourier Transform is an important image processing tool used to decompose an image into its sine and cosine components. The output of the transformation represents the image in the Fourier or frequency domain, while the input image is the spatial-domain equivalent. In the Fourier-domain image, each point represents a particular frequency contained in the spatial-domain image. The Fourier Transform is used in a wide range of applications, such as image analysis, image filtering, image reconstruction and image compression.
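The decomposition and its inverse can be demonstrated with numpy's FFT routines (a generic illustration, not the tool's code):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))

F = np.fft.fft2(img)              # spatial domain -> frequency domain
F_shifted = np.fft.fftshift(F)    # move the zero-frequency term to the center

# The DC component F[0, 0] is the sum of all pixels (overall brightness).
print(np.allclose(F[0, 0].real, img.sum()))  # True

restored = np.fft.ifft2(F).real   # the inverse transform recovers the image
print(np.allclose(restored, img)) # True
```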
  16. Image Stitching. A panorama is simply a wide-angle view of a physical space. There are many map projections for arranging the stitched images; we used a rectilinear projection for the resulting image, in which images are viewed on a two-dimensional plane.
  17. Algorithm Used for Stitching.
     1) Compute the homography between images using SURF detectors: use cv::SurfFeatureDetector in the interface, and the cv::drawKeypoints OpenCV function to show the scale factor associated with each feature.
     2) Project the corners of one image through the estimated homography to get an estimate of the warped image size, using the homography.ProjectPoints() function.
     3) Create an image with size equal to the one computed in step 2.
     4) Warp the image according to the homography (all pixels of the image are transformed to the other view) using the warpPerspective function.
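Step 2 is just matrix algebra on homogeneous coordinates; a numpy sketch of projecting the four corners through a 3x3 homography H (illustrative only, the deck's code uses OpenCV):

```python
import numpy as np

def project_corners(H, w, h):
    """Project the four image corners through homography H (step 2 of the
    stitching algorithm) to estimate the extent of the warped image."""
    corners = np.array([[0, 0, 1],
                        [w, 0, 1],
                        [0, h, 1],
                        [w, h, 1]], dtype=float).T   # homogeneous, as columns
    projected = H @ corners
    projected /= projected[2]          # divide by the homogeneous coordinate
    return projected[:2].T             # (x, y) of each warped corner

# The identity homography leaves the corners unchanged.
H = np.eye(3)
print(project_corners(H, 640, 480))
```

The bounding box of the returned corners gives the canvas size needed in step 3.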
  18. Stitching pipeline.
     1) Take input images and show them in a DataGridView.
     2) Detect keypoints and corresponding descriptors of both images via SURF detectors.
     3) Match features with KNN brute-force matching, voting for uniqueness of features using the GPU.
     4) Remove outliers and calculate the homography from the inliers using RANSAC.
     5) Warp one of the two images.