
- 1. Follow me @ : http://sriramemarose.blogspot.in/ & linkedin/sriramemarose
- 2. Every technology takes its cue from nature: Eye – a sensor that acquires photons; Brain – a processor that processes the photoelectric signals from the eye
- 3. Step 1. Light (white light) falls on objects. Step 2. The eye lens focuses the light onto the retina. Step 3. An image forms on the retina. Step 4. Electric potentials develop on the retina (photoelectric effect). Step 5. The optic nerves transmit the developed potentials to the brain (the processor).
- 4. Optic nerves – transmission medium. "Hey, I got potentials of value X" (temporal lobe). "Yes, I know what it means" (frontal lobe). To: frontal lobe, From: temporal lobe
- 5. Different species absorb different spectral wavelengths, which implies that different sensors (eyes) have different reception abilities
- 6. The color of an image depends on the type of photoreceptor: Primary color images (RGB) – photoreceptor: cones; Grayscale images (commonly known as black and white) – photoreceptor: rods
- 7. A man-made technology that mimics the operation of an eye: an array of photoreceptors and film (to act as the retina's cones and rods), and a lens to focus light from the surroundings/objects onto the photoreceptors (mimicking the iris and eye lens)
- 8. Digital representation of an image obtained from the quantized signal:
  f(x,y) = [ f(0,0)    f(0,1)    ...  f(0,N-1)
             f(1,0)    f(1,1)    ...  f(1,N-1)
             ...
             f(M-1,0)  f(M-1,1)  ...  f(M-1,N-1) ]
  Gray line – continuous analog signal from the sensor; Dotted lines – sampling instants; Red line – quantized signal
- 9. Different types of images are often used:
  Color (RGB) -> remember the cones in the eye? R: 0-255, G: 0-255, B: 0-255
  Grayscale -> remember the rods in the eye? 0 – pure black, 1-254 – shades of gray, 255 – pure white
  Binary (Boolean) -> 0 – pure black, 1 – pure white
- 10. Single pixel with respective RGB values RGB Image
- 11. Combination of RGB values of each pixel contributing to form an image
- 12. Pure black -> 0; shades of gray -> 1-254; pure white -> 255
- 13. Things to keep in mind: Image -> 2-dimensional matrix of size (m x n). Image processing -> manipulating the values of each element of the matrix.
  f(x,y) = [ f(0,0)    f(0,1)    ...  f(0,N-1)
             f(1,0)    f(1,1)    ...  f(1,N-1)
             ...
             f(M-1,0)  f(M-1,1)  ...  f(M-1,N-1) ]
  In this representation, f is an image and f(0,0) is a single pixel (similarly for all values of f(x,y)). f(0,0) ranges over 0-255 for grayscale, 0/1 for binary, and 0-255 for each of R, G and B.
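The matrix view above can be sketched as follows. The deck's own snippets are MATLAB; this is a Python/NumPy equivalent with a hypothetical 3x3 grayscale image:

```python
import numpy as np

# A tiny 3x3 grayscale image: each element f(x, y) is one pixel, 0-255.
f = np.array([[  0, 128, 255],
              [ 64, 192,  32],
              [255,   0, 100]], dtype=np.uint8)

m, n = f.shape        # image size (m x n)
top_left = f[0, 0]    # f(0,0) -> a single pixel
```

Indexing any element f[x, y] gives one pixel value; image processing then amounts to manipulating these matrix entries.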
- 14. From the image given below, how can a specific color (say, blue) be extracted?
- 15. Algorithm: Load an RGB image. Get the size (m x n) of the image. Create a new matrix of zeros of size m x n. Read the R, G, B values of each pixel while traversing through every pixel of the image. In the new matrix, set pixels of the required color to 1 and the rest to 0. Display the new matrix; the result is the image filtered for the specific color.
- 16. Input image: Output image (extracted blue objects): Snippet:
  c = imread('F:\matlab sample images\1.png');
  [m, n, t] = size(c);
  tmp = zeros(m, n);
  for i = 1:m
      for j = 1:n
          if (c(i,j,1) == 0 && c(i,j,2) == 0 && c(i,j,3) == 255)
              tmp(i,j) = 1;
          end
      end
  end
  imshow(tmp);
- 17. From the image, count the number of red objects.
- 18. Algorithm: Load the image. Get the size of the image. Find an appropriate threshold level for red. Traverse through every pixel, setting pixels within the red threshold to 1 and the remaining pixels to 0. Find the objects with enclosed boundaries in the new image. Count the boundaries to get the number of objects.
- 19. Input image: Output image (extracted red objects): Snippet:
  c = imread('F:\matlab sample images\1.png');
  [m, n, t] = size(c);
  tmp = zeros(m, n);
  for i = 1:m
      for j = 1:n
          if (c(i,j,1) == 255 && c(i,j,2) == 0 && c(i,j,3) == 0)
              tmp(i,j) = 1;
          end
      end
  end
  imshow(tmp);
  ss = bwboundaries(tmp);
  num = length(ss);
  Output: num = 3
- 20. Thresholding is used to segment an image by setting all pixels whose intensity values are above a threshold to a foreground value and all the remaining pixels to a background value. The pixels are partitioned depending on their intensity value.
  Global thresholding:
  g(x,y) = 1, if f(x,y) > T
  g(x,y) = 0, if f(x,y) <= T
  Multiple thresholding:
  g(x,y) = a, if f(x,y) > T2
  g(x,y) = b, if T1 < f(x,y) <= T2
  g(x,y) = c, if f(x,y) <= T1
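The two rules above can be sketched directly. The deck works in MATLAB; this is a hedged Python/NumPy version with made-up pixel values and thresholds T = 128, T1 = 50, T2 = 150:

```python
import numpy as np

def global_threshold(f, T):
    """g(x,y) = 1 if f(x,y) > T, else 0."""
    return (f > T).astype(np.uint8)

def multiple_threshold(f, T1, T2, a=2, b=1, c=0):
    """g = a if f > T2; b if T1 < f <= T2; c if f <= T1."""
    g = np.full(f.shape, c, dtype=np.uint8)
    g[(f > T1) & (f <= T2)] = b
    g[f > T2] = a
    return g

f = np.array([[10, 100, 200]], dtype=np.uint8)
g1 = global_threshold(f, 128)        # -> [[0, 0, 1]]
g2 = multiple_threshold(f, 50, 150)  # -> [[0, 1, 2]]
```

Global thresholding gives a binary image; multiple thresholding partitions the intensity range into more than two bands.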
- 21. From the given image, find the total number of objects present.
- 22. Algorithm: Load the image. Convert the image to grayscale (in case of an RGB image). Fix a threshold level to apply to the image. Convert the image to binary by applying the threshold level. Count the boundaries to count the number of objects.
- 23. At 0.25 threshold; At 0.5 threshold; At 0.6 threshold; At 0.75 threshold
- 24. Snippet:
  img = imread('F:\matlab sample images\color.png');
  img1 = rgb2gray(img);
  Thresholdvalue = 0.75;
  img2 = im2bw(img1, Thresholdvalue);
  figure, imshow(img2);
  % to detect the number of objects
  B = bwboundaries(img2);
  num = length(B);
- 25. Snippet:
  img = imread('F:\matlab sample images\color.png');
  img1 = rgb2gray(img);
  Thresholdvalue = 0.75;
  img2 = im2bw(img1, Thresholdvalue);
  figure, imshow(img2);
  % to detect the number of objects
  B = bwboundaries(img2);
  num = length(B);
  % to draw the detected boundaries over the objects
  figure, imshow(img2);
  hold on;
  for k = 1:length(B)
      boundary = B{k};
      plot(boundary(:,2), boundary(:,1), 'r', 'LineWidth', 2);
  end
- 26. Given an image of English alphabets, segment each and every letter. Perform basic morphological operations on the letters: detect edges; filter the noise, if any; replace each pixel with the maximum value found in the defined pixel set (dilate); fill the holes in the image; label every blob in the image; draw a bounding box over each detected blob.
- 27. Snippet:
  a = imread('F:\matlab sample images\MYWORDS.png');
  im = rgb2gray(a);
  c = edge(im);
  se = strel('square', 8);
  I = imdilate(c, se);
  img = imfill(I, 'holes');
  figure, imshow(img);
  [Ilabel, num] = bwlabel(img);
  disp(num);
  Iprops = regionprops(Ilabel);
  Ibox = [Iprops.BoundingBox];
  Ibox = reshape(Ibox, [4 num]);
  imshow(I)
  hold on;
  for cnt = 1:num
      rectangle('position', Ibox(:,cnt), 'edgecolor', 'r');
  end
- 28. 1. Write a program that solves the given equations for calibration and measurement. Hint: for manual calculation, use imtool in MATLAB to get the values of x1, x2, y1 and y2.
- 29. Algorithm: Load the two images to be matched. Detect edges in both images. Traverse through each pixel and count the number of black and white points in one image (total value). Compare the value of each pixel across both images (matched value). Find the match percentage: match percentage = (matched value / total value) * 100. If the match percentage exceeds a certain threshold (say 90%), display 'image matches'.
- 30. Input image: Output image after edge detection: Note: this method works for identical images and can be used for fingerprint and iris matching
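The deck gives no snippet for this matching step, so here is a minimal Python/NumPy sketch of the match-percentage formula, assuming the two edge images are already binary arrays of the same size (the tiny 1x4 arrays are made up for illustration):

```python
import numpy as np

def match_percentage(edges_a, edges_b):
    """Compare two binary edge images pixel by pixel:
    match percentage = (matched value / total value) * 100."""
    assert edges_a.shape == edges_b.shape
    total = edges_a.size
    matched = int(np.sum(edges_a == edges_b))
    return matched / total * 100.0

a = np.array([[1, 0, 1, 0]])
b = np.array([[1, 0, 0, 0]])
pct = match_percentage(a, b)   # 3 of 4 pixels agree -> 75.0
is_match = pct > 90            # threshold from the slide
```

With the 90% threshold from the slide, these two toy images would not be reported as a match.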
- 31. From the given image, find the nuts and washers based on their features
- 32. Algorithm: Analyze the image. Look for detectable features of the nuts/washers. Preprocess the image to enhance the detectable features (hint – use morphological operations). Create a detector to detect the feature. Mark the detected results.
- 33. Convolution is a mathematical operation on two functions f and g that produces a third function, a modified version of one of the originals. Example: feature detection. Creating a convolution kernel for detecting edges: • Analyze the logic needed to detect edges • Choose a kernel with appropriate values to detect the lines • Create a sliding window for the convolution kernel • Slide the window over every pixel of the image
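The sliding-window idea above can be sketched by hand. The deck later uses MATLAB's conv2/imfilter; this Python/NumPy version spells the loop out, and the kernel values are an assumption (a common Prewitt-style horizontal-edge kernel, since the slide's own kernel image is not shown):

```python
import numpy as np

# Assumed horizontal-edge kernel (Prewitt-style); the slide's actual
# kernel values are not available in this transcript.
kernel = np.array([[-1, -1, -1],
                   [ 0,  0,  0],
                   [ 1,  1,  1]])

def convolve2d(img, k):
    """'Valid' 2-D convolution: flip the kernel, then slide it over the image."""
    k = np.flipud(np.fliplr(k))           # convolution flips the kernel
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

# A horizontal edge: dark rows above, bright rows below.
img = np.array([[0, 0, 0],
                [0, 0, 0],
                [0, 0, 0],
                [9, 9, 9],
                [9, 9, 9]], dtype=float)
resp = convolve2d(img, kernel)
```

The response is zero in the flat region and nonzero where the window straddles the dark-to-bright transition, which is exactly how the kernel "detects" the horizontal edge.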
- 34. Input image Output image After convolution
- 35. Algorithm: • Load an image • Create a kernel to detect horizontal edges • Eg: • Find the transpose of the kernel to obtain the vertical edges • Apply the kernels to the image to extract the horizontal and vertical components
- 36. Resultant image after applying horizontal filter kernel
- 37. Resultant image after applying vertical filter kernel
- 38. Snippet:
  Using convolution:
  rgb = imread('F:\matlab sample images\2.png');
  I = rgb2gray(rgb);
  imshow(I)
  hy = fspecial('sobel');
  hx = hy';
  hrFilt = conv2(double(I), hy);   % conv2 requires single/double input
  vrFilt = conv2(double(I), hx);
  Using filters:
  rgb = imread('F:\matlab sample images\2.png');
  I = rgb2gray(rgb);
  hy = fspecial('sobel');
  hx = hy';
  Iy = imfilter(double(I), hy, 'replicate');
  Ix = imfilter(double(I), hx, 'replicate');
- 39. Often includes: image color conversion; histogram equalization; edge detection; morphological operations (erode, dilate, open, close)
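Of the morphological operations listed, dilation is the one the deck describes explicitly ("replace the pixel with the maximum value found in the defined pixel set"). The deck uses MATLAB's imdilate; this is a hand-rolled Python/NumPy sketch for a square structuring element, with a made-up single-pixel test image:

```python
import numpy as np

def dilate(img, size=3):
    """Binary dilation with a size x size square structuring element:
    each output pixel takes the maximum value in its neighborhood."""
    pad = size // 2
    padded = np.pad(img, pad, mode='constant')   # zero-pad the border
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i+size, j:j+size].max()
    return out

img = np.array([[0, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]], dtype=np.uint8)
grown = dilate(img)   # the single foreground pixel grows into a 3x3 block
```

Erosion is the dual (take the minimum instead of the maximum); open and close are erosion-then-dilation and dilation-then-erosion respectively.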
- 40. To detect the required feature in an image: • First subtract the unwanted features • Enhance the required feature • Create a detector to detect the feature
- 41. Gray scale Histogram equalization
- 42. Edge detection:
- 43. Morphological close:
- 44. Image dilation:
- 45. Detect the feature in the preprocessed image
- 46. • Fusion: putting together information coming from different sources/data • Registration: computing the geometrical transformation between two datasets. Applications: • Medical imaging • Remote sensing • Augmented reality, etc.
- 47. Courtesy: G. Malandain, PhD, Senior Scientist, INRA
- 48. Courtesy: G. Malandain, PhD, Senior Scientist, INRA
- 49. PET scan of brain + MRI scan of brain = output of multimodal registration (different scanners)
