Computer Vision: Shape from Specularities and Motion




  1. Use of Specularities and Motion in the Extraction of Surface Shape, Damian Gordon [email_address]
  2. Introduction
     - Introduction - Image Geometry
     - Photometric Stereo (1)
     - Structured Highlights (1)
     - Stereo Techniques (2)
     - Motion Techniques (3)
     - Solder Joint Inspection (1)
  3. Specular Surface
     - Angle of Incidence = Angle of Reflection
  4. Image Geometry ________________________
  5. Image Formation
     - Geometry determines where in the image plane the projection of a scene point will be located
     - The physics of light determines the brightness of a point in the image plane as a function of scene illumination and surface properties
  6. Image Formation
  7. Image Formation
     - The LINE OF SIGHT of a point in the scene is the line that passes through the point of interest and the centre of projection
     - The above model leads to image inversion; to avoid this, assume the image plane is in front of the centre of projection
  8. Image Formation
  9. Perspective Projection
     - (x', y') may be found by computing the co-ordinates of the intersection of the line of sight through (x, y, z) with the image plane
     - By two sets of similar triangles: x' = fx/z and y' = fy/z
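The similar-triangles result above can be sketched directly; a minimal example, with an illustrative focal length and scene point:

```python
# Perspective projection of a scene point onto the image plane,
# using the similar-triangles result x' = f*x/z, y' = f*y/z.
# The focal length and the sample point are illustrative values.

def project(x, y, z, f):
    """Project scene point (x, y, z) onto an image plane at distance f."""
    if z <= 0:
        raise ValueError("point must lie in front of the centre of projection")
    return f * x / z, f * y / z

xp, yp = project(2.0, 1.0, 4.0, f=1.0)
print(xp, yp)  # 0.5 0.25
```

Note that points farther away (larger z) project closer to the image centre, which is exactly the foreshortening the slide describes.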
  10. Image Irradiance (Brightness)
     - The irradiance of a point in the image plane, E(x', y'), is determined by the amount of energy radiated by the corresponding scene point in the direction of the image point: E(x', y') = L(x, y, z)
     - Two factors determine the radiance emitted by a surface patch:
       I) The illumination falling on the patch, determined by the patch's position relative to the distribution of light sources
       II) The fraction of incident illumination reflected by the patch, determined by the optical properties of the patch
  11. Image Irradiance
     - (θi, φi) is the direction of the point source of scene illumination
     - (θe, φe) is the direction of the energy emitted from the surface patch
     - E(θi, φi) is the energy arriving at the patch
     - L(θe, φe) is the energy radiated from the patch
  12. Image Irradiance
     - The relationship between radiance and irradiance may be defined as follows:
       L(θe, φe) = f(θi, φi; θe, φe) E(θi, φi)
       where f(θi, φi; θe, φe) is the bidirectional reflectance distribution function (BRDF)
     - The BRDF depends on the optical properties of the surface
  13. Types of Reflectance
     - Lambertian Reflectance
     - Specular Reflectance
     - Hybrid Reflectance
     - Electron Microscopy Reflectance (not covered)
  14. Lambertian Reflectance
     - Appears equally bright from all viewing directions for a fixed illumination distribution
     - Does not absorb any incident illumination
     - The BRDF is a constant (1/π)
  15. Lambertian Reflectance - Point Source
     - Perceived brightness when illuminated by a distant point source:
       L(θe, φe) = (1/π) cos θs  (Lambert's cosine rule)
     - This means a surface patch captures the most illumination when it is oriented so that its surface normal points in the direction of illumination
  16. Lambertian Reflectance - Uniform Source
     - Perceived brightness when illuminated by a uniform source: L(θe, φe) is a constant
     - This means that no matter how the surface is oriented, it receives the same amount of illumination
  17. Specular Reflectance
     - Reflects all incident illumination in a direction that makes the same angle with the surface normal, but on the opposite side of the normal
     - Light from the direction (θi, φi) is reflected to (θe, φe) = (θi, φi + π)
     - The BRDF is δ(θe − θi) δ(φe − φi − π) / (sin θi cos θi)
  18. Specular Reflectance
     - Perceived brightness: L(θe, φe) = E(θe, φe − π)
     - This means the incoming rays of light are reflected from the surface as by a perfect mirror
  19. Hybrid Reflectance
     - A mixture of Lambertian and specular reflectance
     - The BRDF is η δ(θe − θi) δ(φe − φi − π) / (sin θi cos θi) + (1 − η)/π
       where η sets the mixture of the two reflectance functions
  20. Surface Orientation
     - If (x, y, z) is a point on a surface and (x, y) is the same point on the image plane, with distance z from the camera (depth), then a nearby point is (x + Δx, y + Δy)
     - The change in depth can be expressed as Δz = (∂z/∂x) Δx + (∂z/∂y) Δy
  21. Surface Orientation
     - The sizes of the partial derivatives of z with respect to x and y are related to the orientation of the surface patch
     - The gradient at (x, y, z) is the vector (p, q), given by p = ∂z/∂x and q = ∂z/∂y
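The gradient (p, q) fixes the surface normal, since (−p, −q, 1) is perpendicular to the surface z = z(x, y); a small sketch with illustrative values:

```python
import numpy as np

# From the gradient (p, q) = (dz/dx, dz/dy), an (unnormalised) surface
# normal is (-p, -q, 1). Normalising gives the unit normal used in the
# reflectance models that follow.

def normal_from_gradient(p, q):
    n = np.array([-p, -q, 1.0])
    return n / np.linalg.norm(n)

n = normal_from_gradient(0.0, 0.0)  # a fronto-parallel patch
print(n)  # [0. 0. 1.]
```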
  22. Reflectance Map
     - For a given light-source distribution and a given surface material, the reflectance of all surface orientations (p, q) can be catalogued or computed to yield the reflectance map R(p, q), which leads to the image irradiance equation E(x, y) = R(p, q)
  23. Reflectance Map
     - i.e. the irradiance at a point in the image plane is equal to the reflectance-map value for the surface orientation (p, q) at the corresponding point in the scene
     - In other words, given a change in surface orientation, the reflectance map allows you to calculate the change in image intensity
  24. Shape from Shading
     - The opposite problem: we know E(x, y) = R(p, q), and we need to calculate p and q for each point (x, y) in the image
     - Two unknowns, one equation; therefore a constraint must be applied
  25. Shape from Shading
     - Smoothness constraint: objects are made of smooth surfaces, which depart from smoothness only along their edges
  26. Shape from Shading
  27. Photometric Stereo
     - Assume a scene with Lambertian reflectance
     - Each point (x, y) will have brightness E(x, y) and possible orientations (p, q) for a given light source
     - If the same surface is illuminated by a point source in a different location, the reflectance map will be different
  28. Photometric Stereo
     - Using this method, surface orientation may be uniquely identified
     - In reality, not all incident light is radiated from a surface; this is accounted for by adding an albedo factor ρ into the image irradiance equation: E(x, y) = ρ R(p, q)
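The Lambertian case above can be sketched with the classic three-source formulation: stacking E = ρ(n · s) for three known source directions gives a 3x3 linear system whose solution is g = ρn, from which albedo and orientation separate. The source directions and the test surface below are illustrative.

```python
import numpy as np

# Three-light Lambertian photometric stereo sketch: solve S g = E for
# g = rho * n, then recover albedo (|g|) and unit normal (g / |g|).

S = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
S = S / np.linalg.norm(S, axis=1, keepdims=True)   # unit source directions

true_n = np.array([0.0, 0.0, 1.0])                 # illustrative patch normal
rho = 0.8                                          # illustrative albedo
E = rho * S @ true_n                               # simulated brightnesses

g = np.linalg.solve(S, E)                          # g = rho * n
rho_est = np.linalg.norm(g)
n_est = g / rho_est
print(rho_est, n_est)
```

With noise-free brightnesses the recovery is exact; in practice more than three lights and a least-squares solve are used.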
  29. Photometric Stereo ________________________
  30. Determining Surface Orientations of Specular Surfaces by Using the Photometric Stereo Method, Katsushi Ikeuchi, Ministry of International Trade and Industry, Japan
  31. Introduction
     - Photometric stereo may be used to determine the surface orientation of a patch
     - For diffuse surfaces, point-source illumination is used
     - For specular surfaces, a distributed light source is required
  32. Image Radiance
     - For a specular surface and an extended light source: Le(θe, φe) = Li(θe, φe − π)
     - The relationship between reflected radiance and image irradiance is
       Ep = (π/4)(d/fp)² cos⁴α · Le
       fp = focal length, d = diameter of the aperture, α = off-axis angle
  33. Image Radiance
     - From this a brightness distribution may be derived, and from that an inverse transformation
  34. System Implementation
     - Two-stage process:
       - Off-line job
       - On-line job
  35. Off-Line Job
     - Light source: three linear lamps, placed symmetrically 120 degrees apart
     - Lookup table: a 3D table could be used, but observed triples often contain errors
     - Instead a 2D lookup table is used, in which each element has two alternatives
     - Each alternative consists of a surface orientation and an intensity
  36. Off-Line Job
  37. On-Line Job
     - Normalization is required to cancel the effect of albedo
     - Brightness calibration is also required
     - The correct alternative of the two solutions is found by comparing the distance between the actual third image brightness and the element of the matrix
  38. Results
     - Works well in a constrained environment
     - Has problems if the surface is not smooth
  39. Extracting the Shape and Roughness of Specular Lobe Objects Using Four Light Photometric Stereo, Fredric Solomon and Katsushi Ikeuchi, Carnegie Mellon
  40. Structured Highlights __________________________
  41. Structured Highlight Inspection of Specular Surfaces, Arthur C. Sanderson, Lee E. Weiss and Shree K. Nayar, Carnegie Mellon
  42. Introduction
     - The structured highlight approach yields 3D images from point sources and images
     - 'Highlight': a light source reflected on a specular surface
  43. Introduction
     - Angle of Incidence = Angle of Reflection
     - A fixed camera will image a reflected light ray (highlight) only if it is positioned and oriented correctly
  44. Introduction
     - Once a highlight is observed, if the direction of the incident ray is known, the orientation of the surface element may be found
     - A spherical array of fixed point light sources is used to ensure all positions and directions are scanned
  45. Lambertian Reflectance
     - The reflectance relationship for a Lambertian model of image E(x, y):
       E(x, y) = A (n · s)
       n = surface normal (unit vector)
       s = source direction (unit vector)
       A = a constant related to illumination intensity and surface albedo
  46. Hybrid Reflectance
     - The reflectance relationship for a hybrid model of image E(x, y):
       E(x, y) = A k (n · s) + A (1 − k) [2(n · z)(n · s) − (z · s)]^m
       z = viewing direction (unit vector)
       k = relative weight of the specular and Lambertian components
       m = sharpness of the specularity
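A sketch of evaluating a hybrid model of this form. The values of A, k and the sharpness exponent m are illustrative, and clamping the negative lobe term at zero is an added assumption, not part of the slide:

```python
import numpy as np

# Hybrid reflectance sketch: a weighted sum of a Lambertian term (n.s)
# and a specular-lobe term [2(n.z)(n.s) - (z.s)]^m. The lobe term is
# clamped at zero where the specular direction faces away from the viewer.

def hybrid_brightness(n, s, z, A=1.0, k=0.5, m=20):
    n, s, z = (np.asarray(v, float) / np.linalg.norm(v) for v in (n, s, z))
    lambertian = max(float(n @ s), 0.0)
    lobe = max(2.0 * (n @ z) * (n @ s) - (z @ s), 0.0)
    return A * (k * lambertian + (1 - k) * lobe ** m)

# Mirror configuration: n, s, z all aligned, so both terms are maximal.
E = hybrid_brightness(n=[0, 0, 1], s=[0, 0, 1], z=[0, 0, 1])
print(E)  # 1.0
```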
  47. Structured Highlight Inspection
     - Using the above equation, the slope at any point may be calculated
     - Surface orientation may be determined from the sources that produce local peaks in the reflectance map
  48. Camera Models
     - Perspective Camera Model
     - Orthographic Projection Model
     - "Fixed" Camera Model
  49. Perspective Camera Model
     - All reflected rays pass through a focal point
     - This model provides very accurate measurements, but requires extensive calibration procedures
  50. Orthographic Projection Model
     - The focal point is assumed to be an infinite distance from the camera, and all the reflected rays are perpendicular to the image plane
  51. "Fixed" Camera Model
     - All rays are emitted from a single point on the reflectance plane, and all surface-normal estimates are computed relative to that reference point
  52. Camera Models - Accuracy
     - Perspective Camera Model: most accurate
     - "Fixed" Camera Model: next most accurate
     - Orthographic Projection Model: most sensitive to error
  53. SHINY - Structured Highlight INspection sYstem
     - Highlights are extracted from images and tabulated
     - Surface normals are computed from lookup tables derived from calibration experiments
     - Reconstruction is done using interpolation followed by smoothing
  54. Stereo Highlight Algorithm
     - The assumption of a distant source to uniquely identify the angle of incidence of illumination is an approximation
     - To improve on this, a second camera is used with stereo matching for greater accuracy
  55. Results
     - With two cameras, stereo matching ambiguities must be resolved, which requires further constraints
     - The technique is slow (1988)
  56. Stereo Techniques ________________________
  57. Stereo in the Presence of Specular Reflection, Dinkar N. Bhat and Shree K. Nayar, Columbia University
  58. Introduction
     - Stereo is a direct method of obtaining the 3D structure of the visual world
     - But it suffers from the fact that the correspondence problem is inherently underconstrained
  59. Correspondence Problem
     - The most common constraint is that the intensities of corresponding points in the images are identical
     - This assumption is not valid for specular surfaces, since intensity is dependent on viewing direction
  60. Specular Reflection
     - When a specular surface is smooth, the distribution of the specular intensity is concentrated
     - As the surface becomes rougher, the peak value of the specular intensity decreases and the distribution widens
  61. Specular Reflection: Smooth Surface / Rough Surface
  62. Implications for Stereo
     - The total image intensity at any point is the sum of the diffuse and specular intensity components
     - Since the change in the diffuse component is very small relative to the change in the specular component, the overall change in intensity is approximately equal to the difference in specular intensities:
       Idiff ≈ |Is1 − Is2|
  63. Implications for Stereo
     - This approximation assists in determining an optimal binocular stereo configuration, one which minimises specular correspondence problems but maximises precision in depth estimation
  64. Binocular Stereo Configuration
  65. Vergence
     - When the cameras are oriented such that their optical axes intersect at a point in space, this point is referred to as the vergence point
     - Depth accuracy is directly proportional to vergence (which conflicts with the requirement to minimise intensity differences)
  66. Binocular Stereo
     - Determining the maximum acceptable vergence can be formulated as a constrained optimization problem:
       fobj = v1 · v2
       c1: Idiff is below a specified threshold
       c2: the cameras lie in the X-Z plane
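A numeric sketch of the trade-off above: pick the largest vergence whose specular intensity difference stays under the threshold. The intensity-difference model here is a stand-in assumption (simply monotone in vergence), not the paper's physical model; only the search structure is the point.

```python
import math

# Constrained search sketch: largest vergence angle (degrees) whose
# specular intensity difference stays below the threshold.

def intensity_diff(vergence_deg, roughness=0.2):
    # Stand-in model: difference grows with vergence, saturating at 1.
    return 1.0 - math.exp(-math.radians(vergence_deg) / roughness)

def best_vergence(threshold, step=0.5, max_deg=60.0):
    best, v = 0.0, 0.0
    while v <= max_deg:
        if intensity_diff(v) < threshold:
            best = v            # feasible: keep the larger vergence
        v += step
    return best

print(best_vergence(threshold=0.5))  # 7.5
```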
  68. Experiments
     - Two uniformly rough cylindrical objects were wrapped, one in gift wrapper and the other in xerox paper
     - Similar patterns were marked on both
  69. Trinocular Stereo
     - Required in environments which are less structured and where surface roughness cannot be estimated
     - Allows the intensity difference at a point to be constrained to a threshold in at least one of the stereo pairs
  70. Trinocular Stereo
  71. Experiments
     - The experiments indicate that the reconstruction algorithm works reasonably well in an unconstrained environment
  72. Retrieving Shape Information from Multiple Images of a Specular Surface, Howard Schultz, University of Massachusetts
  73. Introduction
     - This research extends a diffuse multi-image shape-from-shading technique to the specular domain
  74. Viewing Geometry
     - Assumes an ideal camera with focal length f viewing a surface
     - The camera focal point is located at P, and O is a point on the surface
     - From Snell's Law, an equation can be derived relating the object's position in space to its image on the image plane
  75. Viewing Geometry
  76. Image Synthesis
     - The specular surface stereo method requires a model that accurately predicts the irradiance at each pixel
     - An idealized image-synthesis model is used
     - This allows us to determine that the irradiance is directly proportional to the product of the radiance and the reflection coefficient
  77. Specular Surface Stereo
     - Starting at a known elevation, an iterative process is used to determine shape
     - A two-step process: determine orientation, then propagate
  78. Surface Orientation
     - Identify the pixels that view the surface point (by calculating an inverse of a projective transform)
     - A value of (p, q) is found such that the predicted irradiance E(p, q) matches the observed values
  79. Surface Propagation
     - If a point is known on a surface, it is possible to recover shape by propagation
     - If (x, y) has elevation h and gradient (p, q), then (x + Δx, y + Δy) has elevation h' = h + pΔx + qΔy
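The propagation rule above can be sketched along a single scan line (Δy = 0): each step applies h' = h + pΔx from the seed elevation. The gradient values are illustrative.

```python
# Surface propagation sketch: integrate a row of gradient values p,
# starting from a known seed elevation h0, via h' = h + p * dx.

def propagate_row(h0, p_row, dx=1.0):
    heights = [h0]
    for p in p_row:
        heights.append(heights[-1] + p * dx)
    return heights

row = propagate_row(h0=0.0, p_row=[0.5, 0.5, 0.5, 0.5])
print(row)  # [0.0, 0.5, 1.0, 1.5, 2.0]
```

A full 2D reconstruction repeats this over the grid, which is why errors accumulate away from the seed and why the seed values of the next slide matter.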
  80. Obtaining Seed Values
     - If there are surface features with diffuse properties (e.g. scratches or rough spots), use feature-matching methods
     - If the surface is smooth, use a laser range finder
  81. Results
     - Tests were done on four simulated images to determine the feasibility of the method; the results were 99% accurate
     - Using this method in the 'real world' would require more constraints
  82. Motion Techniques ________________________
  83. A Theory of Specular Surface Geometry, Michael Oren and Shree K. Nayar, Columbia University
  84. Introduction
     - Develops a 2D profile-recovery technique and generalizes it to 3D surface recovery
     - Two major issues are associated with specular surfaces:
       - detection
       - shape recovery
  85. Introduction
     - Specular surfaces introduce a new kind of image feature, the virtual feature
     - A virtual feature is the reflection by a specular surface of another scene point; it travels over the surface when the observer moves
  86. Curve Representation
     - Cartesian co-ordinates result in complex equations describing specular motion
     - Using the Legendre transform, the curve is instead represented as an envelope of tangents
  87. Curve Representation
  88. 2D Caustics
     - When a camera moves around an object, the virtual features move on the specular surface, producing a family of reflected rays (the envelope defined by this family is called the caustic)
     - The caustic of a real feature, on the other hand, is a single point (the actual position of the feature in the scene, where all the reflected rays intersect)
  89. Test Image
  90. 2D Caustics
     - Using this, feature classification is simply a matter of computing a caustic and determining whether it is a point or a curve
     - Features are tracked from one frame to the next using a sum-of-squared-differences (SSD) correlation operator
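SSD tracking can be sketched as an exhaustive patch search: slide the template from one frame over the next and keep the offset with the smallest sum of squared differences. The synthetic frame below is illustrative.

```python
import numpy as np

# SSD correlation sketch: find the (row, col) offset in `search` whose
# window best matches `template` under sum-of-squared-differences.

def ssd_match(template, search):
    th, tw = template.shape
    best, best_xy = None, None
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            d = search[y:y+th, x:x+tw] - template
            ssd = float(np.sum(d * d))
            if best is None or ssd < best:
                best, best_xy = ssd, (y, x)
    return best_xy

frame = np.zeros((8, 8))
frame[3:5, 4:6] = 1.0              # a bright feature
template = frame[3:5, 4:6].copy()
print(ssd_match(template, frame))  # (3, 4)
```

Real trackers restrict the search to a window around the previous position and subsample for speed; the brute-force scan here is just the definition.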
  91. 2D Profile Recovery
     - The camera is moved in the plane of the profile and the features are tracked
     - An equation may be derived relating the caustic to the surface profile, allowing the recovery of the 2D profile from the image
  92. 3D Surface Recovery
     - 3D camera motion results in an arbitrary space curve rather than a family of curves as in the 2D case
     - The 3D problem cannot be reduced to a finite number of 2D profile problems
  93. 3D Surface Recovery
     - The concept behind the derivation of the 3D caustic curve is to decompose the caustic point's position at any given instant into two orthogonal components
     - As the camera moves along the specular object, a virtual feature travels along the 3D profile on the object's surface
     - It is possible to develop an equation which relates the trajectory of the virtual feature to the surface profile
  94. Results
     - The 2D testing involved tracking two features on two different specular surfaces; in both experiments the profile was accurately estimated
     - The 3D testing involved tracking a highlight on a specular surface; the recovered curve is in strong agreement with the actual surface
  95. Epipolar Geometry ________________________
  96. Epipolar Geometry
     - Two cameras are displaced from each other by a baseline distance
     - An object point X forms two distinct image points x and x'
  97. Epipolar Geometry
     - Assume images are formed in front of the camera to avoid the inversion problem
     - The point (x', y') in the image plane from a real point (x, y, z) may be calculated as x' = fx/z and y' = fy/z
     - The displacement between the locations of the image points is called the disparity
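For the parallel-camera case, disparity converts directly to depth: z = f·b/d, with b the baseline and d the disparity. A minimal sketch with illustrative numbers:

```python
# Depth from disparity for parallel cameras: z = f * b / d, where
# d = x'_left - x'_right. Zero disparity corresponds to a point at
# infinity. Focal length, baseline and disparity below are illustrative.

def depth_from_disparity(f, baseline, disparity):
    if disparity == 0:
        return float("inf")
    return f * baseline / disparity

z = depth_from_disparity(f=2.0, baseline=0.5, disparity=0.1)
print(z)  # 10.0
```

Note the inverse relation: small disparities mean distant points, so depth error grows quadratically with distance, which is why the large baselines of the next paper help.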
  98. Epipolar Geometry
     - The plane passing through the two camera centres and the object point is called the epipolar plane
     - The intersection of the image plane and the epipolar plane is called the epipolar line
  99. Generalizing Epipolar-Plane Image Analysis on the Spatiotemporal Surface, H. Harlyn Baker and Robert C. Bolles, SRI International
  100. Introduction
     - Epipolar-plane image analysis obtains depth estimates for a point by taking a large number of images
     - This gives a large baseline and higher accuracy
     - It also minimises the correspondence problem
  101. Epipolar-Plane Image Analysis
     - The technique imposes the following constraints:
       - the camera moves along a linear path
       - it acquires images at equal spacings as it moves
       - the camera's view is orthogonal to the direction of travel
  102. Epipolar-Plane Image Analysis
     - The traditional notion of epipolar lines is generalized to an epipolar plane
     - Using this, plus the fact that the camera always moves along a linear path, we may conclude that a given scene feature will always be restricted to a given epipolar plane
  103. Epipolar-Plane Image Analysis
  104. The Spatiotemporal Surface
     - As images are collected, they are stacked up into a spatiotemporal surface
     - As each new image is obtained, its spatial and temporal edge contours are constructed using a 3D Laplacian of a 3D Gaussian
  105. The Spatiotemporal Surface
  106. 3D Surface Estimation and Model Construction From Specular Motion in Image Sequences, Jiang Yu Zheng and Norihiro Abe, Kyushu Institute of Technology; Yoshihiro Fukagawa, Torey Corporation
  107. Introduction
     - This technique reconstructs 3D models of complex objects with specular surfaces
     - The process involves rotating the object under inspection
  108. System Setup
  109. Projected Highlights
     - An extended light source projects highlight stripes onto the object
     - The stripes gradually shift across the object surface and pass most points once
     - The specular motion is captured in epipolar-plane images (EPIs)
  110. Feature Tracking
     - We know how to detect corners and edges of surface patterns
     - The motion type of highlights in an EPI can be used to distinguish five categories of shape:
       - convex corner
       - convex
       - planar
       - concave
       - concave corner
  111. EPI-Plane Images
     - During the rotation, highlights will split and merge, appear and disappear, etc.
  112. Results
     - Using EPIs results in very accurate reconstruction of surface shapes
  113. Solder Joint Inspection ____________________________
  114. Visual Inspection System for the Classification of Solder Joints, Tae-Hyeon Kim, Young Shik Moon and Sung Han Park, Hanyang University; Kwang-Jin Yoon, LG Industrial Systems
  115. Introduction
     - Uses three layers of ring-shaped LED arrays with different illumination angles
     - Solder joints are segmented and classified using either their 2D features or their 3D features
  116. Classification of Joints
  117. Preprocessing
     - The objective is to identify and segment the soldered regions
     - Solder is isolated both vertically and horizontally
  118. Feature Extraction - 2D
     - Average gray-level value of I1 and I3:
       X1 = (1/N) Σ Ik(x, y)
     - Percentage of highlights in I1 and I2:
       X2 = (1/N) Σ U(x, y) × 100
       where U(x, y) is the thresholded image of I1
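The two 2D features can be sketched on a synthetic image; the highlight threshold below is an illustrative assumption, not the paper's value:

```python
import numpy as np

# 2D feature sketch: X1 is the mean gray level over the N pixels of the
# segmented region; X2 is the percentage of pixels whose gray level
# exceeds a highlight threshold (U is the thresholded image).

def features_2d(image, highlight_threshold=200):
    n = image.size
    x1 = image.sum() / n                 # average gray level
    u = image >= highlight_threshold     # thresholded image U(x, y)
    x2 = u.sum() / n * 100               # percentage of highlights
    return x1, x2

img = np.array([[0, 0, 255, 255],
                [0, 0, 255, 255],
                [0, 0, 0, 0],
                [0, 0, 0, 0]], dtype=float)
x1, x2 = features_2d(img)
print(x1, x2)  # 63.75 25.0
```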
  119. Feature Extraction - 3D
     - Shape recovery is done using a hybrid reflectance model for all samples not in the confidence interval
     - A reflectance map is built up representing intensity values as a function of orientation for each illumination angle
     - For each point, three intensity values are recovered; from these and the reflectance map, the orientation is estimated
  120. Classification - 2D
     - Uses a 3-layer backpropagation neural network
     - Four input nodes for the four features
     - Five hidden-layer nodes
     - Four output nodes for the four solder types
  121. Classification - 3D
     - A Bayes classifier assuming a Gaussian distribution
  122. Inspection System
  123. Results
  124. Results