Use of Specularities and Motion in the Extraction of Surface Shape Damian Gordon [email_address]
Introduction Image Geometry, Photometric Stereo (1), Structured Highlights (1), Stereo Techniques (2), Motion Techniques (3), Solder Joint Inspection (1)
Specular Surface Angle of Incidence = Angle of Reflection
Image Geometry ________________________
Image Formation Geometry - determines where in the image plane the projection of a point in a scene will be located Physics of Light - determines the brightness of a point in the image plane as a function of scene illumination and surface properties
Image Formation
Image Formation The LINE OF SIGHT of a point in the scene is the line that passes through the point of interest and the centre of projection The above model leads to image inversion; to avoid this, assume the image plane is in front of the centre of projection
Image Formation
Perspective Projection (x’,y’) may be found by computing the co-ordinates of the intersection of the line of sight, passing through (x,y,z), and the image plane By two sets of similar triangles: x’=fx/z and y’=fy/z
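To make the projection concrete, here is a minimal Python sketch of the pinhole model above (the focal length f = 1.0 and the sample point are arbitrary illustrative values, not from the slides):

```python
def project(x, y, z, f=1.0):
    """Pinhole perspective projection: (x, y, z) -> (f*x/z, f*y/z)."""
    if z == 0:
        raise ValueError("point lies in the plane of the centre of projection")
    return f * x / z, f * y / z

print(project(2.0, 1.0, 4.0))  # -> (0.5, 0.25)
```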
Image Irradiance (Brightness) The irradiance of a point in the image plane E(x’,y’) is determined by the amount of energy radiated by the corresponding scene point in the direction of the image point: E(x’,y’)=L(x,y,z) Two factors determine the radiance emitted by a surface patch I) Illumination falling on the scene patch - determined by the patch’s position relative to the distribution of light sources II) Fraction of incident illumination reflected by the patch - determined by optical properties of the patch
Image Irradiance (θi, φi) is the direction of the point source of scene illumination (θe, φe) is the direction of the energy emitted from the surface patch E(θi, φi) is the energy arriving at a patch L(θe, φe) is the energy radiated from the patch
Image Irradiance The relationship between radiance and irradiance may be defined as follows: L(θe, φe) = f(θi, φi; θe, φe) E(θi, φi), where f(θi, φi; θe, φe) is the bidirectional reflectance distribution function (BRDF) BRDF - depends on optical properties of the surface
Types of Reflectance Lambertian Reflectance Specular Reflectance Hybrid Reflectance Electron Microscopy Reflectance  (not covered)
Lambertian Reflectance Appears equally bright from all viewing directions for a fixed illumination distribution Does not absorb any incident illumination BRDF is a constant (1/π)
Lambertian Reflectance - Point Source Perceived brightness illuminated by a distant point source: L(θe, φe) = (1/π) E0 cos θs -- Lambert Cosine Rule this means a surface patch captures the most illumination if it is orientated so that the surface normal of the patch points in the direction of illumination
Lambertian Reflectance - Uniform Source Perceived brightness illuminated by a uniform source: L(θe, φe) = E0 this means that no matter how a surface patch is oriented, it receives the same amount of illumination
Specular Reflectance Reflects all incident illumination in a direction that has the same angle with respect to the surface normal, but on the opposite side of the surface normal light in the direction (θi, φi) is reflected to (θe, φe) = (θi, φi + π) BRDF is δ(θe − θi) δ(φe − φi − π) / (sin θi cos θi)
Specular Reflectance Perceived brightness is L(θe, φe) = E(θe, φe − π) this means the incoming rays of light are reflected from the surface like a perfect mirror
Hybrid Reflectance Mixture of Lambertian and Specular reflectance BRDF is η δ(θe − θi) δ(φe − φi − π) / (sin θi cos θi) + (1 − η)/π where η is the mixture of the two reflectance functions
Surface Orientation If (x,y,z) is a point on a surface and (x,y) is the same point on the image plane, with distance z from the camera (depth), then a nearby point is (x+δx, y+δy) the change in depth can be expressed as δz = (∂z/∂x)δx + (∂z/∂y)δy
Surface Orientation The sizes of the partial derivatives of z with respect to x and y are related to the orientation of the surface patch. The gradient of (x,y,z) is the vector (p,q), given by p = ∂z/∂x and q = ∂z/∂y
Reflectance Map For a given light source distribution and a given surface material, the reflectance of all surface orientations of p and q can be catalogued or computed to yield the reflectance map R(p,q) which  leads to the  image irradiance equation E(x,y) = R(p,q)
Reflectance Map i.e., the irradiance at a point in the image plane is equal to the reflectance-map value for the surface orientation (p,q) at the corresponding point in the scene in other words, given a change in surface orientation, the reflectance map allows you to calculate the change in image intensity.
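As an illustration (not part of the original slides), a minimal numpy sketch of the standard Lambertian reflectance map, tabulating R(p,q) for an assumed source gradient (ps, qs) = (0.5, 0.3):

```python
import numpy as np

def lambertian_R(p, q, ps, qs):
    """Lambertian reflectance map: cosine of the angle between the surface
    normal (-p, -q, 1) and the source direction (-ps, -qs, 1), clipped at 0."""
    num = 1.0 + p * ps + q * qs
    den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
    return np.maximum(num / den, 0.0)  # self-shadowed orientations map to 0

# Catalogue R over a small grid of surface orientations (p, q)
p, q = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
print(lambertian_R(p, q, ps=0.5, qs=0.3))
```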
Shape from Shading the opposite (inverse) problem: we know E(x,y) = R(p,q), so we need to calculate p and q for each point (x,y) in the image Two unknowns, one equation, therefore a constraint must be applied.
Shape from Shading Smoothness constraint Objects are made of smooth surfaces, which depart from smoothness only along their edges may be expressed as minimising the departure-from-smoothness measure e = ∫∫ (px² + py² + qx² + qy²) dx dy
Shape from Shading
Photometric Stereo Assume a scene with Lambertian reflectance Each point (x,y) will have brightness E(x,y) and possible orientations p and q for a given light source if the same surface is illuminated by a point source in a different location, the reflectance map will be different
Photometric Stereo Using this method, surface orientation may be uniquely identified In reality, not all incident light is radiated from a surface; this is accounted for by adding an albedo factor ρ into the image irradiance equation: E(x,y) = ρ R(p,q)
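A minimal sketch (with made-up source directions and normal, not from the slides) of how three images pin down both albedo and orientation at a pixel, assuming the Lambertian model E = ρ(n · s):

```python
import numpy as np

def photometric_stereo(E, S):
    """Recover albedo and unit normal at one pixel from three brightnesses E
    measured under the three known unit source directions in the rows of S."""
    g = np.linalg.solve(S, E)   # g = albedo * normal
    rho = np.linalg.norm(g)
    return rho, g / rho

# Hypothetical setup: three source directions and a known test normal
S = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
n = np.array([0.1, 0.2, 0.97]); n /= np.linalg.norm(n)
E = 0.8 * S @ n                 # synthetic brightnesses, albedo = 0.8
print(photometric_stereo(E, S)) # recovers (0.8, n)
```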
Photometric Stereo ________________________
Determining Surface Orientations of Specular Surfaces by Using the Photometric Stereo Method Katsushi Ikeuchi Ministry of International Trade and Industry, Japan
Introduction Photometric stereo may be used to determine the surface orientation of a patch for diffuse surfaces, point-source illumination is used for specular surfaces, a distributed light source is required
Image Radiance For a specular surface and an extended light source: L_e(θe, φe) = L_i(θe, φe − π) Relationship between reflected radiance and image irradiance: E_p = {(π/4)(d/f_p)² cos⁴α} L_e f_p = focal length d = diameter of aperture α = off-axis angle
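For intuition, a small Python sketch of the irradiance formula above, with illustrative lens numbers (a 25 mm lens at f/2, 10 degrees off-axis) that are not from the paper:

```python
import math

def image_irradiance(L_e, d, f_p, alpha):
    """E_p = (pi/4) * (d / f_p)**2 * cos(alpha)**4 * L_e"""
    return (math.pi / 4.0) * (d / f_p) ** 2 * math.cos(alpha) ** 4 * L_e

print(image_irradiance(L_e=1.0, d=12.5e-3, f_p=25e-3, alpha=math.radians(10)))
```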
Image Radiance from this a brightness distribution may be derived and from that an inverse transformation
System Implementation Two Stage Process Off-Line Job On-Line Job
Off-Line Job Light Source: Three linear lamps, placed symmetrically 120 degrees apart Lookup Table: Could use a 3D table, but observed triples often contain errors Instead use a 2D lookup table - each element has two alternatives Each alternative consists of a surface orientation and an intensity
Off-Line Job
On-Line Job Normalization is required to cancel the effect of albedo Brightness calibration is also required The correct alternative of the two solutions is found by comparing the distance between the actual third image brightness and the element of the matrix
Results Works well in a constrained environment has problems if the surface is not smooth
Extracting the Shape and Roughness of Specular Lobe Objects Using Four Light Photometric Stereo Fredric Solomon Katsushi Ikeuchi Carnegie Mellon
Structured Highlights __________________________
Structured Highlight Inspection of Specular Surfaces Arthur C. Sanderson Lee E. Weiss Shree K. Nayar Carnegie Mellon
Introduction The Structured Highlight approach yields 3D surface information from an array of point sources and the resulting images ‘Highlight’ - a light source reflected on a specular surface
Introduction Angle of Incidence = Angle of Reflection A fixed camera will image a reflected light ray (highlight) only if it is positioned and oriented correctly
Introduction Once a highlight is observed, if the direction of the incident ray is known, the orientation of the surface element may be found A spherical array of fixed point light sources is used to ensure all positions and directions are scanned
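A minimal sketch of this geometric idea (not code from the paper): when a highlight is observed, the surface normal must bisect the source and viewing directions, so it can be recovered as the normalised half-vector:

```python
import numpy as np

def normal_from_highlight(s, v):
    """Surface normal at a highlight: the unit bisector of the (unit)
    source direction s and viewing direction v."""
    h = s / np.linalg.norm(s) + v / np.linalg.norm(v)
    return h / np.linalg.norm(h)

# Hypothetical source and viewer directions
print(normal_from_highlight(np.array([0.5, 0.0, 0.866]),
                            np.array([0.0, 0.0, 1.0])))
```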
Lambertian Reflectance The reflectance relationship for a Lambertian model of image irradiance E(x,y): E(x,y) = A (n · s) n = surface normal (unit vector) s = source direction (unit vector) A = constant related to illumination intensity and surface albedo
Hybrid Reflectance The reflectance relationship for a hybrid model of image irradiance E(x,y): E(x,y) = A k (n · s) + (A/2)(1 − k)[2(n · z)(n · s) − (z · s)]ⁿ z = viewing direction (unit vector) k = relative weight of specular and Lambertian components n = sharpness of the specularity
Structured Highlight Inspection Using the above equation, the slope at any point may be calculated Surface orientation may be determined by the sources that produce local peaks in the reflectance map.
Camera Models Perspective Camera Model Orthographic Projection Model “Fixed” Camera Model
Perspective Camera Model All reflected rays pass through a focal point this model provides very accurate measurements, but requires extensive calibration procedures
Orthographic Projection Model the focal point is assumed to be an infinite distance from the camera and all the reflected rays are perpendicular to the image plane
“Fixed” Camera Model all rays are emitted from a single point on the reflectance plane and all surface normal estimates are computed relative to that reference point
Camera Models - Accuracy Perspective Camera Model Most accurate “Fixed” Camera Model Next most accurate Orthographic Projection Model Most sensitive to error
SHINY - Structured Highlight INspection sYstem Highlights are extracted from images and tabulated Surface normals are computed based on lookup tables derived from calibration experiments Reconstruction is done using interpolation followed by smoothing
Stereo Highlight Algorithm The assumption of a distant source to uniquely identify the angle of incidence of illumination is an approximation To improve this, a second camera is used with stereo matching for greater accuracy
Results With two cameras, stereo matching ambiguities need to be resolved; therefore, further constraints are needed This technique is slow (1988)
Stereo Techniques ________________________
Stereo in the Presence of Specular Reflection Dinkar N. Bhat Shree K. Nayar Columbia University
Introduction Stereo is a direct method of obtaining the 3D structure of the visual world But, it suffers from the  fact that the  correspondence problem  is inherently underconstrained
Correspondence Problem the most common constraint is that intensities of corresponding points in images are identical The assumption is not valid for specular surfaces (since intensity is dependent on viewing direction)
Specular Reflection When a specular surface is smooth, the distribution of the specular intensity is concentrated As the surface becomes rougher, the peak value of the specular intensity decreases and the distribution widens
Specular Reflection Smooth Surface Rough Surface
Implications for Stereo The total image intensity at any point is the sum of the diffuse and specular intensity components Since the change in the diffuse component is very small relative to the change in the specular component, it follows that the overall difference in intensity between the two views is approximately equal to the difference in specular intensities: I_diff ≈ |I_s1 − I_s2|
Implications for Stereo This approximation will assist in determining an optimal binocular stereo configuration, which minimises specular correspondence problems but maximises precision in depth estimation
Binocular Stereo Configuration
Vergence When cameras are orientated such that their optical axes intersect at a point in space, this point is referred to as the point of vergence Depth accuracy is directly proportional to vergence (…which conflicts with the requirement to minimize intensity differences)
Binocular Stereo Determining the maximum acceptable vergence can be formulated as a constrained optimization problem f_obj = v1 · v2 c1: I_diff < a specified threshold c2: the cameras lie in the X-Z plane
Experiments Two uniformly rough cylindrical objects were wrapped, one in gift wrap and the other in xerox paper Similar patterns were marked on both
 
Trinocular Stereo Required in environments which are less structured and where surface roughness cannot be estimated Allows intensity difference at a point to be constrained to a threshold in at least one of the stereo pairs
Trinocular Stereo
Experiments The experiments done indicate that the reconstruction algorithm works reasonably well in an unconstrained environment
Retrieving Shape Information from Multiple Images of a Specular Surface Howard Schultz University of Massachusetts
Introduction This research extends a diffuse multi-image shape-from-shading technique to perform in the specular domain
Viewing Geometry Assumes an ideal camera with focal length f viewing a surface The camera focal point is located at P and O is a point on the surface From Snell’s Law an equation can be derived relating the object's position in space to its image on the image plane
Viewing Geometry
Image Synthesis the specular surface stereo method requires a model that accurately predicts the irradiance at each pixel Use an Idealized Image Synthesis Model this allows us to determine that the irradiance is directly proportional to the product of the radiance and the reflection coefficient
Specular Surface Stereo Starting at a known elevation, an iterative process is used to determine shape Two-step process: determine orientation, then propagate
Surface Orientation Identify the pixels that view the surface point (by calculating an inverse of a projective transform) A value of (p,q) is found such that the predicted irradiance E(p,q) matches the observed values
Surface Propagation if a point is known on a surface, it is possible to recover shape by propagation If (x,y) has elevation h and gradient (p,q) then (x+δx, y+δy) has elevation h′ = h + p δx + q δy
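A toy numpy sketch of this propagation step (the scan order and seed value are illustrative assumptions, not the paper's algorithm): integrating a gradient field (p, q) outward from a seed elevation recovers the surface:

```python
import numpy as np

def propagate_heights(p, q, h0=0.0, dx=1.0, dy=1.0):
    """Propagate h' = h + p*dx + q*dy from a seed at pixel (0, 0):
    first along the top row, then down each column."""
    rows, cols = p.shape
    h = np.zeros((rows, cols))
    h[0, 0] = h0
    for x in range(1, cols):
        h[0, x] = h[0, x - 1] + p[0, x - 1] * dx
    for y in range(1, rows):
        h[y, :] = h[y - 1, :] + q[y - 1, :] * dy
    return h

# A constant gradient field integrates to a planar surface
print(propagate_heights(np.full((4, 4), 0.5), np.full((4, 4), -0.25)))
```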
Obtaining Seed Values if there are surface features with diffuse properties (e.g. scratches or rough spots), use feature matching methods if the surface is smooth, use a laser range finder
Results Tests were done on four simulated images to determine the feasibility of the method; the results were 99% accurate Using this method in the ‘real world’ would require more constraints
Motion Techniques ________________________
A Theory of Specular Surface Geometry Michael Oren Shree K. Nayar Columbia University
Introduction Develops a 2D profile recovery technique and generalizes it to 3D surface recovery Two major issues associated with specular surfaces: detection shape recovery
Introduction Specular surfaces introduce a new kind of image feature, a  virtual feature A virtual feature is the reflection by a specular surface of another scene point which travels over the surface when the observer moves.
Curve Representation Cartesian co-ordinates result in complex equations describing specular motion The Legendre transform is used instead, representing the curve as an envelope of tangents
Curve Representation
2D Caustics When a camera moves around an object the virtual features move on the specular surface, producing a family of reflected rays (the envelope defined by this family is called the  caustic )  On the other hand, the caustic of a real feature is one single point (the actual position of the feature in the scene where all the reflected rays intersect)
Test Image
2D Caustics Using this, feature classification is simply a matter of computing a caustic and determining whether it is a point or a curve Features are tracked from one frame to the next using a sum of squared differences (SSD) correlation operator
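As a sketch of SSD tracking (a generic implementation, not the authors' code): the feature's window from one frame is matched against every window of the next frame, and the minimum-SSD position wins:

```python
import numpy as np

def ssd_track(template, image):
    """Return the (row, col) of the window in `image` with the smallest
    sum of squared differences against `template`."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            ssd = np.sum((image[y:y+th, x:x+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

# Toy frames: a 3x3 bright feature moves one pixel to the right
frame1 = np.zeros((8, 8)); frame1[2:5, 2:5] = 1.0
frame2 = np.zeros((8, 8)); frame2[2:5, 3:6] = 1.0
print(ssd_track(frame1[2:5, 2:5], frame2))  # -> (2, 3)
```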
2D Profile Recovery The camera is moved in the plane of the profile and the features are tracked An equation may be derived relating the caustic to the surface profile, allowing the recovery of the 2D profile from the image.
3D Surface Recovery The 3D camera motion problem will result in an arbitrary space curve rather than a family of curves as in the 2D case The 3D problem cannot be reduced to a finite number of 2D profile problems
3D Surface Recovery The concept behind the derivation of the 3D caustic curve is to decompose the caustic point position at any given instant into two orthogonal components As the camera moves along the specular object, a virtual feature travels along the 3D profile on the object's surface. It is possible to develop an equation which relates the trajectory of the virtual feature to the surface profile
Results The 2D testing involved tracking two features on two different specular surfaces, in both experiments the profile was accurately estimated The 3D testing involved tracking a highlight on a specular surface, the recovered curve is in strong agreement with the actual surface
Epipolar Geometry ________________________
Epipolar Geometry two cameras are displaced from each other by a  baseline distance Object point X forms two distinct image points x and x’
Epipolar Geometry Assume images are formed in front of the camera to avoid the inversion problem the image-plane point (x’, y’) of a scene point (x, y, z) may be calculated as x’ = fx/z and y’ = fy/z the displacement between the locations of the two image points is called the disparity
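For parallel (non-verged) cameras, disparity converts directly to depth; a minimal sketch with made-up numbers:

```python
def depth_from_disparity(f, baseline, x_left, x_right):
    """Depth z = f * baseline / d, where d = x_left - x_right is the
    disparity between corresponding image points (parallel cameras)."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f * baseline / d

# f = 0.05 m, baseline = 0.1 m, disparity = 0.001 m  ->  depth 5 m
print(depth_from_disparity(0.05, 0.1, 0.0125, 0.0115))
```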
Epipolar Geometry the plane passing through the two camera centres and the object point is called the  epipolar plane the intersection of the image plane and the epipolar plane is called the  epipolar line
Generalizing Epipolar-Plane Image Analysis on the Spatiotemporal Surface H. Harlyn Baker Robert C. Bolles SRI International
Introduction The technique of  Epipolar-Plane Image Analysis  involves obtaining depth estimates for a point by taking a large number of images This gives a large baseline and higher accuracy It also minimises the correspondence problem
Epipolar-Plane Image Analysis this technique imposes the following constraints the camera is moving along a linear path it acquires images at equal spacing as it is moved the camera’s view is orthogonal to the direction of travel
Epipolar-Plane Image Analysis the traditional notion of epipolar lines is generalized to an epipolar plane using this, plus the fact that the camera is always moving along a linear path, we may conclude that a given scene feature will always be restricted to a given epipolar plane
Epipolar-Plane Image Analysis
The Spatiotemporal Surface As images are collected, they are stacked up into a spatiotemporal surface as each new image is obtained its spatial and temporal edge contours are constructed using a 3D Laplacian of a 3D Gaussian
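A small scipy sketch of that filtering step on a toy volume (random data standing in for a real image stack; the sigma and threshold are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Stacked frames form a spatiotemporal volume indexed (t, y, x); a 3D
# Laplacian of Gaussian responds to spatial and temporal edges together.
volume = np.random.default_rng(0).random((16, 64, 64))
log = gaussian_laplace(volume, sigma=2.0)
edges = np.abs(log) > 3.0 * log.std()   # keep strong responses (toy threshold)
print(edges.sum(), "candidate edge voxels")
```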
The Spatiotemporal Surface
3D Surface Estimation and Model Construction From Specular Motion in Image Sequences Jiang Yu Zheng Norihiro Abe Kyushu Institute of Technology Yoshihiro Fukagawa Torey Corporation
Introduction This technique reconstructs 3D models of complex objects with specular surfaces The process involves rotating the object under inspection
System Setup
Projected Highlights An extended light source projects highlight stripes onto the object The stripes gradually shift across the object surface and pass over most points once The specular motion is captured in epipolar-plane images
Feature tracking We know how to detect corners and edges of surface patterns The motion type of highlights in an EPI can be used to determine five categories of shape: convex corner convex planar concave concave corner
EPI-Plane Images During the rotation, highlights will split and merge, appear and disappear, etc.
Results Using EPIs results in very accurate reconstruction of surface shapes
Solder Joint Inspection ____________________________
Visual Inspection System for the Classification of Solder Joints Tae-Hyeon Kim Young Shik Moon Sung Han Park Hanyang University Kwang-Jin Yoon LG Industrial Systems
Introduction Uses three layers of ring-shaped LED arrays with different illumination angles Solder joints are segmented and classified using either their 2D features or their 3D features
Classification of Joints
Preprocessing Objective is to identify and segment the soldered regions Solder is isolated both vertically and horizontally
Feature Extraction - 2D Average gray-level value of I1 and I3: X1 = (1/N) Σ I_k(x,y) Percentage of highlights in I1 and I2: X2 = (1/N) Σ U(x,y) × 100 U(x,y) = thresholded image of I1
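A direct numpy transcription of these two features (the highlight threshold of 200 and the random test image are assumed values, not from the paper):

```python
import numpy as np

def feature_x1(image):
    """X1: average gray level, (1/N) * sum of I(x, y)."""
    return image.mean()

def feature_x2(image, threshold=200):
    """X2: percentage of highlight pixels, (1/N) * sum of U(x, y) * 100,
    where U is the image thresholded at an assumed highlight level."""
    return (image >= threshold).mean() * 100.0

solder = np.random.default_rng(1).integers(0, 256, size=(32, 32))
print(feature_x1(solder), feature_x2(solder))
```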
Feature Extraction - 3D Shape recovery is done using a hybrid reflectance model for all samples not in the confidence interval A reflectance map is built up representing intensity values as a function of orientation for each illumination angle  For each point, three intensity values are recovered and from these and the reflectance map, the orientation is estimated
Classification -2D Uses 3-Layer backpropagation neural network Four input nodes for four features Five hidden layer nodes Four output nodes for four solder types
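A compact numpy sketch of such a 4-5-4 network with plain backpropagation (the training data, learning rate and squared-error loss are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (4, 5)), np.zeros(5)  # 4 inputs -> 5 hidden
W2, b2 = rng.normal(0, 0.1, (5, 4)), np.zeros(4)  # 5 hidden -> 4 outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def train_step(x, target, lr=0.5):
    """One backpropagation step on the squared error."""
    global W1, b1, W2, b2
    h, y = forward(x)
    d2 = (y - target) * y * (1 - y)        # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)         # hidden-layer delta
    W2 -= lr * np.outer(h, d2); b2 -= lr * d2
    W1 -= lr * np.outer(x, d1); b1 -= lr * d1

x = np.array([0.3, 0.7, 0.1, 0.9])         # four hypothetical features
t = np.array([0.0, 1.0, 0.0, 0.0])         # one-hot solder-type target
for _ in range(1000):
    train_step(x, t)
print(forward(x)[1].round(2))              # output approaches the target
```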
Classification - 3D Bayes Classifier assuming Gaussian Distribution
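A generic sketch of a Gaussian Bayes classifier over 3D feature vectors (the class means, sample counts and covariance regularisation are illustrative, not from the paper):

```python
import numpy as np

class GaussianBayes:
    """Assign the class maximising the Gaussian log-likelihood
    plus the log prior, estimated per class from training data."""
    def fit(self, X_by_class):
        n = sum(len(X) for X in X_by_class)
        self.params = []
        for X in X_by_class:
            mu = X.mean(axis=0)
            cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params.append((mu, cov, np.log(len(X) / n)))
        return self

    def predict(self, x):
        scores = [
            -0.5 * ((x - mu) @ np.linalg.solve(cov, x - mu)
                    + np.log(np.linalg.det(cov))) + log_prior
            for mu, cov, log_prior in self.params
        ]
        return int(np.argmax(scores))

rng = np.random.default_rng(2)
classes = [rng.normal(m, 0.3, (50, 3)) for m in (0.0, 1.0, 2.0, 3.0)]
clf = GaussianBayes().fit(classes)
print(clf.predict(np.array([1.1, 0.9, 1.0])))  # -> 1
```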
Inspection System
Results
Results
