INTRODUCTION
TO ROBOTICS
VISION SENSORS
DR. HEMA C.R.
Road Map
Robot Vision
Imaging Sensors
Vision Systems
Visual Servoing
Configuration of Vision System
Image Processing
Gray level histogram
Image Segmentation
Region based segmentation
Image interpretation
Robot Vision
A robot vision system consists of one or more cameras,
special-purpose lighting, software, and a robot or robots.
Vision sensors are used on robots to provide information
about the work area and the objects within it.
Images of the working area or object are processed using
image processing software to determine position and
orientation of objects in the work cell.
Vision is also used in mobile robots to navigate.
Depending on the application, the camera might be
mounted on the robot or could be in a fixed position within
the cell.
Imaging Sensors
Image sensors convert light into electric charge and process it into
electronic signals. There are two main types:
◦ Charge Coupled Device (CCD)
◦ All pixels are devoted to light capture
◦ Output is uniform
◦ High image quality
◦ Used in professional and industrial cameras
◦ Complementary Metal Oxide Semiconductor (CMOS)
◦ Fewer pixels are devoted to light capture
◦ Output is less uniform
◦ Image quality is lower
◦ Used in cell phone cameras
Lighting Techniques
The three lighting techniques used in vision
applications are:
◦ Front lighting
◦ Back lighting
◦ Structured lighting
Vision Systems
Vision systems are of two types, namely: (a) stand-alone and (b)
PC-based. Stand-alone systems include:
Smart Cameras: these are self-contained and do not require a
separate computer; two types of image sensors are used in
smart cameras, namely (a) CCD image sensors and (b) CMOS
image sensors.
Vision Sensors: these are integrated devices which do not
require any programming and sit between smart cameras and
full vision systems.
Digital Cameras: these are classified by the type of sensor and
memory storage device used, namely (a) CCD image sensors, (b) CMOS
image sensors, (c) Flash memory, (d) Memory Stick, (e) SmartMedia
cards, (f) removable drives.
Image File Formats
Images are stored in a computer in one of the following
formats, depending on the application:
◦ Tagged Image File Format [.tif]
◦ Portable Network Graphics [.png]
◦ Joint Photographic Experts Group [.jpeg, .jpg]
◦ Bitmap [.bmp]
◦ Graphics Interchange Format [.gif]
◦ Raster Image [.ras]
◦ PostScript [.ps]
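As a quick illustration (not part of the original slides), the sketch below shows that in a typical library such as OpenCV the storage format is selected simply by the file extension passed when saving; the image data and file names here are hypothetical.

import cv2
import numpy as np

# Hypothetical test image: a 64x64 horizontal gray ramp.
img = np.tile(np.arange(64, dtype=np.uint8) * 4, (64, 1))

cv2.imwrite("part.png", img)  # lossless; a common choice for measurement images
cv2.imwrite("part.bmp", img)  # uncompressed bitmap
cv2.imwrite("part.jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 90])  # lossy, smaller files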
Vision based Robot Control
◦ Vision-based robot control, also known as visual servoing,
is a technique which uses feedback information extracted
from a vision sensor (visual feedback) to control the
motion of a robot.
◦ There are two fundamental configurations of the robot
end-effector (hand) and the camera:
◦ Eye-in-hand, or end-point closed-loop control, where the camera is
attached to the moving hand and observes the relative position of
the target.
◦ Eye-to-hand, or end-point open-loop control, where the camera is
fixed in the world and observes the target and the motion of the
hand.
Visual Servoing
◦ Vision allows a robotic system to obtain geometrical and
qualitative information about the surrounding environment
◦ High-level control → motion planning
(look-and-move → visual grasping)
◦ Low-level control → measurements used in the control loop
◦ Visual servoing control is broadly classified into the
following types, based on the feedback of visual
measurements:
◦ Image-based visual servoing
◦ Position-based visual servoing
◦ Hybrid visual servoing
Image-based Visual Servoing
◦ The control law is based on the error between the current and
desired features on the image plane, and does not involve
any estimate of the pose of the target.
◦ Image processing is aimed at extracting numerical
information referred to as image feature parameters.
◦ The features may be the coordinates of visual features,
lines or moments of regions.
◦ IBVS has difficulties with motions that require very large
rotations, a problem which has come to be called camera retreat.
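To make the control law concrete, here is a minimal NumPy sketch of the classic IBVS law v = -gain * L+ (s - s*) for point features; the feature coordinates, depth estimates and gain below are hypothetical, and a real system would update them from live image measurements.

import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a point feature (x, y)
    in normalized image coordinates at depth Z."""
    return np.array([
        [-1/Z,    0, x/Z,      x*y, -(1 + x**2),  y],
        [   0, -1/Z, y/Z, 1 + y**2,        -x*y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law: v = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Four image points, their desired positions, and rough depth estimates.
s  = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
sd = [(0.12, 0.08), (-0.08, 0.12), (-0.12, -0.08), (0.08, -0.12)]
v = ibvs_velocity(s, sd, depths=[1.0] * 4)
print(v)  # 6-vector: camera linear and angular velocity command

Note that the depth Z enters the interaction matrix, so IBVS still needs a rough depth estimate even though it never reconstructs the full pose.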
Position-based Visual Servoing
◦ PBVS is a model-based technique (with a single camera).
◦ The pose of the object of interest is estimated with respect to the
camera, and then a command is issued to the robot controller, which
in turn controls the robot.
◦ In this case the image features are extracted as well, but are
additionally used to estimate 3D information (the pose of the object
in Cartesian space); hence it is servoing in 3D.
◦ Analytic pose estimation methods are based on the measurement of a
certain number of points or correspondences.
◦ Numerical pose estimation methods are based on the integration of
the linear mapping between the camera velocity in the operational
space and the time derivative of the feature parameters in the
image plane.
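A hedged sketch of the PBVS idea: once the pose of the object in the camera frame has been estimated, the command is simply proportional to the pose error. The poses and gain below are made up for illustration.

import numpy as np

def rotation_log(R):
    """Axis-angle vector (theta * axis) of a rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return theta * w

def pbvs_velocity(t, R, t_d, R_d, gain=0.5):
    """Proportional control on the translational and rotational error."""
    e_t = t - t_d                   # translation error
    e_R = rotation_log(R_d.T @ R)   # rotation error as an axis-angle vector
    return -gain * np.concatenate([e_t, e_R])

# Estimated object pose in the camera frame vs. desired pose (hypothetical).
t_est = np.array([0.05, -0.02, 0.60])
R_est = np.eye(3)
t_des = np.array([0.0, 0.0, 0.50])
v = pbvs_velocity(t_est, R_est, t_des, np.eye(3))
print(v)  # 6-vector velocity command sent to the robot controller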
Visual System Configuration
◦ Multi-camera systems
◦ Depth information about an object is obtained by evaluating
its distance with respect to the visual system
◦ 3D vision or stereo vision
◦ Mono-camera systems
◦ Two images of the same object are taken from two different
poses
◦ If only a single image is available, the depth can
be estimated on the basis of geometrical
characteristics of the object known in advance
◦ A mono-camera system is cheaper and easier to calibrate
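For the multi-camera (stereo) case, depth follows from the disparity of a point between the two views. A minimal sketch, assuming rectified cameras; the focal length, baseline and pixel coordinates below are hypothetical.

import numpy as np

def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a point from its horizontal disparity between two
    rectified views: Z = f * B / d."""
    disparity = x_left - x_right          # in pixels
    return focal_px * baseline_m / disparity

# The same scene point seen at column 412 (left) and 380 (right image).
Z = stereo_depth(412.0, 380.0, focal_px=700.0, baseline_m=0.12)
print(f"estimated depth: {Z:.3f} m")      # 700 * 0.12 / 32 = 2.625 m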
Visual System Configuration
◦ Eye-to-hand: fixed location
◦ The advantage is that the camera field of view does
not change during the execution of the task,
implying that the accuracy of the
measurements is constant
◦ The manipulator may occlude, in part or in whole,
the view of the objects
◦ Eye-in-hand: mobile configuration
◦ The camera is placed on the manipulator
◦ High variability in the accuracy of measurements over
the workspace
◦ When the camera is close to the object, however, the
accuracy becomes almost constant and is usually higher
than that achievable with eye-to-hand cameras
Visual System Configuration
◦ A hybrid configuration consists of one or more
cameras in eye-to-hand configuration, and one
or more cameras in eye-in-hand configuration
◦ This ensures good accuracy throughout the
workspace, while avoiding the problems of
occlusion
Image Processing
◦ Visual information is very rich and varied
◦ It requires complex and computationally expensive
transformations before it can be used for
controlling a robotic system
◦ Extraction of numerical information from the
image → image feature parameters
◦ Two basic operations:
◦ Segmentation → a representation suitable for the
identification of measurable features of the
image
◦ Interpretation → measurement of the feature
parameters of the image
Image Processing
◦ The source information is contained in a two-
dimensional memory array representing the spatial
sampling of the image
◦ The image function I(x, y) is a vector function whose
components represent the values of one or
more physical quantities related to the pixel, in a
sampled and quantized form
◦ Light intensity in the wavelengths of red, green
and blue
◦ Or in shades of gray (the number of gray levels
depends on the gray-scale resolution → typically 256 gray levels)
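A small NumPy sketch of the image function idea: the three RGB components of each pixel reduced to 256 shades of gray. The random image and the standard BT.601 luminance weights stand in for a real camera frame.

import numpy as np

rgb = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# Weighted sum of the red, green and blue components (ITU-R BT.601 weights).
gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
gray = gray.astype(np.uint8)          # quantized to 256 gray levels

print(gray.shape, gray.min(), gray.max())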
Gray-level Histogram
◦ Provides the frequency of occurrence of each gray level in the image
◦ The gray levels are quantized from 0 to 255
◦ The value h(p) of the histogram at a particular gray level p ∈ [0, 255]
is the number of image pixels with gray level p
◦ If this value is divided by the total number of pixels, the histogram
is termed a normalized histogram
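A minimal NumPy sketch of computing h(p) and the normalized histogram; the random image is a stand-in for a real frame.

import numpy as np

img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# h[p] = number of pixels with gray level p, for p in [0, 255].
h = np.bincount(img.ravel(), minlength=256)

# Dividing by the total pixel count gives the normalized histogram.
h_norm = h / img.size
print(h[:5], h_norm.sum())  # h_norm sums to 1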
Image segmentation
◦ Consists of a grouping process by which the image is
divided into a certain number of groups, referred to as
segments (the components of each group are similar with
respect to one or more characteristics)
◦ Distinct objects of the environment
◦ Or homogeneous object parts
◦ Finding connected regions of the image (see the sketch below)
◦ Grouping sets of pixels sharing common features
into two-dimensional connected areas
◦ High memory usage
◦ Low computational load
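As a sketch of what finding connected regions means in practice, the following pure Python/NumPy flood fill labels each 4-connected group of nonzero pixels; the tiny test image is made up.

import numpy as np
from collections import deque

def label_regions(binary):
    """Assign a label to each 4-connected region of nonzero pixels."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(binary)):
        if labels[i, j]:
            continue
        current += 1                      # start a new region
        queue = deque([(i, j)])
        labels[i, j] = current
        while queue:                      # breadth-first flood fill
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
labels, n = label_regions(img)
print(n)        # 2 connected regions
print(labels)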
Image segmentation
◦ Detection of boundaries
◦ Identifying the pixels corresponding to object
contours and isolating them from the rest of the
image
◦ The boundary of an object, once extracted, can
be used to define the position and shape of the
object itself
◦ The two approaches (region-based and boundary-based)
are complementary
Region-based segmentation
◦ Obtaining connected regions by
continuous merging of initially small
groups of adjacent pixels into larger
ones
◦ Pixels are merged only if they
satisfy a common property, termed the
uniformity predicate
(e.g., having a similar gray level)
◦ Binary segmentation, or image
binarization, compares the gray
level of each pixel with a threshold l
◦ The peaks of the histogram are termed
modes (for dark objects, the threshold is
chosen at the closest minimum to the left
of the mode)
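A minimal binarization sketch with NumPy; the threshold here is chosen by hand, standing in for a value read off the gray-level histogram.

import numpy as np

img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
threshold = 100                      # e.g., a minimum between two modes

# Pixels darker than the threshold -> object (1); others -> background (0).
binary = (img < threshold).astype(np.uint8)
print(binary.mean())                 # fraction of "object" pixels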
Region-based segmentation
◦ In the presence of multiple objects, further
elaboration is required to separate the
connected regions corresponding to the single
objects
◦ When the gray-scale histogram is noisy, the modes
are difficult to identify
◦ Various techniques have been developed to
increase the robustness of binary
segmentation:
◦ appropriate filtering of the image before
binarization
◦ algorithms for automatic selection of the
threshold
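One widely used algorithm for automatic threshold selection is Otsu's method, which picks the gray level maximizing the between-class variance of the histogram. The slides do not name a specific method, so the NumPy sketch below (written for clarity, not speed) is one illustrative choice.

import numpy as np

def otsu_threshold(img):
    """Return the gray level maximizing the between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal test image: dark objects (~50) on a light background (~200).
img = np.concatenate([np.random.normal(50, 10, 5000),
                      np.random.normal(200, 10, 15000)])
img = np.clip(img, 0, 255).astype(np.uint8).reshape(100, 200)
print(otsu_threshold(img))   # typically lands between the two modes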
Boundary-based segmentation
◦ Boundary-based segmentation techniques usually
obtain a boundary by grouping many single local
edges
◦ Local edges correspond to local discontinuities of
the image gray level
◦ Local edges are sets of pixels where the light intensity
changes abruptly
◦ Algorithms for boundary detection:
◦ Derive an intermediate image based on local edges
from the original gray-scale image
◦ Construct short curve segments by edge linking
◦ Obtain the boundaries by joining these curve
segments through geometric primitives, often
known in advance
Boundary-based segmentation
◦ Edge detection is essentially a filtering process, whereas
boundary detection is a higher-level task usually requiring
more sophisticated software
◦ Edge detection can be performed by grouping the
pixels where the magnitude of the gradient is
greater than a threshold
◦ In the case of simple and well-defined shapes, boundary
detection becomes straightforward and segmentation
reduces to edge detection alone
◦ Several edge detection techniques exist; most of them
require the calculation of the gradient or the Laplacian of
the image function I(x, y)
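A sketch of gradient-based edge detection: approximate the gradient with central differences and keep the pixels where its magnitude exceeds a threshold. The synthetic image and the threshold value are made up.

import numpy as np

def gradient_magnitude(img):
    """Approximate |grad I| with central differences."""
    I = img.astype(float)
    gx = np.zeros_like(I)
    gy = np.zeros_like(I)
    gx[:, 1:-1] = (I[:, 2:] - I[:, :-2]) / 2
    gy[1:-1, :] = (I[2:, :] - I[:-2, :]) / 2
    return np.hypot(gx, gy)

# Synthetic image: a bright square on a dark background.
img = np.zeros((100, 100), dtype=np.uint8)
img[30:70, 30:70] = 200

edges = gradient_magnitude(img) > 50   # pixels with a strong gradient
print(edges.sum())                     # edge pixels lie on the square's border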
Image Interpretation
◦ Visual servoing is based on the mapping
between the feature parameters of an object
measured in the image plane of the camera
and the operational space variables defining
the relative pose of the object with respect to
the camera
◦ Often it is sufficient to derive a differential
mapping in terms of velocity (easier to solve →
linear, numerical integration algorithms)