Stereo vision uses two cameras to capture 3D information by processing two images of the same scene taken from slightly different angles. The seminar discussed the concepts of stereo vision and its potential use for a virtual touch screen. Requirements for such a system include two cameras for stereo capability, replacement of mouse input with touch input, and GUI modifications to handle touch events. Challenges such as the correspondence and calibration problems were also covered, along with solutions such as correlation-based algorithms. Applications of stereo vision include robotics, surveillance, and 3D mapping.
A simple introductory presentation about stereo vision, focusing on real-time stereo vision applied to mobile robotics. In the talk I explain how stereo vision works, present a simplified mathematical model for it, and propose what I consider the best hardware for achieving real-time stereo vision: the ZED sensor by Stereolabs paired with Nvidia's Jetson TX1 board.
The presentation is intentionally simple and was made for an audience with limited knowledge of computer vision and the underlying mathematics.
The talk was given at the event "Officine Robotiche 2016" in Rome, 21-22 May 2016.
Build Your Own 3D Scanner: Introduction
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
This course provides a beginner with the necessary mathematics, software, and practical details to leverage projector-camera systems in their own 3D scanning projects. An example-driven approach is used throughout; each new concept is illustrated using a practical scanner implemented with off-the-shelf parts. The course concludes by detailing how these new approaches are used in rapid prototyping, entertainment, cultural heritage, and web-based applications.
Neural Scene Representation & Rendering: Introduction to Novel View Synthesis by Vincent Sitzmann
An overview of the neural scene representation and rendering framework and an introduction to novel view synthesis approaches. Slides made for the Eurographics, CVPR, and SIGGRAPH courses on neural rendering, connected to the state-of-the-art report on Neural Rendering at Eurographics 2020.
Feel free to re-use the slides! I just ask that you keep some form of attribution, either at the beginning of your presentation, or in the slide footer.
Build Your Own 3D Scanner: 3D Scanning with Swept-Planes by Douglas Lanman
Build Your Own 3D Scanner: 3D Scanning with Swept-Planes
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
This course provides a beginner with the necessary mathematics, software, and practical details to leverage projector-camera systems in their own 3D scanning projects. An example-driven approach is used throughout; each new concept is illustrated using a practical scanner implemented with off-the-shelf parts. The course concludes by detailing how these new approaches are used in rapid prototyping, entertainment, cultural heritage, and web-based applications.
Build Your Own 3D Scanner: Surface Reconstruction by Douglas Lanman
Build Your Own 3D Scanner: Surface Reconstruction
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
This course provides a beginner with the necessary mathematics, software, and practical details to leverage projector-camera systems in their own 3D scanning projects. An example-driven approach is used throughout; each new concept is illustrated using a practical scanner implemented with off-the-shelf parts. The course concludes by detailing how these new approaches are used in rapid prototyping, entertainment, cultural heritage, and web-based applications.
Edge detection is the name for a set of mathematical methods which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities.
3D Reconstruction from Multiple Uncalibrated 2D Images of an Object by Ankur Tyagi
3D reconstruction is the process of capturing the shape and appearance of real objects. In this project we use passive methods, which rely only on sensors that measure the radiance reflected or emitted by the object's surface to infer its 3D structure.
Build Your Own 3D Scanner: Conclusion
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
This course provides a beginner with the necessary mathematics, software, and practical details to leverage projector-camera systems in their own 3D scanning projects. An example-driven approach is used throughout; each new concept is illustrated using a practical scanner implemented with off-the-shelf parts. The course concludes by detailing how these new approaches are used in rapid prototyping, entertainment, cultural heritage, and web-based applications.
Two Dimensional Image Reconstruction Algorithms by mastersrihari
The Convolution Back-Projection (CBP) algorithm was used for image reconstruction. Performance was compared by implementing the algorithm with a Ram-Lak filter, a Shepp-Logan filter, and no filter.
DimEye Corp Presents Revolutionary VLS (Video Laser Scan) at SS IMMR 2013 by Patrick Raymond
DimEye Corp. introduces the revolutionary VLS (Video Laser Scan) to the Subsea Survey IMMR audience in Galveston, Texas (November 2013).
VLS™ (Video Laser Scan) by DimEye Corp. is a revolution in optical 3D measurement. VLS provides high-definition visual inspection, as-built 3D modeling of industrial/subsea equipment, and 3D high-density mapping of deformations and defects (for example: cracks, dents, bulges, corrosion).
VLS™ is a unique combination of photogrammetry and laser techniques that provides the advantages of both technologies without the disadvantages. VLS™ can also be operated by your existing technicians.
VLS™ is a high-accuracy metrology tool traceable to NIST (National Institute of Standards and Technology) that provides high redundancy through the volume of data collected (thousands of stills can be captured from HD video in seconds, rather than individual photos taken manually at each location). VLS™ provides reliable accuracy estimates, thanks to advanced processing and calibration algorithms developed by DimEye over years of industry experience in multiple measurement environments and scenarios.
Implementing Camshift on a Mobile Robot for Person Tracking and Pursuit (ICDM) by Soma Boubou
These are the slides for a paper presented at an ICDM workshop in Vancouver, Canada, in 2011.
In the paper we describe a Camshift implementation on a mobile robotic system for tracking and pursuing a moving person with a monocular camera. The Camshift algorithm uses color distribution information to track a moving object. It is computationally efficient enough for real-time applications and robust to image noise. It deals well with illumination changes, shadows, and irregular (linear/non-linear) object motion. We compared Camshift with HSV color-based tracking, and our results show that the Camshift method outperformed the HSV color-based tracking. Moreover, the former method is much more robust against different illumination conditions.
Paper link:
http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6137446&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6137446
This paper proposes a facial expression recognition approach based on the Gabor wavelet transform. A Gabor wavelet filter is first used as a pre-processing stage to extract the feature vector representation. The dimensionality of the feature vector is reduced using Principal Component Analysis and Local Binary Pattern (LBP) algorithms. Experiments were carried out on the Japanese Female Facial Expression (JAFFE) database. In all experiments conducted on the JAFFE database, the results reveal that GW+LBP outperformed the other approaches in this paper, with an average recognition rate of 90% under the same experimental setting.
A lecture I gave to an undergraduate Artificial Intelligence class taught by Hien Nguyen, Ph.D., at the University of Wisconsin-Whitewater in the fall of 2011.
Goal location prediction based on deep learning using RGB-D camera by journalBEEI
In a navigation system, the desired destination plays an essential role, since path planning algorithms take the current location, the goal location, and a map of the surrounding environment as inputs. The path generated by the path planning algorithm is used to guide a user to the final destination. This paper presents an algorithm based on an RGB-D camera that predicts the goal coordinates in a 2D occupancy grid map for a navigation system for visually impaired people. In recent years, deep learning methods have been used in many object detection tasks, so an object detection method based on a convolutional neural network is adopted in the proposed algorithm. The distance between the current position of the sensor and the detected object is measured from the depth data acquired by the RGB-D camera. The detected object coordinates and the depth data are integrated to obtain an accurate goal location in the 2D map. The proposed algorithm has been tested in various real-time scenarios, and the experimental results indicate its effectiveness.
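As a rough illustration of the integration step described above, the sketch below back-projects a detected object's pixel location and depth reading into planar map coordinates with a pinhole camera model. The intrinsics, pixel column, and depth value are invented example numbers, not values from the paper.

```python
# Hypothetical sketch (not the paper's code): convert a detection's pixel
# column plus its depth reading into planar coordinates in the camera frame.
# fx and cx are assumed camera intrinsics.
def pixel_depth_to_map(u, depth_m, fx, cx):
    """Return (lateral offset, forward distance) in meters."""
    x = (u - cx) * depth_m / fx   # lateral offset from the optical axis
    z = depth_m                   # forward distance along the optical axis
    return x, z

# Example: object center detected at pixel column 420, depth 2.5 m,
# fx = 525 px and cx = 319.5 px (typical values for a 640x480 RGB-D frame).
x, z = pixel_depth_to_map(420, 2.5, fx=525.0, cx=319.5)
print(f"goal in camera frame: lateral {x:.2f} m, forward {z:.2f} m")
```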
An Assessment of Image Matching Algorithms in Depth Estimation by CSCJournals
Computer vision is often used with mobile robots for feature tracking, landmark sensing, and obstacle detection. Almost all high-end robotics systems are now equipped with pairs of cameras arranged to provide depth perception. In stereo vision applications, the disparity between the stereo images allows depth estimation within a scene. Detecting conjugate pairs in stereo images is a challenging problem known as the correspondence problem. The goal of this research is to assess the performance of SIFT, MSER, and SURF, the well-known matching algorithms, in solving the correspondence problem and then in estimating depth within the scene. The results of each algorithm are evaluated and presented, and the conclusions and recommendations for future work point toward improving these powerful algorithms to achieve a higher level of efficiency.
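For reference, a minimal feature-based correspondence sketch with SIFT, one of the algorithms assessed in that paper, might look like the following. This is not the paper's evaluation code; it assumes opencv-python 4.4 or newer, and the image filenames are placeholders.

```python
# Detect SIFT keypoints in a stereo pair and keep matches that pass
# Lowe's ratio test; for a rectified pair, the horizontal offset of each
# conjugate pair is the disparity on which depth estimation is based.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_l, des_r, k=2)
good = [p[0] for p in matches
        if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

disparities = [kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0] for m in good]
print(f"{len(good)} correspondences found")
```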
Intelligent indoor mobile robot navigation using stereo vision by sipij
The majority of existing robot navigation systems, which rely on laser range finders, sonar sensors, or artificial landmarks, can locate themselves in an unknown environment and then build a map of it. Stereo vision, while still a rapidly developing technique in the field of autonomous mobile robots, is currently less preferred due to its high implementation cost. This paper describes an experimental approach to building a stereo vision system that helps robots avoid obstacles and navigate through indoor environments while remaining very cost-effective. It discusses fusion techniques combining stereo vision and ultrasonic sensors, which enable successful navigation through different types of complex environments. The sensor data enable the robot to create a two-dimensional topological map of unknown environments, while the stereo vision system builds a three-dimensional model of the same environment.
Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non... by CSCJournals
Visual information is widely used in real-time applications such as robotic picking, navigation, and obstacle avoidance, enabling robots to interact with their environment. Robotics requires computationally simple, easy-to-implement stereo vision algorithms that provide reliable and accurate results under real-time constraints. Stereo vision is an inexpensive, passive sensing technique for inferring the three-dimensional position of objects from two or more simultaneous views of a scene, and it does not interfere with other sensing devices when multiple robots share the same environment. Stereo correspondence aims at finding matching points in the stereo image pair, based on the Lambertian criterion, to obtain disparity. The correspondence algorithm provides high-resolution disparity maps of the scene by comparing two views of the scene under study. Using the principle of triangulation together with the camera parameters, depth information can be extracted from this disparity. Since the focus is on real-time applications, only local stereo correspondence algorithms are considered. A comparative study based on error and computational cost is carried out between two area-based algorithms: the sum of absolute differences (SAD) algorithm, which is computationally cheap and suitable for ideal lighting conditions, and a more accurate adaptive binary support window algorithm that can handle non-ideal lighting conditions. To simplify the correspondence search, rectified stereo image pairs are used as inputs.
Conventional non-vision-based navigation systems relying purely on the Global Positioning System (GPS) or inertial sensors can provide the 3D position or orientation of the user. However, GPS is often unavailable in forested regions and cannot be used indoors. Visual odometry provides an independent method to accurately estimate the position and orientation of the user or system based on the images captured while moving. Vision-based systems also provide information (e.g., images, 3D locations of landmarks, detection of scene objects) about the scene the user is looking at. In this project, a set of techniques is used for accurate pose and position estimation of a moving vehicle for autonomous navigation, using images obtained from two cameras placed at different locations on top of the vehicle viewing the same area; this arrangement is referred to as stereo vision. Stereo vision provides a method for 3D reconstruction of the environment, which is required for pose and position estimation. First, a set of images is captured. The Harris corner detector is used to automatically extract feature points from the images, and feature matching is then done using correlation-based matching. Triangulation is applied to the matched feature points to find their 3D coordinates. Next, a new set of images is captured and the same technique is repeated. Finally, using the 3D feature points obtained from the first and the new set of images, the pose and position of the moving vehicle are estimated using the QUEST algorithm.
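A hedged sketch of the triangulation step in that pipeline is shown below: given matched pixel coordinates in a rectified stereo pair and the two camera projection matrices, OpenCV's triangulatePoints recovers 3D coordinates. The focal length, baseline, and matched points are invented example values, not the project's calibration.

```python
# Triangulate matched feature points from a simple rectified stereo setup
# with focal length f (pixels) and baseline b (meters) -- assumed values.
import numpy as np
import cv2

f, b = 700.0, 0.12
P0 = np.array([[f, 0.0, 320.0, 0.0],   # left camera projection matrix
               [0.0, f, 240.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])
P1 = P0.copy()
P1[0, 3] = -f * b                      # right camera offset by the baseline

# Matched pixel coordinates (2 x N), e.g. from Harris corners followed by
# correlation-based matching as in the pipeline described above.
pts_left = np.array([[320.0, 400.0],   # x coordinates of two features
                     [240.0, 250.0]])  # y coordinates
pts_right = np.array([[300.0, 380.0],
                      [240.0, 250.0]])

homog = cv2.triangulatePoints(P0, P1, pts_left, pts_right)  # 4 x N
xyz = (homog[:3] / homog[3]).T         # Euclidean (X, Y, Z) per feature
print(xyz)
```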
An algorithm to quantify swelling by reconstructing a 3D model of the face from stereo images is presented. We analyzed the primary problems in computational stereo, which include correspondence and depth calculation. Work was carried out to determine suitable methods for depth estimation and for standardizing volume estimates. Finally, we designed software for reconstructing 3D images from 2D stereo images, built on Matlab and Visual C++. Using techniques from multi-view geometry, a 3D model of the face was constructed and refined. An explicit analysis of stereo disparity calculation methods and filter-based elimination in disparity estimation was used to increase the reliability of the disparity map. Minimizing variability in position by using more precise positioning techniques and resources will increase the accuracy of this technique and is a focus for future work.
Visual Mapping and Collision Avoidance Dynamic Environments in Dynamic Enviro... by Darius Burschka
How conventional vision is more appropriate for control, since it also provides error analysis. A lot of the information in the images is lost when converting to 3D.
Design of Shadow Detection and Removal System by ijsrd.com
Detection and removal of shadows is a major component of computer vision applications: the presence of shadows distorts objects, and shadow removal increases the quality of video surveillance. Shadow detection and removal is carried out in three stages. In the first stage, the foreground image is detected using a frame differencing technique. In the second stage, the shadow region is detected using the hue, saturation, and intensity of the moving object. In the third stage, the shadow is removed by replacing the shadow pixels with the corresponding background pixels. All three modules are implemented together in Visual C++. Precision values in the range 0.9923 to 0.9959 were obtained for different input videos.
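A compact sketch of that three-stage pipeline (frame differencing, HSV-based shadow detection, background substitution) could look like the following. The thresholds are illustrative assumptions, not the paper's tuned values, and the paper's own implementation is in Visual C++ rather than Python.

```python
# Three-stage shadow removal sketch: foreground by frame differencing,
# shadow pixels by HSV comparison with the background, then substitution.
import cv2
import numpy as np

def remove_shadow(frame, background, diff_thresh=30):
    # Stage 1: foreground mask via frame differencing against the background.
    diff = cv2.absdiff(frame, background)
    fg_mask = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY) > diff_thresh

    # Stage 2: shadow pixels are foreground pixels that are darker than the
    # background but keep roughly the same hue and saturation.
    hsv_f = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).astype(np.int16)
    hsv_b = cv2.cvtColor(background, cv2.COLOR_BGR2HSV).astype(np.int16)
    darker = hsv_f[..., 2] < 0.9 * hsv_b[..., 2]
    similar = (np.abs(hsv_f[..., 0] - hsv_b[..., 0]) < 10) & \
              (np.abs(hsv_f[..., 1] - hsv_b[..., 1]) < 40)
    shadow = fg_mask & darker & similar

    # Stage 3: replace shadow pixels with the corresponding background.
    out = frame.copy()
    out[shadow] = background[shadow]
    return out
```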
1. A SEMINAR ON "CONCEPT OF STEREO VISION BASED VIRTUAL TOUCH SCREEN" by VIVEK R. CHAMORSHIKAR
2. WHAT IS STEREO VISION?
Stereo vision is a by-product of good binocular vision.
BINOCULAR: involving both eyes at once.
BINOCULAR VISION: both eyes aim simultaneously at the same visual target; vision in which both eyes work together as a coordinated team, equally and accurately.
STEREO VISION (stereopsis or stereoscopic vision): vision in which the two separate images from the two eyes are successfully combined into one image in the brain.
How does it work?
3. Why use Stereo Vision?
Stereo vision is related to stereopsis.
Stereopsis (stereo means "three-dimensional" or "solid" and opsis means "sight" or "view").
Basic ability of stereo vision: the ability to infer information on the 3D structure and distance of a scene from two or more images taken from different viewpoints.
Stereo vision is the most cost-efficient approach, compared with using costly sensors.
4. Requirements for the system are as follows:
1. Mouse input should be replaced by touch input.
Create active/inactive spaces for interactions.
2. GUI applications should be designed to handle touch input events.
Fig. 1: Figure illustrating the effort involved in human-machine interaction.
5. 3. Two cameras are needed.
This helps distinguish interactive parts of the captured image.
An accurate and reliable 3D image is captured.
Accurate dimensions are calculated.
4. The two cameras need to be synchronized.
Image frames should be captured from the two cameras at the same time, and the frame rates of the two cameras should be the same.
5. Distance calibration.
The calibrated distance of the blob (the object used for input) should be as close as possible to the actual distance of the screen for good results.
6. PROBLEMS IN STEREO VISION
Problems to solve in stereo vision are:
1. Correspondence Problem
2. Calibration Problem
3. Synchronization Problem
4. Shadow Problem
5. Sunlight Problem
7. SOLUTION FOR CORRESPONDENCE PROBLEM
Two algorithms are used to solve the correspondence problem:
Correlation-based algorithm: checks whether one location in one image looks like another location in the other image (see the sketch after this slide).
Produces a DENSE set of correspondences.
Feature-based algorithm: finds features in the images and checks whether the layout of a subset of features is similar in the two images.
Produces a SPARSE set of correspondences.
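A minimal sketch of the correlation-based approach, assuming a rectified grayscale stereo pair as NumPy arrays; the window size and search range are illustrative values, not ones from the seminar. For each block in the left image, we slide along the same row of the right image and keep the offset with the smallest sum of absolute differences (SAD).

```python
# Correlation-based correspondence: SAD block matching along one scanline.
import numpy as np

def sad_disparity(left, right, y, x, win=5, max_disp=64):
    """Disparity of the (win x win) block centered at (y, x); assumes
    (y, x) lies at least win//2 pixels inside the image borders."""
    h = win // 2
    block = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    best, best_d = None, 0
    for d in range(0, min(max_disp, x - h) + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.int32)
        cost = np.abs(block - cand).sum()     # SAD correlation score
        if best is None or cost < best:
            best, best_d = cost, d
    return best_d
```

Running this at every pixel yields the dense disparity map the slide mentions; the feature-based alternative instead matches only detected features, producing sparse correspondences.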
8. APPLICATIONS OF STEREO VISION
1. People tracking
2. Robotics
3. Random bin picking (RBP)
4. Surgery
5. 3D underwater mosaicking
Stereo vision has many other applications:
Driver assistance systems
Forensics - crime scenes, traffic accidents
Mining - mine face measurement
Civil engineering - structure monitoring
Collision avoidance
Manufacturing - process monitoring
9. ADVANTAGES AND DISADVANTAGES OF STEREO VISION
Advantages of stereo vision:
1. Robustness.
2. Gives a very dense depth (or range) map.
3. Can be used to calculate the shape of objects.
4. Human motion detection is possible without dedicated sensors.
Disadvantages of stereo vision:
1. The system must be pre-calibrated.
2. It has to be used in indoor environments.
3. Shadows and sunlight in the experimental area make distance calculation difficult.
10. Tracking of the blob:
A novel algorithm is used for efficient motion detection and for calculating the distance of the blob.
Combining blobs:
After assigning a label to every pixel of the image, we count all the labels other than the background label (i.e., other than 0) and store their corresponding (x, y) coordinates. Pixels having the same label are considered a single object, and a box is drawn around it using the maximum and minimum x and y coordinates.
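A brief sketch of this blob-combining step is given below, using scipy.ndimage for the connected-component labeling pass (an assumption for illustration; the seminar's own labeling routine is not shown).

```python
# Label connected non-background pixels, then derive each blob's bounding
# box from its minimum and maximum x and y coordinates.
import numpy as np
from scipy import ndimage

mask = np.array([[0, 1, 1, 0, 0],
                 [0, 1, 1, 0, 1],
                 [0, 0, 0, 0, 1]])          # 1 = non-background pixel

labels, n_blobs = ndimage.label(mask)       # same label = same object
for blob_id in range(1, n_blobs + 1):
    ys, xs = np.nonzero(labels == blob_id)
    # Bounding box from the blob's min/max x and y coordinates.
    print(f"blob {blob_id}: x {xs.min()}-{xs.max()}, y {ys.min()}-{ys.max()}")
```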
Height map:
In computer graphics, a heightmap is an image used to store values, such as surface elevation data, for display in 3D computer graphics.
11. STEREO RANGING:
Calculating the distance to objects by making a pair of observations from different locations.
Range = (focal length x camera baseline) / disparity
C0: left camera
C1: right camera
P: observed feature point
f: focal length
b: baseline distance
D: distance to the observed feature point
c0, c1: pixel centers of the camera images
p0, p1: pixel positions of the observed feature point
v0, v1: pixel displacements of the observed feature point
Disparity: d = v1 - v0
Distance: D = b * f / d
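The ranging formula on this slide, written as a small helper; the focal length, baseline, and pixel positions in the example are made-up numbers for illustration.

```python
# Stereo ranging: distance D = b * f / d, with disparity d = v1 - v0.
def stereo_range(focal_px, baseline_m, v0, v1):
    """Distance (m) to a feature seen at pixel displacements v0 and v1."""
    d = v1 - v0
    if d <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return baseline_m * focal_px / d

# Example: f = 700 px, baseline 0.12 m, feature displaced by 20 px
# between the two views.
print(f"{stereo_range(700.0, 0.12, 315, 335):.2f} m")  # -> 4.20 m
```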
13. CONCLUSION
Stereo vision
Applications
Requirements for the system to use stereo vision
Advantages and disadvantages of stereo vision
The calculation of the distance of the blob from the two cameras