Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Plane-to-Plane Homography, Image Segmentation, Corner and Edge Detection Techniques
This document presents an enhanced algorithm for obstacle detection and avoidance using a hybrid of plane-to-plane homography, image segmentation, corner detection, and edge detection techniques. The algorithm aims to improve upon previous methods by eliminating false positives, reducing unreliable corners and broken edges, providing depth perception without planar assumptions, and requiring less processing power. The key components of the algorithm include plane-to-plane homography, image segmentation, Canny edge detection, Harris corner detection, and the RANSAC sampling method for system analysis. Test results on sample images show the algorithm can accurately detect obstacles based on texture differences while reducing noise from ground plane textures.
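The homography component described above can be sketched as follows: feature points that obey the ground-plane homography transfer with low error behave as ground, while points with a large transfer error are obstacle candidates. This is a minimal illustration with invented values (the matrix H, the points, and the 2-pixel threshold are assumptions, not the paper's data):

```python
import numpy as np

def transfer_error(H, pts1, pts2):
    """Project pts1 through the plane-to-plane homography H and
    return the pixel error against the observed pts2."""
    n = pts1.shape[0]
    homog = np.hstack([pts1, np.ones((n, 1))])   # homogeneous coordinates
    proj = homog @ H.T
    proj = proj[:, :2] / proj[:, 2:3]            # dehomogenise
    return np.linalg.norm(proj - pts2, axis=1)

# Illustrative ground-plane homography (a pure translation for simplicity)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])

pts1 = np.array([[10.0, 20.0], [40.0, 80.0], [70.0, 15.0]])
pts2 = np.array([[15.0, 17.0], [45.0, 77.0], [81.0, 12.0]])  # last point is off-plane

err = transfer_error(H, pts1, pts2)
obstacle = err > 2.0   # pixels whose transfer error exceeds the threshold
```

In a full pipeline, H itself would be estimated robustly (e.g. with RANSAC over matched corners) rather than assumed.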
The flow of baseline estimation using a single omnidirectional camera (TELKOMNIKA JOURNAL)
The baseline is the distance between two cameras; it cannot be obtained directly from a single camera. The baseline is one of the important parameters for finding the depth of objects in stereo image triangulation. The flow of the baseline is produced by moving the camera along the horizontal axis from its original location. Using baseline estimation, the depth of an object can be determined using only an omnidirectional camera. This research focuses on determining the flow of the baseline before calculating the disparity map. To estimate the flow and track the object, three and four points on the surface of an object are taken from two previously chosen panoramic images. By moving the camera horizontally, the tracks of these points are obtained; the resulting tracks are visually similar. Each track represents the coordinates of one tracking point. Two of the four tracks have a graphical representation close to a second-order polynomial.
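The depth relation this abstract relies on is the standard stereo triangulation formula Z = f·B/d. A generic sketch (the symbols and example numbers are not taken from the paper):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# e.g. focal length 700 px, baseline 0.10 m, disparity 14 px -> depth 5.0 m
```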
A Detailed Analysis on Feature Extraction Techniques of Panoramic Image Stitching (IJEACS)
Image stitching is a technique for obtaining a high-resolution panoramic image. Distinct images captured from different views and angles are combined to produce a single panorama. Image stitching is an active research area in computer graphics, photography, and computer vision. Obtaining a stitched image requires knowledge of the geometric relations among multiple image coordinate systems [1]. First, stitching is performed based on feature keypoint matches; the final image with a seam is then blended using an image blending technique. This paper therefore addresses several distinct techniques useful for resolving the issues of image stitching, including invariant features such as the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF), and corner techniques such as the Harris corner detector.
Conventional non-vision navigation systems relying purely on the Global Positioning System (GPS) or inertial sensors can provide the 3D position or orientation of the user. However, GPS is often unavailable in forested regions and cannot be used indoors. Visual odometry provides an independent method to accurately estimate the position and orientation of the user or system from images captured while moving. Vision-based systems also provide information about the observed scene (e.g. images, 3D locations of landmarks, detection of scene objects). In this project, a set of techniques is used for accurate pose and position estimation of a moving vehicle for autonomous navigation, using images obtained from two cameras mounted at different locations on top of the vehicle and viewing the same area; this configuration is referred to as stereo vision. Stereo vision enables the 3D reconstruction of the environment required for pose and position estimation. First, a set of images is captured. The Harris corner detector automatically extracts feature points from the images, and feature matching is performed using correlation-based matching. Triangulation is applied to the matched feature points to find their 3D coordinates. Next, a new set of images is captured and the same procedure is repeated. Finally, using the 3D feature points obtained from both image sets, the pose and position of the moving vehicle are estimated with the QUEST algorithm.
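The correlation-based matching step mentioned above can be sketched with normalized cross-correlation over a brute-force window search. A minimal illustration (window sizes and data are invented, and real systems restrict the search region):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_patch(template, image):
    """Exhaustively slide the template over the image; return the best (x, y)."""
    th, tw = template.shape
    best, best_xy = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = ncc(template, image[y:y+th, x:x+tw])
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy, best

# Synthetic check: cut a patch out of a random image and find it again
rng = np.random.default_rng(0)
image = rng.random((20, 20))
template = image[5:10, 7:12].copy()
best_xy, score = match_patch(template, image)
```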
Matching algorithm performance analysis for autocalibration method of stereo ... (TELKOMNIKA JOURNAL)
Stereo vision is one of the more interesting research topics in computer vision. Two cameras are used to generate a disparity map, from which depth is estimated. Camera calibration is the most important step in stereo vision: it produces the intrinsic parameters of each camera needed for a good disparity map. In general, calibration is done manually with a chessboard pattern, but this is an exhausting task. Self-calibration is required to overcome this problem, and it depends on a robust matching algorithm to find key features between images for reference. The purpose of this paper is to analyze the performance of three matching algorithms for the autocalibration process: SIFT, SURF, and ORB. The results show that SIFT performs better than the other methods.
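The brute-force matching underlying such comparisons can be sketched for ORB-style binary descriptors: Hamming distance plus a Lowe-style ratio test. This is a generic illustration (packed 256-bit descriptors and the 0.8 ratio are assumptions, not the paper's settings):

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two packed uint8 descriptors."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def brute_force_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches that pass the nearest/second-nearest ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((hamming(d, e), j) for j, e in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Deterministic toy descriptors: db is a permutation of da's rows
da = np.arange(128, dtype=np.uint8).reshape(4, 32)
db = da[[2, 0, 3, 1]]
matches = brute_force_match(da, db)
```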
An Enhanced Computer Vision Based Hand Movement Capturing System with Stereo ... (CSCJournals)
This framework is a hand movement capturing method that operates at three different depth levels. The algorithm can capture and identify when the hand moves up, down, right, and left, generating four signals from these captured movements. Moreover, when the movements are performed at 15 cm-75 cm, 75 cm-100 cm, or 100 cm-200 cm from the camera (the three depth levels), twelve different signals can be generated. These signals can be used for applications such as game control. The existing method uses an object-area-based approach for depth analysis; the results show that the proposed work has higher accuracy than the existing method when tested for depth analysis.
A study on measuring the distance of an object from an observer using a stereo camera has been carried out. The stages involved include camera calibration, image rectification, disparity calculation, and three-dimensional reconstruction. The distance of an object is determined using the central point of the object within a segmented bounding box. This distance is measured with the Euclidean distance method to find the shortest distance from the center of the bounding box to both cameras. Results show that objects located 3 to 6 meters away are measured properly, with an average error of 4%.
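The bounding-box-center measurement described above can be sketched as: read the disparity at the box center, triangulate the 3-D point, and take its Euclidean distance to the camera. All names and numbers below are illustrative, not the study's calibration:

```python
import numpy as np

def object_distance(bbox, disparity_map, f_px, baseline_m, cx, cy):
    """Euclidean distance from the camera to the 3-D point at the bbox centre."""
    x0, y0, x1, y1 = bbox
    u, v = (x0 + x1) // 2, (y0 + y1) // 2      # centre of the bounding box
    d = disparity_map[v, u]
    Z = f_px * baseline_m / d                  # depth from disparity
    X = (u - cx) * Z / f_px                    # back-project to 3-D
    Y = (v - cy) * Z / f_px
    return float(np.sqrt(X * X + Y * Y + Z * Z))

# Toy example: constant 14 px disparity, box centred at the principal point
disp_map = np.full((100, 100), 14.0)
dist = object_distance((40, 40, 60, 60), disp_map, 700.0, 0.10, 50.0, 50.0)
```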
The aim of this paper is to present the essential elements of the electro-optical imaging system (EOIS) for space applications and how these elements affect its function. After designing a spacecraft for low-orbit daytime missions, the design of an electro-optical imaging system becomes an important part of the satellite, since the satellite must be able to take images of the regions of interest. An example of an electro-optical satellite imaging system is presented in this paper, where some restrictions have to be considered during the design process. Based on optics principles and ray-tracing techniques, the dimensions of the lenses and the CCD (Charge-Coupled Device) detector are changed to match the physical satellite requirements. Many experiments were done in the physics lab to prove that resizing the electro-optical elements of the imaging system does not affect the imaging mission configuration. The procedures used to measure the field of view and ground resolution are discussed, and examples of satellite images are shown to illustrate the ground resolution effects.
Edge detection is one of the most frequent processes in digital image processing, and one of its uses is detecting road damage from crack paths, which can be checked using the Canny algorithm. This paper proposes a mobile application to detect cracks in the road, with a customized threshold function to produce useful and accurate edge detection. The experimental results show that using the custom threshold function in the Canny algorithm detects road damage more effectively.
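The double-threshold idea at the heart of Canny can be sketched in a few lines: gradient magnitude, a high/low threshold pair, and hysteresis that promotes weak edges touching strong ones. This simplified sketch omits Gaussian smoothing and non-maximum suppression, and the thresholds are illustrative:

```python
import numpy as np

def edges_double_threshold(img, lo, hi):
    """Gradient-magnitude edge map with double threshold and hysteresis."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    strong = mag >= hi
    weak = (mag >= lo) & ~strong
    # Hysteresis: promote weak pixels 8-connected to strong ones, to a fixpoint
    changed = True
    while changed:
        padded = np.pad(strong, 1)
        neigh = np.zeros_like(strong)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                neigh |= padded[1+dy:1+dy+strong.shape[0],
                                1+dx:1+dx+strong.shape[1]]
        promote = weak & neigh
        changed = bool(promote.any())
        strong |= promote
        weak &= ~promote
    return strong

# A vertical step edge should be detected along the step, nowhere else
img = np.zeros((10, 10))
img[:, 5:] = 10.0
edges = edges_double_threshold(img, 1.0, 4.0)
```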
The term "full frame" is used by users of digital single-lens reflex cameras (DSLRs) as shorthand for an image sensor format that is the same size as 35 mm format film (36 mm × 24 mm).
Panoramic imagery is created either by digitally stitching together multiple images taken from the same position (left/right, up/down) or by rotating a camera that has conventional optics and an area or line sensor.
The aim of the study was to investigate the relationship between 2D and 3D gray-scale pixels in computed tomography (CT) image reconstruction. Reconstructing images in 3D space from projection data is a challenging research problem; the image is normally reconstructed from the 2D projection data of a CT scan. In this descriptive study, a synthetic 3D Shepp-Logan phantom was used to simulate the actual projection data from a CT scanner, and real projection data of a human abdomen was also included. Additionally, a Graphical User Interface (GUI) for the application was designed using the Matlab Graphical User Interface Development Environment (GUIDE). The application successfully reconstructed 2D and 3D images in their respective spaces. The CT image reconstruction in 3D space was analyzed alongside 2D space in order to show their relationships and shared properties.
An Accurate Scheme for Distance Measurement using an Ordinary Webcam (Yayah Zakaria)
Nowadays, image processing has become one of the most widely used computer-aided sciences. Two major branches of this field are image enhancement and machine vision. Machine vision has many applications in the robotics and defense industries, and detecting the distance of objects is an active research topic in both, with many projects devoted to it each year. In this paper, an accurate algorithm is presented for measuring the distance of objects from a camera. In this method, a laser transmitter is used alongside a regular webcam: the laser light is projected onto the desired object, and the distance of the object is then calculated using image processing together with mathematical and geometric relations. The performance of the proposed algorithm was evaluated in MATLAB; the accuracy of distance detection is up to 99.62%. The results also show that the presented algorithm makes obstacle distance measurement more reliable. Finally, the proposed algorithm was compared with other methods from the literature.
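Laser-plus-webcam rangefinders of this kind commonly reduce to the geometry D = h / tan(θ), where h is the laser-to-camera offset and θ is recovered from the laser dot's pixel offset via a calibrated gain and offset. A sketch with invented parameter names (not the paper's calibration):

```python
import math

def laser_distance(h_m, pixels_from_center, rad_per_pixel, radian_offset):
    """Laser triangulation: theta = pixels * gain + offset, D = h / tan(theta)."""
    theta = pixels_from_center * rad_per_pixel + radian_offset
    return h_m / math.tan(theta)

# With h = 5 cm and a dot offset corresponding to theta = atan(0.05),
# the target should be exactly 1 m away.
dist = laser_distance(0.05, math.atan(0.05) / 0.001, 0.001, 0.0)
```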
An Unsupervised Change Detection in Satellite Images Using MRFFCM Clustering (IJCATR)
This paper presents a new approach for change detection in synthetic aperture radar (SAR) images by incorporating a Markov random field (MRF) within the framework of fuzzy c-means (FCM). The objective is to partition the difference image, generated from multitemporal satellite images, into changed and unchanged regions. The difference image is produced by fusing log-ratio and mean-ratio images, so its quality depends on the image fusion technique; in the present work, we propose a fusion method based on the stationary wavelet transform. The difference image is then processed to discriminate changed from unchanged regions using fuzzy clustering. The analysis exploits the MRF's interpixel class dependency in the spatial domain to improve the accuracy of the final change-detection map. Experimental results on real SAR images demonstrate that the change detection results obtained by MRFFCM exhibit less error than previous approaches. The quality of the proposed fusion algorithm is verified with well-known image fusion measures and the percentage of correct classifications.
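The log-ratio difference image step can be sketched in one line; this minimal version omits the mean-ratio image and the wavelet fusion the paper adds on top:

```python
import numpy as np

def log_ratio_di(img1, img2, eps=1e-6):
    """Log-ratio difference image for multitemporal SAR change detection.
    eps guards against division by zero / log of zero."""
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

# Toy check: one changed pixel (intensity e vs 1) yields DI = 1 there, 0 elsewhere
a = np.ones((4, 4))
b = a.copy()
b[1, 2] = np.e
di = log_ratio_di(a, b)
```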
PROJECT DESCRIPTION
The main objective of this project is to develop a device for wireless power transfer. The concept of wireless power transfer was first realized by Nikola Tesla. Wireless power transfer can bring a remarkable change to electrical engineering by eliminating conventional copper cables and current-carrying wires.
Based on this concept, the project transfers power over a small range. It can be used to charge batteries that cannot be physically connected by wires, such as those in pacemakers (battery-powered electronic devices implanted in the body to regulate the heartbeat).
Otherwise, the patient must be operated on every year to replace the battery; this project is designed to charge such a rechargeable battery wirelessly. Since charging the battery itself cannot easily be demonstrated, a DC fan that runs on the wirelessly transferred power is provided instead.
The project is built around an electronic circuit that converts 230 V, 50 Hz AC to high-frequency 12 V AC. The output is fed to a tuned coil forming the primary of an air-core transformer, and the secondary coil develops a high-frequency 12 V output.
Power is thus transferred from the primary (transmitter) to the secondary (receiver), which is separated by a considerable distance (say 3 cm): the primary transmits and the secondary receives the power to run the load.
Moreover, this technique can be used in a number of applications, such as wirelessly charging a mobile phone, iPod, laptop battery, or propeller clock. Such charging also carries a far lower risk of electric shock, since the link is galvanically isolated.
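The "tuned coil" above refers to LC resonance: the primary and secondary are driven at the resonant frequency of their coil/capacitor pair. As a hedged aside, that frequency follows the standard formula (the component values below are invented examples, not this project's parts):

```python
import math

def resonant_frequency(L_h, C_f):
    """LC resonance: f0 = 1 / (2*pi*sqrt(L*C)), with L in henries, C in farads."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_h * C_f))

# e.g. a 1 mH coil with a 1 nF capacitor resonates near 159 kHz
f0 = resonant_frequency(1e-3, 1e-9)
```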
Similar to Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Plane To Plane Homography, Image Segmentation, Corner and Edge Detection Techniques
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Counts (ijma)
This paper deals with leader-follower formations of non-holonomic mobile robots, introducing a formation control strategy based on pixel counts using a commercial-grade electro-optic camera. Localization of the leader, for motions along the line of sight as well as obliquely inclined directions, is based on the pixel variation of the images with reference to two arbitrarily designated positions in the image frames. From an established relationship between the displacement of the camera along the viewing direction and the difference in pixel counts between reference points in the images, the range and angle between the follower camera and the leader are estimated. The Inverse Perspective Transform is used to account for the non-linear relationship between the height of a vehicle in a forward-facing image and its distance from the camera. The formulation is validated with experiments.
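The pixel-count/range relationship used above follows from similar triangles under a pinhole model: the image height of the leader shrinks inversely with range. A generic sketch, not the paper's calibrated model (the symbols and numbers are illustrative):

```python
def range_from_pixel_height(f_px, real_height_m, pixel_height):
    """Similar triangles: Z = f * H / h, so pixel height h is inversely
    proportional to range Z for an object of known real height H."""
    return f_px * real_height_m / pixel_height

# e.g. a 1.5 m tall vehicle spanning 105 px with a 700 px focal length -> 10 m
z = range_from_pixel_height(700.0, 1.5, 105.0)
```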
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA... (csandit)
In today's technological life, everyone is familiar with the importance of security measures, and researchers have made many attempts in this area; flying-robot technology is one of them. A well-known use of flying robots is in security and surveillance, which makes the device extremely practical, not only for its unmanned movement but also for its unique manoeuvrability during flight over arbitrary areas. This research discusses the automatic landing of a flying robot. The system is based on frequent interrupts sent from the main microcontroller to the camera module to capture images; these images are analysed by an image processing system based on edge detection, after which the system can decide whether or not to land on the ground. Experimentally, this method shows good performance in terms of precision.
Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non-Ideal Lighting Conditions (CSCJournals)
The use of visual information in real-time applications such as robotic picking, navigation, and obstacle avoidance is widespread across many sectors, enabling robots to interact with their environment. Robotics requires computationally simple, easy-to-implement stereo vision algorithms that provide reliable and accurate results under real-time constraints. Stereo vision is an inexpensive, passive sensing technique for inferring the three-dimensional position of objects from two or more simultaneous views of a scene, and it does not interfere with other sensing devices when multiple robots share the same environment. Stereo correspondence aims at finding matching points in the stereo image pair, based on the Lambertian criterion, to obtain disparity. The correspondence algorithm provides high-resolution disparity maps of the scene by comparing two views of the scene under study. Using the principle of triangulation and the camera parameters, depth information can be extracted from this disparity. Since the focus is on real-time application, only local stereo correspondence algorithms are considered. A comparative study based on error and computational cost is made between two area-based algorithms: the Sum of Absolute Differences algorithm, which is computationally inexpensive and suited to ideal lighting conditions, and a more accurate adaptive binary support window algorithm that can handle non-ideal lighting conditions. To simplify the correspondence search, rectified stereo image pairs are used as inputs.
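The Sum of Absolute Differences algorithm evaluated in this study can be sketched as a brute-force block matcher on a rectified pair (the window and search range are illustrative, and real implementations vectorise this inner loop):

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Per-pixel disparity on a rectified pair by minimising the Sum of
    Absolute Differences over a square support window."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(ref - right[y-half:y+half+1,
                                        x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic check: a right image shifted by 4 px yields disparity 4
rng = np.random.default_rng(0)
right = rng.random((20, 40))
left = np.roll(right, 4, axis=1)
disp = sad_disparity(left, right, block=5, max_disp=8)
```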
The Technology Research of Camera Calibration Based On LabVIEW (IJRES Journal)
Camera calibration is the most important part of machine vision detection and localization; the accuracy of the calibration directly determines the processing accuracy of a machine vision system. In this paper, we use LabVIEW and MATLAB to calibrate the internal and external parameters of the camera. A dot calibration board is used: the circle edges are detected with the Canny operator, then, with a circle-fitting method based on sub-pixel edge extraction, the image coordinates of the dots are extracted. The present method reduces the difficulty of camera calibration and shortens the software development cycle; most importantly, it has a high calibration accuracy that can meet actual industrial detection requirements. The experimental results show that the method is feasible.
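The sub-pixel dot-centre extraction step can be sketched with an intensity-weighted centroid, a simplification of the Canny-plus-circle-fitting pipeline the paper uses (the function name and test image are illustrative):

```python
import numpy as np

def dot_centroid(gray):
    """Sub-pixel centroid of a dark dot on a light board: invert the image so
    the dot carries the weight, then take the intensity-weighted mean position."""
    w = gray.max() - gray.astype(float)
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# Two adjacent dark pixels -> centre lands halfway between them (x = 5.5)
img = np.full((11, 11), 255, dtype=np.uint8)
img[5, 5] = 0
img[5, 6] = 0
cx, cy = dot_centroid(img)
```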
Distance Estimation to Image Objects Using Adapted Scale (theijes)
Distance measurement is part of various robotic applications, and many methods exist for this purpose. In this paper, we introduce a new method to measure the distance from a digital camera to an arbitrary object by using its pose (the X,Y pixel coordinates and the angle of the camera). The method uses a precomputed table that stores the relation between an object's pose and its distance to the camera. The process was designed for a robot that is part of a robotic team participating in the RoboCup KSL competition.
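The pose-to-distance lookup described above can be sketched with a small calibration table and linear interpolation; the table values here are invented for illustration, not the paper's measurements:

```python
import numpy as np

# Calibration table: pixel y-coordinate of the object's base -> measured distance (m).
# Objects lower in the image (larger y) are closer to the camera.
pix_y = np.array([120.0, 160.0, 200.0, 240.0, 280.0])
dist_m = np.array([6.0, 4.0, 3.0, 2.4, 2.0])

def distance_from_pixel(y):
    """Linearly interpolate the calibration table at pixel row y."""
    return float(np.interp(y, pix_y, dist_m))

d = distance_from_pixel(180.0)   # halfway between the 4.0 m and 3.0 m entries
```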
An algorithm to quantify swelling by reconstructing a 3D model of the face from stereo images is presented. We analyzed the primary problems in computational stereo, which include correspondence and depth calculation. Work was carried out to determine suitable methods for depth estimation and for standardizing volume estimations. Finally, we designed software for reconstructing 3D images from 2D stereo images, built on Matlab and Visual C++. Utilizing techniques from multi-view geometry, a 3D model of the face was constructed and refined. An explicit analysis of stereo disparity calculation methods was carried out, and filtering was applied to the disparity estimation to increase the reliability of the disparity map. Minimizing variability in position by using more precise positioning techniques and resources will increase the accuracy of this technique and is a focus for future work.
The Gabor filter is a powerful way to enhance biometric images such as fingerprint images in order to extract correct features from them. Gabor filters are also used to extract features directly, as in iris images, and are sometimes used for texture analysis. In fingerprint images, the even-symmetric Gabor filter, a contextual (multi-resolution) filter, is used to enhance the fingerprint image by filling small gaps (a low-pass effect) along the ridge direction (black regions) and by increasing the discrimination between ridges and valleys (black and white regions) in the direction orthogonal to the ridge. The proposed method applies the Gabor filter to fingerprint images after translating the fingerprint image into a binary image, following some simple enhancement steps, to partially overcome the time-consumption problem of the Gabor filter.
AUTOMATED MANAGEMENT OF POTHOLE RELATED DISASTERS USING IMAGE PROCESSING AND ...ijcsit
Potholes, though they may seem inconsequential, can cause accidents resulting in loss of human life. In this paper, we present an automated system to efficiently manage the potholes in a ward by deploying geotagging and image processing techniques, which overcomes the drawbacks associated with the existing survey-oriented systems. Image processing is used for identification of target pothole regions in the 2D images using edge detection and morphological image processing operations. A method is developed to accurately estimate the dimensions of the potholes from their images, analyze their area and depth, and estimate the quantity of filling material required, thereby enabling pothole attendance on a priority basis. This will further enable government officials to have a fully automated system for effectively managing pothole-related disasters.
This paper presents an improved edge detection algorithm for facial and remotely sensed images using vector order statistics. The developed algorithm processes coloured images directly, without their being converted to grey scale. A number of the existing algorithms convert coloured images into grey scale before detecting edges, but this process leads to imprecise recognition of edges, producing false and broken edges in the output edge map. Facial and remotely sensed images consist of curved edge lines which have to be detected continuously to prevent broken edges. In order to deal with this, a collection-of-pixels approach is introduced with a view to minimizing the false and broken edges that exist in the generated output edge map of facial and remotely sensed images.
Alternative Method for Determining the Elastic Modulus of ConcreteIJERA Editor
This paper presents the use of the technique of digital image correlation for obtaining the elasticity modulus of
concrete. The proposed system uses a USB microscope that captures images at a rate of five frames per second.
The stored data are correlated with the applied loads, and a stress-strain curve is generated to determine the
concrete compressive modulus of elasticity. Two different concretes were produced and tested using the
proposed system. The results were compared with the results obtained using a traditional strain gauge. A difference of about 4% was observed between the two methods; in the case of the DIC results this difference depends on several parameters, such as the focal length and the video capture resolution, indicating that the DIC technique can be used to obtain mechanical properties of concrete.
Stereo matching based on absolute differences for multiple objects detectionTELKOMNIKA JOURNAL
This article presents a new algorithm for object detection using a stereo camera system. The main difficulty in obtaining accurate object detection with a stereo camera is the imprecision of the matching process between two scenes with the same viewpoint. Hence, this article aims to reduce incorrect pixel matching with four stages. The new algorithm is a continuous pipeline of matching cost computation, aggregation, optimization and filtering. The first stage, matching cost computation, acquires a preliminary result using an absolute differences method. The second stage, the aggregation step, uses a guided filter with a fixed support window size. After that, the optimization stage uses a winner-takes-all (WTA) approach, which selects the smallest matching difference value and normalizes it to the disparity level. The last stage in the framework uses a bilateral filter, which effectively further decreases the error in the disparity map containing the object detection and location information. The proposed work produces low errors (i.e., 12.11% and 14.01% nonocc and all errors) on the KITTI dataset, performs much better than the pipeline before the proposed framework, and is competitive with some newly available methods.
Performance Evaluation of Image Edge Detection Techniques CSCJournals
The success of an image recognition procedure is related to the quality of the edges marked. The
aim of this research is to investigate and evaluate edge detection techniques when applied to
noisy images at different scales. Sobel, Prewitt, and Canny edge detection algorithms are
evaluated using artificially generated images and comparison criteria: edge quality (EQ) and map
quality (MQ). The results demonstrated that the use of these criteria can be utilized as an aid for
further analysis and arbitration to find the best edge detector for a given image.
An Efficient Algorithm for Edge Detection of Corroded SurfaceIJERA Editor
The inspection process in industrial applications plays a vital role, as it directly prevents industrial outages. The inspection, especially in the case of corroded surfaces, therefore has to be fast, precise and accurate. Visual inspection is very liable to mistakes because of numerous factors; automatic inspection systems remove subjective aspects and can provide fast and accurate inspection. Since inspection of corroded surfaces is a very important concern, corroded surfaces need to be detected reliably. A new algorithm is developed, with certain changes in the mask and threshold selection, to detect corroded surfaces. The paper shows how the weak edges of input images can be amended and false edges discarded, to overcome the problems of traditional techniques in this field. The proposed operator is also compared with two commonly used edge detection algorithms, Canny and Sobel.
Using Generic Image Processing Operations to Detect a Calibration GridJan Wedekind
Camera calibration is an important problem in 3D computer vision. The problem of determining the camera parameters has been studied extensively. However, the algorithms for determining the required correspondences are either semi-automatic (i.e. they require user interaction) or they involve difficult-to-implement custom algorithms.
We present a robust algorithm for detecting the corners of a calibration grid and assigning the correct correspondences for calibration. The solution is based on generic image processing operations so that it can be implemented quickly. The algorithm is limited to distortion-free cameras, but it could potentially be extended to deal with camera distortion as well. We also present a corner detector based on steerable filters, which is particularly suited to the problem of detecting the corners of a calibration grid.
Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Plane To Plane Homography, Image Segmentation, Corner and Edge Detection Techniques
IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE)
e-ISSN: 2278-1676, p-ISSN: 2320-3331, Volume 6, Issue 6 (Jul. - Aug. 2013), PP 37-44
www.iosrjournals.org
Enhanced Algorithm for Obstacle Detection and Avoidance Using
a Hybrid of Plane To Plane Homography, Image Segmentation,
Corner and Edge Detection Techniques
N. A. Ofodile¹, S. M. Sani² (PhD, MSc(Eng), BSc(Hons)Eng)
¹ NAF Research and Development Centre, Air Force Institute of Technology, Nigeria
² Department of Electrical and Electronics Engineering, Nigerian Defence Academy, Nigeria
_____________________________________________________________________________________
Abstract: This paper presents the implementation as well as simulated results of the enhanced algorithm for
obstacle detection and avoidance using a hybrid of plane to plane homography, image segmentation, corner and
edge detection techniques. The key advantages of this algorithm over similar ones are:
(i) elimination of false positives obtained by the image segmentation technique as a result of which obstacle
detection becomes more efficient,
(ii) reduction in the presence of unreliable corners and broken edge lines in high resolution images which may
result in poor homography computation and image segmentation respectively,
(iii) elimination of lack of depth perception hence the system provides and evaluates depth and obstacle height
properly without planar assumptions,
(iv) significant reduction in processing power.
Keywords: obstacle detection and avoidance, plane to plane homography, image segmentation, corner
detection, edge detection.
I. Introduction
Obstacle avoidance is a fundamental requirement for autonomous mobile robots and vehicles. Due to
human error, the obstacles may not be detected on time or the divert signal meant to change a vehicle’s direction
may be interrupted deliberately by jamming and the vehicle could be destroyed as a result. The aim of this
system is to develop an optimized obstacle detection and avoidance system algorithm for use onboard an
unmanned ground vehicle. The system makes use of a camera (image acquisition device) placed on board a
UGV connected to an onboard processing unit. The processing unit will perform the functions of obstacle
detection, avoidance and motor control of the UGV. Specifically, the system will focus on the design and
implementation of the obstacle detection and avoidance based on the processed images obtained from the
cameras. The sensors of obstacle detection systems are built on different technologies. These technologies are
[1]:
i. infrared sensors,
ii. common Radio Detection and Ranging (radar) sensors,
iii. microwave-based radar,
iv. digital cameras,
v. laser detection and ranging (ladar).
Apart from digital cameras and ladar, the other technologies are based on electromagnetic radiations or radio
frequency signals which have the following impairments;
i. reduction in signal quality due to climatic and weather conditions,
ii. reduction in signal quality due to scattering nature of electromagnetic signals as they hit certain material
surfaces.
Rostislav Goroshin [2] developed obstacle detection using a monocular camera, focused on a single algorithm. The algorithm processes video data captured by a single monocular camera mounted on the UGV, under the assumption that the UGV moves on a locally planar surface representing the ground plane. However, the monocular camera could not provide and evaluate depth and obstacle height properly, due to the lack of depth perception that is common with planar assumptions, and multicolour images could not be properly segmented, since the original algorithm focused on segmentation techniques for images with few colours (single, dual or tri-colour).
Syedur Rahman [3] worked on the development of obstacle detection for mobile robots using computer vision. The system used multi-view relations from epipolar geometry and edge detection to find point correspondences on edges between the images, and then used planar homography to compute the heights along the contours, thereby performing obstacle detection. However, several optimizations need to be made to enhance
the reliability of the method. For example, if an obstacle and the ground get segmented together, epipolar geometry and contour height estimates could be used to detect where the ground ends and where the object starts; a horizontal line can be drawn separating the obstacle and the ground, marking them with their appropriate heights. Also, images with higher resolution led to the presence of more unreliable corners and broken edge lines, which may make matters worse during the homography computation.
The processes involved in the design of this system include:
i. modeling the video-based obstacle detection and avoidance system using the SIMULINK/MATLAB modeling software, an easy-to-use tool for simulation and parameter optimization;
ii. designing the video and image processing algorithm for the video stream, and applying the designed obstacle detection algorithm to the video stream;
iii. implementing the obstacle detection and avoidance system.
II. System Implementation
a. PLANE TO PLANE HOMOGRAPHY
Plane to plane homography can be described as a relationship between two planes, such that any point on one plane corresponds to one point on the other plane. A homography is simply an invertible transformation from a projective space that maps straight lines to straight lines. In figure 1.1a, an image of a scene is shown. It contains two points x1 and x2 that will be used to show how homography can be used to obtain other point coordinate values on the same image or on another image of the same scene.
Figure 1.1a: an image of a scene showing x1 and x2 image points
In order to get the width of the second plaque from the left, two homography matrices are computed from two points with coordinates X1 and X2 on the scene and coordinates x1 and x2 on the image. X1 and X2 were measured from the actual scene using a metre rule, with the lower right corner as reference, while x1 and x2 were measured from the image in figure 1.1a. The measured values for X1, X2, x1 and x2 are given below. All dimensions used in this computation are in centimetres (cm).
X1 = (67.5, 17.65, 1)    (1.1)
X2 = (23.3, 21.9, 1)    (1.2)
x1 = (13, 9.5, 1)    (1.3)
x2 = (8.8, 5.7, 1)    (1.4)
The homography matrices are computed using a MATLAB algorithm containing the formula X = xH, where each unknown 3 × 3 matrix H = [h1 h2 h3; h4 h5 h6; h7 h8 h9] is solved from its point pair (the row-vector convention X = xH is the one consistent with the numerical results below):
[67.5 17.65 1] = [13 9.5 1] H1    (1.5)
[23.3 21.9 1] = [8.8 5.7 1] H2    (1.6)
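The paper computes each homography in MATLAB from measured point pairs. As an illustration of how such a matrix is commonly estimated in general, the following is a minimal NumPy sketch of the direct linear transform (DLT); the function name `dlt_homography` and its use of the column-vector convention X ∝ Hx are this sketch's own assumptions, not the paper's method (transpose the result for the row-vector convention used in the worked example above).

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography H with X ∝ H x (column-vector
    convention) from four or more point correspondences, via the
    direct linear transform: stack two linear equations per pair
    and take the null vector of the system with SVD."""
    A = []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # right singular vector of smallest singular value
    return H / H[2, 2]         # fix the arbitrary projective scale
```

With exact, noise-free correspondences this recovers the generating homography up to floating-point precision; with noisy points it returns the least-squares estimate in the algebraic sense.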
Therefore the homography matrices are:
H1 = [2.5 1 0; 3.2 0.3 0; 1.6 1.8 1]    (1.7)
H2 = [1.5 1.9 0; 1.2 0.7 0; 3.3 1.2 1]    (1.8)
Hence, if the width of the frame on the image in fig 1.1b is 2.5 cm and its length is 3 cm, such that
x3 = (2.5, 3, 1)    (1.9)
Figure 1.1b: another image of the scene showing x3 image point
then the estimated width and length of the plaque, computed using the homography matrix H2, will result in
X3 = x3 H2 = [2.5 3 1] [1.5 1.9 0; 1.2 0.7 0; 3.3 1.2 1]    (1.10)
X3 = (10.65, 8.05, 1)    (1.11)
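The worked example can be checked numerically. The following minimal NumPy sketch (the paper's own computations were done in MATLAB) reproduces X3 from H2 and x3; note that the row-vector convention X = xH is the one that matches the printed numbers.

```python
import numpy as np

# H2 and x3 taken from the worked example in the text
H2 = np.array([[1.5, 1.9, 0.0],
               [1.2, 0.7, 0.0],
               [3.3, 1.2, 1.0]])
x3 = np.array([2.5, 3.0, 1.0])   # (width, length, 1) in homogeneous form

X3 = x3 @ H2                     # row-vector convention X = xH
# X3 ≈ (10.65, 8.05, 1), matching equation (1.11)
```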
The actual width and length of the plaque, when measured on the actual scene using a metre rule, are 10.9 cm and 7.77 cm respectively. Plane to plane homography depends completely on scene structure to determine the relevant information, but this project will combine the camera's internal parameters and relative pose using the simple lens formula given as [4]:
1/(focal length) = 1/(object height) + 1/(image height)
to complement the homography computations where the homography may not be available.
b. Image Segmentation.
Image segmentation can also be called warping. While creating the warped image, the warped coordinates X of each pixel are found using X = xH, given the coordinates x of the pixel on the first image and the homography matrix H [5]. It is similar to the computation above. However, this means that there may be pixels on the warped image which are not warped from any pixel in the first image, and there may be pixels that are warped from more than one pixel of the first image. These problems are solved using interpolation. Blank pixels are simply filled by averaging the intensities of their non-blank neighbours. Pixels that are the warped positions of more than one pixel on the first image take the average intensity of all the corresponding pixels from the first image.
Assuming the plane to which the homography corresponds to is the ground, the warped image and second image
should be identical except for parts of the scene that are above the ground plane (i.e. obstacles). The difference
between intensities of corresponding pixels between the warped image and second image is used to detect
objects or obstacles.
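The forward-warping and blank-filling procedure described above can be sketched as follows. This is a minimal NumPy illustration with nearest-neighbour rounding, not the paper's Simulink implementation; `forward_warp` is an illustrative name, and the row-vector convention follows the worked homography example.

```python
import numpy as np

def forward_warp(img, H):
    """Forward-warp a grayscale image with homography H (row-vector
    convention X = xH). Pixels hit more than once are averaged;
    blank pixels are filled with the mean of their non-blank
    4-neighbours, as described in the text."""
    h, w = img.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            X = np.array([x, y, 1.0]) @ H       # map the pixel
            u, v = X[0] / X[2], X[1] / X[2]     # dehomogenise
            ui, vi = int(round(u)), int(round(v))
            if 0 <= ui < w and 0 <= vi < h:
                acc[vi, ui] += img[y, x]        # accumulate multiple hits
                cnt[vi, ui] += 1
    out = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
    blank = cnt == 0
    for y, x in zip(*np.nonzero(blank)):        # fill blanks from neighbours
        vals = [out[yy, xx] for yy, xx in
                ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= yy < h and 0 <= xx < w and not blank[yy, xx]]
        if vals:
            out[y, x] = np.mean(vals)
    return out
```

The warped image can then be differenced against the second image, as the text describes, to flag off-plane pixels as obstacle candidates.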
c. Canny Edge Detection.
The Canny edge detection algorithm is also known as the optimal edge detector. In his paper titled "A Computational Approach to Edge Detection" [6], John Canny followed a list of criteria to improve on the earlier methods of edge detection. The criteria are:
i. the first and most obvious is low error rate. It is important that edges occurring in images should not be
missed and that there be no responses to non-edges;
ii. the second criterion is that the edge points be well localized. In other words, the distance between the edge
pixels as found by the detector and the actual edge is to be at a minimum;
iii. a third criterion is to have only one response to a single edge. This was implemented because the first two
criteria were not substantial enough to completely eliminate the possibility of multiple responses to an edge.
Based on these criteria, the Canny edge detector first smoothes the image to eliminate noise. It then finds the image gradient to highlight regions with high spatial derivatives. The algorithm then tracks along these regions and suppresses any pixel that is not a local maximum (non-maximum suppression). The gradient array is further reduced by hysteresis, which tracks along the remaining pixels that have not been suppressed. Hysteresis uses two thresholds: if the magnitude is below the low threshold, the pixel is set to zero (made a non-edge); if the magnitude is above the high threshold, it is made an edge; and if the magnitude is between the two thresholds, it is set to zero unless there is a path from this pixel to a pixel with a gradient above the high threshold. The Canny edge detector is used for this work because of its optimal characteristics and the solution it gives to the three criteria stated earlier.
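The two-threshold hysteresis step described above can be sketched as follows. This is a minimal NumPy illustration of the thresholding logic only (not the paper's Simulink pipeline, and not a full Canny detector); `hysteresis` and its arguments are illustrative names, and 8-connectivity is an assumption.

```python
import numpy as np
from collections import deque

def hysteresis(mag, low, high):
    """Two-threshold hysteresis on a gradient-magnitude array:
    pixels >= high are strong edges; pixels in [low, high) are kept
    only if connected (8-neighbourhood) to a strong pixel; the rest
    are suppressed."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))      # flood-fill from strong pixels
    h, w = mag.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w and weak[yy, xx] and not edges[yy, xx]:
                    edges[yy, xx] = True     # weak pixel reachable from a strong one
                    q.append((yy, xx))
    return edges
```

A full Canny implementation would run this after smoothing, gradient computation and non-maximum suppression.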
d. THE HARRIS CORNER DETECTOR.
This was chosen because the Harris corner detector provides variation in intensity of all pixels based on
orientation in the image causing a great reduction in the noise response obtained. It is also well suited for a
video image analysis application (i.e. real-time) using minimum processing requirement.
Here, the local structure matrix is smoothed by a Gaussian window. That is,
C_Harris = w_G(σ) ∗ [ (∂I/∂x)²  (∂I/∂x)(∂I/∂y) ; (∂I/∂x)(∂I/∂y)  (∂I/∂y)² ]    (1.12)
where w_G(σ) is an isotropic Gaussian window with standard deviation σ, I is the image intensity, and ∗ denotes convolution.
A measure of the corner response at each pixel with coordinates (x, y) is then defined by
r = det(C_Harris) − k (trace C_Harris)²    (1.13)
where k is an adjustable constant and C_Harris(x, y) is the 2 by 2 local structure matrix at coordinates (x, y). To prevent too many corner features lumping together closely, a non-maximal suppression process on the corner response image is usually carried out to suppress weak corners around the stronger ones. This is then followed by a thresholding process. Altogether, the Harris corner detector requires three additional parameters to be specified: the constant k, the radius d of the neighbourhood region for suppressing weak corners, and the threshold value t [7]. Equivalently, the response r is positive only when
k ≤ α / (1 + α)²    (1.14)
where α is the ratio of the two eigenvalues of the local structure matrix. A larger value of k corresponds to a less sensitive detector and yields fewer corners; a smaller value of k corresponds to a more sensitive detector and yields more corners.
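The Harris response of equations (1.12)–(1.13) can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's Simulink implementation; the function names, the truncated Gaussian window (radius 2), and k = 0.04 are this sketch's own assumptions.

```python
import numpy as np

def gaussian_smooth(a, sigma=1.0, radius=2):
    """Truncated isotropic Gaussian window w_G(sigma), applied by
    shifting and accumulating (np.roll wraps at borders, which is
    acceptable for this sketch)."""
    ax = np.arange(-radius, radius + 1)
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    g /= g.sum()
    k = np.outer(g, g)
    out = np.zeros_like(a)
    for dy in ax:
        for dx in ax:
            out += k[dy + radius, dx + radius] * np.roll(np.roll(a, dy, 0), dx, 1)
    return out

def harris_response(img, sigma=1.0, k=0.04):
    """r = det(C) - k * trace(C)^2, with C the Gaussian-smoothed
    local structure matrix built from image gradients (eq. 1.12-1.13)."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = gaussian_smooth(Ix * Ix, sigma)
    Syy = gaussian_smooth(Iy * Iy, sigma)
    Sxy = gaussian_smooth(Ix * Iy, sigma)
    return Sxx * Syy - Sxy**2 - k * (Sxx + Syy)**2
```

Non-maximal suppression and thresholding, as described in the text, would then be applied to the response map to obtain discrete corner features.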
The system analysis using the four methods/algorithms mentioned above is done using the Random Sample Consensus (RANSAC) sampling method. The RANSAC algorithm is an iterative technique that generates candidate solutions by using the minimum number of observations (data points) required to estimate the underlying model parameters [8].
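The minimal-sample-and-consensus idea behind RANSAC can be illustrated with the simplest possible model, a 2D line; the paper applies the same idea to its own detection models. This is a generic hedged sketch (function name, iteration count, tolerance and seed are all illustrative), not the paper's implementation.

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Generic RANSAC sketch: repeatedly fit a line to a minimal
    sample (2 points, the minimum needed for a line), count the
    inliers within tol, and keep the model with the largest
    consensus set. Returns (slope, intercept)."""
    rng = random.Random(seed)
    pts = list(points)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)   # minimal sample
        if x1 == x2:
            continue                              # degenerate (vertical) sample
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = sum(abs(y - (m * x + c)) < tol for x, y in pts)
        if inliers > best_inliers:
            best, best_inliers = (m, c), inliers
    return best
```

Because a model built from two inlier points ignores the outliers entirely, gross outliers do not bias the winning model the way they would bias an ordinary least-squares fit.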
III. Results And Analysis
This involves testing the algorithm written for the obstacle detection using two images. The tests carried out here determined:
i. the accuracy of each obstacle detection method used in the design of the obstacle detection system, and the accuracy of the combined obstacle detection system designed,
ii. the effect of certain factors (object and floor textures) on the obstacle detection system developed.
Two images of the same scene are presented as shown in figure 1.2.
Figure 1.2a Image with stack of papers Figure 1.2b image with single paper
One of the images contains a stack of papers, which represents an obstacle, while the other contains single papers placed on the floor, which do not constitute an obstacle. The images were processed using the Video and Image Processing toolbox of Simulink, with each of the obstacle detection methods as well as with the combined obstacle detection system developed.
The results obtained from the effects of the individual obstacle detection methods are outlined below:
i. Corner detection and Edge detection:
The Harris corner detector was used to perform the initial task of locating corners. The results obtained
showed that a large number of corners could be found. Also, even when the ground plane and the object were
very similar in colour, the Harris detector was able to locate the corners. It was observed that the Canny edge
detector detected edges based on the texture of the materials that make up the image. The more detail in the
material texture, the more edges that will be obtained. Figure 1.2a shows the full edge detection image of figure
1.2b.
Figure 1.2a Full edge detection Figure 1.2b Threshold edge detection
The texture of the material representing the ground plane in this image is detailed. The full edge
detection image will not be useful for detecting any obstacles on the image. Therefore a threshold scaling factor
is introduced such that the original image is compressed to have little detail and the resulting edge detection
image as shown in figure 1.2b will leave only edges from objects that are evident on the image.
ii. Plane to Plane homography:
The computation of homography matrix H is entirely dependent on the location of point correspondences
between the images. The homography matrix is obtained at the same time as the estimated heights of corners
and points along edges. If the computation of the H matrix is accurate, then the computed heights using the
algorithm of points relative to the ground on an image would be consistent with their real heights. Figure 1.3a
shows selected points on the image in figure 1.2a.
Figure 1.3a Points on the image
Table 1 in the appendix shows the computed heights and their actual heights for 25 different points on
the image in figure 1.3a. The real heights were measured with a metre rule while the computed heights were
obtained from the computations using the homography matrix.
A graph of actual heights and computed heights for the 25 different points on the image of figure 1.3a
is plotted and shown in figure 1.3b.
Figure 1.3b Actual heights and Computed heights
The black line in the graph in figure 1.3b is the regression line. A regression is a statistical analysis
assessing the association between two variables. It is used to find the relationship between two variables for the
purpose of predicting one variable when given the other. This is expressed as an equation [9]:
Y = a + bX    (1.1)
b = (N(ΣXY) − (ΣX)(ΣY)) / (N(ΣX²) − (ΣX)²)    (1.2)
a = (ΣY − b(ΣX)) / N    (1.3)
where
X and Y are the variables.
b = The slope of the regression line
a = The intercept point of the regression line and the y axis.
N = Number of values or elements
There are two broad types of regression models: linear regression and non-linear (or robust) regression. Linear regression implements a statistical model in which the relationship between the independent variable and the dependent variable is almost linear, while non-linear regression models the conditional expectation of the dependent variable given the independent variables, in the presence of an error term, when the relationship is not linear. The linear regression model is used here because the differences between the independent and dependent variables are approximately equal to the average difference, which shows that the relationship is almost linear. Also, the values are simply numeric.
Table 2 in the appendix contains the analysis for parameters required to obtain the regression equation.
The Slope “b”
b = (N(ΣXY) − (ΣX)(ΣY)) / (N(ΣX²) − (ΣX)²)
b = (26 × 4317.6 − 247.4 × 325) / (26 × 3418.06 − 247.4 × 247.4)    (1.4)
b = 1.151459722    (1.5)
The Intercept “a”
a = (ΣY − b(ΣX)) / N
a = (325 − 1.151459722 × 247.4) / 26    (1.6)
a = 1.543417875    (1.7)
Therefore, the regression equation becomes:
Y = 1.54 + 1.15X    (1.8)
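The arithmetic in equations (1.4)–(1.7) can be reproduced directly from the summary statistics quoted in the text (N, ΣX, ΣY, ΣXY and ΣX²); a minimal Python check follows.

```python
# Summary statistics as given in the text (Table 2 of the appendix).
N = 26
sum_x, sum_y = 247.4, 325.0
sum_xy, sum_x2 = 4317.6, 3418.06

# Least-squares slope and intercept, equations (1.2) and (1.3).
b = (N * sum_xy - sum_x * sum_y) / (N * sum_x2 - sum_x ** 2)
a = (sum_y - b * sum_x) / N
# b ≈ 1.1515 and a ≈ 1.5434, matching (1.5) and (1.7)
```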
The cause of the difference (∆) between the perfect computation line and the regression line represented
in equation 1.8 could not be determined in this project. Further work can be done to establish the root
cause of the difference obtained.
The regression line (black line) is quite close to the perfect computation line (blue line). The
estimated error, or deviation, of the computed heights from the actual heights is about 10% of the actual
heights. This deviation and the regression equation are factored back into the main algorithm to recalculate the
computed heights, and a tolerance of ±10% is applied during the obstacle avoidance phase (minimum avoidance
distance computation). This shows how well the homography of the scene has been computed. Inaccurate point
correspondences on the ground plane of an image, obtained as a result of too much texture on it, may lead to
poor computation of the H matrix; as a result, the estimated heights will not be accurate either.
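The correction step described above can be sketched as a small helper. `corrected_height` is an illustrative name, not from the original program; it assumes the regression of equation 1.8 and the stated ±10% tolerance band:

```python
def corrected_height(computed_x):
    """Map a computed height X (cm) to the regression estimate
    Y = 1.54 + 1.15X (equation 1.8) and return it together with the
    ±10% tolerance band used during minimum avoidance distance computation."""
    y = 1.54 + 1.15 * computed_x
    return y, (0.9 * y, 1.1 * y)

# A computed height of 10 cm maps to an estimated actual height of
# about 13.04 cm, with a tolerance band of roughly 11.7 to 14.3 cm.
est, (lo, hi) = corrected_height(10)
```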
iii. Image segmentation and warping:
The effect of the image segmentation and warping on the two images in figure 1.2 showed that the
obstacle (stack of papers) in figure 1.2a was detected. The singular papers scattered in the image
in figure 1.2b were also detected as an obstacle. This, however, is a false positive detection because the papers
lie on the ground plane and can easily be walked over.
The results obtained from the combined obstacle detection system show that the false
positives produced by image segmentation and warping are corrected by the algorithm: the processed image
containing the calculated corner and edge heights is superimposed onto the homography image obtained from
image segmentation and warping, and the resultant image is passed through the plane-to-plane homography
phase again to eliminate the areas of the image with false obstacles, leaving only the true obstacles. The
algorithm also calculates all point correspondences with great accuracy due to the combined effect of the corner
and edge detection in the homography matrix computation.
IV. Conclusion
This system used the advantages of corner and edge detection to make up for the
shortcomings associated with the homography and with the image segmentation and warping techniques. The
combination made the obstacle detection system robust and optimized.
It is recommended that further work be done on the algorithm of the program to improve detection by
reducing the estimated error, or deviation, of the computed heights from the actual heights. This would remove
the need for the ±10% tolerance during the obstacle avoidance phase (minimum avoidance distance
computation).
References
[1]. F. Kruse: "Multi Sensor System for Obstacle Detection", Department of Computer Science, Umea University, 2001.
http://www8.cs.umu.se/research/ifor/dl/OBSTACLE%20DETECTION-
AVOIDANCE/Multi%20Sensor%20System%20for%20Obstacle%20Detection%20in%20Train%20applications.pdf Retrieved
2012-04-16.
[2]. Rostislav Goroshin: "Obstacle Detection Using a Monocular Camera", School of Electrical and Computer
Engineering, Georgia Institute of Technology, 2008.
http://smartech.gatech.edu/jspui/1/goroshin_rostislav_200808_mast.pdf Retrieved 2012-03-13.
[3]. Syedur Rahman: "The Development of Obstacle Detection for Mobile Robots Using Computer Vision", 8th European Conference on
Speech Communication and Technology, pp. 2197-2200, Geneva, Switzerland, Sept. 1-4, 2003.
http://www.wcl.ee.upatras.gr/ai/papers/potamitis11.pdf.
[5]. R. Hartley and A. Zisserman: "Multiple View Geometry in Computer Vision", Cambridge University Press, 2000.
[6]. S. Teller: "Image Segmentation", Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory,
1996.
[7]. J. Canny: “A Computational Approach to Edge Detection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986
[8]. C.J. Harris and M. Stephens: "A combined corner and edge detector", 4th Alvey Vision Conf., Manchester, 1988.
[9]. Z. Chen: "K-means clustering" and "RANSAC normalised 8 point computation of F" MATLAB codes, Computer Vision
Research, University of York, Department of Computer Science, 2001.
[10]. Ciubotaru-Petrescu, B., Chiciudean, D., Cioarga, R., & Stanescu, D. (2006). Wireless Solutions for Telemetry in Civil Equipment
and Infrastructure Monitoring. 3rd Romanian-Hungarian Joint Symposium on Applied Computational Intelligence (SACI), May 25-
26, 2006. http://www.bmf.hu/conferences/saci2006/Ciubotaru.pdf.
APPENDIX
Table 1: Computed heights and Actual heights measured from points on the figure 1.1a image
Points on the Image    Computed heights (cm)    Actual heights (cm)
Point 1 12 16
Point 2 21 25
Point 3 10.5 15
Point 4 11 17
Point 5 4 6
Point 6 7 10
Point 7 0.5 1
Point 8 9 14
Point 9 7.5 12
Point 10 19 23
Point 11 5.9 9
Point 12 18 22
Point 13 3 3
Point 14 4 7
Point 15 8 13
Point 16 5 8
Point 17 17 20
Point 18 0 0
Point 19 4 4
Point 20 3.5 5
Point 21 21.5 24
Point 22 8 11
Point 23 13 18
Point 24 15 19
Point 25 2 2
Point 26 18 21
Table 2: Analysis for computation of Regression equation
Points on the Image    Computed heights (X)    Actual heights (Y)    X*Y    X²
Point 1 12 16 192 144
Point 2 21 25 525 441
Point 3 10.5 15 157.5 110.25
Point 4 11 17 187 121
Point 5 4 6 24 16
Point 6 7 10 70 49
Point 7 0.5 1 0.5 0.25
Point 8 9 14 126 81
Point 9 7.5 12 90 56.25
Point 10 19 23 437 361
Point 11 5.9 9 53.1 34.81
Point 12 18 22 396 324
Point 13 3 3 9 9
Point 14 4 7 28 16
Point 15 8 13 104 64
Point 16 5 8 40 25
Point 17 17 20 340 289
Point 18 0 0 0 0
Point 19 4 4 16 16
Point 20 3.5 5 17.5 12.25
Point 21 21.5 24 516 462.25
Point 22 8 11 88 64
Point 23 13 18 234 169
Point 24 15 19 285 225
Point 25 2 2 4 4
Point 26 18 21 378 324
TOTAL (N = 26)    247.4    325    4317.6    3418.06
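The totals in the last row of Table 2 can be reproduced directly from the per-point entries; a quick stdlib-Python check of the tabulated sums:

```python
# (X, Y) pairs from Table 2: computed height X and actual height Y per point.
data = [(12, 16), (21, 25), (10.5, 15), (11, 17), (4, 6), (7, 10),
        (0.5, 1), (9, 14), (7.5, 12), (19, 23), (5.9, 9), (18, 22),
        (3, 3), (4, 7), (8, 13), (5, 8), (17, 20), (0, 0), (4, 4),
        (3.5, 5), (21.5, 24), (8, 11), (13, 18), (15, 19), (2, 2), (18, 21)]

sum_x = sum(x for x, _ in data)
sum_y = sum(y for _, y in data)
sum_xy = sum(x * y for x, y in data)
sum_x2 = sum(x * x for x, _ in data)
# ΣX ≈ 247.4, ΣY = 325, ΣXY ≈ 4317.6, ΣX² ≈ 3418.06, matching the TOTAL row.
```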