This paper explains the process of reading object-detection results for objects of a specified color, in this case an orange tennis ball. A Pixy CMUcam5 is connected to an ATmega328-based Arduino Nano; the data from the Arduino Nano are then read back over the USB port and displayed to verify whether an orange object has been detected. Through this process, it is known exactly how many object blocks were detected, along with the X and Y coordinates of each object. Finally, the complexity of the algorithms used in reading the orange-object detection results is analyzed.
This paper proposes a semi-automatic approach to extract roads from high-resolution remote sensing images. The approach uses Canny edge detection to initially segment roads. Adjacent segments are then merged using Full Lambda Schedule. Support vector machine (SVM) classification is performed on the image to extract roads. Morphological operators like dilation and erosion are finally applied to remove noise and improve road detection accuracy. The method is evaluated on WorldView, QuickBird, and UltraCam images and shows high accuracy for extracting different road types.
Learning to Recognize Distance to Stop Signs Using the Virtual World of Grand... - Artur Filipowicz
This paper examines the use of a convolutional neural network and a virtual environment to detect stop signs and estimate distances to them from individual images. To train the network, we develop a method to automatically collect labeled data from Grand Theft Auto 5, a video game. Using this method, we collect a dataset of 1.4 million images with and without stop signs across different environments, weather conditions, and times of day. A convolutional neural network trained and tested on these data can detect 95.5% of the stop signs within 20 meters of the vehicle, with a false positive rate of 5.6% and an average distance error of 1.2 m to 2.4 m on video-game data. We also found that the performance of our approach is limited to a range of about 20 m. The applicability of these results to real-world driving has been tested; it appears promising and must be studied further.
Design and Implementation of Spatial Localization Based on Six-Axis MEMS Sensor - IJRES Journal
This paper studies spatial orientation using a 3-axis MEMS gyroscope and a 3-axis MEMS accelerometer. To reduce the influence of the environment on positioning, a motion model is established from physical principles and combined with a coordinate-transformation method. A new spatial positioning system is designed around an STM32 microcontroller platform and the MPU60x0 chip, which integrates the 3-axis gyroscope and 3-axis accelerometer, with the I2C protocol used to transfer data. The system is highly integrated, with a simple circuit, small size, low power consumption, and easy expansion and maintenance. It can serve as an adjunct to wireless-network-based positioning to improve accuracy, and can also be applied on its own where lower positioning precision is acceptable.
Conventional non-vision navigation systems relying purely on the Global Positioning System (GPS) or inertial sensors can provide the 3D position or orientation of the user. However, GPS is often unavailable in forested regions and cannot be used indoors. Visual odometry provides an independent method to accurately estimate the position and orientation of the user/system from images captured while moving. Vision-based systems also provide information about the scene the user is looking at (e.g., images, 3D locations of landmarks, detection of scene objects). In this project, a set of techniques is used for accurate pose and position estimation of a moving vehicle for autonomous navigation, using images obtained from two cameras mounted at different locations on top of the vehicle and viewing the same area. This configuration is referred to as stereo vision. Stereo vision enables 3D reconstruction of the environment, which is required for pose and position estimation. First, a set of images is captured. The Harris corner detector automatically extracts a set of feature points from the images, and feature matching is then performed using correlation-based matching. Triangulation is applied to the matched feature points to find their 3D coordinates. Next, a new set of images is captured and the same technique is repeated. Finally, using the 3D feature points obtained from the first and second image sets, the pose and position of the moving vehicle are estimated using the QUEST algorithm.
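For rectified stereo cameras, the triangulation step described above reduces to a closed-form depth computation. A minimal sketch in plain Python follows; the focal length, baseline, and principal-point values are illustrative assumptions, not parameters from the project:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature from a rectified stereo pair.

    For rectified cameras, triangulation reduces to Z = f * B / d, where
    f is the focal length in pixels, B the baseline in meters, and d the
    horizontal disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px


def triangulate_point(focal_px, baseline_m, cx, cy, xl, yl, xr):
    """3D coordinates (X, Y, Z) of a feature observed at (xl, yl) in the
    left image and (xr, yl) in the right image (rows aligned by
    rectification); (cx, cy) is the principal point."""
    z = depth_from_disparity(focal_px, baseline_m, xl - xr)
    x = (xl - cx) * z / focal_px
    y = (yl - cy) * z / focal_px
    return (x, y, z)
```

A feature 14 pixels of disparity away, seen with a 700 px focal length and a 12 cm baseline, would lie 6 m from the cameras.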
Abstract - Positioning is a fundamental component of human life, enabling meaningful interpretations of the environment. Without knowledge of position, human beings are like machines, with very limited capabilities to interact with the environment. Even machines in today's world can be made smarter if positioning information is made available to them. Indoor positioning of pedestrians is the broad area considered in this thesis, and a foot-mounted pedestrian tracking device has been studied for this purpose. Systems that utilize foot-mounted inertial navigation have appeared in the literature for more than two decades; however, very few real-time implementations have been possible. The purpose of this thesis is to benchmark and improve the performance of one such implementation.
POSITION ESTIMATION OF AUTONOMOUS UNDERWATER SENSORS USING THE VIRTUAL LONG B... - ijwmn
This document summarizes a study on using the virtual long baseline (VLBL) method for positioning autonomous underwater sensors. The study describes a mathematical model for the VLBL positioning system and experimental setup. In field experiments, an autonomous underwater vehicle measured distances from deployed responder beacons at different locations to estimate their positions. Results showed the VLBL method could accurately position the beacons within 1 meter using a preliminary one-dimensional optimization approach to avoid false minima in position estimation.
This document discusses various machine vision techniques used to estimate the self-position of mobile robots in industries. It describes techniques such as GPS, vision-based localization using cameras and image processing, 2D and 3D map-based localization, and memory-based localization using image autocorrelation. Vision-based techniques analyze camera images to detect landmarks, match detected landmarks to a database, and calculate the robot's position. Map-based localization uses 2D overhead maps captured by laser sensors or 3D models to localize the robot. Memory-based localization generates unique autocorrelation images from camera views and matches them to stored images to estimate the position.
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
IRJET - Road Recognition from Remote Sensing Imagery using Machine Learning - IRJET Journal
This document discusses a framework for recognizing roads from remote sensing imagery using machine learning. It proposes using a convolutional neural network (CNN) trained on large amounts of reference data to initially classify imagery into classification maps, then refining the CNN on a small amount of accurately labeled data. A series of experiments in MATLAB show this two-step training approach considers a large amount of context to provide fine-grained classification maps while addressing issues with imperfect training data. The document also reviews several existing methods for road extraction from remote sensing imagery and their limitations.
Robot Pose Estimation: A Vertical Stereo Pair Versus a Horizontal One - ijcsit
In this paper, we study the effect of the layout of multiple cameras placed on top of an autonomous mobile robot; the idea is to study how camera layout affects the accuracy of the estimated pose parameters. In particular, we compare the performance of a vertical stereo pair mounted on the robot at the axis of rotation with that of a horizontal stereo pair. The motivation behind this comparison is that robot rotation causes only a change of orientation for cameras on the axis of rotation, whereas off-axis cameras undergo an additional translation besides the change of orientation. In this work, we show that for a stereo pair encountering sequences of large rotations, at least the reference camera should be placed on the axis of rotation; otherwise, the obtained translations have to be corrected based on the location of the rotation axis. This finding will help robot designers develop vision systems capable of obtaining accurate pose for navigation control. An extensive set of simulations and real experiments has been carried out to investigate the performance of the studied camera layouts under different motion patterns. As the problem at hand is a real-time application, the extended Kalman filter (EKF) is used as the recursive estimator.
This document summarizes research on using image stitching and optical flow to generate panoramic views from video frames in real-time. Key aspects include:
1) Features are detected in frames using Shi-Tomasi corner detection and tracked between frames using optical flow.
2) A key frame is selected when less than half of features from the previous frame are successfully tracked, allowing sufficient rotation for homography calculation.
3) Homographies relating key frames are estimated and used to stitch and map frames to a cylindrical panorama for 3D visualization by a teleoperator.
4) Experimental results found the Shi-Tomasi/optical flow method was over 10x faster than SIFT/
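The key-frame rule in item 2 above can be sketched as a small decision loop. The sketch below is an illustration under assumed inputs (per-frame tracked-feature counts and a 0.5 survival threshold), not the paper's implementation:

```python
def select_key_frames(tracked_counts, initial_features, ratio=0.5):
    """Return indices of frames chosen as key frames.

    tracked_counts[i] is the number of features from the current
    reference set still successfully tracked in frame i+1. A frame
    becomes a key frame when fewer than `ratio` of the reference
    features survive tracking; features are then re-detected there.
    """
    keys = [0]                          # frame 0 is always a key frame
    reference = initial_features
    for i, tracked in enumerate(tracked_counts, start=1):
        if tracked < ratio * reference:
            keys.append(i)              # too few survivors: new key frame
            reference = initial_features  # features re-detected here
        else:
            reference = tracked         # surviving features carry forward
    return keys
```

With 100 detected features and per-frame survivor counts [90, 40, 85, 30], frames 0, 2, and 4 would be selected.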
A survey on road extraction from color image using vectorization - eSAT Journals
Abstract - Road extraction can be defined as extracting roads from an image based on road features. Road information can be extracted from images in three ways: manual extraction, semi-automated extraction, and fully automated detection. In this paper we review various research papers on road extraction methods. Most prior work has focused on road extraction from grayscale images; our proposed approach extracts roads from color images using vectorization. Vectorization is performed using Canny edge detection and Constrained Delaunay Triangulation (CDT), and the resulting triangles are then grouped into polygons to make up the vector image. A sequence of pre-processing steps is applied to improve image quality and to produce an extended road network before vectorization. Finally, skeletonization is used to transform the components of the digital image into their original components.
Keywords: vectorization, constrained Delaunay triangulation, canny edge detection, skeletonization
Despite being around for almost two decades, foot-mounted inertial navigation has gained only limited spread. Contributing factors are a lack of suitable hardware platforms and difficult system integration. As a solution, we present an open-source wireless foot-mounted inertial navigation module with an intuitive and significantly simplified dead-reckoning interface. The interface is motivated by statistical properties of the underlying aided inertial navigation and is argued to give negligible information loss. The module consists of both a hardware platform and embedded software. Details of the platform and software are described, and a summarizing description of how to reproduce the module is given. System integration of the module is outlined and, finally, a basic performance assessment of the module is provided. In summary, the module modularizes foot-mounted inertial navigation and makes the technology significantly easier to use.
Automatic Segmentation of Brachial Artery based on Fuzzy C-Means Pixel Clust... - IJECEIAES
Automatic extraction of brachial artery and measuring associated indices such as flow-mediated dilatation and Intima-media thickness are important for early detection of cardiovascular disease and other vascular endothelial malfunctions. In this paper, we propose the basic but important component of such decision-assisting medical software development – noise tolerant fully automatic segmentation of brachial artery from ultrasound images. Pixel clustering with Fuzzy C-Means algorithm in the quantization process is the key component of that segmentation with various image processing algorithms involved. This algorithm could be an alternative choice of segmentation process that can replace speckle noise-suffering edge detection procedures in this application domain.
Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla... - IOSR Journals
This document presents an enhanced algorithm for obstacle detection and avoidance using a hybrid of plane-to-plane homography, image segmentation, corner detection, and edge detection techniques. The algorithm aims to improve upon previous methods by eliminating false positives, reducing unreliable corners and broken edges, providing depth perception without planar assumptions, and requiring less processing power. The key components of the algorithm include plane-to-plane homography, image segmentation, Canny edge detection, Harris corner detection, and the RANSAC sampling method for system analysis. Test results on sample images show the algorithm can accurately detect obstacles based on texture differences while reducing noise from ground plane textures.
Objects detection and tracking using fast principle component purist and kalm... - IJECEIAES
The detection and tracking of moving objects has attracted considerable attention because of its many computer vision applications. This paper proposes a new algorithm based on several methods for identifying, detecting, and tracking an object, in order to develop an effective and efficient system for several applications. The algorithm has three main parts: the first performs background modeling and foreground extraction; the second performs smoothing, filtering, and detection of moving objects within the video frame; and the last performs tracking and prediction of the detected objects. In this work, moving objects are detected from video data using Fast Principal Component Pursuit (FPCP). A median filter, which performs well at reducing noise, is then applied. The Fast Region-based Convolutional Neural Network (Fast R-CNN) is used to add smoothness to the spatial identification of objects and their areas. The detected object is then tracked by a Kalman filter. Experimental results show that our algorithm adapts to different situations and outperforms many existing algorithms.
Evolution of a shoe-mounted multi-IMU pedestrian dead reckoning PDR sensor - oblu.io
Shoe-mounted inertial navigation systems, also known as pedestrian dead reckoning (PDR) sensors, are preferred for pedestrian navigation because of the accuracy they offer. Such shoe sensors are, for example, the obvious choice for real-time location systems for first responders. The open-source platform OpenShoe has reported the use of multiple IMUs in shoe-mounted PDR sensors to improve noise performance. In this paper, we present an experimental study of the noise performance and operating-clock-dependent power consumption of multi-IMU platforms. The noise performance of a multi-IMU system with different combinations of IMUs is studied; it is observed that a four-IMU system is best optimized for cost, area, and power. Experiments with varying operating-clock frequencies are performed on an in-house four-IMU shoe-mounted inertial navigation module (the Oblu module), and power-optimized operating-clock frequencies are obtained from the outcomes. The overall study thus suggests that, by selecting a well-designed operating point, a multi-IMU system can be made cost-, size-, and power-efficient without practically affecting its superior positioning performance.
Modern surveying techniques provide accurate measurements for civil engineering projects. Techniques like total stations, digital levels, GPS, and data collectors allow surveyors to measure horizontal and vertical angles as well as distances electronically. This improves accuracy and efficiency over traditional methods. Specific applications in civil engineering include road, tunnel, and bridge alignment, infrastructure mapping for urban planning, and construction site layout and monitoring. The summarized data can be imported into CAD software for modeling and analysis. Overall, modern surveying techniques enhance accuracy and productivity for civil engineering design and construction.
This document summarizes research on using smartphones as inertial navigation systems (INS). It discusses how accelerometers and gyroscopes in smartphones can measure position, orientation, and velocity without external devices like GPS. However, smartphone INS have limitations like drift, bias, noise, and interference that filters can help address. The document reviews literature on INS applications in vehicles and pedestrians, as well as methods to integrate GPS and correct errors, such as Kalman and Markov filters. The goal of this research is to experiment with smartphone INS accuracy in various motions and implement filtering to optimize data collection for potential classroom or app applications.
Obstacle detection for autonomous systems using stereoscopic images and bacte... - IJECEIAES
This paper presents a low-cost strategy for real-time estimation of the position of obstacles in an unknown environment for autonomous robots. The strategy is intended for autonomous service robots, which navigate in unknown and dynamic indoor environments. Besides involving human interaction, these environments are designed for human beings, which is why our developments seek morphological and functional similarity to the human model. We use a pair of cameras on our robot to achieve stereoscopic vision of the environment, and we analyze this information to determine the distance to obstacles using an algorithm that mimics bacterial behavior. The algorithm was evaluated on our robotic platform, demonstrating high performance in locating obstacles and real-time operation.
Intelligent two axis dual-ccd image-servo shooting platform design - eSAT Publishing House
This document describes the design of an intelligent two-axis dual-CCD image-servo shooting platform. It uses two cameras and image processing techniques to dynamically track targets in 3D space. The system calculates the precise 3D spatial coordinate of the target based on pixel differences between the dual camera images. Experimental results showed the system could hit dynamic targets with precision of 5mm.
Automated Motion Detection from space in sea surveillance - Liza Charalambous
This document summarizes research on automated motion detection of vessels from space using satellite imagery. The researchers used ALOS PRISM satellite triplets to detect vessel movement in ports in Cyprus. Through image segmentation, pattern extraction and description, and proximity searching between images, the method detected vessel movement, speed, and direction. It achieved over 90% detection rates but struggled with small vessels. Combining results from multiple image sets improved reliability. The researchers conclude motion detection from satellites can provide critical maritime security information when combined with other data sources.
This document appears to be a research paper submitted by Anutam Majumder for their M.Sc. degree. The paper discusses estimation of camera trajectory using visual odometry. It provides an introduction to visual odometry and discusses challenges in the field. The paper then reviews relevant literature on visual SLAM and deep learning based methods. It proposes a methodology using a CNN-RNN network for visual odometry estimation and discusses the network architecture, feature extraction, sequential modeling, and optimization. Results are presented on a dataset and the paper concludes with an evaluation comparing the method to other approaches.
Performance of Phase Congruency and Linear Feature Extraction for Satellite I... - IOSR Journals
This document summarizes research on extracting linear features from satellite images. It introduces using a phase congruency and linear feature extraction model combined with an adaptive smoothing algorithm. The paper aims to evaluate the advantages and limitations of this approach when applied to satellite image feature extraction. It also describes other common feature extraction methods, such as using mathematical morphology operations like dilation and erosion. Overall, the document reviews techniques for automated linear feature extraction from satellite imagery.
Application of Vision based Techniques for Position Estimation - IRJET Journal
This document summarizes research on using vision-based techniques to estimate the position of an unmanned aerial vehicle (UAV) from images captured by an onboard camera. Two algorithms - normalized cross-correlation and RANSAC feature detection - are tested and compared. Normalized cross-correlation is faster but less accurate when there is noise, scaling or rotation differences between images. RANSAC is more robust to distortions but slower. Both methods are able to estimate position to within 10 meters of accuracy under ideal conditions. RANSAC fails with high noise levels while normalized cross-correlation requires additional preprocessing to handle noise, reducing its speed advantage.
LOCAL REGION PSEUDO-ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOG... - aciijournal
In the domain of image processing, face recognition is one of the most well-known research fields. When humans have very similar biometric properties, as with identical twins, face recognition is considered a challenging problem. In this paper, the AdaBoost method is used to detect the facial area of the input image.
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation - IJECEIAES
The main disadvantage of an Inertial Navigation System is low accuracy due to noise, bias, and drift error in the inertial sensors. This research aims to develop accelerometer and gyroscope sensing for a quadrotor navigation system, together with bias compensation and Zero Velocity Compensation (ZVC). A Kalman filter is designed to reduce the noise in the sensor data, while bias compensation and ZVC are designed to eliminate bias and drift error. Test results showed the Kalman filter design is effective in reducing sensor noise. Moreover, bias compensation and ZVC reduce the drift error introduced by the integration process and improve the position estimation accuracy of the quadrotor. During testing, the system achieved accuracy above 90% indoors.
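A minimal one-dimensional Kalman filter of the kind described can be sketched as follows; the process and measurement noise variances here are assumed tuning values, and the paper's actual state model is not reproduced:

```python
def kalman_smooth(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a slowly varying signal.

    q: process noise variance, r: measurement noise variance,
    x0/p0: initial state estimate and its variance.
    Returns the filtered estimate after each measurement.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement residual
        p = (1.0 - k) * p         # updated uncertainty
        estimates.append(x)
    return estimates
```

Fed a constant reading, the estimate converges to it from the initial guess, with the gain shrinking as confidence accumulates.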
Path Planning for Mobile Robot Navigation Using Voronoi Diagram and Fast Marc... - Waqas Tariq
For navigation in complex environments, a robot needs to reach a compromise between the need for efficient, optimized trajectories and the need to react to unexpected events. This paper presents a new sensor-based path planner which yields fast local or global motion planning able to incorporate new obstacle information. In the first step, the safest areas in the environment are extracted by means of a Voronoi diagram. In the second step, the Fast Marching Method is applied to the extracted Voronoi areas in order to obtain the path. The method combines map-based and sensor-based planning operations to provide a reliable motion plan, while operating at the sensor frequency. Its main characteristics are speed and reliability, since the map is reduced to an almost one-dimensional representation of the safest areas for moving the robot. In addition, the Voronoi diagram can be calculated in open areas and with obstacles of any shape, which allows the proposed planning method to be applied in complex environments where other Voronoi-based planning methods do not work.
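The Fast Marching step expands a cost front outward from the start cell. As a rough stand-in, the sketch below runs a Dijkstra wavefront over a 4-connected occupancy grid (the real FMM solves the eikonal equation for sub-cell accuracy; the grid layout here is hypothetical):

```python
from heapq import heappush, heappop

def distance_map(grid, start):
    """Dijkstra wavefront over a 4-connected grid: a discrete stand-in
    for the Fast Marching Method's expanding front.

    grid: 2D list, 0 = free, 1 = obstacle.
    Returns a dict mapping reachable (row, col) -> cost from start.
    A path can then be recovered by descending the cost map.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heappush(heap, (nd, (nr, nc)))
    return dist
```

On a 3x3 grid with a blocked center, the far corner costs 4 steps and the obstacle cell never enters the map.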
Speed Determination of Moving Vehicles using Lucas-Kanade Algorithm - Editor IJCATR
This paper presents a novel velocity estimation method for ground vehicles. The task is to automatically estimate vehicle speed from video sequences acquired with a fixed mounted camera. The vehicle's motion is detected and tracked along the frames using the Lucas-Kanade algorithm. The distance traveled by the vehicle is calculated from the movement of its centroid over the frames, and the vehicle's speed is estimated; the average speed of cars is determined from multiple frames. The application is developed using MATLAB and SIMULINK.
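The centroid-to-speed computation can be sketched as follows (in Python rather than the paper's MATLAB); the frame rate and ground-plane scale are assumed calibration values, not figures from the paper:

```python
import math

def speed_kmh(centroids, fps, meters_per_pixel):
    """Average speed (km/h) of a tracked vehicle.

    centroids: list of (x, y) pixel positions in successive frames.
    fps: video frame rate; meters_per_pixel: calibrated ground-plane
    scale. Sums centroid displacements, converts pixels to meters,
    divides by elapsed time, and converts m/s to km/h.
    """
    if len(centroids) < 2:
        return 0.0
    dist_px = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(centroids, centroids[1:])
    )
    meters = dist_px * meters_per_pixel
    seconds = (len(centroids) - 1) / fps
    return meters / seconds * 3.6
```

For example, a centroid moving 10 px per frame at 25 fps with a 5 cm/px scale corresponds to 12.5 m/s, i.e. 45 km/h.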
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
IRJET- Road Recognition from Remote Sensing Imagery using Machine LearningIRJET Journal
This document discusses a framework for recognizing roads from remote sensing imagery using machine learning. It proposes using a convolutional neural network (CNN) trained on large amounts of reference data to initially classify imagery into classification maps, then refining the CNN on a small amount of accurately labeled data. A series of experiments in MATLAB show this two-step training approach considers a large amount of context to provide fine-grained classification maps while addressing issues with imperfect training data. The document also reviews several existing methods for road extraction from remote sensing imagery and their limitations.
Robot Pose Estimation: A Vertical Stereo Pair Versus a Horizontal Oneijcsit
In this paper, we study the effect of the layout of multiple cameras placed on top of an autonomous mobile robot. The idea is to study the effect of camera layout on the accuracy of the estimated pose parameters. In particular, we compare the performance of a vertical stereo pair placed on the robot at the axis of rotation to that of a horizontal stereo pair. The motivation behind this comparison is that robot rotation causes only a change of orientation for cameras on the axis of rotation, whereas off-axis cameras undergo an additional translation besides the change of orientation. In this work, we show that for a stereo pair encountering sequences of large rotations, at least the reference camera should be placed on the axis of rotation; otherwise, the obtained translations have to be corrected based on the location of the rotation axis. This finding will help robot designers develop vision systems capable of obtaining accurate pose for navigation control. An extensive set of simulations and real experiments has been carried out to investigate the performance of the studied camera layouts under different motion patterns. As the problem at hand is a real-time application, the extended Kalman filter (EKF) is used as the recursive estimator.
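The on-axis versus off-axis effect can be verified with a few lines of linear algebra. Below is a minimal numpy sketch (the 30-degree rotation, camera heights, and 0.3 m offset are illustrative values, not from the paper): a camera center c rotating about the z-axis moves to R c, so only an off-axis center picks up a translation.

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Robot rotates 30 degrees about its vertical axis (the z-axis here).
R = rot_z(np.radians(30.0))

on_axis_cam = np.array([0.0, 0.0, 1.2])   # camera center on the rotation axis
off_axis_cam = np.array([0.3, 0.0, 1.2])  # camera center 0.3 m off the axis

# A camera center c moves to R @ c, so its translation is (R - I) @ c.
t_on = R @ on_axis_cam - on_axis_cam
t_off = R @ off_axis_cam - off_axis_cam

print(np.linalg.norm(t_on))   # ~0: the on-axis camera only changes orientation
print(np.linalg.norm(t_off))  # > 0: the off-axis camera also translates
```

This is exactly the correction term the abstract alludes to: off-axis translations must be compensated using the known location of the rotation axis.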
This document summarizes research on using image stitching and optical flow to generate panoramic views from video frames in real-time. Key aspects include:
1) Features are detected in frames using Shi-Tomasi corner detection and tracked between frames using optical flow.
2) A key frame is selected when less than half of features from the previous frame are successfully tracked, allowing sufficient rotation for homography calculation.
3) Homographies relating key frames are estimated and used to stitch and map frames to a cylindrical panorama for 3D visualization by a teleoperator.
4) Experimental results found the Shi-Tomasi/optical flow method was over 10x faster than SIFT/
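Step 3's homography estimation can be sketched with the standard Direct Linear Transform (DLT). The numpy example below uses a synthetic ground-truth homography and four hand-picked correspondences, not the paper's tracked features:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: build A h = 0 and take the SVD null vector."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Synthetic ground-truth homography and four "tracked" correspondences.
H_true = np.array([[1.5, 0.2, 4.0],
                   [0.1, 1.3, -2.0],
                   [1e-3, 2e-3, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [tuple(warp(H_true, p)) for p in src]

H_est = estimate_homography(src, dst)
err = np.linalg.norm(warp(H_est, (0.5, 0.5)) - warp(H_true, (0.5, 0.5)))
print(err)  # ~0: the estimated homography reproduces the true mapping
```

With exactly four correspondences the 8x9 system has a one-dimensional null space, so the SVD recovers the homography up to scale; with more (noisy) matches the same code gives the least-squares estimate.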
A survey on road extraction from color image using vectorizationeSAT Journals
Abstract: Road extraction can be defined as extracting roads from an image by means of road features. Road information can be extracted from images in three ways: manual extraction, semi-automated extraction, and fully automated extraction. In this paper we review various research papers on road extraction methods. Most of the existing work focuses on road extraction from grayscale images. Our proposed approach extracts roads from color images using vectorization. Vectorization is performed using Canny edge detection and Constrained Delaunay Triangulation (CDT); the resulting triangles are then grouped into polygons to make up the vector image. A sequence of pre-processing steps is applied to improve image quality and to produce an extended road network before vectorization. Finally, skeletonization is used to transform the components of the digital image into their original components.
Keywords: vectorization, constrained Delaunay triangulation, Canny edge detection, skeletonization
Despite being around for almost two decades, foot-mounted inertial navigation has gained only limited adoption. Contributing factors are the lack of suitable hardware platforms and difficult system integration. As a solution, we present an open-source wireless foot-mounted inertial navigation module with an intuitive and significantly simplified dead-reckoning interface. The interface is motivated by statistical properties of the underlying aided inertial navigation and is argued to give negligible information loss. The module consists of both a hardware platform and embedded software. Details of the platform and the software are described, and a summarizing description of how to reproduce the module is given. System integration of the module is outlined and, finally, we provide a basic performance assessment of the module. In summary, the module modularizes foot-mounted inertial navigation and makes the technology significantly easier to use.
Automatic Segmentation of Brachial Artery based on Fuzzy C-Means Pixel Clust...IJECEIAES
Automatic extraction of brachial artery and measuring associated indices such as flow-mediated dilatation and Intima-media thickness are important for early detection of cardiovascular disease and other vascular endothelial malfunctions. In this paper, we propose the basic but important component of such decision-assisting medical software development – noise tolerant fully automatic segmentation of brachial artery from ultrasound images. Pixel clustering with Fuzzy C-Means algorithm in the quantization process is the key component of that segmentation with various image processing algorithms involved. This algorithm could be an alternative choice of segmentation process that can replace speckle noise-suffering edge detection procedures in this application domain.
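The Fuzzy C-Means pixel clustering at the core of the method can be sketched generically. This numpy version runs on synthetic 1-D intensities rather than ultrasound data, and the cluster values are illustrative:

```python
import numpy as np

def fuzzy_c_means(data, n_clusters, m=2.0, n_iter=100, seed=0):
    """Basic Fuzzy C-Means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Weighted centroids from the fuzzy memberships.
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)
        # Standard membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        ratio = dist[:, :, None] / dist[:, None, :]
        u = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
    return centers, u

# Two well-separated synthetic intensity groups (e.g. lumen vs. tissue).
data = np.array([[10.0], [11.0], [9.0], [10.5],
                 [200.0], [205.0], [195.0], [201.0]])
centers, memberships = fuzzy_c_means(data, n_clusters=2)
print(np.sort(centers.ravel()))  # one center per intensity group
```

In the quantization step of the paper's pipeline the same update rules would run on pixel intensities (or small feature vectors) instead of this toy array.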
Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla...IOSR Journals
This document presents an enhanced algorithm for obstacle detection and avoidance using a hybrid of plane-to-plane homography, image segmentation, corner detection, and edge detection techniques. The algorithm aims to improve upon previous methods by eliminating false positives, reducing unreliable corners and broken edges, providing depth perception without planar assumptions, and requiring less processing power. The key components of the algorithm include plane-to-plane homography, image segmentation, Canny edge detection, Harris corner detection, and the RANSAC sampling method for system analysis. Test results on sample images show the algorithm can accurately detect obstacles based on texture differences while reducing noise from ground plane textures.
Object detection and tracking using fast principal component pursuit and Kalm...IJECEIAES
The detection and tracking of moving objects has attracted a lot of attention because of the vast range of computer vision applications. This paper proposes a new algorithm that combines several methods for identifying, detecting, and tracking an object, in order to develop an effective and efficient system for several applications. The algorithm has three main parts: the first for background modeling and foreground extraction, the second for smoothing, filtering, and detecting moving objects within the video frame, and the last for tracking and prediction of the detected objects. In this work, moving objects are detected from video data using Fast Principal Component Pursuit (FPCP). A median filter, which performs well at reducing noise, is then applied. A Fast Region-based Convolutional Neural Network (Fast R-CNN) is used to add smoothness to the spatial identification of objects and their areas. The detected object is then tracked by a Kalman filter. Experimental results show that the algorithm adapts to different situations and outperforms many existing algorithms.
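The Kalman-filter tracking stage can be illustrated with a standard constant-velocity filter on a synthetic 1-D centroid track; the noise levels here are illustrative, not the paper's:

```python
import numpy as np

# Constant-velocity Kalman filter tracking a 1-D object position.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [pos, vel]
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 1e-4 * np.eye(2)                    # process noise
R = np.array([[0.25]])                  # measurement noise (std 0.5)

x = np.array([0.0, 0.0])                # initial state estimate
P = np.eye(2)

rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 2.0
for k in range(50):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0.0, 0.5)     # noisy detected centroid
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x)  # estimated [position, velocity], close to [100, 2]
```

The same predict/update cycle, with a 4-state model [x, y, vx, vy], is what tracks the detected objects between frames and predicts their next positions.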
Evolution of a shoe-mounted multi-IMU pedestrian dead reckoning PDR sensoroblu.io
Shoe-mounted inertial navigation systems, also known as pedestrian dead reckoning (PDR) sensors, are preferred for pedestrian navigation because of the accuracy they offer. Such shoe sensors are, for example, the obvious choice for real-time location systems for first responders. The open-source platform OpenShoe has reported the application of multiple IMUs in shoe-mounted PDR sensors to enhance noise performance. In this paper, we present an experimental study of the noise performance and the operating-clock-based power consumption of multi-IMU platforms. The noise performance of a multi-IMU system with different combinations of IMUs is studied, and it is observed that a four-IMU system is best optimized for cost, area, and power. Experiments with varying operating clock frequencies are performed on an in-house four-IMU shoe-mounted inertial navigation module (the Oblu module), and power-optimized operating clock frequencies are obtained from the outcome. The overall study thus suggests that, by selecting a well-designed operating point, a multi-IMU system can be made cost-, size-, and power-efficient without practically affecting its superior positioning performance.
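The noise benefit of a multi-IMU array follows from averaging independent sensors: the white-noise standard deviation drops by a factor of sqrt(N). A quick numpy check for N = 4 (unit noise levels are illustrative):

```python
import numpy as np

# Averaging N independent IMUs reduces white-noise std by sqrt(N).
rng = np.random.default_rng(0)
n_samples = 20000
single = rng.normal(0.0, 1.0, size=(4, n_samples))  # 4 IMUs, unit noise std

fused = single.mean(axis=0)  # simple average of the 4 IMU readings

print(single[0].std())  # ~1.0
print(fused.std())      # ~0.5, i.e. a 2x noise reduction for a 4-IMU array
```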
Modern surveying techniques provide accurate measurements for civil engineering projects. Techniques like total stations, digital levels, GPS, and data collectors allow surveyors to measure horizontal and vertical angles as well as distances electronically. This improves accuracy and efficiency over traditional methods. Specific applications in civil engineering include road, tunnel, and bridge alignment, infrastructure mapping for urban planning, and construction site layout and monitoring. The summarized data can be imported into CAD software for modeling and analysis. Overall, modern surveying techniques enhance accuracy and productivity for civil engineering design and construction.
This document summarizes research on using smartphones as inertial navigation systems (INS). It discusses how accelerometers and gyroscopes in smartphones can measure position, orientation, and velocity without external devices like GPS. However, smartphone INS have limitations like drift, bias, noise, and interference that filters can help address. The document reviews literature on INS applications in vehicles and pedestrians, as well as methods to integrate GPS and correct errors, such as Kalman and Markov filters. The goal of this research is to experiment with smartphone INS accuracy in various motions and implement filtering to optimize data collection for potential classroom or app applications.
Obstacle detection for autonomous systems using stereoscopic images and bacte...IJECEIAES
This paper presents a low-cost strategy for real-time estimation of the position of obstacles in an unknown environment for autonomous robots. The strategy is intended for use in autonomous service robots, which navigate in unknown and dynamic indoor environments. In addition to human interaction, these environments are characterized by a design created for human beings, which is why our developments seek morphological and functional similarity to the human model. We use a pair of cameras on our robot to achieve stereoscopic vision of the environment, and we analyze this information to determine the distance to obstacles using an algorithm that mimics bacterial behavior. The algorithm was evaluated on our robotic platform, demonstrating high performance in locating obstacles and real-time operation.
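The stereo range geometry underlying such systems is the classic disparity-to-depth relation Z = f * B / d. A tiny sketch (the focal length, baseline, and disparity values below are illustrative, not the robot's calibration):

```python
# Pinhole stereo: depth Z = f * B / d, with focal length f (pixels),
# baseline B (meters), and disparity d (pixels).
def stereo_depth(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, B = 0.12 m, d = 28 px  ->  Z = 3.0 m
print(stereo_depth(700.0, 0.12, 28.0))
```

Whatever matching strategy produces the disparity (here, the paper's bacterial-behavior algorithm), this relation converts it into the obstacle distance.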
Intelligent two axis dual-ccd image-servo shooting platform designeSAT Publishing House
This document describes the design of an intelligent two-axis dual-CCD image-servo shooting platform. It uses two cameras and image processing techniques to dynamically track targets in 3D space. The system calculates the precise 3D spatial coordinate of the target based on pixel differences between the dual camera images. Experimental results showed the system could hit dynamic targets with precision of 5mm.
Automated Motion Detection from space in sea surveillanceLiza Charalambous
This document summarizes research on automated motion detection of vessels from space using satellite imagery. The researchers used ALOS PRISM satellite triplets to detect vessel movement in ports in Cyprus. Through image segmentation, pattern extraction and description, and proximity searching between images, the method detected vessel movement, speed, and direction. It achieved over 90% detection rates but struggled with small vessels. Combining results from multiple image sets improved reliability. The researchers conclude motion detection from satellites can provide critical maritime security information when combined with other data sources.
This document appears to be a research paper submitted by Anutam Majumder for their M.Sc. degree. The paper discusses estimation of camera trajectory using visual odometry. It provides an introduction to visual odometry and discusses challenges in the field. The paper then reviews relevant literature on visual SLAM and deep learning based methods. It proposes a methodology using a CNN-RNN network for visual odometry estimation and discusses the network architecture, feature extraction, sequential modeling, and optimization. Results are presented on a dataset and the paper concludes with an evaluation comparing the method to other approaches.
Performance of Phase Congruency and Linear Feature Extraction for Satellite I...IOSR Journals
This document summarizes research on extracting linear features from satellite images. It introduces using a phase congruency and linear feature extraction model combined with an adaptive smoothing algorithm. The paper aims to evaluate the advantages and limitations of this approach when applied to satellite image feature extraction. It also describes other common feature extraction methods, such as using mathematical morphology operations like dilation and erosion. Overall, the document reviews techniques for automated linear feature extraction from satellite imagery.
Application of Vision based Techniques for Position EstimationIRJET Journal
This document summarizes research on using vision-based techniques to estimate the position of an unmanned aerial vehicle (UAV) from images captured by an onboard camera. Two algorithms - normalized cross-correlation and RANSAC feature detection - are tested and compared. Normalized cross-correlation is faster but less accurate when there is noise, scaling or rotation differences between images. RANSAC is more robust to distortions but slower. Both methods are able to estimate position to within 10 meters of accuracy under ideal conditions. RANSAC fails with high noise levels while normalized cross-correlation requires additional preprocessing to handle noise, reducing its speed advantage.
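Normalized cross-correlation itself is compact enough to sketch directly; this numpy version also shows its invariance to gain and offset, which is why it degrades mainly under noise, scaling, and rotation rather than illumination change:

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """NCC between two equally sized patches, in [-1, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
print(normalized_cross_correlation(patch, patch))          # 1.0 (perfect match)
print(normalized_cross_correlation(patch, 2 * patch + 1))  # 1.0: invariant to gain/offset
print(normalized_cross_correlation(patch, -patch))         # -1.0 (inverted)
```

In template matching the UAV image patch is slid over the reference map and the location with the highest NCC score is taken as the position estimate.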
LOCAL REGION PSEUDO-ZERNIKE MOMENT- BASED FEATURE EXTRACTION FOR FACIAL RECOG...aciijournal
In the domain of image processing, face recognition is one of the best-known research fields. When humans have very similar biometric properties, as identical twins do, face recognition is considered a challenging problem. In this paper, the AdaBoost method is utilized to detect the facial area of the input image.
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation IJECEIAES
The main disadvantage of an Inertial Navigation System is its low accuracy due to noise, bias, and drift error in the inertial sensors. This research aims to develop an accelerometer- and gyroscope-based navigation system for a quadrotor with bias compensation and Zero Velocity Compensation (ZVC). A Kalman filter is designed to reduce the noise in the sensor data, while bias compensation and ZVC are designed to eliminate the bias and drift error. Test results showed the Kalman filter design is effective at reducing sensor noise. Moreover, the bias compensation and ZVC reduce the drift error caused by the integration process and improve the position estimation accuracy of the quadrotor. In indoor tests, the system provided accuracy above 90%.
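The bias-compensation idea can be sketched in a few lines: integrating a biased accelerometer drifts linearly, and estimating the bias over a known-static window removes that drift. A simplified, noise-free numpy illustration (the bias value and window length are made up):

```python
import numpy as np

# Integrating a biased accelerometer produces velocity drift; Zero Velocity
# Compensation estimates the bias while the vehicle is known to be at rest.
dt = 0.01
bias = 0.05                      # constant accelerometer bias (m/s^2)
n = 500
true_accel = np.zeros(n)         # stationary quadrotor: true acceleration is zero
measured = true_accel + bias

v_raw = np.cumsum(measured) * dt           # drifts linearly with time
est_bias = measured[:100].mean()           # estimate bias over a known-static window
v_corrected = np.cumsum(measured - est_bias) * dt

print(v_raw[-1])        # ~0.25 m/s of accumulated drift after 5 s
print(v_corrected[-1])  # ~0 after compensation
```

In the real system the bias is not perfectly constant and the signal is noisy, which is why the paper combines this compensation with a Kalman filter rather than using it alone.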
Path Planning for Mobile Robot Navigation Using Voronoi Diagram and Fast Marc...Waqas Tariq
For navigation in complex environments, a robot needs to reach a compromise between the need for efficient, optimized trajectories and the need to react to unexpected events. This paper presents a new sensor-based path planner which yields fast local or global motion planning able to incorporate new obstacle information. In the first step, the safest areas in the environment are extracted by means of a Voronoi diagram. In the second step, the Fast Marching Method is applied to the extracted Voronoi areas in order to obtain the path. The method combines map-based and sensor-based planning operations to provide a reliable motion plan, while operating at the sensor frequency. Its main characteristics are speed and reliability, since the map dimensions are reduced to an almost unidimensional map that represents the safest areas in the environment for moving the robot. In addition, the Voronoi diagram can be calculated in open areas and with obstacles of any shape, which allows the proposed planning method to be applied in complex environments where other Voronoi-based planning methods do not work.
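The Fast Marching step can be approximated, for intuition, by discrete wavefront propagation over an occupancy grid: distances expand outward from the goal through free cells only. This pure-Python sketch is a simplified 4-connected analogue, not the paper's continuous solver:

```python
from collections import deque

# Discrete wavefront propagation (a breadth-first analogue of Fast Marching)
# over a grid: 0 = free cell, 1 = obstacle. Returns per-cell distance to goal.
def wavefront(grid, goal):
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    gr, gc = goal
    dist[gr][gc] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

grid = [[0, 0, 0],
        [1, 1, 0],  # obstacle row forces a detour
        [0, 0, 0]]
d = wavefront(grid, (2, 0))
print(d[0][0])  # 6: the path length around the obstacle row
```

Descending this distance field from the start cell yields a goal-reaching path; in the paper the propagation runs only over the Voronoi-extracted safe cells, which is what shrinks the search to an almost unidimensional map.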
Speed Determination of Moving Vehicles using Lucas- Kanade AlgorithmEditor IJCATR
This paper presents a novel velocity estimation method for ground vehicles. The task is to automatically estimate vehicle speed from video sequences acquired with a fixed, mounted camera. The vehicle motion is detected and tracked across frames using the Lucas-Kanade algorithm. The distance traveled by the vehicle is calculated from the movement of its centroid over the frames, and the speed of the vehicle is estimated. The average speed of the cars is determined from multiple frames. The application is developed using MATLAB and Simulink.
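The Lucas-Kanade step solves a small least-squares system built from image gradients. Below is a single-window numpy sketch on a synthetic sub-pixel shift, with an illustrative pixel-to-centimeter scale and frame rate (not the paper's calibration):

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2):
    """Single-window Lucas-Kanade: least-squares flow from image gradients."""
    Ix = np.gradient(frame1, axis=1)
    Iy = np.gradient(frame1, axis=0)
    It = frame2 - frame1
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)  # flow (vx, vy) in pixels/frame

def scene(xs, ys):
    """Smooth synthetic intensity pattern."""
    return np.sin(0.3 * xs) + np.cos(0.25 * ys)

y, x = np.mgrid[0:32, 0:32].astype(float)
f1 = scene(x, y)
f2 = scene(x - 0.3, y - 0.2)   # content moves by +0.3 px in x, +0.2 px in y

vx, vy = lucas_kanade_flow(f1, f2)
# Speed = displacement * scale * frame rate, e.g. 2 cm/px at 25 fps:
speed = np.hypot(vx, vy) * 2.0 * 25.0
print(vx, vy)  # approximately 0.3 and 0.2
```

The per-frame centroid displacement recovered this way, multiplied by the camera's calibrated scale and frame rate, gives the speed estimate the paper describes.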
This document presents a method for estimating the speed of moving vehicles using video sequences and the Lucas-Kanade algorithm. The method tracks vehicle motion across video frames using optical flow to calculate pixel movement. It then determines the centroid coordinates of the vehicle in each frame. The centroid coordinates are converted from image to world coordinates using camera calibration parameters. The distance traveled by the vehicle between frames is calculated from the centroid coordinates. This total distance traveled is divided by the time elapsed between frames to estimate the vehicle's speed in centimeters per second. The method was implemented in MATLAB and Simulink.
Three-dimensional structure from motion recovery of a moving object with nois...IJECEIAES
In this paper, a Nonlinear Unknown Input Observer (NLUIO) based approach is proposed for three-dimensional (3-D) structure from motion identification. Unlike previous studies that require prior knowledge of either the motion parameters or the scene geometry, the proposed approach assumes that the object motion is imperfectly known and treats it as an unknown input to the perspective dynamical system. Reconstruction of the 3-D structure of the moving objects can be achieved using just two-dimensional (2-D) images from a monocular vision system. The proposed scheme is illustrated with a numerical example in the presence of measurement noise for both static and dynamic scenes. These results clearly demonstrate the advantages of the proposed NLUIO.
Tiny-YOLO distance measurement and object detection coordination system for t...IJECEIAES
A humanoid robot called BarelangFC was designed to take part in the Kontes Robot Indonesia (KRI) competition, in the robot coordination division. In this division, each robot is expected to recognize its opponents and to pass the ball to a team member to establish coordination between the robots. In order to achieve this team coordination, a fast and accurate system is needed to detect and estimate the other robots' positions in real time. Moreover, each robot has to estimate its team members' locations based on its camera readings, so that the ball can be passed without error. This research proposes a Tiny-YOLO deep learning method to detect the location of a team member robot and presents a real-time coordination system using a ZED camera. To establish the coordination system, the distance between the robots was estimated using a trigonometric equation to ensure that the robot was able to pass the ball to another robot. To verify our method, real-time experiments were carried out using an NVIDIA Jetson Xavier NX, and the results showed that the robot could estimate the distance correctly before passing the ball to another robot.
Autonomous Path Planning and Navigation of a Mobile Robot with Multi-Sensors ...CSCJournals
Mobile robots are widely applied and deeply investigated in industrial fields, and autonomous path planning and navigation algorithms for mobile robots are a hot research topic. In this paper, the mobile robot is first introduced and the general path planning and navigation algorithms for mobile robots are reviewed. A fuzzy logic approach with filter smoothing is then proposed, based on data from a laser scan sensor and a GPS module, which enables the mobile robot to find the best path to the destination automatically according to the position and size of the gaps between obstacles in a dynamic environment. Finally, our designed mobile robot and the corresponding Android app are introduced, and the path planning and navigation algorithms are tested on this robot. The test results show that the algorithm is globally optimized and quickly responsive, and saves battery power and hardware cost compared with other algorithms; it is suitable for mobile robots running on embedded systems and satisfies our design requirements.
A Method for Predicting Vehicles Motion Based on Road Scene Reconstruction an...ITIIIndustries
The suggested method helps predict vehicle movement in order to give the driver more time to react and avoid collisions on the road. The algorithm dynamically models the road scene around the vehicle based on data from the onboard camera. All moving objects are monitored and represented by the dynamic model on a 2D map. After analyzing each object's movement, the algorithm predicts its possible behavior.
Leader follower formation control of ground vehicles using camshift based gui...ijma
Autonomous ground vehicles have been designed for formation control that relies on ranging and bearing information received from a forward-looking camera. A visual guidance control algorithm is designed in which real-time image processing is used to provide feedback signals. The vision subsystem and control subsystem work in parallel to accomplish formation control. Proportional navigation and line-of-sight guidance laws are used to estimate the range and bearing information of the leader vehicle using the vision subsystem. The algorithms for visual detection and localization used here are similar to approaches used in many computer vision tasks, such as face tracking and detection, that are based on color- and texture-based features, together with the non-parametric Continuously Adaptive Mean-Shift (CAMShift) algorithm to keep track of the leader. This is proposed for the first time in the leader-follower framework. The algorithms are simple but effective in real time and provide an alternative to traditional approaches such as the Viola-Jones algorithm. Further, to stabilize the follower along the leader trajectory, a sliding mode controller is used to dynamically track the leader. The performance of the approach is demonstrated in simulation and in practical experiments.
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...ijma
This paper deals with leader-follower formations of non-holonomic mobile robots, introducing a formation control strategy based on pixel counts using a commercial-grade electro-optical camera. Localization of the leader for motion along the line of sight, as well as along obliquely inclined directions, is considered based on the pixel variation of the images with reference to two arbitrarily designated positions in the image frames. Based on an established relationship between the displacement of the camera along the viewing direction and the difference in pixel counts between the reference points in the images, the range and angle between the follower camera and the leader are estimated. The Inverse Perspective Transform is used to account for the nonlinear relationship between the height of a vehicle in a forward-facing image and its distance from the camera. The formulation is validated with experiments.
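The pixel-count/range relationship reduces, in the simplest pinhole view, to Z = f * H / h for an object of known real height. A tiny sketch with illustrative numbers (the paper's actual formulation uses the Inverse Perspective Transform and designated reference positions):

```python
# Pinhole inverse-perspective range from apparent size: an object of real
# height H meters that spans h pixels under focal length f (pixels) lies at
# Z = f * H / h.
def range_from_pixel_height(focal_px, real_height_m, pixel_height):
    return focal_px * real_height_m / pixel_height

# Leader vehicle 1.5 m tall appearing 75 px tall with f = 800 px -> 16 m.
print(range_from_pixel_height(800.0, 1.5, 75.0))
```

As the leader approaches, its pixel height grows and the estimated range shrinks, which is exactly the monotone relationship the pixel-count controller exploits.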
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...CSCJournals
This paper presents the implementation of a novel technique for sensor-based path planning of autonomous mobile robots. The proposed method is based on finding free-configuration eigenspaces (FCE) in the robot's actuation area. In using the FCE technique to find optimal paths for autonomous mobile robots, the underlying hypothesis is that in the low-dimensional manifolds of laser scanning data there lies an eigenvector which corresponds to the free-configuration space of the higher-order geometric representation of the environment. The vectorial combination of all these eigenvectors at discrete time scan frames manifests a trajectory whose sum can be treated as a robot path or trajectory. The proposed algorithm was tested on two different test-bed datasets: real data obtained from Navlab SLAMMOT and data obtained from the real-time robotics simulation program Player/Stage. Performance analysis of the FCE technique was carried out against four existing path planning algorithms under several working parameters, namely the computation time needed to find a solution, the distance travelled, and the amount of turning required by the autonomous mobile robot. This study will enable readers to identify the suitability of each path planning algorithm under the working parameters that need to be optimized. All the techniques were tested in the real-time robotics software Player/Stage. Further analysis was done using the MATLAB mathematical computation software.
Goal location prediction based on deep learning using RGB-D camerajournalBEEI
In a navigation system, the desired destination position plays an essential role, since path planning algorithms take the current location and the goal location as inputs, together with a map of the surrounding environment. The path generated by the path planning algorithm is used to guide a user to the final destination. This paper presents a proposed algorithm based on an RGB-D camera to predict the goal coordinates in a 2D occupancy grid map for a navigation system for visually impaired people. In recent years, deep learning methods have been used in many object detection tasks, so an object detection method based on a convolutional neural network is adopted in the proposed algorithm. The distance between the current position of the sensor and the detected object is measured from the depth data acquired by the RGB-D camera. The detected object coordinates and the depth data are integrated to obtain an accurate goal location in the 2D map. The proposed algorithm has been tested on various real-time scenarios, and the experimental results indicate its effectiveness.
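The integration of a detected pixel with its depth reading can be sketched with the usual pinhole back-projection followed by grid binning; the intrinsics and cell size below are illustrative, not from the paper:

```python
# Back-project a detected pixel (u, v) with depth Z into camera-frame
# coordinates, then bin the ground-plane point into a 2-D occupancy grid cell.
def pixel_to_grid(u, v, depth_m, fx, fy, cx, cy, cell_size_m=0.1):
    X = (u - cx) * depth_m / fx   # lateral offset (m), pinhole model
    # The vertical coordinate (v - cy) * Z / fy is unused for a 2-D map.
    col = int(round(X / cell_size_m))
    row = int(round(depth_m / cell_size_m))
    return row, col

# Detected object centered at pixel (400, 240), 2.5 m away, fx = fy = 500,
# principal point (320, 240): goal lands 0.4 m to the right, 2.5 m ahead.
print(pixel_to_grid(400, 240, 2.5, 500.0, 500.0, 320.0, 240.0))  # (25, 4)
```

The resulting (row, col) cell is the goal handed to the path planner on the occupancy grid.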
Motion Object Detection Using BGS TechniqueMangaiK4
Abstract--- The detection of moving objects is important in many applications, such as vehicle identification in a traffic monitoring system or human detection in crime investigation. In this paper we identify a vehicle in a video sequence. The paper briefly explains the detection of a moving vehicle in a video and introduces a new background subtraction (BGS) algorithm for identifying vehicles in a video sequence. First, we differentiate the foreground from the background in the frames by learning the background. Then, the image is divided into many small non-overlapping blocks. Candidate vehicle parts can be found in the blocks if there is some change in gray level between the current image and the background. The extracted background subtraction result is used in subsequent analysis to detect and classify the moving vehicle.
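A minimal numpy sketch of background subtraction in this spirit: learn the background as a running average, then threshold the deviation of the current frame (the frame sizes, intensities, and threshold are made up):

```python
import numpy as np

# Background subtraction: maintain a running-average background model and
# flag pixels whose deviation from it exceeds a threshold as foreground.
def update_background(bg, frame, alpha=0.05):
    return (1.0 - alpha) * bg + alpha * frame

frames = [np.full((8, 8), 10.0) for _ in range(20)]   # empty road frames
bg = frames[0].copy()
for f in frames[1:]:
    bg = update_background(bg, f)

vehicle_frame = frames[0].copy()
vehicle_frame[2:5, 2:5] = 200.0        # bright vehicle enters the scene

mask = np.abs(vehicle_frame - bg) > 30.0
print(mask.sum())  # 9 foreground pixels: the 3x3 vehicle region
```

The foreground mask is then scanned block by block, as the abstract describes, to locate candidate vehicle parts.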
Localization and Mapping for Autonomous Navigation in Outdoor Terrains: A Stereo Visi...Minh Quan Nguyen
The document discusses localization and mapping techniques for autonomous navigation using stereo vision. It describes using efficient stereo algorithms to build local maps, visual odometry for precise registration of robot motion and maps, and integrating local maps into a globally consistent map. The approach is tested in outdoor environments, where it is able to build accurate maps in real-time and outperform other teams in validation tests.
The document summarizes a study that compares the performance of camera-only, IMU-only, and combined camera-IMU systems for visual-inertial navigation (VisNav) and simultaneous localization and mapping (SLAM). Data was collected from a camera and IMU mounted on a rolling platform traversing a test course. The camera performed poorly around corners while the IMU had offset errors, but combining the sensors was superior to either individually. Statistical analysis showed the mean error was significantly lower for the camera-IMU system than for the camera-only and IMU-only systems. The study was limited, but integrating the sensors helped compensate for their individual weaknesses.
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operatorQUESTJOURNAL
ABSTRACT: The tracking of moving objects, known as video tracking, is used for measuring motion parameters and obtaining a visual record of moving objects; it is an important application area in image processing. In general there are two different approaches to object tracking: recognition-based tracking and motion-based tracking. Video tracking systems raise a wide range of possibilities in today's society and are used in various applications such as military, security, monitoring, robotics, and, nowadays, day-to-day applications. However, video tracking systems still have many open problems, and various research activities in video tracking are being explored. This paper presents an algorithm for video tracking of moving targets using a contour-based detection technique that depends on the Sobel operator. The proposed system is suitable for indoor and outdoor applications. Our approach has the advantage of extending the applicability of the tracking system and, as presented here, improves the performance of the tracker, making high-frame-rate video tracking feasible. The goal of the tracking system is to analyze the video frames and estimate the position of a part of the input video frame (usually a moving object); our approach can detect and track more than one object and calculate the positions of the moving objects. The aim of this paper is therefore to construct a motion tracking system for moving objects. At the end of the paper, detailed outcomes and results are discussed using experimental results of the proposed technique.
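The Sobel operator at the heart of the contour-based detector is a pair of 3x3 gradient kernels. A small numpy sketch on a synthetic step edge (the loop-based convolution is kept deliberately simple):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def correlate2d(img, kernel):
    """Valid-mode 2-D correlation (sufficient for gradient magnitude)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# Image with a vertical step edge at column 4.
img = np.zeros((7, 8))
img[:, 4:] = 1.0

gx = correlate2d(img, SOBEL_X)
gy = correlate2d(img, SOBEL_Y)
magnitude = np.hypot(gx, gy)

print(magnitude.max())              # 4.0: the Sobel response at the step
print(int(np.argmax(magnitude[2])))  # 2: edge location in the valid grid
```

Thresholding this gradient magnitude yields the object contours that the tracker follows from frame to frame.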
Vehicle positioning in urban environments using particle filtering-based glob...IJECEIAES
This article presents a new method for land vehicle navigation using the global positioning system (GPS), dead reckoning (DR) sensors, and digital road map information, particularly in urban environments where GPS failures can occur. The odometer sensors and map measurements can be used to provide continuous navigation and correct the vehicle location in the presence of GPS masking. To solve this estimation problem for vehicle navigation, we propose to use particle filtering for GPS/odometer/map integration. The particle filter is a method based on Bayesian estimation and the Monte Carlo method, which handles non-linear models and is not limited to Gaussian statistics. When the GPS sensor cannot provide a location due to the number of satellites in view, the filter fuses the limited GPS pseudo-range data to enhance the vehicle positioning. The developed filter is then tested in a transportation network scenario in the presence of GPS failures, which shows the advantages of the proposed approach for vehicle location compared to the extended Kalman filter.
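The GPS/odometer fusion described above can be illustrated with a 1-D particle filter toy: odometer steps drive the prediction, and intermittent GPS fixes reweight and resample the particles. The motion model, noise levels, and outage schedule here are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                 # number of particles
particles = rng.normal(0.0, 1.0, N)      # initial 1-D position belief (metres)
weights = np.ones(N) / N

def predict(particles, odo_step, odo_noise=0.2):
    """Propagate particles with the dead-reckoning (odometer) motion model."""
    return particles + odo_step + rng.normal(0.0, odo_noise, particles.size)

def update(particles, weights, gps_meas, gps_noise=1.0):
    """Reweight particles by the Gaussian likelihood of a GPS fix."""
    lik = np.exp(-0.5 * ((gps_meas - particles) / gps_noise) ** 2)
    w = weights * lik
    return w / w.sum()

def resample(particles, weights):
    """Resample in proportion to the weights to avoid degeneracy."""
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.ones(particles.size) / particles.size

# Vehicle moves 1 m per step; GPS available only every 5th step (masked otherwise)
true_pos = 0.0
for t in range(20):
    true_pos += 1.0
    particles = predict(particles, 1.0)
    if t % 5 == 0:                       # GPS fix
        gps = true_pos + rng.normal(0.0, 1.0)
        weights = update(particles, weights, gps)
        particles, weights = resample(particles, weights)

estimate = np.average(particles, weights=weights)
print(abs(estimate - true_pos))          # estimate stays close to the true position
```

During GPS masking, only the prediction step runs, so uncertainty grows until the next fix pulls the particle cloud back toward the measurement.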
Monitoring traffic in urban areas is an important task for intelligent transport applications that aim to alleviate problems like traffic jams and long trip times. Traffic flow in urban areas is more complicated than on highways, due to the slow movement of vehicles and crowded traffic flows. In this paper, a vehicle detection and classification system at intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In vehicle detection, background subtraction with a mixture of Gaussians (MoG) algorithm is used to detect moving vehicles, and a shadow removal algorithm is developed to improve the detection phase and eliminate undesired detected regions (shadows). After the detection phase, vehicles are tracked until they reach the classification line. The vehicle dimensions are then used to classify vehicles into three classes (cars, bikes, and trucks). The system maintains one counter per class; when a vehicle is classified, the corresponding counter is incremented by one. The counting results can be used to estimate the traffic density at intersections and adjust the traffic light timing for the next light cycle. The system is applied to videos obtained by stationary cameras, and the results demonstrate its robustness and accuracy.
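The per-class counting step can be sketched as follows, using hypothetical bounding-box-area thresholds (the paper's actual dimension criteria are not given here):

```python
# Classify a tracked vehicle by its bounding-box area (pixels) once it
# crosses the classification line, and increment the matching counter.
# The thresholds below are illustrative assumptions, not the paper's values.
def classify(width, height):
    area = width * height
    if area < 1500:
        return "bike"
    elif area < 6000:
        return "car"
    return "truck"

counters = {"bike": 0, "car": 0, "truck": 0}
detections = [(30, 40), (60, 80), (90, 100), (25, 50)]  # (w, h) at the line
for w, h in detections:
    counters[classify(w, h)] += 1

print(counters)  # per-class counts feed the traffic-density estimate
```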
Vehicle detection and tracking techniques a concise reviewsipij
Vehicle detection and tracking applications play an important role in civilian and military settings, such as highway traffic surveillance control, management, and urban traffic planning. Vehicle detection on roads is used for vehicle tracking, counting, measuring the average speed of each vehicle, traffic analysis, and vehicle categorization, and may be implemented under changing environments. In this review, we present a concise overview of the image processing methods and analysis tools used in building the previously mentioned applications involved in developing traffic surveillance systems. More precisely, and in contrast with other reviews, we classify the processing methods into three categories to explain the traffic systems more clearly.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
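The intersection-over-union metric reported above can be computed directly from binary masks; a minimal sketch on a toy example (not the study's data):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy example: predicted tumour mask vs. ground truth, slightly shifted
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True            # 16-pixel ground-truth region
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True          # 16-pixel prediction, offset by one pixel

score = iou(pred, gt)          # 9 overlapping pixels / 23 in the union
print(round(score, 3))
```

Mean IoU as reported in segmentation papers is this quantity averaged over classes; the weighted variant weights each class by its pixel frequency.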
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...IJECEIAES
Wide application of the proportional-integral-differential (PID) regulator in industry requires constant improvement of its parameter-tuning methods. The paper deals with optimizing PID-regulator parameters using neural network techniques. A methodology for choosing the architecture (structure) of the neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, and the form and type of activation function. Training algorithms based on minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of the neurons. The neural network optimizer, which is a superstructure over the linear PID controller, improves the regulation accuracy, reducing the error from 0.23 to 0.09 and thus reducing power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
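The underlying control law being tuned is the standard discrete PID loop. A minimal sketch on a toy first-order plant, with illustrative hand-picked gains (the paper's neural superstructure, which adjusts such gains online, is not reproduced here):

```python
# Discrete PID loop on a first-order plant dy/dt = -y + u
# (unit gain, 1 s time constant). Gains are illustrative assumptions.
def simulate(kp, ki, kd, setpoint=1.0, steps=200, dt=0.05):
    y, integral = 0.0, 0.0
    prev_err = setpoint          # derivative term is zero on the first step
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        y += dt * (-y + u)       # forward-Euler step of the plant
    return y

final = simulate(kp=2.0, ki=1.0, kd=0.1)
print(round(final, 3))           # settles near the setpoint of 1.0
```

An optimizer like the paper's would treat (kp, ki, kd) as trainable parameters and minimize the accumulated mismatch between `y` and the setpoint.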
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. Through the amalgamation of sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, this controlling technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
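The SPWM building block mentioned above is a sine-triangle comparison: a sinusoidal reference is compared against a high-frequency triangular carrier to generate the switching pattern. A minimal sketch (carrier frequency, sample rate, and modulation index are assumptions for illustration, not the paper's values):

```python
import numpy as np

# Sine-triangle SPWM: the upper switch conducts whenever the sinusoidal
# reference exceeds the triangular carrier.
fs = 200_000                         # simulation sample rate (Hz)
t = np.arange(0, 0.02, 1 / fs)       # one 50 Hz fundamental period
ref = 0.8 * np.sin(2 * np.pi * 50 * t)   # reference, modulation index 0.8
carrier_f = 2000                     # 2 kHz triangular carrier
carrier = 2 * np.abs(2 * ((carrier_f * t) % 1) - 1) - 1  # triangle in [-1, 1]
gate = (ref > carrier).astype(int)   # gating signal for the upper switch

duty = gate.mean()
print(round(duty, 3))                # near 0.5 for a symmetric reference
```

In a multilevel inverter, several such comparisons (against level-shifted or phase-shifted carriers) are combined; the paper's contribution is a simplified pattern that reduces the number of voltage references needed.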
A review on features and methods of potential fishing zoneIJECEIAES
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features such as sea surface temperature (SST) and sea surface height (SSH), along with the classification methods used to classify the data. The study underscores the importance of examining potential fishing zones using advanced analytical techniques and thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms to identify potential fishing zones, whose prediction relies significantly on the effectiveness of these algorithms. Previous research has assessed the performance of models such as support vector machines (SVM), naïve Bayes, and artificial neural networks (ANN); in one reported result, SVM classified fisheries test data with 97.6% accuracy compared to 94.2% for naïve Bayes. Considering recent work in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f...IJECEIAES
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make small devices that boost productivity, with the goal of optimizing device density. Scientists are reducing connection delays to improve circuit performance, which led to three-dimensional integrated circuit (3D IC) concepts that stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical noise coupling is a major concern with 3D integrated circuits. Researchers have developed and tested through-silicon via (TSV) and substrate designs to decrease electrical coupling. This study illustrates a novel noise-coupling reduction method using several electrical coupling models. A 22% drop in coupling from wave-carrying to victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet has forced the United Nations and governments to promote green energy and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained stronger momentum due to their numerous advantages over fossil fuel alternatives, advantages that go beyond sustainability to include financial support and stability. This paper introduces a hybrid PV-EV system to support industrial and commercial plants. It covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows plants to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farm support the theoretical work and highlight its benefits to existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. The study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and give recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using the switching strategy, low-order harmonics in the output current and voltage are not produced, and consequently the size of the output filter can be reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
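The NARX idea is to regress the current output on lagged outputs and (possibly nonlinear functions of) lagged inputs. A minimal sketch on a synthetic system fitted by least squares; the system below is an illustrative toy, not a real battery model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: "voltage" y depends nonlinearly on its own history and
# on the "current" input u (illustrative coefficients, noiseless).
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.6 * y[k-1] - 0.1 * y[k-2] + 0.5 * u[k-1] + 0.2 * u[k-1] ** 2

# NARX regression: predict y[k] from lagged outputs/inputs plus a
# nonlinear (squared) input term, fitted by ordinary least squares.
rows, targets = [], []
for k in range(2, 500):
    rows.append([y[k-1], y[k-2], u[k-1], u[k-1] ** 2])
    targets.append(y[k])
X = np.array(rows)
t = np.array(targets)
theta, *_ = np.linalg.lstsq(X, t, rcond=None)
print(np.round(theta, 3))    # recovers the generating coefficients above
```

Real identifications replace the least-squares step with a nonlinear estimator (e.g. a neural NARX) and validate on held-out drive cycles.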
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level on smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is presented. To that effect, we searched the Web of Science (WoS) and Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. With the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences for the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). Comparing all criteria together, the multi-criteria decision-making analysis yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods. Thus, it can be seen from the total weights that hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection method to select and implement in power systems with respect to the aforementioned factors. Using Expert Choice software, the proposed hierarchy model is studied and examined.
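AHP derives such priority weights from a pairwise comparison matrix, typically as its normalized principal eigenvector, with a consistency check on the principal eigenvalue. A sketch on a toy 3x3 matrix with made-up judgements (not the paper's actual comparisons):

```python
import numpy as np

# Pairwise comparison matrix: A[i, j] = how much more important
# criterion i is than criterion j (illustrative judgements only).
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3,   1.0, 3.0],
    [1/5,   1/3, 1.0],
])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                 # principal eigenvalue index
w = np.abs(vecs[:, k].real)
w = w / w.sum()                          # normalized priority weights

# Consistency ratio: CR = CI / RI, with CI = (lambda_max - n)/(n - 1)
# and RI the random index (0.58 for n = 3, per Saaty's table).
n = A.shape[0]
ci = (vals[k].real - n) / (n - 1)
cr = ci / 0.58
print(np.round(w, 3), round(cr, 3))      # CR < 0.1 means acceptable consistency
```

Tools like Expert Choice automate exactly this computation over the full criteria/alternatives hierarchy and aggregate the weights level by level.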
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m², the results show that the proposed method reduces the total harmonic distortion (THD) of the injected current to the grid by approximately 46% and 38% compared to conventional methods, respectively. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely the fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
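The baseline P&O algorithm the abstract builds on is simple hill-climbing on the power-voltage curve: perturb the operating voltage, and reverse direction whenever power drops. A sketch on a hypothetical single-peak PV curve (the curve and step size are illustrative assumptions):

```python
# Perturb & observe MPPT on a toy PV power curve P(V).
def pv_power(v):
    """Hypothetical single-peak power curve with its MPP at v = 30 V."""
    return max(0.0, 100.0 - 0.2 * (v - 30.0) ** 2)

def p_and_o(v0=20.0, step=0.5, iters=100):
    v, p_prev = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v += direction * step            # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                   # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v

v_mpp = p_and_o()
print(round(v_mpp, 1))  # oscillates around the 30 V maximum power point
```

The fixed step causes the steady-state oscillation the paper targets; adaptive-step P&O and fuzzy controllers shrink the perturbation near the peak to damp it.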
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed, the unknown components of the model are estimated online by a neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results, which show that the proposed controller achieves small synchronous tracking errors with significantly reduced chattering.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples during experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
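The compression-ratio idea can be illustrated with a simple delta-encode-then-deflate pipeline on a synthetic signal; the paper's adaptive on-FPGA scheme is not specified here, so this is only a sketch of how such a ratio is measured:

```python
import zlib
import numpy as np

# Sketch: delta-encode 16-bit samples (slowly varying signals yield small,
# highly repetitive deltas), then deflate before "transmission".
t = np.arange(4096)
signal = (1000 * np.sin(2 * np.pi * t / 256)).astype(np.int16)

deltas = np.diff(signal, prepend=signal[:1]).astype(np.int16)
raw = signal.tobytes()
packed = zlib.compress(deltas.tobytes(), level=9)

ratio = len(raw) / len(packed)   # compression ratio: raw bytes / sent bytes
print(round(ratio, 2))           # > 1 means fewer bytes on the wire
```

A higher ratio means more captured samples fit through the same debug link, which is the benefit the reported 2.90 average quantifies.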
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring the status parameters of solar cell systems (solar irradiance, temperature, and humidity) is critical to enhancing their efficiency. Hence, in the present article, an improved smart prototype of an internet of things (IoT) technique based on an embedded system using the NodeMCU ESP8266 (ESP-12E) was implemented experimentally. Three different regions of Egypt (the cities of Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profiles, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live in Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m² respectively during the solar day. The accuracy and speed of the monitoring results obtained with the proposed IoT system make it a strong candidate for monitoring solar cell systems. Moreover, the solar power radiation results for the three regions suggest Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell system station.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. IoT security systems must therefore monitor and restrict unwanted events in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention rely on supervised learning, which requires a large amount of labeled training data; such datasets are difficult to source in a large network like IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Home security is of paramount importance in today's world, where we rely ever more on technology, and using technology to make homes safer and easier to control from anywhere matters for the occupants' safety. In this paper, we present a low-cost, AI-based home security system. The system has a user-friendly interface, allowing users to start model training and face detection with simple keyboard commands. Our goal is to introduce an innovative home security system using facial recognition technology. Unlike traditional systems, this system trains on and saves images of friends and family members, scans this folder to recognize familiar faces, and provides real-time monitoring. If an unfamiliar face is detected, it promptly sends an email alert, ensuring a proactive response to potential security threats.
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket management is a stand-alone J2EE program built with Eclipse Juno. This project contains all the information required to maintain the supermarket billing system. The core idea of this project is to minimize paperwork and centralize the data. All communication is handled in a secure manner: in this application, the information is stored on the client itself, and for further security the database is stored in the Oracle back end so that no intruders can access it.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Accident detection system project report.pdfKamal Acharya
The Rapid growth of technology and infrastructure has made our lives easier. The
advent of technology has also increased the traffic hazards and the road accidents take place
frequently which causes huge loss of life and property because of the poor emergency facilities.
Many lives could have been saved if emergency service could get accident information and
reach in time. Our project will provide an optimum solution to this draw back. A piezo electric
sensor can be used as a crash or rollover detector of the vehicle during and after a crash. With
signals from a piezo electric sensor, a severe accident can be recognized. According to this
project when a vehicle meets with an accident immediately piezo electric sensor will detect the
signal or if a car rolls over. Then with the help of GSM module and GPS module, the location
will be sent to the emergency contact. Then after conforming the location necessary action will
be taken. If the person meets with a small accident or if there is no serious threat to anyone’s
life, then the alert message can be terminated by the driver by a switch provided in order to
avoid wasting the valuable time of the medical rescue team.
ISSN: 2088-8708
IJECE Vol. 7, No. 4, August 2017: 2169–2175
metric map-building systems rely entirely on the map itself, which is built and then used in the next phase of navigation. This category also includes systems that perform self-localization in the environment simultaneously with map construction [5]. Other types of map-building navigation systems are also encountered, e.g. visual sonar-based systems or local map-based systems. Both of these systems collect environmental data while navigating and build a local map that is used to support the intended navigational purpose. The local map covers obstacles and free space, and is usually a function of the viewing angle of the camera [10]. The last system is the topological map-based one, which builds a topology map consisting of nodes connected by links, where the nodes represent specific places/positions in the environment and the links represent the distance or travel time between two nodes [10], [11].
The next type is the mapless navigation system, which mostly covers reactive techniques that use visual cues built from image segmentation, optical flow, or feature matching between successive image frames. There is no representation of the environment in these systems; the environment is perceived as the system navigates, recognizes objects, or tracks landmarks [5]. One of the most important parts of vision-based robot navigation is how to process the data obtained from the camera sensor into information useful for the robot to navigate toward a specific point in the environment or a specified arena. In this work, the navigation performed by a humanoid soccer robot is based on detecting an orange ball [12], using readings obtained from a CMUcam5 Pixy camera. The complexity of the algorithm used to read the detection results of the object is then analyzed in more detail [13].
Vision-based navigation methods are divided into three categories, i.e. map-based, map-building, and mapless navigation. One map-based navigation method was described in [14], which uses a stereo camera on a wheeled robot acting as a golf ball collector. It is supported by a wide-angle camera mounted above the golf course with a viewing area covering 20 m x 20 m, a vertical viewing angle of 80, a horizontal viewing angle of 55, and a camera installation height of 7 m above the ground. The resolution captured by this camera is 1280x780 pixels. The image captured by the camera is processed by a computer server, which builds a grid-shaped navigation map, also referred to as an occupancy map, indicating the position of the ball-collecting robot on the golf course and the positions of the balls to be collected. These positions are determined by the server, and the robot moves to a given position based on the information sent wirelessly [14], [15]. The next category of navigation method is map-building navigation, described in [11], [12]. A new algorithm in this category was developed using a stereo camera [16]. Its aim is to estimate the free space in front of the vehicle (car) on which the system runs, particularly for navigation on the highway. By utilizing the disparity map [17], information is obtained on whether the traffic in front of the vehicle is 'dense' or 'sparse'.
Building further on the disparity map, and because the sensor used is a stereo camera, the next step is a stereo matching process based on image data of the longitudinal road surface. Free space is detected by extracting the road surface without any obstacle. The last category is the mapless navigation method. Optical flow is one of the best methods that mimics the visual behavior of animals such as bees: the robot's basic movement is determined by the difference in speed between the image seen by the right eye (camera) and the image seen by the left eye, and the robot moves to the side where the change in image speed is smallest [5]. This method is widely used to detect obstacles, and it has been developed further into many algorithms, including the Horn-Schunck algorithm (HSA) and the Lucas-Kanade algorithm (LKA). The HSA proposes an optical flow equation that satisfies the global smoothness constraints, while the LKA proposes a weighted least-squares formulation of optical flow [18]-[20].
Furthermore, optical flow is the instantaneous velocity of each pixel in the image at a certain point. The optical flow field is described as the gray-level field of all pixels in the observed image. This optical flow field reflects the relationship, in the time domain, between the positions of the same pixel in adjacent frames [18]. Inconsistencies between the direction of the optical flow field and the major movements can be used to detect an obstacle. The optical flow field is computed from multiple images obtained from the same camera at different times, and the main movement of the camera is estimated based on the optical flow field (OFF). Optical flow is generated by the camera's translational and rotational movements: if the distance between an object in the field of view and the camera is d, the angle between the object and the direction of translational movement is θ, and the camera moves with translational velocity (TV) v and angular velocity (AV) ω, then the optic flow generated by the object can be calculated by Equation (1) [18]:
F = (v / d) sin θ + ω (1)
Distance Estimation based on Color-Block: A Simple Big-O Analysis (Budi Rahmani)
Where:
F = the amount of optic flow
v = translational velocity (displacement speed)
d = distance of the object in the camera's field of view
ω = angular velocity
θ = the angle between the object and the direction of translational movement
2. RESEARCH METHOD
The following is the algorithm that detects the presence of objects of a specific color using the Pixy CMUcam5 camera, whose output is fed to the controller (an Arduino Nano with the ATmega328) via a serial communication line. The program code executed by the controller continuously reads serial data describing the existence of an object with a specific color (orange, registered as signature 1); if such an object is present, it is detected as 'blocks' of that color. Every block detected by the camera is sent to the controller at a rate of 50 frames per second [21]. Hence the controller also introduces a matching delay in the reading process; this is done to synchronize the data sent by the camera with the data received by the controller. The controller then relays the received data to the user via the universal serial bus (USB) port, namely the number of color 'blocks' detected by the camera, their positions along the x and y axes, and the width and height of each block. The block diagram of the system, matching the above description, is presented in Figure 1.
Figure 1. Block diagram of the system (single camera, Controller_1, motion Controller_2, tilt servo, and 18-DOF body servos)
The algorithm that reads the detection data of the orange ball object is shown below [21], [22].
void loop()
{
  static int i = 0;     // frame counter; static, so it keeps its value between calls
  int j;                // loop index over the detected blocks
  uint16_t blocks;      // number of color blocks detected in the current frame
  char buf[32];         // text buffer of 32 characters for serial output

  // Read the detection result from the Pixy camera
  blocks = pixy.getBlocks();

  // If one or more blocks (objects) are found,
  // print the result once every 50 frames
  if (blocks)
  {
    i++;
    if (i % 50 == 0)
    {
      sprintf(buf, "Detected %d:\n", blocks);
      Serial.print(buf);
      for (j = 0; j < blocks; j++)
      {
        sprintf(buf, " block %d: ", j);
        Serial.print(buf);
        pixy.blocks[j].print();
      }
    }
  }
}
3. RESULTS AND ANALYSIS
The results displayed on a personal computer (PC) for the data read by the controller are shown in Figure 2.
Figure 2. Results of colored object detection (via USB)
The above algorithm is analyzed step by step as follows [23]:
a. Number of instructions
The number of instructions in the algorithm is:
static int i = 0;                          1 instruction
int j;                                     1 instruction
uint16_t blocks;                           1 instruction
char buf[32];                              1 instruction
blocks = pixy.getBlocks();                 2 instructions
if (blocks)                                1 instruction
{
  i++;                                     1 instruction
  if (i%50==0)                             1 instruction
  {
    sprintf(buf, "Detected %d:\n", blocks);  ignored
    Serial.print(buf);                       ignored
    for (j=0; j<blocks; j++)               2 instructions per iteration
    {
      sprintf(buf, " block %d: ", j);
      Serial.print(buf);
      pixy.blocks[j].print();
    }
  }
}
b. Determining the function of the above algorithm
If the instructions before the for loop are counted, they amount to 9 instructions; therefore the function is f(n) = 9 + 2n, where n is the number of detected blocks. From this function it can be seen that it is linear.
c. Determining the complexity of the obtained function
Based on the function obtained from the above algorithm, f(n) = 9 + 2n, the complexity in Big-O notation is O(n).
To verify that the algorithm analysis conducted above agrees with experimental, real-time measurements of the distance between the ball and the robot, the distance-measurement experiment was repeated. The sketch of the robot prototype used and the results obtained are shown in Figure 3 and Table 1, respectively.
Table 1 shows the results of the experimental, real-time measurements of the distance from the ball to the robot. The experiments show a rising trend in the pixel size of the ball's color block as the distance from the ball to the robot decreases. Based on the experiments conducted, the closest robot-ball distance that can be detected is approximately 51 cm, and the greatest distance is about 210 cm. This range follows from the construction of the camera mounted on the robot according to the prototype used; the resulting trend is shown in Figure 4.
Figure 3. Sketch of the prototype robot [2]
Figure 4. Linear graph of color-block size versus real robot-ball distance
Table 1. Color-block size versus real robot-ball distance

Distance information   Height (pixel)   Width (pixel)   Ball color-block size (pixel)   Real-time robot-ball distance (cm)
Farthest distance      15               16              240                             210
                       18               19              342                             190
                       21               20              420                             170
                       23               23              529                             150
                       24               26              624                             145
                       28               29              812                             140
                       29               29              841                             135
                       30               30              900                             130
                       31               30              930                             126
                       31               31              961                              76
Nearest distance       32               31              992                              51
4. CONCLUSION
Based on the analysis of the algorithm used to detect an orange tennis ball on a field or arena with a dark-green base color, the mathematical function f(n) = 9 + 2n was found, whose complexity in Big-O notation is O(n). This means that the complexity of the algorithm used is linear, which agrees with the results of the physical experimental measurements of the distance between the robot and the ball: the graph of the results shows that the measured distance tends to fall as the size of the detected color block of the ball increases. Hence the algorithm analysis method used here is quite effective for analyzing an algorithm.
REFERENCES
[1] W. L. Fehlman-II, M. K. Hinders, "Mobile Robot Navigation with Intelligent Infrared Image Interpretation",
London: Springer-Verlag London, 2009.
[2] B. Rahmani, A. Harjoko, T. K. Priyambodo, H. Aprilianto, “Research of Smart Real-time Robot Navigation
System”, in The 7th SEAMS-UGM Conference 2015, 2015, pp. 1–8.
[3] B. Rahmani, A. E. Putra, A. Harjoko, T. K. Priyambodo, “Review of Vision-Based Robot Navigation Method”,
IAES Int. J. Robot. Autom., vol. 4, no. 4, pp. 31–38, 2015.
[4] P. Alves, H. Costelha, C. Neves, “Localization and navigation of a mobile robot in an office-like environment”, in
2013 13th International Conference on Autonomous Robot Systems, 2013, pp. 1–6.
[5] A. Chatterjee, A. Rakshit, and N. N. Singh, Vision Based Autonomous Robot Navigation. Springer US, 2013.
[6] I. Iswanto, O. Wahyunggoro, A. Imam Cahyadi, “Path Planning Based on Fuzzy Decision Trees and Potential
Field,” Int. J. Electr. Comput. Eng., vol. 6, no. 1, p. 212, 2016.
[7] A. M. Pinto, A. P. Moreira, P. G. Costa, “A Localization Method Based on Map-Matching and Particle Swarm
Optimization,” J. Intell. Robot. Syst., pp. 313–326, 2013.
[8] R. Strydom, S. Thurrowgood, M. V Srinivasan, “Visual Odometry : Autonomous UAV Navigation using Optic
Flow and Stereo,” Australas. Conf. Robot. Autom., pp. 2–4, 2014.
[9] J. Cao, X. Liao, E. L. Hall, “Reactive Navigation for Autonomous Guided Vehicle Using the Neuro-fuzzy
Techniques,” in Proc. SPIE 3837, Intelligent Robots and Computer Vision XVIII: Algorithms, Techniques, and
Active Vision, 1999, pp. 108–117.
[10] F. Bonin-Font, A. Ortiz, G. Oliver, “Visual Navigation for Mobile Robots : A Survey,” J. Intell. Robot Syst., vol.
53, pp. 263–296, 2008.
[11] M. S. Rahman and K. Kim, “Indoor Positioning by LED Visible Light Communication and Image Sensors,” Int. J.
Electr. Comput. Eng., vol. 1, no. 2, pp. 161–170, 2011.
[12] F. Maleki and Z. Farhoudi, “Making Humanoid Robots More Acceptable Based on the Study of Robot Characters
in Animation,” vol. 4, no. 1, pp. 63–72, 2015.
[13] B. Earl, “Pixy Pet Robot - Color Vision Follower using Pixycam,” Adafruits Learning System, 2015. [Online].
Available: https://learn.adafruit.com/pixy-pet-robot-color-vision-follower-usingpixycam.
[14] C. H. Yun, Y. S. Moon, N. Y. Ko, “Vision Based Navigation for Golf Ball Collecting Mobile Robot,” Int. Conf.
Control. Autom. Syst., no. Iccas, pp. 201–203, 2013.
[15] R. L. Klaser, F. S. Osorio, D. Wolf, “Vision-Based Autonomous Navigation with a Probabilistic Occupancy Map
on Unstructured Scenarios,” 2014 Jt. Conf. Robot. SBR-LARS Robot. Symp. Rob., pp. 146–150, 2014.
[16] K.-Y. Lee, J.-M. Park, J.-W. Lee, “Estimation of Longitudinal Profile of Road Surface from Stereo Disparity Using
Dijkstra Algorithm,” Int. J. Control. Autom. Syst., vol. 12, no. 4, pp. 895–903, 2014.
[17] S. A. R. Magrabi, “Simulation of Collision Avoidance by Navigation Assistance using Stereo Vision,” Spaces,
pp. 58–61, 2015.
[18] Q. Wu, J. Wei, X. Li, “Research Progress of Obstacle Detection Based on Monocular Vision,” in 2014 Tenth
International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 2014, pp. 195–198.
[19] N. Ohnishi, A. Imiya, “Appearance-based navigation and homing for autonomous mobile robot,” Image Vis.
Comput., vol. 31, no. 6–7, pp. 511–532, 2013.
[20] M. L. Wang, J. R. Wu, L. W. Kao, H. Y. Lin, “Development of a vision system and a strategy simulator for middle
size soccer robot,” 2013 Int. Conf. Adv. Robot. Intell. Syst. ARIS 2013 - Conf. Proc., pp. 54–58, 2013.
[21] A. Rowe, R. LeGrand, S. Robinson, “An Overview of CMUcam5 Pixy,” 2015. [Online]. Available:
http://www.cmucam.org/. [Accessed: 10-Jun-2015].
[22] A. J. R. Neves, A. J. Pinho, D. A. Martins, B. Cunha, “An efficient omnidirectional vision system for soccer robots:
From calibration to object detection,” Mechatronics, vol. 21, no. 2, pp. 399–410, 2011.
[23] D. Zindros, “A Gentle Introduction to Algorithm Complexity Analysis,” 2012. [Online]. Available:
http://www.discrete.gr/complexity/. [Accessed: 10-Apr-2015].
BIOGRAPHIES OF AUTHORS
Budi Rahmani received his bachelor's degree in Electrical Engineering and his Master of Informatics from Yogyakarta State University and Dian Nuswantoro University in 2003 and 2010, respectively. He has been a doctoral student at the Department of Computer Science and Electronics, Universitas Gadjah Mada, since 2014. He is interested in embedded systems and robotics. His current research focuses on computer vision and control systems for robots. His other research interests include decision support systems using artificial neural networks. He can be contacted by email: budirahmani@gmail.com
http://budirahmani.wordpress.com
Hugo Aprilianto received his bachelor's degree in Informatics and his Master of Informatics from Universitas Dr. Soetomo Surabaya and Sekolah Tinggi Teknik Surabaya in 1998 and 2007, respectively. He is currently a lecturer at STMIK Banjarbaru. He is interested in computer architecture, embedded systems, and neural networks. He can be contacted by email: hugo.aprilianto@gmail.com
Heru Ismanto received his bachelor's degree in Informatics and his Master of Computer Science from Universitas Padjadjaran and Universitas Gadjah Mada in 1996 and 2009, respectively. He is currently a lecturer at the Department of Informatics Engineering, Musamus University, Merauke, Papua, and has been a doctoral student at the Department of Computer Science and Electronics, Universitas Gadjah Mada, since 2014. He is interested in decision support systems, GIS, and e-government systems. He can be contacted by email: heru.ismanto@mail.ugm.ac.id or heru.ismanto31@yahoo.com
Hamdani received his bachelor's degree in Informatics from Universitas Ahmad Dahlan, Yogyakarta, Indonesia, in 2002, and his Master of Computer Science from Universitas Gadjah Mada, Yogyakarta, Indonesia, in 2009. He is currently a lecturer at the Department of Computer Science at Universitas Mulawarman, Samarinda, East Kalimantan, Indonesia, and a Ph.D. candidate in Computer Science at the Department of Computer Science and Electronics, Universitas Gadjah Mada, Yogyakarta, Indonesia. His research interests are group decision support systems/decision models, social network analysis, GIS, and web engineering. Email: hamdani@fkti.unmul.ac.id, URL:
http://daniunmul.weebly.com