This document summarizes a study comparing the performance of camera-only, IMU-only, and combined camera-IMU systems for visual-inertial navigation (VisNav) and simultaneous localization and mapping (SLAM). Data were collected from a camera and an IMU mounted on a rolling platform traversing a test course. The camera performed poorly around corners, while the IMU exhibited offset errors, but combining the two sensors was superior to either alone. Statistical analysis showed that the mean error of the camera-IMU system was significantly lower than that of the camera-only and IMU-only systems. Although the study was limited in scope, integrating the sensors compensated for their individual weaknesses.
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM (csandit)
Computer vision approaches are increasingly used in mobile robotic systems, since they make it possible to obtain a very good representation of the environment using low-power, inexpensive sensors. In particular, it has been shown that they can compete with standard solutions based on laser range scanners for the problem of simultaneous localization and mapping (SLAM), in which a robot must explore an unknown environment while building a map of it and localizing itself within that map. We present a package for simultaneous localization and mapping in ROS (Robot Operating System) that uses only a monocular camera. Experimental results in real scenarios, as well as on standard datasets, show that the algorithm is able to track the trajectory of the robot and build a consistent map of small environments while running in near real time on a standard PC.
Abstract - Positioning is a fundamental component of human life, enabling meaningful interpretation of the environment. Without knowledge of position, human beings are like machines, with very limited capability to interact with their surroundings. Even today's machines can be made smarter if positioning information is made available to them. Indoor positioning of pedestrians is the broad area considered in this thesis, and a foot-mounted pedestrian tracking device has been studied for this purpose. Systems that utilize a foot-mounted inertial navigation system have appeared in the literature for more than two decades; however, very few real-time implementations have been possible. The purpose of this thesis is to benchmark and improve the performance of one such implementation.
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation (IJECEIAES)
The main disadvantage of an Inertial Navigation System is low accuracy due to noise, bias, and drift errors in the inertial sensors. This research develops an accelerometer- and gyroscope-based navigation system for a quadrotor with bias compensation and Zero Velocity Compensation (ZVC). A Kalman Filter is designed to reduce sensor noise, while bias compensation and ZVC are designed to eliminate the bias and drift errors in the sensor data. Test results showed that the Kalman Filter design is acceptable for reducing noise in the sensor data. Moreover, the bias compensation and ZVC can reduce the drift error caused by the integration process and improve the position estimation accuracy of the quadrotor. At the time of testing, the system provided accuracy above 90% when tested indoors.
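The interplay of bias compensation and Zero Velocity Compensation described above can be sketched as a one-dimensional toy example. This is an illustrative simplification, not the paper's implementation: the function name, the constant-bias assumption, and the externally supplied stillness flags are ours.

```python
def integrate_with_zvc(accel, dt, bias, still):
    """Integrate accelerometer samples into velocity, removing a
    pre-estimated constant bias, and apply Zero Velocity Compensation:
    whenever the platform is known to be stationary, the accumulated
    velocity (integration drift) is reset to zero."""
    v = 0.0
    velocities = []
    for a, is_still in zip(accel, still):
        v += (a - bias) * dt      # bias-compensated integration
        if is_still:
            v = 0.0               # ZVC: cancel accumulated drift
        velocities.append(v)
    return velocities
```

With the bias removed, a sensor that reads only its own bias integrates to zero velocity, and any residual drift is cleared at each detected standstill.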
Evolution of a shoe-mounted multi-IMU pedestrian dead reckoning (PDR) sensor (oblu.io)
Shoe-mounted inertial navigation systems, also known as pedestrian dead reckoning (PDR) sensors, are increasingly preferred for pedestrian navigation because of the accuracy they offer. Such shoe sensors are, for example, the obvious choice for real-time location systems for first responders. The open-source platform OpenShoe has reported the application of multiple IMUs in shoe-mounted PDR sensors to improve noise performance. In this paper, we present an experimental study of the noise performance and the operating-clock-dependent power consumption of multi-IMU platforms. The noise performance of a multi-IMU system is studied for different combinations of IMUs, and a four-IMU system is observed to be best optimized for cost, area, and power. Experiments with varying operating-clock frequencies are performed on an in-house four-IMU shoe-mounted inertial navigation module (the Oblu module), and power-optimized operating-clock frequencies are obtained from the results. The overall study thus suggests that, by selecting a well-designed operating point, a multi-IMU system can be made cost-, size-, and power-efficient without practically affecting its superior positioning performance.
Despite being around for almost two decades, foot-mounted inertial navigation has achieved only limited spread. Contributing factors are the lack of suitable hardware platforms and difficult system integration. As a solution, we present an open-source wireless foot-mounted inertial navigation module with an intuitive and significantly simplified dead reckoning interface. The interface is motivated by statistical properties of the underlying aided inertial navigation and is argued to cause negligible information loss. The module consists of both a hardware platform and embedded software. Details of the platform and the software are described, and a summarizing description of how to reproduce the module is given. System integration of the module is outlined, and finally we provide a basic performance assessment. In summary, the module modularizes foot-mounted inertial navigation and makes the technology significantly easier to use.
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA... (csandit)
In today’s technological life, everyone is familiar with the importance of security measures, and many attempts have been made by researchers in this regard; one of them is flying-robot technology. One well-known use of a flying robot is its capability in security and care tasks, which makes the device extremely practical, not only for its unmanned movement but also for its unique manoeuvrability during flight over arbitrary areas. In this research, the automatic landing of a flying robot is discussed. The system is based on frequent interrupts sent from the main microcontroller to the camera module to capture images; these images are analysed by an image processing system based on edge detection, after which the system can decide whether or not to land. Experiments show that this method performs well in terms of precision.
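Edge-detection-based decisions of the kind described above start from a gradient map of the image. The sketch below is a minimal forward-difference edge detector, not the paper's algorithm; the threshold value and the image representation (lists of pixel rows) are our assumptions.

```python
def edge_map(img, threshold):
    """Mark a pixel as an edge when the intensity difference to its right
    or lower neighbour exceeds a threshold. img is a list of equal-length
    rows of integer intensities; the last row/column is left unmarked."""
    h, w = len(img), len(img[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(img[y][x + 1] - img[y][x])   # horizontal gradient
            gy = abs(img[y + 1][x] - img[y][x])   # vertical gradient
            edges[y][x] = max(gx, gy) > threshold
    return edges
```

A landing decision could then be driven by statistics of this map, e.g. the fraction of edge pixels inside a candidate landing zone.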
An Experimental Study on a Pedestrian Tracking Device (oblu.io)
The implemented navigation algorithm of an inertial navigation system (INS), together with its hardware configuration, determines its tracking performance; operating conditions also influence it. The aim of this study is to demonstrate the robust performance of a multiple-IMU (Inertial Measurement Unit) foot-mounted INS, the Osmium MIMU22BTP, under varying operating conditions. The device, which performs zero-velocity-update (ZUPT) aided navigation, is subjected to different conditions that could potentially influence the gait of its wearer, its hardware configuration, etc. The gait-influencing factors chosen for study are shoe type, walking surface, path profile, and walking speed. The tracking performance of the device is also studied for different numbers of on-board IMUs and ambient temperatures. The tracking performance of the MIMU22BTP is reported for all these factors and benchmarked using identified performance metrics. We observe very robust tracking performance: the average relative errors are less than 3 to 4% under all conditions, with respect to drift, distance, and height, indicating potential for a variety of location-based services built on foot-mounted inertial sensing and dead reckoning.
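The zero-velocity-update (ZUPT) aiding mentioned above hinges on detecting the stance phase of gait. A common family of detectors thresholds a statistic of the accelerometer signal over a sliding window; the pure-Python sketch below uses the variance of the acceleration magnitude and is only a schematic stand-in for the device's actual detector (window length and threshold are illustrative).

```python
def detect_stance(accel_mag, window, threshold):
    """Flag each sample as stance (zero-velocity) when the variance of the
    acceleration magnitude over a trailing window falls below a threshold:
    during stance the foot is still and the signal is nearly constant."""
    flags = []
    for i in range(len(accel_mag)):
        lo = max(0, i - window + 1)
        seg = accel_mag[lo:i + 1]
        mean = sum(seg) / len(seg)
        var = sum((x - mean) ** 2 for x in seg) / len(seg)
        flags.append(var < threshold)
    return flags
```

Samples flagged as stance are then fed to the filter as pseudo-measurements of zero velocity, bounding the velocity error growth between steps.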
Indoor localisation and dead reckoning using Sensor Tag™ BLE (Abhishek Madav)
The mobile application uses accelerometer and gyroscope readings from the Sensor Tag to describe planar motion. The project was implemented as part of the EECS 221 coursework at the University of California, Irvine.
Real Time Object Identification for Intelligent Video Surveillance Applications (Editor IJCATR)
Intelligent video surveillance has emerged as a very important research topic in computer vision in recent years. It is well suited to a broad range of applications, such as monitoring activity at traffic intersections to detect congestion and predict traffic flow. Object classification is a key component of smart surveillance software. This paper proposes two robust methodologies and algorithms for people and object classification in automated surveillance systems. The first method uses a background-subtraction model to detect object motion; background subtraction and image segmentation based on morphological transformation are used for tracking and object classification on highways. This algorithm applies erosion followed by dilation to successive frames and segments the image while preserving important edges, which improves the adaptive background mixture model and makes the system learn faster and more accurately. The second method detects objects without background subtraction, since the objects are static: segmentation is done with a bounding-box registration technique, and classification is performed with a multiclass SVM using edge histograms as features. The edge histograms are calculated for various bin values in different environments. The results obtained demonstrate the effectiveness of the proposed approach.
Massive Sensors Array for Precision Sensing (oblu.io)
With more than a billion smartphones sold annually and growth at a CAGR of 16%, the smartphone industry has become a driving force in the development of ultra-low-cost inertial sensors. Unfortunately, these ultra-low-cost sensors do not yet meet the needs of more demanding applications such as inertial navigation and biomedical motion tracking. However, by adopting "wisdom of the crowd" thinking and designing arrays of hundreds of sensing elements, one can capitalize on the decreasing cost, size, and power consumption of the sensors to construct virtual high-performance, low-cost inertial sensors. Teams at KTH, Sweden, and WUSTL, USA, share findings and challenges.
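The "wisdom of the crowd" effect behind such arrays is, at its simplest, noise averaging: for N sensors with independent zero-mean noise of standard deviation sigma, the averaged output has standard deviation sigma/sqrt(N). The quick Monte-Carlo check below illustrates this rule; the sensor count, noise level, and seed are arbitrary choices of ours, not figures from the article.

```python
import random
import statistics

def fuse_array(readings):
    """Average one simultaneous reading from each sensor in the array."""
    return sum(readings) / len(readings)

# Simulate 16 identical sensors with Gaussian noise and check the
# sqrt(N) reduction: 0.5 / sqrt(16) = 0.125 expected fused sigma.
random.seed(0)
N, true_value, sigma = 16, 1.0, 0.5
fused = [fuse_array([random.gauss(true_value, sigma) for _ in range(N)])
         for _ in range(2000)]
fused_sigma = statistics.stdev(fused)
```

Real IMU arrays gain less than this ideal, since bias instability and common-mode errors (temperature, vibration) do not average away like white noise.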
Inertial Sensor Array Calibration Made Easy! (oblu.io)
Ultra-low-cost single-chip inertial measurement units (IMUs) combined into IMU arrays are opening up new possibilities for inertial sensing. To make these systems practical, however, calibration and misalignment compensation of low-cost IMU arrays are necessary, along with a simple procedure that aligns the sensitivity axes of the sensors in the array. The team at KTH proposes a novel calibration procedure, free of mechanical rotation rigs, based on blind system identification and a platonic solid (an icosahedron) printable on a contemporary 3D printer. MATLAB scripts for the parameter estimation and production files for the calibration device are made available.
Disparity map generation based on trapezoidal camera architecture for multi v... (ijma)
Visual content acquisition is a strategic functional block of any visual system. Despite its wide possibilities, arranging cameras to acquire good-quality visual content for multi-view video remains a huge challenge. This paper presents a mathematical description of the trapezoidal camera architecture and the relationships that facilitate determining camera positions for visual content acquisition in multi-view video and for depth-map generation. The strength of the trapezoidal camera architecture is that it allows an adaptive camera topology in which points within the scene, especially occluded ones, can be optically and geometrically viewed from several different viewpoints, either on the edge of the trapezoid or inside it. The concept of a maximum independent set, the characteristics of the trapezoid, and the fact that the camera positions (with few exceptions) differ in their vertical coordinates could well be used to address occlusion, which remains a major problem in computer vision with regard to depth-map generation.
This paper proposes a way of recognizing the foreground of a moving object with a Foreground Extraction algorithm for a Pan-Tilt-Zoom (PTZ) camera, combining foreground extraction with local histogram processing. Background images are modeled as multiple frames, and their corresponding camera pan and tilt angles are determined. First, the best-matched background is selected from the sequence of input frames based on camera pose information. The matched background image is then compensated against the current image, after which background subtraction is performed between the modeled background and the current image. Finally, before the local histogram process, noise is removed with morphological operators. As a result, the correct foreground moving objects are successfully extracted by these four steps.
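The subtraction and morphological-cleanup steps of such a pipeline can be sketched in pure Python. This is a schematic illustration of background subtraction followed by erosion, not the paper's implementation; the image size, threshold, and 4-neighbour structuring element are our assumptions.

```python
def foreground_mask(background, frame, diff_threshold):
    """Background subtraction: a pixel is a foreground candidate when its
    absolute difference from the modeled background exceeds the threshold."""
    return [[abs(f - b) > diff_threshold for f, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame, background)]

def erode(mask):
    """Morphological noise removal: a pixel survives erosion only if it and
    its four neighbours are all foreground, deleting isolated noise pixels."""
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (mask[y][x] and mask[y - 1][x] and mask[y + 1][x]
                         and mask[y][x - 1] and mask[y][x + 1])
    return out

# Toy demo: a 3x3 moving object plus one isolated noisy pixel.
background = [[0] * 5 for _ in range(5)]
frame = [[0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        frame[y][x] = 255
frame[0][4] = 255  # single-pixel noise
cleaned = erode(foreground_mask(background, frame, 100))
```

In the demo, erosion removes the isolated noisy pixel while the core of the 3x3 object survives; a real pipeline would follow erosion with dilation to restore the object's original extent.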
Humans have evolved to survive better and have advanced their inventions. Today, large numbers of robots replace manpower in severe or dangerous workplaces, and it is important to keep developing this technology as robotics progresses. This paper proposes an autonomous system that automatically finds a target in a scene, locks onto it, approaches it, and strikes it with a shooting mechanism. The main objective is to provide a reliable, cost-effective, and accurate technique for destroying an unusual threat in the environment using image processing.
Simultaneous Mapping and Navigation For Rendezvous in Space Applications (Nandakishor Jahagirdar)
The goal is to design and develop an image processing algorithm that can identify the docking station of a target spacecraft, as well as the distance, location, and angle of the docking station with respect to the chaser vehicle, using images from a single camera.
A METHOD OF TARGET TRACKING AND PREDICTION BASED ON GEOMAGNETIC SENSOR TECHNO... (cscpconf)
In view of the inherent defects of current airport surface surveillance systems, this paper proposes an asynchronous, target-perceiving-event-driven surface surveillance scheme based on geomagnetic sensor technology. Furthermore, a surface target tracking and prediction algorithm based on I-IMM is given, which improves on the IMM algorithm in the following respects: a weighted sum is performed on the mean of the residual errors and the model probabilistic likelihood function is reconstructed, increasing the identification of the true motion model; and the fixed model transition probability is updated with model posterior information, accelerating model switching and further improving model identification. During periods when a target is not perceptible, its trajectory can be predicted using the target motion model identified by the I-IMM algorithm. Simulation results indicate that the I-IMM algorithm is more effective and advantageous than the standard IMM algorithm.
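At the heart of any IMM-style filter, including the I-IMM variant above, is the model-probability update: prior model probabilities are mixed through the (possibly updated) transition matrix, weighted by each model's measurement likelihood, and renormalised. Below is a minimal sketch of that single step; a full I-IMM also carries a bank of per-model Kalman filters, which is omitted here.

```python
def update_model_probs(probs, transition, likelihoods):
    """One IMM model-probability step.
    probs:       prior probability of each motion model
    transition:  transition[i][j] = P(switch from model i to model j)
    likelihoods: measurement likelihood under each model's filter"""
    n = len(probs)
    # Mix priors through the Markov transition matrix.
    predicted = [sum(transition[i][j] * probs[i] for i in range(n))
                 for j in range(n)]
    # Weight by how well each model explained the measurement.
    weighted = [predicted[j] * likelihoods[j] for j in range(n)]
    s = sum(weighted)
    return [w / s for w in weighted]
```

The I-IMM modifications described above amount to reshaping `likelihoods` (via the residual-mean weighting) and refreshing `transition` from posterior information instead of keeping it fixed.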
AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & E... (aciijournal)
Forward Collision Avoidance (FCA) systems in automobiles are an essential part of Advanced Driver Assistance Systems (ADAS) and autonomous vehicles. These systems currently use radar as the main sensor. The increasing resolution of camera sensors, the processing capability of hardware chipsets, and advances in image processing algorithms have recently been pushing camera-based features. Monocular cameras face the challenge of accurate scale estimation, which limits their use as a stand-alone sensor for this application. This paper proposes an efficient system that performs multi-scale object detection (for which a patent has been granted) and efficient 3D reconstruction using a structure-from-motion (SfM) framework. While the algorithms need to be accurate, they must also operate in real time on low-cost embedded hardware. The focus of the paper is on how the proposed algorithms are designed so that they can provide real-time performance on low-cost embedded CPUs that use only Digital Signal Processors (DSPs) and vector processing cores.
A Fast Single-Pixel Laser Imager for VR/AR Headset Tracking (Ping Hsu)
In this work we demonstrate a highly flexible laser imaging system for 3D sensing applications such as tracking VR/AR headsets, hands, and gestures. The system uses a MEMS mirror scan module to transmit low-power laser pulses over programmable areas within a field of view, and a single photodiode to measure the reflected light...
Design and Implementation of Spatial Localization Based on Six-axis MEMS Sensor (IJRES Journal)
This paper studies spatial orientation using a 3-axis MEMS gyroscope and a 3-axis MEMS accelerometer. To avoid environmental influence on positioning, a motion model based on physical principles is established and combined with a coordinate-transformation method. A new spatial positioning system is designed on the STM32 microcontroller platform with the integrated MPU60x0 chip (3-axis MEMS gyroscope and 3-axis MEMS accelerometer), using the I2C protocol to transfer information. The system is highly integrated, with a simple circuit, small size, and low power consumption, and is easy to extend and maintain. It can be used as an adjunct to wireless-network-based positioning to improve accuracy, and can also be applied in areas where the required positioning precision is relatively low.
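A basic building block of such accelerometer-plus-gyroscope orientation systems is recovering tilt from a static accelerometer reading, where gravity is the only sensed acceleration. The sketch below shows that step only; the axis conventions are an assumption of ours, and a real system would fuse this with the gyroscope (e.g. in a complementary or Kalman filter) to remain valid during dynamic motion.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate roll and pitch (radians) from a single static 3-axis
    accelerometer sample: with gravity as the only sensed acceleration,
    the tilt angles follow directly from its measured components."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch
```

Yaw cannot be recovered this way, since rotation about the gravity vector leaves the accelerometer reading unchanged; that is where the gyroscope (or a magnetometer) comes in.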
Intelligent indoor mobile robot navigation using stereo vision (sipij)
The majority of existing robot navigation systems, which rely on laser range finders, sonar sensors, or artificial landmarks, have the ability to locate a robot in an unknown environment and then build a map of it. Stereo vision, while still a rapidly developing technique in the field of autonomous mobile robots, is currently less preferred due to its high implementation cost. This paper describes an experimental approach to building a stereo vision system that helps robots avoid obstacles and navigate through indoor environments while remaining very cost-effective. The paper discusses fusion techniques for stereo vision and ultrasound sensors that enable successful navigation through different types of complex environments. The sensor data enables the robot to create a two-dimensional topological map of unknown environments, while the stereo vision system builds a three-dimensional model of the same environment.
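The depth information such a stereo system recovers comes from the classic pinhole relation Z = f * B / d: depth equals focal length (in pixels) times the camera baseline (in metres) divided by the disparity (in pixels) between the two rectified views. A minimal sketch, with invented numbers in the test:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth Z = f * B / d for a rectified camera pair.
    Larger disparity means the point is closer; zero disparity would
    put it at infinity, so it is rejected."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

The relation also explains the cost trade-off in the abstract: a wider baseline B improves depth resolution at range but raises rig size and calibration demands.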
detection which is being patent granted and efficient 3D reconstruction using structure from motion (SFM)
framework. While the algorithms need to be accurate it also needs to operate real time in low cost
embedded hardware. The focus of the paper is to discuss how the proposed algorithms are designed in such
a way that it can be provide real time performance on low cost embedded CPU’s which makes use of only Digital Signal processors (DSP) and vector processing cores.
A Fast Single-Pixel Laser Imager for VR/AR Headset TrackingPing Hsu
In this work we demonstrate a highly flexible laser imaging system for 3D sensing applications such as in tracking of VR/AR headsets, hands and gestures. The system uses a MEMS mirror scan module to transmit low power laser pulses over programmable areas within a field of view and uses a single photodiode to measure the reflected light...
Design and Implementation of Spatial Localization Based on Six -axis MEMS SensorIJRES Journal
This paper focuses on the 3-axis MEMS gyroscope, 3-axis MEMS accelerometer study spatial
orientation. In order to avoid the influence of the environment on the positioning of the text based on physical
principles established sports model, combining coordinate transformation method, the microcontroller STM32
platform with integrated 3-axis MEMS gyroscope, 3-axis MEMS accelerometer chip MPU60x0 designed a new
space positioning system, and using I2C protocol to transfer information. The system is highly integrated, simple
circuit, small size, low power consumption, easy expansion, easy maintenance, etc., can be used as an adjunct to
a wireless network based positioning, improve positioning accuracy, precision can also be positioned relatively
low areas applications.
Intelligent indoor mobile robot navigation using stereo visionsipij
Majority of the existing robot navigation systems, which facilitate the use of laser range finders, sonar
sensors or artificial landmarks, has the ability to locate itself in an unknown environment and then build a
map of the corresponding environment. Stereo vision,while still being a rapidly developing technique in the
field of autonomous mobile robots, are currently less preferable due to its high implementation cost. This
paper aims at describing an experimental approach for the building of a stereo vision system that helps the
robots to avoid obstacles and navigate through indoor environments and at the same time remaining very
much cost effective. This paper discusses the fusion techniques of stereo vision and ultrasound sensors
which helps in the successful navigation through different types of complex environments. The data from
the sensor enables the robot to create the two dimensional topological map of unknown environments and
stereo vision systems models the three dimension model of the same environment.
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...ijma
This paper deals with leader-follower formations of non-holonomic mobile robots, introducing a formation
control strategy based on pixel counts using a commercial grade electro optics camera. Localization of the
leader for motions along line of sight as well as the obliquely inclined directions are considered based on
pixel variation of the images by referencing to two arbitrarily designated positions in the image frames.
Based on an established relationship between the displacement of the camera movement along the viewing
direction and the difference in pixel counts between reference points in the images, the range and the angle
estimate between the follower camera and the leader is calculated. The Inverse Perspective Transform is
used to account for non linear relationship between the height of vehicle in a forward facing image and its
distance from the camera. The formulation is validated with experiments.
Speed Determination of Moving Vehicles using Lucas- Kanade AlgorithmEditor IJCATR
This paper presents a novel velocity estimation method for ground vehicles. The task here is to automatically estimate
vehicle speed from video sequences acquired with a fixed mounted camera. The vehicle motion is detected and tracked along the
frames using Lucas-Kanade algorithm. The distance traveled by the vehicle is calculated using the movement of the centroid over the
frames and the speed of the vehicle is estimated. The average speed of cars is determined from various frames. The application is
developed using MATLAB and SIMULINK.
Leader follower formation control of ground vehicles using camshift based gui...ijma
Autonomous ground vehicles have been designed for the purpose of that relies on ranging and bearing
information received from forward looking camera on the Formation control . A visual guidance control
algorithm is designed where real time image processing is used to provide feedback signals. The vision
subsystem and control subsystem work in parallel to accomplish formation control. A proportional
navigation and line of sight guidance laws are used to estimate the range and bearing information from the
leader vehicle using the vision subsystem. The algorithms for vision detection and localization used here
are similar to approaches for many computer vision tasks such as face tracking and detection that are
based color-and texture based features, and non-parametric Continuously Adaptive Mean-shift algorithms
to keep track of the leader. This is being proposed for the first time in the leader follower framework. The
algorithms are simple but effective for real time and provide an alternate approach to traditional based
approaches like the Viola Jones algorithm. Further to stabilize the follower to the leader trajectory, the
sliding mode controller is used to dynamically track the leader. The performance of the results is
demonstrated in simulation and in practical experiments.
Pedestrian Counting in Video Sequences based on Optical Flow ClusteringCSCJournals
The demand for automatic counting of pedestrians at event sites, buildings, or streets has been increased. Existing systems for counting pedestrians in video sequences have a problem that counting accuracy degrades when many pedestrians coexist and occlusion occurs frequently. In this paper, we introduce a method of clustering optical flows extracted from pedestrians in video frames to improve the counting accuracy. The proposed method counts the number of pedestrians by using pre-learned statistics, based on the strong correlation between the number of optical flow clusters and the actual number of pedestrians. We evaluate the accuracy of the proposed method using several video sequences, focusing in particular on the effect of parameters for optical flow clustering. We find that the proposed method improves the counting accuracy by up to 25% as compared with a non-clustering method. We also report that using a clustering threshold of angles less than 1 degree is effective for enhancing counting accuracy. Furthermore, we compare the performance of two algorithms that use feature points and lattice points when optical flows are detected. We confirm that the counting accuracy using feature points is higher than that using lattice points especially when the number of occluded pedestrians increases.
An Innovative Moving Object Detection and Tracking System by Using Modified R...sipij
The ultimate goal of this study is to afford enhanced video object detection and tracking by eliminating the
limitations which are existing nowadays. Although high performance ratio for video object detection and
tracking is achieved in the earlier work it takes more time for computation. Consequently we are in need to
propose a novel video object detection and tracking technique so as to minimize the computational
complexity. Our proposed technique covers five stages they are preprocessing, segmentation, feature
extraction, background subtraction and hole filling. Originally the video clip in the database is split into
frames. Then preprocessing is performed so as to get rid of noise, an adaptive median filter is used in this
stage to eliminate the noise. The preprocessed image then undergoes segmentation by means of modified
region growing algorithm. The segmented image is subjected to feature extraction phase so as to extract
the multi features from the segmented image and the background image, the feature value thus obtained
are compared so as to attain optimal value, consequently a foreground image is attained in this stage. The
foreground image is then subjected to morphological operations of erosion and dilation so as to fill the
holes and to get the object accurately as these foreground image contains holes and discontinuities. Thus
the moving object is tracked in this stage. This method will be employed in MATLAB platform and the
outcomes will be studied and compared with the existing techniques so as to reveal the performance of the
novel video object detection and tracking technique.
DETECTION OF MOVING OBJECT USING FOREGROUND EXTRACTION ALGORITHM BY PTZ CAMERAijistjournal
This paper proposes a way of recognition foreground of moving object by Foreground Extraction algorithm by Pan-Tilt-Zoom camera. It presents the combined process of Foreground Extraction and local histogram process. Background images are often modeled as multiple frames and their corresponding camera pan and tilt angles are determined. Initially have got to work out the foremost matchedbackground from sequence of input frames based on camera pose information. The method is more continued by compensating the matched background image with this image. Then Background Subtraction is completed between modeled background and current image. Finally before local histogram process is completed, noises are often removed by morphological operators. As a result correct foreground moving objects are successfully extracted by implementing these four steps.
Visual Mapping and Collision Avoidance Dynamic Environments in Dynamic Enviro...Darius Burschka
How conventional vision is more appropriate for control since it provides also error analysis. There is a lot of information in the images that is lost when converting to 3D
AN EFFICIENT IMPLEMENTATION OF TRACKING USING KALMAN FILTER FOR UNDERWATER RO...IJCSEIT Journal
The exploration of oceans and sea beds is being made increasingly possible through the development of
Autonomous Underwater Vehicles (AUVs). This is an activity that concerns the marine community and it
must confront the existence of notable challenges. However, an automatic detecting and tracking system is
the first and foremost element for an AUV or an aqueous surveillance network. In this paper a method of
Kalman filter was presented to solve the problems of objects track in sonar images. Region of object was
extracted by threshold segment and morphology process, and the features of invariant moment and area
were analysed. Results show that the method presented has the advantages of good robustness, high
accuracy and real-time characteristic, and it is efficient in underwater target track based on sonar images
and also suited for the purpose of Obstacle avoidance for the AUV to operate in the constrained
underwater environment.
Automated Traffic sign board classification system is one of the key technologies of Intelligent
Transportation Systems (ITS). Traffic Surveillance System is being more and important with improving
urban scale and increasing number of vehicles. This Paper presents an intelligent sign board
classification method based on blob analysis in traffic surveillance. Processing is done by three main
steps: moving object segmentation, blob analysis, and classifying. A Sign board is modelled as a
rectangular patch and classified via blob analysis. By processing the blob of sign boards, the meaningful
features are extracted. Tracking moving targets is achieved by comparing the extracted features with
training data. After classifying the sign boards the system will intimate to user in the form of alarms,
sound waves. The experimental results show that the proposed system can provide real-time and useful
information for traffic surveillance.
Camera-based Visual Inertial Navigation (VisNav) and Simultaneous
Localization and Mapping (SLAM) Error Reduction via Integration with
an Inertial Measurement Unit (IMU).
Michael Shawn Quinn
Graduate Student, Computer Science and Software Engineering
University of Washington, Bothell, WA
Abstract
The pose or position of a single camera in 3D can be used to monitor and track location
relative to a starting point, generating a map of a route travelled by a robot, autonomous
vehicle, or a human wearing an augmented reality headset. Camera-only tracking is
susceptible to loss of localization when the camera is moving at high speed, or cannot
clearly detect reference objects. An IMU does not require visible reference objects to
maintain localization, but an IMU does exhibit poor signal-to-noise ratio when moving at
low speed, the type of movement ideally suited to camera pose localization. A camera
and an IMU should ideally complement each other in tracking and localization
applications, and this paper presents data that demonstrates significant advantages of
combined camera-IMU VisNav/SLAM over camera-only or IMU-only implementations.
1.0 Introduction
The ability to navigate through an environment while avoiding obstacles and keeping a
record of the route travelled is an important capability for mobile robots, autonomous
vehicles, and human users wearing augmented reality headgear. Digital cameras provide
a view of the surrounding environment from which a significant amount of information
can be obtained. A monocular camera senses movement by tracking objects as they
appear in differing positions across a sequence of still images. Objects are identified by
locating key features, such as corners, and features are matched by the size, shape, and
intensity distribution of the image pixels comprising the feature. An important limitation
of this technique is that tracking stops or pauses when common features cannot be
identified between successive images. Moving at higher velocity further reduces the
likelihood of acceptable feature identification and tracking.
The term Visual Odometry, as discussed in [4], refers to the processing of images from a
camera or cameras to obtain sequences of camera translation measurements, that is,
incremental measurements of the distance the camera itself has translated. The measurements are with
respect to the reference frame of the 3D objects in the visible environment captured in the
2D images, through use of perspective geometry, and triangulation [6], [13]. This type of
odometry information can be used to keep track of total distance traveled, much the way
an automobile odometer does.
Visual Odometry data is a key component of systems performing Visual Navigation or
VisNav [10] and Simultaneous Localization and Mapping or SLAM [8]. VisNav
involves not only keeping track of incremental and total distance travelled, but also
monitoring changes in direction, with the capability to implement obstacle avoidance.
SLAM establishes a zero reference starting point for a sequence of moves and records the
moves in a map of a route travelled. An important element of some SLAM
implementations is the ability to recognize when returning to the starting point, or to
acknowledge that an observed feature has been seen before.
An inertial measurement unit (IMU) allows a robot, autonomous vehicle, or augmented
reality headset user to keep track of distance moved and direction changes by integrating
acceleration and angular velocity measurements over time. The technique is relatively simple to
implement and the math behind it is relatively straightforward. Similar to Visual
Odometry, incremental move translations are generated over time intervals. The sensor
outputs are susceptible to electrical noise, and sensor offsets and scale factor errors are
difficult to eliminate. The resulting signal-to-noise ratio is improved when the sensors are
excited nearer the full-scale ratings of the sensor. For this reason, an IMU works best
when navigating at higher velocity.
Camera-only VisNav and SLAM have been studied extensively in the recent past [12], as
has the performance of systems that utilize an IMU assisted by a camera. IMU-only
performance, and the comparison of camera-only to IMU-only systems, have been neglected
up to this point. To fill the void, this study compares camera-only, IMU-only, and
camera-IMU systems when tested over a common set of conditions. VisNav is defined in this
study as the ability to follow a path and avoid obstacles by means of vision-based object
identification and subsequent motion detection. SLAM is defined for this study as the
ability to start and maintain a record of a route traveled, by recording a sequence of
distance measurements relative to a zero reference starting point. It is hypothesized that
the camera-IMU system will perform better overall than either the camera-only system or
the IMU-only system.
2.0 Test Setup
2.1 Vision
A 2048 x 1536 pixel, color, USB camera, model number UI-1460SE-C-HQ,
manufactured by IDS Imaging, with 12 mm fixed focal length lens, Tamron No. 86761,
provides video input to both computer vision systems. Camera interface software was
implemented using the C++ application programming interface (API) from IDS. Image
processing software was developed using the OpenCV library version 3.1, for C++,
compiled in QtCreator.
2.2 IMU
A 3-axis IMU, model MotionNode, manufactured by Motion Workshop, Seattle, WA,
provides navigation sensing input to the IMU-only and IMU-assisted computer vision
systems. The IMU consists of 3 independent, orthogonal accelerometers, gyroscopes,
and magnetometers. Only the output from the accelerometers and gyroscopes is used in
this study. Sensor data is acquired via C++ API.
2.3 Platform
Camera, IMU, and notebook computer are mounted on a rolling platform to perform
tests. Camera and IMU are co-mounted on a rigid plate to ensure they experience
equivalent motion inputs.
2.4 Test Course
An indoor, concrete slab floor, marked off with tape, serves as the test track. Laminated
optical targets consisting of white backgrounds with alternating black squares in a 6 by 9
checkerboard pattern were mounted on the 4 surrounding walls to serve as computer
vision motion tracking objects. The test targets are intentionally arranged such that there
are points on the course where the cameras will temporarily lose track of the reference
objects. Visible timing markers were placed on the floor every 2 feet, allowing the
experimenter to log time stamps at regular intervals. The test course is shown in Figure
1.
Figure 1
2.5 Computer
A Dell Inspiron notebook computer, with Intel Core-i7 processor, running Windows 7
and Linux Ubuntu 14.04 (dual boot) served as a computing platform. Initial tests were
run on Windows 7. The main control and data acquisition program was written in C++
using QtCreator version 5.5.1 and OpenCV version 3.1.
3.0 Data Gathering
Images used for motion detection were acquired at a rate of 15 frames per second from
the camera. Accelerometer and gyroscope data from the IMU was acquired over the
USB interface and stored in a delimited text file on the computer.
4.0 Algorithms
4.1 Vision
Before commencing the study, the camera is calibrated using the method in [1] and
the OpenCV Calibration module. The resulting calibration parameters are used to
correct each acquired image, removing lens distortion and correcting for non-
idealities in the camera sensor.
The basic steps in Visual Odometry, also commonly used in studies of structure-
from-motion, are discussed in [6] and are summarized here, with applicable
OpenCV modules identified where used:
1. Obtain first image.
2. Move camera.
3. Obtain second image.
4. Detect objects (OpenCV optical flow).
5. Track objects (OpenCV optical flow).
6. Calculate fundamental and essential matrices (OpenCV 3D Reconstruction).
7. Use singular value decomposition to obtain camera matrices.
8. Calculate rotation and translation matrices required to move from the first
image to the second image (OpenCV 3D Reconstruction).
9. From the resulting matrices, extract position data as 3D point coordinates.
10. Repeat for the next image pair.
Basically, we treat the first image as a reference point and calculate how far the
camera has moved based on the transformation required to convert the points in the
first image into the points in the second image. We are treating the image pair as
though it were a stereo image pair taken concurrently, instead of two separate
images taken at different times and different locations. We rotate the second
image, and adjust its height, to allow horizontal point correlation along epipolar
lines, which is the basis of the optical flow algorithms. Once we have a move value,
we append it to our move sequence to continue constructing our map for SLAM.
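Step 7 of the pipeline above, recovering the camera motion from the essential matrix, deserves a concrete illustration. The following is a minimal Python/NumPy sketch (my own function names, not the paper's OpenCV C++ code) of the standard SVD decomposition of an essential matrix into two rotation candidates and a translation direction:

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Split an essential matrix E = [t]x R into its two rotation
    candidates and the translation direction (known only up to scale
    and sign) using singular value decomposition."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (determinant +1) for both orthogonal factors.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1 = U @ W @ Vt          # first rotation candidate
    R2 = U @ W.T @ Vt        # second rotation candidate
    t = U[:, 2]              # translation direction: left null vector of E
    return R1, R2, t
```

Of the four (R, ±t) combinations this yields, the physically correct one is conventionally chosen by triangulating a point and keeping the configuration that places it in front of both cameras; OpenCV's 3D Reconstruction module performs that selection internally.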
4.2 IMU
Linear velocity is obtained by integrating the acceleration values from the accelerometer,
and linear displacement by integrating the calculated linear velocity values. Angular
position is obtained by integrating the angular velocity values from the gyroscope;
angular acceleration, where needed, is obtained by differentiating them [9].
For this study, it is assumed that the course is sufficiently level to allow treating the
downward acceleration due to gravity as a constant. The true acceleration values are
found by performing vector subtraction of this constant value from the recorded
accelerometer output.
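The double integration described above can be sketched in a few lines. Below is a hypothetical single-axis Python illustration (the study itself used the MotionNode C++ API with full 3-axis data): a constant gravity value is subtracted from each raw sample, then the trapezoidal rule is applied twice, first to obtain velocity and then displacement:

```python
G = 9.81  # assumed constant gravity offset (m/s^2), per the level-course assumption

def dead_reckon(raw_accel, dt):
    """Integrate raw accelerometer samples (one axis, sampled every dt
    seconds) into final velocity and displacement. The constant gravity
    offset is removed first; the trapezoidal rule is used for both
    integration stages."""
    net = [a - G for a in raw_accel]   # vector subtraction of the gravity constant
    v = x = 0.0
    prev_a, prev_v = net[0], 0.0
    for a in net[1:]:
        v += 0.5 * (prev_a + a) * dt   # acceleration -> velocity
        x += 0.5 * (prev_v + v) * dt   # velocity -> displacement
        prev_a, prev_v = a, v
    return v, x
```

For a constant net acceleration of 1 m/s² sampled at 1 kHz for one second, the sketch returns v = 1.0 m/s and x = 0.5 m, matching x = ½at². Note that an uncorrected sensor offset enters this calculation exactly like gravity does, which is why offset errors accumulate quadratically in displacement.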
5.0 Test Protocol
Simultaneously start the timer, start the image and data acquisition, and begin
walking around the test route at approximately 2 m/sec.
Note progress by hitting the space bar of the notebook computer keyboard when
passing interval markers on the floor.
6.0 Data Analysis and Statistical Methods
Descriptive measures, including mean, variance, and standard deviation of measured
and observed values, will be used to depict the individual performance of each
system. Inferences from examining the difference between means and statistical
power calculations will be used to test the hypothesis that the IMU-assisted system
will perform better than the camera only system.
Statistical analysis is performed using Microsoft Excel 2013.
Calculate the deviation of measured incremental move translation from nominal
incremental move translation, at each time interval for the camera-only data set for
each speed setting.
Calculate the deviation of measured incremental move translation from nominal
incremental move translation, at each time interval for the IMU-only data set for
each speed setting.
Construct camera-IMU data sets for each speed setting by calculating the average
of the camera-only and IMU-only measured incremental move translation at each
time interval.
Calculate the deviation of camera-IMU incremental move translation from
nominal incremental move translation, at each time interval, for each speed
setting.
At a 95% confidence level (significance level .05), perform a z-test of the null
hypothesis that there is no difference between the mean error of the camera-only
data set and the camera-IMU data set.
At a 95% confidence level (significance level .05), perform a z-test of the null
hypothesis that there is no difference between the mean error of the IMU-only data
set and the camera-IMU data set.
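The z-test used in the last two steps is simple enough to reproduce outside Excel. The following Python sketch (function name my own) computes the same two-sample z statistic for means with known variances that Excel's "z-Test: Two Sample for Means" tool reports:

```python
import math

def z_test_two_means(mean1, mean2, var1, var2, n1, n2, hyp_diff=0.0):
    """Two-sample z-test for means with known variances. Returns the
    z statistic and the two-tailed p value from the standard normal."""
    z = (mean1 - mean2 - hyp_diff) / math.sqrt(var1 / n1 + var2 / n2)
    p_two_tail = math.erfc(abs(z) / math.sqrt(2.0))  # 2 * P(Z >= |z|)
    return z, p_two_tail
```

Plugging the means and known variances reported in Section 7.0 into this function reproduces the tabulated z values.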
7.0 Results
The course was traversed and the data was recorded. A total of 37 data points were
captured for both the camera and the IMU. The data was loaded into an Excel workbook
for analysis and chart generation. The combined camera-IMU data was calculated as the
average of the output of the camera-only and IMU-only data at each measurement point.
The maps were generated in an Excel XY scatter plot, by calculating the heading, where:
X = displacement magnitude × cos(heading angle)
Y = displacement magnitude × sin(heading angle)
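The two formulas accumulate into map points as follows. This is an illustrative Python sketch of the spreadsheet calculation (function name my own), with headings in degrees:

```python
import math

def map_points(moves):
    """Turn a sequence of (displacement, heading_deg) pairs into
    cumulative X-Y map coordinates, starting from the zero reference,
    using X = d * cos(heading) and Y = d * sin(heading)."""
    x = y = 0.0
    points = [(0.0, 0.0)]               # zero reference starting point
    for d, heading_deg in moves:
        h = math.radians(heading_deg)
        x += d * math.cos(h)
        y += d * math.sin(h)
        points.append((x, y))
    return points
```

As a quick sanity check, four 2-unit moves with headings 0°, 90°, 180°, and 270° trace a square and return to the zero reference starting point, the same loop-closure property that SLAM implementations try to recognize.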
The mapping results are shown in Figure 2.
Figure 2: Mapping results — X–Y plot of the Nominal, Camera Only, IMU Only, and Camera-IMU paths.
As can be seen in Figure 2, both the camera and the IMU failed to track the nominal path
as well as the combined camera-IMU data, with the camera data varying near corners and
the IMU data variation appearing as a shift away from the nominal straight lines.
To determine the effectiveness of combining the two sensor outputs into a single data
value, a z-test is used. Excel has a built-in function for calculating the z-test of
two means. The test was conducted to compare camera-only to camera-IMU data, and
IMU-only to camera-IMU data, based on the average error at each measurement interval
for each of the three data sets.
The comparison of the camera-only to camera-IMU resulted in:
z-Test: Two Sample for Means — Camera-Only vs. Camera-IMU

                              Camera-Only     Camera-IMU
Mean                          0.049556219     -1.08633682
Known Variance                0.651           0.445
Observations                  36              36
Hypothesized Mean Difference  0
z                             6.510036318
P(Z<=z) one-tail              3.75663E-11
z Critical one-tail           1.644853627
P(Z<=z) two-tail              7.51326E-11
z Critical two-tail           1.959963985
Since the two-tailed p value was less than .05, I would reject the null hypothesis that
there is no difference between the two means. Similarly for the IMU-only to camera-
IMU test:
z-Test: Two Sample for Means — IMU-Only vs. Camera-IMU

                              IMU-Only        Camera-IMU
Mean                          -2.155781176    -1.08633682
Known Variance                0.66854         0.445
Observations                  36              36
Hypothesized Mean Difference  0
z                             -6.080741366
P(Z<=z) one-tail              5.98141E-10
z Critical one-tail           1.644853627
P(Z<=z) two-tail              1.19628E-09
z Critical two-tail           1.959963985
Since the two-tailed p value was less than .05, I would reject the null hypothesis that
there is no difference between the two means.
8.0 Conclusions and Study Limitations
The camera performed poorly when turning corners, as the detection algorithm was
unable to maintain a lock on reference objects and computed incorrect heading angles.
The IMU performed equally well in straight lines as well as rounding corners, but was
subject to a static offset error and measurement noise that shifted the IMU off of the
nominal path.
Had time permitted, the study would have been run at several different speeds to more
effectively characterize the tradeoffs between the two sensor types. A more precise
method of noting distance markers and of recording time intervals would improve the
repeatability of the study. The study would benefit from a more robust method of fusing
the camera and IMU outputs, while at the same time potentially implementing an
error optimization algorithm, for example a Kalman filter.
Even with the aforementioned limitations, the combination of the camera and the IMU
was clearly superior to either sensor working alone, and would likely show an even
greater advantage if the improvements stated above were implemented.
9.0 References
[1] Bradski, G. R., & Kaehler, A. (2008). Learning OpenCV: Computer vision with the
OpenCV library (1st ed.). Sebastopol, CA: O'Reilly.
[2] The OpenCV website, www.opencv.org.
[3] Siegwart, R., & Nourbakhsh, Illah R. (2011). Introduction to Autonomous Mobile Robots
(2nd Edition). MIT Press.
[4] Nister, D., Naroditsky, O., & Bergen, J. (2004). Visual Odometry. Computer Vision and
Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society
Conference on, 1, I.
[5] Davison. (2003). Real-time simultaneous localisation and mapping with a single
camera. Computer Vision, 2003. Proceedings. Ninth IEEE International Conference
on, 1403-1410.
[6] Scaramuzza, D., & Fraundorfer, F. (2011). Visual Odometry: Part I [Tutorial]. Robotics
& Automation Magazine, IEEE, 18(4), 80-92.
[7] Scaramuzza, D., & Fraundorfer, F. (2012). Visual Odometry: Part II [Tutorial]. Robotics
& Automation Magazine, IEEE, 19(2), 78-90.
[8] Rothganger, F., & Muguira, M. (2007). SLAM using camera and IMU sensors.
[9] Seifert K., & Camacho O. (2007). Implementing Positioning Algorithms Using
Accelerometers, AN3397, Freescale Semiconductor.
[10] Troiani, C., Martinelli, A., Laugier, C., & Scaramuzza, D. (2015). Low computational-
complexity algorithms for vision-aided inertial navigation of micro aerial vehicles.
Robotics And Autonomous Systems, 69, 80-97.
[11] Frese, U. (2006). A Discussion of Simultaneous Localization and Mapping. Autonomous
Robots, 20(1), 25-42.
[12] Davison, A. J. (2003). Real-time simultaneous localisation and mapping with a single
camera. Proceedings of the IEEE International Conference on Computer Vision, 2, 1403-
1410.
[13] Emami, S., & Khvedchenia, I. (2012). Mastering OpenCV with Practical
Computer Vision Projects. Birmingham: Packt Publishing.