1) The document proposes using an embedded stereo camera and fusing optical flow and SIFT feature matching algorithms to estimate the localization of a micro aerial vehicle (MAV) in GPS-denied environments.
2) An Extended Kalman Filter is used to estimate the MAV's translational velocity and altitude from optical flow measurements separated into rotational and translational components using IMU data.
3) Initial experiments fusing optical flow and SIFT matching for altitude estimation showed promising results compared to ground truth, with room for improvement through onboard processing and successive frame SIFT matching for horizontal position estimation.
Title: "Mission Analysis, Formation Geometry and Dynamics for the IRASSI Space Interferometer "
Abstract: Space-based interferometry has gained prominence in recent years, largely because higher spatial resolutions of celestial observations can be achieved with multi-telescope formations compared to those achieved with a fixed-aperture, single telescope. IRASSI is a space interferometer composed of five spacecraft, whose aim is to observe particular chemical and physical processes in cold regions of space, such as dust clouds and stellar disks, in the far-infrared frequencies.
Ultimately, the goal is to study the genesis of planets, star formation and evolution processes in these cold regions and to understand how prebiotic conditions in Earth-like planets are created. IRASSI will orbit the second Lagrange point, L2, of Sun-Earth/Moon system. The operating principle of IRASSI is based on free-drifting baselines, which dynamically change during the observations and measure therefore the incoming wavefront of a celestial target at different locations in space. This process relies on very accurate measurements of the baselines - at micrometre level - rather than on precise control of the formation.
Naturally, a free-flying formation comes with a set of challenges, namely identifying a nominal formation geometry, that is, a suitable dispersion of the telescopes in three-dimensional space. In addition, understanding how this free-drifting geometry is expected to change is crucial, particularly if this may affect the operation of the telescope instruments and thus the quality of the final synthesized images.
The presentation therefore introduces the IRASSI mission and the main driving requirements. The formation geometry and dynamics are then evaluated. Finally, preliminary results concerning formation control are presented.
Algorithmic Techniques for Parametric Model Recovery (CurvSurf)
A complete description of algorithmic techniques for automatic feature extraction from point clouds. Orthogonal distance fitting, a form of maximum likelihood estimation, plays the main role; differential geometry determines the type of object surface.
Object Tracking using Motion Flow Projection for Pan-Tilt Configuration (IJECEIAES)
We propose a new object tracking model for a two-degrees-of-freedom mechanism. Our model uses a reverse projection from the camera plane to a world plane, taking advantage of the optic flow technique by re-projecting the flow vectors from image space into world space. A pan-tilt (PT) mounting system is used to verify the performance of our model and maintain the tracked object within a region of interest (ROI). This system contains two servo motors that let a webcam rotate about the PT axes. The PT rotation angles are estimated from a rigid transformation of the optic flow vectors, in which an idealized translation matrix followed by two rotation matrices about the PT axes is used. Our model was tested and evaluated using different objects with different motions. The results reveal that our model can keep the target object within a certain region in the camera view.
Optimal Sensor Management Technique For An Unmanned Aerial Vehicle Tracking M... (Negar Farmani)
In this paper, we present an optimal sensor management technique for an Unmanned Aerial Vehicle (UAV) to autonomously geo-localize multiple mobile ground targets. The target states are continuously estimated using target locations asynchronously captured by a gimbaled camera with a limited field of view and processed with a set of Extended Kalman Filters (EKFs). The technique incorporates a Dynamic Weighted Graph (DWG) method to first group estimated targets and then determine regions with high target densities. A Model Predictive Control (MPC) method is used to compute a camera pose that minimizes the overall uncertainty of the target state estimates. The validity of the proposed technique is demonstrated using simulation results.
Improving the Safety of Ride-Hailing Services using IoT Analytics (Mohan Manivannan)
The aim of this project is to analyze and capture hazardous driving behaviors, develop a predictive model for predicting unsafe trips, and improve the overall safety of ride-hailing services.
The paper covered this time is "Learning by Analogy: Reliable Supervision From Transformations for Unsupervised Optical Flow Estimation". Like the FlowNet paper presented a while ago, this paper also learns Optical Flow through Deep Learning. The one difference is that training proceeds in an unsupervised manner. Unsupervised approaches to learning Optical Flow have been studied as actively as supervised ones; the paper introduced here adopts a method that improves performance by exploiting consistency under data augmentation.
PR-278: RAFT: Recurrent All-Pairs Field Transforms for Optical Flow (Hyeongmin Lee)
This paper received the Best Paper award at ECCV 2020; unlike previous methods, it predicts Optical Flow through iterative updates and achieves notably high performance.
paper link: https://arxiv.org/pdf/2003.12039.pdf
video link: https://youtu.be/OnZIDatotZ4
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM (csandit)
Computer vision approaches are increasingly used in mobile robotic systems, since they allow a very good representation of the environment to be obtained using low-power, cheap sensors. In particular, it has been shown that they can compete with standard solutions based on laser range scanners for the problem of simultaneous localization and mapping (SLAM), where the robot has to explore an unknown environment while building a map of it and localizing itself in that map. We present a package for simultaneous localization and mapping in ROS (Robot Operating System) using only a monocular camera sensor. Experimental results in real scenarios as well as on standard datasets show that the algorithm is able to track the trajectory of the robot and build a consistent map of small environments, while running in near real-time on a standard PC.
Short Presentation of [1].
[1] C. Panagiotakis and A. Argyros, Parameter-free Modelling of 2D Shapes with Ellipses, Pattern Recognition, 2015.
For more details, please visit https://sites.google.com/site/costaspanagiotakis/research/EFA
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun... (ijma)
This paper deals with leader-follower formations of non-holonomic mobile robots, introducing a formation control strategy based on pixel counts using a commercial-grade electro-optics camera. Localization of the leader for motions along the line of sight as well as obliquely inclined directions is considered, based on pixel variation of the images with reference to two arbitrarily designated positions in the image frames. Based on an established relationship between the displacement of the camera along the viewing direction and the difference in pixel counts between reference points in the images, the range and angle between the follower camera and the leader are estimated. The Inverse Perspective Transform is used to account for the non-linear relationship between the height of a vehicle in a forward-facing image and its distance from the camera. The formulation is validated with experiments.
Real-time Moving Object Detection using SURF (iosrjce)
Visual Mapping and Collision Avoidance in Dynamic Enviro... (Darius Burschka)
How conventional vision is more appropriate for control, since it also provides error analysis; there is a lot of information in the images that is lost when converting to 3D.
SIMULTANEOUS MAPPING AND NAVIGATION FOR RENDEZVOUS IN SPACE APPLICATIONS (Nandakishor Jahagirdar)
The project is to develop an autonomous navigation system along with mapping of the path: a robot which senses the edges of objects in its path and moves without colliding with them. The application is equipped with a camera as the main component, which captures images that are transmitted to a workstation through a wireless antenna. The image processing is done on a workstation or computer using MATLAB 2013a. An IR ranging device senses any objects ahead, and the robot changes direction accordingly to avoid any collision. Thus we ensure that even when circumstances lead to errors in the output of the image processing algorithm, a decision can be made using the input from the IR sensors.
A Detailed Analysis on Feature Extraction Techniques of Panoramic Image Stitc... (IJEACS)
Image stitching is a technique used for attaining a high-resolution panoramic image. In this technique, distinct images taken from different views and angles are combined to produce a panoramic image. In the fields of computer graphics, photography and computer vision, image stitching techniques are considered current research areas. To obtain a stitched image it is mandatory to have knowledge of the geometric relations among the multiple image coordinate systems [1]. First, image stitching is done based on feature keypoint matches; the final image with a seam is then blended with an image blending technique. Hence, in this paper we address multiple distinct techniques, such as the invariant features Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF), and corner techniques such as the Harris corner detector, that are useful in sorting out the issues related to stitching images.
DETECTION OF MOVING OBJECT USING FOREGROUND EXTRACTION ALGORITHM BY PTZ CAMERA (ijistjournal)
This paper proposes a method for recognizing the foreground of a moving object using a Foreground Extraction algorithm with a Pan-Tilt-Zoom camera, presenting a combined process of foreground extraction and local histogram processing. Background images are modeled as multiple frames whose corresponding camera pan and tilt angles are determined. First, the best-matched background is determined from the sequence of input frames based on the camera pose information. The method continues by compensating the matched background image with the current image; background subtraction is then performed between the modeled background and the current image. Finally, before the local histogram process, noise is removed by morphological operators. As a result, foreground moving objects are correctly extracted by implementing these four steps.
Scanline Homographies for Rolling-Shutter Plane Absolute Pose (AgnivaSen)
Cameras on portable devices are manufactured with a rolling-shutter (RS) mechanism, where the image rows (aka scanlines) are read out sequentially. The unknown camera motions during the imaging process cause the so-called RS effects, which are handled with motion assumptions in the literature. In this work, we give a solution to the absolute pose problem free of motion assumptions. We demonstrate that the only requirement is motion smoothness, instead of stronger constraints on the camera motion. To this end, we propose a novel mathematical abstraction for RS cameras observing a planar scene, called the scanline-homography, a [3×2] matrix with 5 DOFs. We establish the relationship between a scanline-homography and the corresponding plane-homography, a [3×3] matrix with 6 DOFs assuming the camera is calibrated. We estimate the scanline-homographies of an RS frame using a smooth image warp powered by B-splines, and recover the plane-homographies afterward to obtain the scanline poses based on motion smoothness. We back our claims with various experiments.
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA... (csandit)
In today's technological life, everyone is quite familiar with the importance of security measures, and in this regard many attempts have been made by researchers, one of which is flying robot technology. One well-known use of a flying robot is its capability in security and surveillance, which makes this device extremely practical, not only for its unmanned movement but also for its unique manoeuvres during flight over arbitrary areas. In this research, the automatic landing of a flying robot is discussed. The system is based on frequent interrupts sent from the main microcontroller to the camera module to take images; these images are analysed by an image processing system based on edge detection, after which the system can tell whether or not to land on the ground. This method shows good performance in terms of precision, as demonstrated experimentally.
Gait Based Person Recognition Using Partial Least Squares Selection Scheme (ijcisjournal)
Variations in viewing angle and intra-class variation among human beings have a great impact on gait recognition systems. This work presents an Arbitrary View Transformation Model (AVTM) for gait recognition. Gait energy image (GEI) based gait authentication is an effective approach to this problem; the method establishes an AVTM based on principal component analysis (PCA), and feature selection (FS) is performed using the partial least squares (PLS) method. Comparison of the AVTM-PLS method with existing methods shows significant advantages with respect to viewing angle variation and carrying and attire changes. Experiments on the CASIA gait database show that the proposed method improves recognition accuracy compared to other existing methods.
Design and Implementation of a Video Tracking System Based on Camera Field of View (sipij)
The basic idea of this paper is to design and implement a video tracking system based on the Camera Field of View (CFOV). Otsu's method is used to detect targets such as vehicles and people. Whereas most algorithms spend a lot of time executing this process, an algorithm was developed to achieve it in little time. Histogram projection is used in both directions to detect the target in the search region, which is robust to various lighting conditions in Charge-Coupled Device (CCD) camera images and saves computation time. Our algorithm, based on background subtraction and a normalized cross-correlation operation over a series of sequential sub-images, can estimate the motion vector. The camera field of view was determined and calibrated to find the relation between real distance and image distance. The system was tested by measuring the real position of an object in the laboratory and comparing it with the computed result. These results are promising for developing the system further.
1. Localization of MAV in GPS-denied Environment Using Embedded Stereo Camera
Syaril Azrad, Farid Kendoul, Fadhil Mohamad, Kenzo Nonami
Department of Mechanical Engineering, Nonami Lab, Chiba University
2. Research Background
• Vision-based autonomous Q-Rotor research at Chiba University
– Started in 2008; participated in the 1st US-Asian Demonstration and Assessment of Micro Aerial and Unmanned Ground Vehicle Technology (MAV'08), Agra, India
– Single camera approach
• Color-based object tracking algorithm (visual servoing)
• Feature-based / Optical Flow (OF)
– Stereo camera approach
• Ground stereo camera based hovering
• Fully-embedded object tracking
3. Current Research Background
• Localization of MAV using an embedded camera
– Research has been conducted by Farid et al. using a single embedded camera with Optical Flow to localize the rotorcraft position
– Height estimation improvement
• Fusion with a pressure sensor
• Using a stereo camera for higher precision
4. Altitude Estimation for MAVs?
• GPS
• Pressure sensor
• Laser Range Finder
• Radar Sensors
• Vision System
– Single Camera
– Stereo Camera
5. For Small UAVs (MAVs): Altitude Estimation
We propose:
• An embedded lightweight stereo camera
• Fusing Optical Flow (SFM algorithm) with image homography
• Fusing Optical Flow (SFM algorithm) with Scale Invariant Feature Transform (SIFT) based feature matching
7. 18th Aug. 2010, MOVIC 2010, Tokyo, Japan
• Size: 54 cm x 54 cm x 20 cm
• Total weight: < 700 grams
• Flight time: 10 min
• Range: 1-2 km
• Total cost: < 4000 USD
8. Proposed Vision-based MAV Localization Algorithm
• Horizontal position using an Optical Flow-based algorithm
• Altitude using Optical Flow fused with a homography/SIFT-based matching approach
9. Computing Optical Flow: Implementation
We use the Lucas-Kanade method.
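A minimal single-feature Lucas-Kanade step can be sketched as follows. This is an illustrative numpy version under the usual brightness-constancy assumption, not the code used on the MAV: it solves the small least-squares system built from the image gradients over a window around the feature.

```python
import numpy as np

def lucas_kanade(I1, I2, x, y, w=2):
    """Estimate the displacement (dx, dy) of the feature at (x, y) by
    solving the Lucas-Kanade least-squares system over a
    (2w+1) x (2w+1) window around the feature."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)               # spatial gradients (rows = y, cols = x)
    It = I2 - I1                           # temporal difference
    win = np.s_[y - w:y + w + 1, x - w:x + w + 1]
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)   # solves A @ [dx, dy] ~= b
    return d                               # [dx, dy]
```

In a tracker this step is applied to each Shi-Tomasi feature between consecutive frames, usually inside a coarse-to-fine pyramid to handle larger motions.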
10. The feature point (x, y) in the next image I2 will be at (x + dx, y + dy), and the same process is repeated at the next step (t + Δt), providing the optical flow and a simple feature-tracking algorithm.
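The displacement described above is the argument of a minimum over candidate shifts; a brute-force sum-of-squared-differences search makes that minimization explicit. This is a toy pure-Python sketch, not the deck's implementation (Lucas-Kanade solves the same minimization with gradients instead of an exhaustive search):

```python
def match_feature(I1, I2, x, y, w=2, search=3):
    """Find the displacement (dx, dy) minimizing the sum of squared
    differences (SSD) between the window around (x, y) in I1 and the
    window around (x + dx, y + dy) in I2."""
    def ssd(dx, dy):
        return sum((I1[y + j][x + i] - I2[y + dy + j][x + dx + i]) ** 2
                   for j in range(-w, w + 1) for i in range(-w, w + 1))
    return min(((dx, dy) for dx in range(-search, search + 1)
                         for dy in range(-search, search + 1)),
               key=lambda d: ssd(*d))
```

The exhaustive search is robust but costs O(search² · w²) per feature, which is why the gradient-based solution is preferred at video rates.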
12. The OF equation can then be expressed as below. What does the equation above mean? A perspective-central camera model maps a 3D point Pi to the image point pi(xi, yi), and the optical flow is the velocity of pi in the image. [Equations shown as slide images.]
13. Because the OF calculated from images contains both rotational and translational parts, if we have IMU data on the rotational velocity of the camera on the body of our MAV we can extract the purely translational part, which carries the velocity and position information (Kendoul et al., 2007).
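The separation can be sketched with the standard interaction-matrix equations for normalized image coordinates (unit focal length). Sign conventions vary between references, so treat this as an illustrative form rather than the paper's exact equations: the rotational flow is depth-independent and fully predictable from the IMU rates, so subtracting it leaves the translational flow.

```python
def rotational_flow(x, y, wx, wy, wz):
    """Rotation-induced optical flow at normalized image point (x, y)
    for body rates (wx, wy, wz); note it does not depend on depth."""
    u = x * y * wx - (1.0 + x * x) * wy + y * wz
    v = (1.0 + y * y) * wx - x * y * wy - x * wz
    return u, v

def translational_flow(flow, x, y, rates):
    """Subtract the IMU-predicted rotational part from the measured flow,
    leaving the translational component used by the filter."""
    u_rot, v_rot = rotational_flow(x, y, *rates)
    return flow[0] - u_rot, flow[1] - v_rot
```

Only the translational remainder depends on the scene depth, which is what makes it usable for velocity and altitude estimation.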
14. The strategy is an estimation problem with the state vector X. For the KF dynamics model, we can write the measurement equation plus noise, with H as below, from our optical flow expression in equation (5).
15. Now, after estimating OFtrans, we can use it as the measurement vector for the (MAV) translational velocity and structure parameter. Both cameras are mounted on the MAV; assuming the camera motion is smooth, we can write the dynamic model as below, where γ is the camera acceleration. We use the model proposed by Kendoul et al. for the depth (altitude), and we can write the discrete dynamic system using the state vector X.
16. Since the system is non-linear, we implement an Extended Kalman Filter as proposed by Kendoul et al. and estimate the translational velocity and depth (altitude). The observation of the discrete model is as below.
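An EKF in this spirit can be sketched with a deliberately simplified scalar model: state x = [v, z] (lateral velocity and depth), known acceleration a from the IMU, and the nonlinear flow measurement h(x) = v/z linearized at each step. This is an assumption-laden toy, not the authors' filter:

```python
import numpy as np

def ekf_step(x, P, a, dt, of_meas, Q, R):
    """One EKF predict/update for state x = [v, z] with known IMU
    acceleration a and translational-flow measurement of_meas = v / z."""
    # predict: constant-depth, velocity driven by the known acceleration
    x = np.array([x[0] + a * dt, x[1]])
    P = P + Q
    # update with h(x) = v / z and its Jacobian H = [1/z, -v/z^2]
    v, z = x
    H = np.array([[1.0 / z, -v / (z * z)]])
    y = of_meas - v / z                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ np.array([y])).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Note that the flow measurement alone only constrains the ratio v/z, which is one reason the deck fuses a pressure sensor or stereo SIFT matching to pin down the altitude scale.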
17. OF raw data from the camera and attitude data from the IMU → estimate the translational part of the OF, separating out the rotational part → estimate the camera velocity and depth (altitude).
18. Verification experiment of the image algorithm fused with IMU data: move along the x-axis with various attitudes. [Plot: X, Y distance [m] versus time [s].]
19. Localization of MAV using Optical Flow and SIFT-matching technique
• SIFT?
– Detects & describes local image features
• In our computation we use SiftGPU (Wu, 2009) to speed up the matching
• The matching result is filtered by a RANSAC algorithm to separate outliers from inliers
• Using threads in the computations, the Optical Flow based algorithm runs at 50 fps and SIFT matching at 7-8 fps, including triangulation
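The RANSAC filtering step can be sketched for the simplest motion model, a pure 2D translation between frames; the real system would fit a homography or the stereo geometry instead, so the model and function names here are illustrative assumptions:

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Filter putative feature matches [(x1, y1, x2, y2), ...] with RANSAC,
    assuming a pure 2D translation between the two images.  One match is
    enough to hypothesize a translation, so the minimal sample size is 1."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        x1, y1, x2, y2 = rng.choice(matches)       # minimal sample
        dx, dy = x2 - x1, y2 - y1                  # hypothesized translation
        inliers = [m for m in matches
                   if abs((m[2] - m[0]) - dx) <= tol
                   and abs((m[3] - m[1]) - dy) <= tol]
        if len(inliers) > len(best_inliers):       # keep the largest consensus set
            best_inliers = inliers
    return best_inliers
```

With a richer model (e.g. a homography) the minimal sample grows to four matches, but the hypothesize-score-keep loop is identical.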
23. Implementation Strategy
1. Vicon data
2. Control program
3. Receive image data
4. Process image data
Image processing: 1. Optical Flow base; 2. Stereo feature matching (on the GPU, Graphics Processing Unit)
Socket: 1 computer with two separate processes, or 2 computers
24. Implementation
• Implemented on one computer due to its high capability, but the two processes share the GPU
• Core-i7 (4 cores, 8 threads)
– Separate core for image processing
– Separate thread for control
– Separate core for receiving image data
– Separate thread for Vicon data
25. Results
• Processes implemented with stable frequency:
– PID control (30 Hz)
– Vicon data acquisition (30 Hz)
– Image processing: SIFT (8 Hz) and Optical Flow (15 Hz)
Standard GPS vertical precision is between 25 m and 50 m. A pressure sensor, depending on the environment, will have an error between 6% and 7%; moreover, a pressure sensor only measures relative height. A laser altimeter is good, but for our type of MAV it would not be suitable due to its energy consumption and surface requirements. A radar sensor would not be suitable due to its high cost and energy consumption. Thus we still think the vision system is one of the best alternatives; however, due to its high computing cost, depending on the algorithm there is an issue of whether we can implement it on-board or host-based.
Our platform is an off-the-shelf platform from Ascending Technologies, Germany. It is 54 cm in diameter and 20 cm in height, with a total weight of less than 700 grams and a flight time of up to 12 minutes; however, we state 10 minutes for safety purposes. The flight range is 1 to 2 kilometers, and the total cost is less than 4000 USD, which is around 350 thousand yen; the details of the price can be seen in the table at the right corner. In this platform we incorporated the GUMSTIX flight computer, which has a computing speed of 400 MHz, is equipped with Wi-Fi, and is Linux-based. We also use the Crossbow MNAV IMU, pressure and gyro sensors for avionics, and mount a small analog camera on the bottom of the platform, which operates in the 1.3 GHz band.
The Optical Flow algorithm can be summarized by the picture here. When we select the object of interest (OOI), shown as the blue shape, the Shi-Tomasi algorithm automatically selects good features to track; we can limit these to a certain number of features, and in this example five points are selected. Even though it is not exactly the center of the OOI, we output the center of the five features as the object center and relay it to the controller as an input. Based on the Optical Flow algorithm, the same five features are tracked in the next frame, and the features are reselected whenever the features of interest are lost. We summarize the Optical Flow algorithm on the next slide.
As shown in the formula above, the feature point (x, y) will be at (x + dx, y + dy) in the next image frame I2, where dx and dy are defined as the argument of the minimum of this equation. The same process is then repeated at the next step, providing optical flow and a simple feature-tracking algorithm. However, there are problems with feature detection: first, only a limited number of points or features are selected; second, the object profile is lost when the object leaves the screen due to movement, or simply due to image blur from bad video streaming reception. The feature-based algorithm does not "remember" the object as first selected. Thus we introduce a hybrid algorithm to solve this problem, combining the color-based and feature-based algorithms.
The study was done by Kendoul et al. using a single camera; however, we want to exploit the potential of a stereo camera, especially to correctly estimate the depth (Zi) used in the equation.
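For a rectified stereo pair, the depth recovery that the stereo camera enables is the standard triangulation formula Z = fB/d. A minimal sketch follows; the focal length and baseline in the example are made-up numbers, not the platform's calibration:

```python
def depth_from_disparity(f_px, baseline_m, x_left, x_right):
    """Depth of a matched feature for a rectified stereo pair:
    Z = f * B / d, where d = x_left - x_right is the disparity in pixels,
    f is the focal length in pixels, and B is the baseline in metres."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: feature at infinity or a bad match")
    return f_px * baseline_m / d
```

For example, a 10-pixel disparity with an assumed 500-pixel focal length and a 10 cm baseline gives Z = 500 * 0.1 / 10 = 5 m; this per-feature depth is what replaces the single-camera depth parameter in the EKF.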