Vision Based Autonomous Landing of UAV
Shikhar Gupta1 and Dr. Hyunchul Shin2
Abstract— This paper presents our work on a quad-rotor unmanned aerial vehicle (UAV) and focuses on our vision-based method for detecting the landing platform at the destination GPS location. The paper describes an approach for autonomous maneuvering of the UAV to the landing platform using onboard control. The approach uses the Robot Operating System (ROS) framework to integrate the onboard components and generate control signals for the UAV.
I. INTRODUCTION
Many organizations involved in delivery systems are promoting the use of miniature UAVs, also known as drones, for delivering packages within a very short time. This can also increase the overall safety and efficiency of the transportation system. Amazon is developing its own drone delivery system, Prime Air [1]. Google has been testing UAVs in Australia, aiming to produce drones that deliver products sold via e-commerce [2]. USPS has been testing delivery systems with HorseFly drones [3].
Delivery systems are generally GPS based and use longitude and latitude coordinates to send the drone to its destination. While GPS-based control is sufficient for general positioning (assuming the signal is not blocked), the accuracy of GPS receivers suitable for miniature UAVs is measured in meters, making them unsuitable for precision tasks such as landing. For a delivery system to function successfully, the drone must land on the target precisely. In this paper, we use the Scale Invariant Feature Transform (SIFT) to detect the landing site and the drone's attitude control for precise maneuvering.
A. The Quadcopter "DJI Matrice 100"
The quadcopter used in this work is the Matrice 100 (Fig. 1), manufactured by DJI, a Chinese technology company [4]. It is propelled by four DC motors and can carry up to 3,600 g (including the two LiPo 6S batteries) for 20 minutes. Apart from the propellers, it is equipped with a GPS module, a camera with full gimbal control, and the usual accelerometers and gyroscopes. The GPS module enables the drone to reach the destination and hold its position, but the system has intrinsic error sources that must be taken into account when a receiver reads the GPS signals from the constellation of satellites in orbit. Hence, we use the attitude control mode, which gives access to roll and pitch control, along with the GPS module for precise landing.
1Shikhar Gupta is an undergraduate majoring in Electronics and Communication, Indian Institute of Technology Guwahati. g.shikhar@iitg.ernet.in
2Dr. Hyunchul Shin is with the Faculty of Electrical Engineering, Hanyang University, ERICA Campus, South Korea. shin@hanyang.ac.kr
Fig. 1: DJI Matrice 100 with NVIDIA Jetson TK1 mounted
on top
B. The Onboard Processor "NVIDIA Jetson TK1"
We have equipped the drone with additional hardware, including NVIDIA's Jetson TK1 [5], an embedded Linux development platform featuring a Tegra K1 SoC (CPU, GPU, and ISP on a single chip). Besides the quad-core 2.3 GHz ARM Cortex-A15 CPU and the Tegra K1 GPU, the Jetson TK1 board offers a feature set similar to that of a Raspberry Pi.
One feature becoming common in miniature UAVs is an onboard camera, which helps in performing standalone vision-based tasks and position control. We have mounted a USB camera on the drone and connected it to the Jetson TK1's USB port. Images are acquired from the camera and processed onboard.
C. Related Work
Various organizations and research groups have presented their work on UAVs, and the field has recently become more popular among researchers and hobbyists due to its wide range of applications. Most similar to our work is [6], which also focuses on detecting a landing site using a vision algorithm and determines the orientation of the UAV. [7] also describes an algorithm to detect a landing site shaped like an "H"; it is one of the earliest works in this field. [8] uses Moire patterns, which are convincing but not very robust in outdoor surroundings. [9] and [10] also describe two different approaches for identifying landing sites.
We have tried to overcome two limitations in our work: the need for a specific landing-site pattern for robust detection, and the need to know the drone's orientation in order to maneuver it to the destination. The next section describes how our algorithm works for any generic or complex design and how the maneuvering is independent of the drone's orientation at any point in time.
II. ROS AND VISION BASED LANDING
A. Identification of The Target
Our algorithm for detecting the landing site is programmed in Python and uses the OpenCV library [11]. Taking advantage of the computational capacity of the Jetson TK1, the algorithm runs on images of resolution 1280x720 and produces results in real time.
We use the SIFT [12] feature extraction method for the detection. It first identifies and stores the keypoints in the input image; Fig. 2 shows the detected keypoints in the training and input images. The image is convolved with Gaussian filters at different scales,
L(x, y, kσ) = G(x, y, kσ) ∗ I(x, y) (1)
and then the difference of successive Gaussian-blurred images is taken (Fig. 2),
D(x, y, kσ) = L(x, y, kiσ) − L(x, y, kjσ) (2)
Keypoints are then taken as the maxima/minima of the Difference of Gaussians (DoG) that occur at multiple scales. Keypoints that have low contrast or are poorly localized along an edge are discarded, and each remaining keypoint is assigned a unique 128-element descriptor vector.
The next step is to match the keypoints obtained from the input image to the keypoints in the training image (Fig. 3). This feature matching is done through a Euclidean-distance based nearest-neighbor approach. After this step we have all the keypoints in the input image that match those in the training image; they are used for distance estimation in the next section.
(a) Landing mark
(b) Input Image
Fig. 2: SIFT keypoints detected on training and testing
images
Fig. 3: Target detection using feature matching
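As a minimal sketch of the detection and matching steps described above (the paper implements them in Python with OpenCV), the snippet below assumes an OpenCV build that exposes cv2.SIFT_create (OpenCV 4.4 or later; older builds expose SIFT through cv2.xfeatures2d). The file names and the Lowe ratio threshold are illustrative choices, not details from the paper.

```python
import cv2
import numpy as np

# Training image of the landing mark and a frame from the onboard camera
# (file names are placeholders).
train_img = cv2.imread("landing_mark.png", cv2.IMREAD_GRAYSCALE)
query_img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute 128-element descriptors in both images.
sift = cv2.SIFT_create()
kp_train, des_train = sift.detectAndCompute(train_img, None)
kp_query, des_query = sift.detectAndCompute(query_img, None)

# Euclidean-distance (L2) nearest-neighbour matching; the ratio test keeps
# only matches that are clearly closer than the second-best candidate.
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw_matches = matcher.knnMatch(des_query, des_train, k=2)
good = [m[0] for m in raw_matches
        if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

# Pixel coordinates of the matched keypoints in the camera frame,
# used for distance estimation in the next section.
matched_pts = np.float32([kp_query[g.queryIdx].pt for g in good])
```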
B. Distance Estimation
Distance estimation using vision-based algorithms alone is still far from exact, so we propose a simple mathematical model for this purpose.
SIFT feature matching returns all matched keypoints across the landing site. To simplify the calculations, we select a single keypoint to represent the target, computed as the arithmetic mean of all matched keypoints:
xmean = (x1 + x2 + · · · + xn) / n (3)
ymean = (y1 + y2 + · · · + yn) / n (4)
The drone's position at the time the image is taken corresponds to the center of the image. Therefore, for an image of resolution 1280x720, the drone's position can be taken as (640, 360) on the Cartesian plane, and the position of the target is (xmean, ymean). We obtain the relative position of the target with respect to the drone by translating the x-y axes. This relative position is in pixels, so to get the actual distance we multiply it by a scaling factor:
xnew = (xmean − 640) ∗ (scaling factor) (5)
ynew = (ymean − 360) ∗ (scaling factor) (6)
Here ∗ denotes multiplication. This position vector gives the x and y distance of the drone from the landing site in meters and is used in the next section to guide the drone to the target location.
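A compact sketch of eqs. (3)-(6) is given below, assuming the matched keypoints are available as an Nx2 NumPy array (as in the previous sketch). The scaling-factor value shown is purely illustrative; in practice it depends on the camera parameters and the flight height, as noted in Section III.

```python
import numpy as np

IMG_W, IMG_H = 1280, 720
SCALING_FACTOR = 0.005  # metres per pixel; illustrative value only

def target_offset(matched_pts):
    """matched_pts: Nx2 array of matched keypoint pixel coordinates."""
    x_mean, y_mean = matched_pts.mean(axis=0)        # eqs. (3) and (4)
    x_new = (x_mean - IMG_W / 2) * SCALING_FACTOR    # eq. (5)
    y_new = (y_mean - IMG_H / 2) * SCALING_FACTOR    # eq. (6)
    return x_new, y_new
```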
C. Roll and Pitch Control
The Matrice 100's attitude mode gives control over the drone's pitch and roll angles, which in turn can be used for maneuvering. These two angles determine the drone's velocity in the forward and lateral directions. We keep the velocities low for better results, since in attitude mode (non-GPS) the drone tends to drift more. xnew and ynew determine the times for which the drone moves in the forward/backward and lateral directions at their respective velocities. This guides the drone to the landing site independently of its orientation. The sign of xnew determines whether the target is in the forward or backward direction, the sign of ynew determines whether it is to the left or right, and the drone is maneuvered accordingly.
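A rough sketch of this timing logic follows; it is not the authors' controller code. With a fixed low speed on each axis, the travel time is simply distance divided by speed, and the signs of xnew and ynew select the direction. The speed values and the exact sign conventions here are assumptions.

```python
FORWARD_SPEED = 0.3   # m/s, assumed constant low speed in attitude mode
LATERAL_SPEED = 0.3   # m/s

def move_commands(x_new, y_new):
    # Time to travel each axis at the fixed speed (distance / speed).
    t_forward = abs(x_new) / FORWARD_SPEED
    t_lateral = abs(y_new) / LATERAL_SPEED
    # Sign of x_new picks forward vs backward, sign of y_new picks left vs
    # right, following the mapping described in the text.
    forward_dir = 1 if x_new >= 0 else -1
    lateral_dir = 1 if y_new >= 0 else -1
    return (t_forward, forward_dir), (t_lateral, lateral_dir)
```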
D. The ROS Framework
All of the above processes are integrated as separate nodes in a single ROS [13] package, forming the controller structure. The nodes communicate with each other to guide the drone to the target location; a minimal sketch of the vision node is given after the list below.
• The launch node sets up serial communication between the Jetson TK1 and the drone through the UART port.
• The camera node acquires images and publishes them to the other nodes.
• The vision node subscribes to the images and uses them for detection and distance estimation.
• The drone node uses the estimated x and y coordinates to generate control signals and maneuver the drone to the precise location.
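The rospy sketch below illustrates how such a vision node could subscribe to the camera image, run detection and distance estimation, and publish the x/y offset for the drone node. The topic names, the message type, and the helper functions detect_landing_mark and target_offset (wrapping the earlier sketches) are illustrative assumptions, not the names used in the original package.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import Point
from cv_bridge import CvBridge

bridge = CvBridge()
offset_pub = None

def image_callback(msg):
    # Convert the ROS image to an OpenCV array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="mono8")
    pts = detect_landing_mark(frame)       # SIFT detection + matching (Sec. II-A)
    if pts is None or len(pts) == 0:
        return                             # landing mark not visible in this frame
    x_new, y_new = target_offset(pts)      # pixel offset -> metres (Sec. II-B)
    offset_pub.publish(Point(x=x_new, y=y_new, z=0.0))

if __name__ == "__main__":
    rospy.init_node("vision_node")
    offset_pub = rospy.Publisher("/landing/offset", Point, queue_size=1)
    rospy.Subscriber("/camera/image_raw", Image, image_callback)
    rospy.spin()
```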
III. EXPERIMENTAL RESULTS AND CONCLUSIONS
A. Experiment
The proposed landing algorithm was validated experimentally. Experiments were performed at heights of 3 meters and 5 meters, with the landing mark printed on an A4 sheet; the scaling factor in eqs. (5) and (6) had to be set accordingly. Under steady wind conditions the results are quite precise, with an error of about 10 cm. In windy conditions, however, the drone tends to drift away from the target in attitude mode (non-GPS) and cannot hold its position with high accuracy, which increases the error. Using a larger landing site (larger than A4) reduces this error, since the increased landing area accommodates the drift.
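The paper does not state how the scaling factor was chosen. One common way, assuming a pinhole camera pointing straight down with a known horizontal field of view, is to derive meters-per-pixel from the flight height; since the factor grows linearly with height, this also explains why it had to be adjusted between the 3 m and 5 m tests. The field-of-view value below is an assumed figure, not a measured parameter of the camera used here.

```python
import math

def scaling_factor(altitude_m, hfov_deg=60.0, img_width_px=1280):
    # Ground width covered by the image at this altitude (pinhole model,
    # camera pointing straight down): 2 * h * tan(HFOV / 2).
    ground_width_m = 2.0 * altitude_m * math.tan(math.radians(hfov_deg) / 2.0)
    return ground_width_m / img_width_px   # metres per pixel

# e.g. at the two test heights the factor scales roughly as 3:5.
print(scaling_factor(3.0), scaling_factor(5.0))
```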
B. Conclusion
In this paper we proposed an approach for the autonomous landing of a drone using a vision-based algorithm. Our approach uses only one frame for identification and distance estimation, so USB cameras, which do not have a very high data rate, can also be used. The limitation of using only a particular type of pattern for detection, common in much of the literature, is also removed.
REFERENCES
[1] Amazon: Prime Drones, https://www.amazon.com/b?node=8037720011
[2] Alexis C. Madrigal (28 August 2014), "Inside Google's Secret Drone-Delivery Program", The Atlantic.
[3] "USPS Drone Delivery — CNBC", https://youtu.be/V9GXiXgaK34
[4] DJI developers: https://developer.dji.com
[5] NVIDIA Jetson TK1 Wiki: http://elinux.org/Jetson_TK1
[6] S. Lange, N. Sunderhauf, and P. Protzel, "A Vision Based Onboard Approach for Landing and Position Control of an Autonomous Multirotor UAV in GPS-Denied Environment", in International Conference on Advanced Robotics (ICAR), 2009.
[7] S. Saripalli, J. F. Montgomery, and G. S. Sukhatme, "Vision-Based Autonomous Landing of an Unmanned Aerial Vehicle", in IEEE International Conference on Robotics and Automation (ICRA), 2002.
[8] G. P. Tournier, M. Valenti, J. P. How, and E. Feron, "Estimation and Control of a Quadrotor Vehicle Using Monocular Vision and Moire Patterns", in AIAA Guidance, Navigation, and Control Conference and Exhibit, 2006.
[9] P. J. Garcia-Pardo, G. S. Sukhatme, and J. F. Montgomery, "Towards vision-based safe landing for an autonomous helicopter", Robotics and Autonomous Systems, 2001.
[10] S. Bosch, S. Lacroix, and F. Caballero, "Autonomous Detection of Safe Landing Areas for an UAV from Monocular Images", in IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, 2006.
[11] "The OpenCV Library", http://opencv.org/
[12] D. G. Lowe, "Object recognition from local scale-invariant features", in Proceedings of the International Conference on Computer Vision, 1999.
[13] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng, "ROS: an open-source Robot Operating System", in Proc. of the IEEE Intl. Conf. on Robotics and Automation (ICRA), 2009.
