
Disaster Monitoring using Unmanned Aerial Vehicles and Deep Learning


Monitoring and identification of disasters are crucial for mitigating their effects on the environment and on the human population, and can be facilitated by the use of unmanned aerial vehicles (UAV) equipped with camera sensors that can produce frequent aerial photos of the areas of interest. A modern, promising technique for recognizing events in aerial photos is deep learning. In this paper, we present the state-of-the-art work related to the use of deep learning techniques for disaster monitoring and identification. Moreover, we demonstrate the potential of this technique to identify disasters automatically, with high accuracy, by means of a relatively simple deep learning model. Based on a small dataset of 544 images (containing images of disasters such as fires, earthquakes, collapsed buildings, tsunami and flooding, as well as “non-disaster” scenes), our preliminary results show an accuracy of 91%, indicating that deep learning, combined with UAV equipped with camera sensors, has the potential to identify disasters with high accuracy in the near future.

Presented at the EnviroInfo 2017 Conference in Luxembourg.



  1. EnviroInfo Conference 2017, Disaster Management for Resilience and Public Safety Workshop. Disaster Monitoring using UAV and Deep Learning. Andreas Kamilaris, 13th September 2017, Luxembourg.
  2. Problem: Monitoring and identification of disasters are crucial for mitigating their effects on the environment and on the human population.
  3. Motivation: Disaster monitoring can be facilitated by the use of unmanned aerial vehicles (UAV), equipped with camera sensors which can produce frequent aerial photos of the areas of interest.
  4. Motivation: Advantages of drones: • Small size • Low cost of operation • Exposure to dangerous environments • High probability of mission success • No risk to aircrew • High-resolution image sensing • High operational flexibility.
  5. Motivation: Modern computer vision techniques: • Artificial Neural Networks • Support Vector Machines • Multi-layer Perceptrons • Random Forests • Gaussian Mixture Models • K-Nearest Neighbors • Unsupervised feature learning • Feature extraction techniques (color, shape, texture) • Deep learning. (The slide groups these into machine learning-based approaches and probabilistic modelling.)
  6. Motivation: Advantages of deep learning: • Superior performance in terms of precision • Performs classification and prediction particularly well due to its structure • Flexible and adaptable • No need for hand-engineered features • Generalizes well • Robust to low-resolution and low-quality images. Reference: Andreas Kamilaris and Francesc X. Prenafeta-Boldú, Deep Learning in Agriculture: A Survey, Computers and Electronics in Agriculture Journal, 2017 [under review].
  7. Research Questions: Can drones and aerial image sensing be used for real-time monitoring of physical areas and accurate identification of disasters? Can deep learning be used in combination with drones and aerial images for real-time disaster monitoring and identification?
  8. Deep Learning: Convolutional Neural Networks.
  9. Deep Learning: Convolutional Neural Networks: • Can be applied to many forms of data, such as audio, video, images, speech and natural language • Various successful, popular architectures: AlexNet, VGG, GoogLeNet, Inception-ResNet, etc. • Pre-trained weights are widely available • Common datasets for pre-training CNN architectures include ImageNet and PASCAL VOC • Many tools and platforms allow researchers to experiment with deep learning, e.g. Keras and Theano.
  10. General Idea: classify each incoming aerial photo as either "Disaster!" or "Nothing to worry about!". (A single-image classification sketch along these lines appears after the transcript.)
  11. State of the Art (disaster, reference, image source, accuracy):
      1. Fire (Kim, Lee, Park, Lee, & Lee, 2016): aerial photos; human-like judgement
      2. Avalanche (Bejiga, Zeggada, Nouffidj, & Melgani, 2017): aerial photos; 72-97% accuracy
      3. Car accidents and fire (Kang & Choo, 2016): CCTV cameras; 96-99% accuracy
      4. Landslides (Liu & Wu, 2016): optical remote sensing; 96% accuracy
      5. Landslides and flood (Amit, Shiraishi, Inoshita, & Aoki, 2016): optical remote sensing; 80-90% accuracy
  12. Methodology: CNN model: VGG architecture, pre-trained on the ImageNet dataset. Dataset: 544 aerial photos from Google Images (min. 256x256 pixels), retrieved with queries of the form [Disaster | Landscape] + "aerial view" + "drone", where [Disaster] covers earthquake, hurricane, flood and fire, and [Landscape] covers aerial views of cities, villages, forests and rivers. (A hedged Keras sketch of this transfer-learning setup appears after the transcript.)
  13. Dataset (image group, number of images, relevant possible disaster):
      1. Buildings collapsed: 101 images; earthquakes and hurricanes
      2. Flames or smoke: 111 images; fire
      3. Flood: 125 images; earthquakes, hurricanes and tsunami
      4. Forests and rivers: 104 images; no disaster
      5. Cities and urban landscapes: 103 images; no disaster
  14. Dataset: Disasters (example images): buildings collapsed, flames or smoke, flood.
  15. Dataset: Landscapes (example images): forests and rivers, cities and urban landscapes.
  16. Setup: • 82% of the dataset (444 images) used as training data and 18% (100 images) as testing data • Random assignment of images to training/testing • Training took about 20 minutes on a Linux machine; testing the 100 images took about 5 minutes • Learning rate: 0.001 • Data augmentation techniques used • 30 epochs. (These settings are mirrored in the training sketch after the transcript.)
  17. Results: Training vs. Testing: chart of overall precision (%) for different training/testing splits (70-30, 75-25, 82-18, 85-15, 90-10).
  18. Results: Training vs. Precision: chart of overall precision (%) against the number of training epochs (5 to 35).
  19. Results: Confusion Matrix: 91% precision, 9% error. (A confusion-matrix sketch appears after the transcript.)
  20. Results: Analysis of Error: the 9% error consists of urban vs. buildings collapsed (4%), urban vs. fire (2%), flooding vs. buildings collapsed (2%) and urban vs. flooding (1%).
  21. Conclusion: Deep learning offers good precision and many benefits, and can be successfully used in combination with UAV for disaster monitoring and identification. It also has some disadvantages: • Training takes a (sometimes much) longer time • It requires the preparation and pre-labeling of a dataset containing at least a few hundred images.
  22. Future Work: • Publish the dataset to the research community • Enhance the dataset with more images • Experiment with different architectures, platforms and parameters • Increase overall precision to more than 95% • Perform a real-life case study with drones monitoring a particular disaster, e.g. indication of fire.
  23. Vision: Better disaster modelling, especially when combining UAV and deep learning with geo-tagging of the identified events and geospatial applications. Facilitate the integration of the relevant actors (i.e. action forces/authorities, citizens/volunteers and other stakeholders) in disaster management activities with regard to communication, coordination and collaboration.
  24. Many thanks for your attention! Andreas Kamilaris, andreas.kamilaris@irta.cat
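
The slides describe the pipeline only at a high level, so the following is a minimal sketch, in Keras (one of the tools named on slide 9), of the kind of transfer-learning setup outlined on slides 12 and 16: a VGG16 base pre-trained on ImageNet with a new five-class head, an 82/18 split, a learning rate of 0.001, data augmentation and 30 epochs. The folder layout, the frozen convolutional base, the choice of SGD, the batch size, the specific augmentation operations and the file name disaster_vgg16.h5 are assumptions, not details confirmed in the presentation.

    # Hedged sketch: VGG16 pre-trained on ImageNet (slide 12) with a new
    # 5-class head for the image groups of slide 13.
    from tensorflow.keras import layers, models, optimizers
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    IMG_SIZE = (256, 256)       # slide 12: images of at least 256x256 pixels
    NUM_CLASSES = 5             # slide 13: five image groups
    DATA_DIR = "dataset/"       # assumed layout: dataset/<image_group>/*.jpg

    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=IMG_SIZE + (3,))
    base.trainable = False      # assumption: only the new head is trained

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    # Learning rate 0.001 and 30 epochs come from slide 16; SGD is an assumption.
    model.compile(optimizer=optimizers.SGD(learning_rate=0.001),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Data augmentation (slide 16); the exact operations are assumptions.
    # validation_split=0.18 stands in for the random 82/18 split of slide 16.
    train_datagen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                                       horizontal_flip=True, zoom_range=0.1,
                                       validation_split=0.18)
    test_datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.18)

    train_gen = train_datagen.flow_from_directory(
        DATA_DIR, target_size=IMG_SIZE, batch_size=16,
        class_mode="categorical", subset="training")
    test_gen = test_datagen.flow_from_directory(
        DATA_DIR, target_size=IMG_SIZE, batch_size=16,
        class_mode="categorical", subset="validation", shuffle=False)

    model.fit(train_gen, validation_data=test_gen, epochs=30)
    model.save("disaster_vgg16.h5")  # hypothetical file name, reused below

With 544 images in total, a 0.18 split yields roughly the 444/100 partition reported on slide 16.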
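
Slides 19-20 report 91% precision and break the 9% error down by class pairs. A sketch of how such a confusion matrix could be produced for the held-out images, under the same assumptions as above (including the hypothetical disaster_vgg16.h5 file), might look as follows; scikit-learn is used here purely for convenience and is not mentioned in the slides.

    import numpy as np
    from sklearn.metrics import confusion_matrix, classification_report
    from tensorflow.keras.models import load_model
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    model = load_model("disaster_vgg16.h5")  # hypothetical file from the sketch above

    # Same deterministic 18% split as in the training sketch; shuffle=False keeps
    # the predictions aligned with test_gen.classes.
    test_gen = ImageDataGenerator(rescale=1.0 / 255,
                                  validation_split=0.18).flow_from_directory(
        "dataset/", target_size=(256, 256), batch_size=16,
        class_mode="categorical", subset="validation", shuffle=False)

    y_pred = np.argmax(model.predict(test_gen), axis=1)
    y_true = test_gen.classes

    labels = list(test_gen.class_indices.keys())
    print(confusion_matrix(y_true, y_pred))
    print(classification_report(y_true, y_pred, target_names=labels))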
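
Finally, the general idea of slide 10 (classify an incoming aerial photo as either "Disaster!" or "Nothing to worry about!") could be wired up roughly as follows. The class names and their alphabetical ordering, the mapping of the two landscape groups to "no disaster" (based on slide 13) and the input file name are all assumptions for illustration only.

    import numpy as np
    from tensorflow.keras.models import load_model
    from tensorflow.keras.preprocessing import image

    # Assumed folder/class names, in the alphabetical order Keras would use.
    CLASS_NAMES = ["buildings_collapsed", "cities_urban", "flames_smoke",
                   "flood", "forests_rivers"]
    NO_DISASTER = {"cities_urban", "forests_rivers"}   # slide 13: "No Disaster" groups

    model = load_model("disaster_vgg16.h5")            # hypothetical file name

    img = image.load_img("aerial_photo.jpg", target_size=(256, 256))
    x = image.img_to_array(img)[np.newaxis] / 255.0    # shape (1, 256, 256, 3)

    predicted = CLASS_NAMES[int(np.argmax(model.predict(x)))]
    print("Nothing to worry about!" if predicted in NO_DISASTER else "Disaster!",
          f"(predicted class: {predicted})")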
