
MediaEval 2017 - Satellite Task: The Multimedia Satellite Task at MediaEval 2017: Emergency Response for Flooding Events (Overview)

Presenter: Benjamin Bischke, German Research Center for Artificial Intelligence (DFKI), Germany

Paper: http://ceur-ws.org/Vol-1984/Mediaeval_2017_paper_2.pdf

Video: https://youtu.be/ADBGHRFlJ8M

Authors: Benjamin Bischke, Patrick Helber, Christian Schulze, Venkat Srinivasan, Andreas Dengel, Damian Borth

Abstract: This paper provides a description of the MediaEval 2017 Multimedia Satellite Task. The primary goal of the task is to extract and fuse content of events which are present in Satellite Imagery and Social Media. Establishing a link from Satellite Imagery to Social Multimedia can yield a comprehensive event representation which is vital for numerous applications. Focusing on natural disaster events this year, the main objective of the task is to leverage the combined event representation within the context of emergency response and environmental monitoring. In particular, our task focuses this year on flooding events and consists of two subtasks. The first subtask, Disaster Image Retrieval from Social Media, requires participants to retrieve images from Social Media which show direct evidence of the flooding event. The second subtask, Flood Detection in Satellite Images, aims to extract regions in satellite images which are affected by a flooding event. Extracted content from both tasks can be fused by means of the geographic information. The task seeks to go beyond state-of-the-art flood map generation towards recent approaches in Deep Learning, while at the same time augmenting the satellite information with rich social multimedia.


  1. 1. DFKI – KM - DLCC MediaEval 2017 Multimedia Satellite Task Deep Learning Competence Center & Smart Data and Knowledge Services Benjamin Bischke, Patrick Helber, Venkat Srinivasan, Alan Woodley, Andreas Dengel, Damian Borth Emergency Response for Flooding Events ALL RIGHTS RESERVED. No part of this work may be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system without expressed written permission from the authors.
  2. 2. DFKI – KM - DLCC Satellite Analysis - What can be done with it? 2
  3. 3. DFKI – KM - DLCC Satellite Analysis - What can be done with it? 3 Water Utilisation, Deforestation, Climate Change, Natural & Man-made Disasters, Infrastructure & Traffic Monitoring, Wildlife Monitoring, Pollution Monitoring (Oil Spills, Smoke, Pipelines), Agriculture, Algae Bloom, Ice Caps, Carbon Stocks, Social Events
  4. 4. DFKI – KM - DLCC Satellite Analysis - What can be done with it? 4 Water Utilisation, Deforestation, Climate Change, Natural & Man-made Disasters, Infrastructure & Traffic Monitoring, Wildlife Monitoring, Pollution Monitoring (Oil Spills, Smoke, Pipelines), Agriculture, Algae Bloom, Ice Caps, Carbon Stocks, Social Events
  5. 5. DFKI – SDS - DLCC 5 Current Natural Disasters
  6. 6. DFKI – SDS - DLCC 6 Current Natural Disasters
  7. 7. DFKI – SDS - DLCC 7 Current Natural Disasters
  8. 8. DFKI – SDS - DLCC 8 Current Natural Disasters
  9. 9. DFKI – SDS - DLCC Is Satellite Imagery enough? 9 DigitalGlobe, September 2017
  10. 10. DFKI – SDS - DLCC Is Satellite Imagery enough? 10 DigitalGlobe, September 2017 Limited Perspective (Clouds, etc.) Low Temporal Resolution
  11. 11. DFKI – SDS - DLCC Contextual Enrichment of Satellite Imagery (Idea) 11 DigitalGlobe, September 2017
  12. 12. DFKI – SDS - DLCC 12 Multimedia Satellite Task - Overview • Goal: Combine Satellite Imagery with Social Multimedia • Focus on Flooding Events • Two Subtasks: • Disaster Image Retrieval from Social Media (DIRSM) • Flood Detection in Satellite Imagery (FDSI)
  13. 13. DFKI – SDS - DLCC 13 Multimedia Satellite Task - Overview • Goal: Combine Satellite Imagery with Social Multimedia • Focus on Flooding Events • Two Subtasks: • Disaster Image Retrieval from Social Media (DIRSM) • Flood Detection in Satellite Imagery (FDSI)
  14. 14. DFKI – SDS - DLCC Disaster Image Retrieval from Social Media 14
  15. 15. DFKI – SDS - DLCC Disaster Image Retrieval from Social Media 15
  16. 16. DFKI – SDS - DLCC 16 Multimedia Satellite Task - Overview • Goal: Combine Satellite Imagery with Social Multimedia • Focus on Flooding Events • Two Subtasks: • Disaster Image Retrieval from Social Media (DIRSM) • Flood Detection in Satellite Imagery (FDSI)
  17. 17. DFKI – SDS - DLCC 17 Flood Detection in Satellite Imagery
  18. 18. DFKI – SDS - DLCC 18 Run Submissions and Evaluation • Up to 5 runs for each subtask: • DIRSM: three required runs only with the provided dev set (Visual, textual, both) • FDSI: three required runs only with the provided dev set • Standard Evaluation Metrics: • DIRSM: Average Precision at different cutoffs • FDSI: Intersection over Union
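The two evaluation metrics can be sketched in Python. The exact AP normalisation and tie handling used by the organisers are not specified in this overview, so treat this as an illustrative version:

```python
import numpy as np

def average_precision_at_k(ranked_relevance, k):
    """AP@k for a ranked retrieval list.

    ranked_relevance: 0/1 flags, one per retrieved image, in ranked
    order (1 = shows direct flooding evidence)."""
    rel = np.asarray(ranked_relevance[:k], dtype=float)
    if rel.sum() == 0:
        return 0.0
    # precision after each rank, averaged over the relevant positions
    precision_at_i = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision_at_i * rel).sum() / rel.sum())

def mean_ap_over_cutoffs(ranked_relevance, cutoffs=(50, 100, 150, 240, 480)):
    """DIRSM score: mean of AP at the cutoffs used in the task."""
    return float(np.mean([average_precision_at_k(ranked_relevance, k)
                          for k in cutoffs]))

def intersection_over_union(pred_mask, gt_mask):
    """FDSI score: IoU between predicted and ground-truth flood masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(pred, gt).sum() / union)
```

For example, a ranking whose first and third images are relevant scores AP@3 = (1/1 + 2/3)/2 ≈ 0.83.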
  19. 19. DFKI – SDS - DLCC 19 Task Dataset • DIRSM-Dataset: • 6.6k images from YFCC100M + metadata (under CC-licence) • Basic set of precomputed features • Two labels (Flooding/no Flooding) • FDSI-Dataset: • High resolution satellite scenes of seven flooding events provided by PlanetLabs • Cropped Image Patches (320x320px) • Segmentation masks for each patch
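Tiling a satellite scene into the 320x320 px patches used by the FDSI dataset can be sketched as below; how the organisers handled scene borders is not stated, so dropping partial border tiles is an assumption:

```python
import numpy as np

def crop_patches(scene, patch=320):
    """Tile a scene array of shape (H, W, C) into non-overlapping
    patch x patch crops, dropping partial border tiles (assumption)."""
    h, w = scene.shape[:2]
    return [scene[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]
```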
  20. 20. DFKI – SDS - DLCC 20 Ground Truth • DIRSM: • Images rated according to the strength of the evidence of flooding (1-5) on Crowdflower (crowdsourcing) • Labelled as flooding for ratings 4 and 5, non-flooding for ratings 1 and 2 • Additional distractor images • FDSI: • Segmentation masks extracted by human annotators
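The rating-to-label mapping amounts to a small function; the middle rating 3 is treated as undecided here, since its exact handling is not stated in this overview:

```python
def rating_to_label(rating):
    """Map a 1-5 crowd evidence rating to the task's binary label.
    4 and 5 -> flooding, 1 and 2 -> non-flooding; rating 3 is
    returned as None / undecided (assumption)."""
    if rating in (4, 5):
        return "flooding"
    if rating in (1, 2):
        return "non-flooding"
    return None
```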
  21. 21. DFKI – SDS - DLCC 21 Task Participation • 15 teams registered, 11 submitted runs: • 11 teams submitted for the DIRSM subtask • 6 teams additionally for the FDSI subtask • In total 63 submissions: • 44 submissions for the first subtask • 19 submissions for the second subtask
  22. 22. DFKI – SDS - DLCC 22 Participants' Approaches - DIRSM • Many different approaches! • Features: • Visual Features (CNN Features, Basic Features) • Metadata (Word Embeddings, BoW of text, title, tags) • Classifiers: • Convolutional Neural Networks, Relation Networks, LSTMs • SVM, Random Forests, Logistic Regression • Late Fusion vs. Early Fusion • Additional Data Sources (DBPedia-Spotlight, YFCC100M) • Spectral Regression based Kernel Discriminant Analysis
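The late-fusion vs. early-fusion distinction mentioned above can be illustrated in a few lines; the weighted average and the weight value are one common choice, not a method from any particular submission:

```python
import numpy as np

def early_fusion(visual_feat, textual_feat):
    """Early fusion: concatenate per-modality feature vectors,
    then train a single classifier on the joint vector."""
    return np.concatenate([np.asarray(visual_feat), np.asarray(textual_feat)])

def late_fusion(score_visual, score_textual, w=0.5):
    """Late fusion: combine the scores of separately trained
    per-modality classifiers, here as a weighted average
    (the weight is illustrative)."""
    return w * score_visual + (1.0 - w) * score_textual
```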
  23. 23. DFKI – SDS - DLCC 23 Participants' Approaches - FDSI • Many different approaches! • Domain Knowledge: • Features from remote sensing (NDVI, NDWI) + SVM & K-Means • Neural Network based: • CNN Features + SVM • Segmentation Architectures (FCN + SegNet) • Generative Adversarial Networks (V-GAN)
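The remote-sensing indices mentioned above are simple band ratios; a minimal sketch, with an illustrative NDWI threshold for a naive water-mask baseline (the threshold is an assumption, not from any submission):

```python
import numpy as np

def ndvi(red, nir, eps=1e-8):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red, nir = np.asarray(red, dtype=float), np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-8):
    """Normalized Difference Water Index (McFeeters):
    (Green - NIR) / (Green + NIR); water pixels trend positive."""
    green, nir = np.asarray(green, dtype=float), np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)

def water_mask(green, nir, threshold=0.0):
    """Naive flood/water baseline: threshold the NDWI map
    (threshold value is illustrative)."""
    return ndwi(green, nir) > threshold
```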
  24. 24. DFKI – SDS - DLCC 24 Results - DIRSM - Mean over AP@[50, 100, 150, 240, 480]

     Team         Visual  Metadata  Visual+Metadata  Open run  Open run
     MultiBrasil   87.88     62.53            85.63     91.59     41.13
     WISC          62.75     74.37            80.87     81.61     81.99
     CERTH-ITI     92.27     39.90            83.37         -         -
     BMC           19.69     12.46            11.93     11.89     11.79
     UTAOS         95.11     31.45            68.12     89.77     82.68
     RU-DS         64.70     75.74            85.43         -         -
     B-CVC         70.16     66.38            83.96     75.96         -
     ELEDIA@UTB    87.87     57.12            90.39     97.36         -
     MRLDCSE       95.73     18.23            92.55         -         -
     FAST-NU-DS    80.98     71.79            80.84         -         -
     DFKI          95.71     77.64            97.40     64.50         -
  25. 25. DFKI – SDS - DLCC 25 Results - DIRSM - AP@480

     Team         Visual  Metadata  Visual+Metadata  Open run  Open run
     MultiBrasil   74.60     76.71            95.84     82.06     54.31
     WISC          50.95     66.78            72.26     71.97     72.10
     CERTH-ITI     87.82     36.15            68.57         -         -
     BMC           15.55     12.37            12.20     12.23     12.16
     UTAOS         84.94     25.88            54.74     81.11     73.83
     RU-DS         51.46     63.70            73.16         -         -
     B-CVC         68.40     61.58            81.60     68.40         -
     ELEDIA@UTB    77.62     57.07            85.41     90.69         -
     MRLDCSE       86.81     22.83            83.73         -         -
     FAST-NU-DS    64.88     65.00            64.58         -         -
     DFKI          86.64     63.41            90.45     74.08         -
  26. 26. DFKI – SDS - DLCC 26 Results - FDSI - Same Locations

     Team         Run 1  Run 2  Run 3  Run 4  Run 5
     MultiBrasil     87     86     88     78     87
     WISC            80     81      -      -      -
     CERTH-ITI       75      -      -      -      -
     BMC             37     37     37      -      -
     UTAOS           82     80     83     83     81
     DFKI            73     84     84      -      -
  27. 27. DFKI – SDS - DLCC 27 Results - FDSI - New Locations

     Team         Run 1  Run 2  Run 3  Run 4  Run 5
     MultiBrasil     82     80     84     49     84
     WISC            83     77      -      -      -
     CERTH-ITI       56      -      -      -      -
     BMC             40     40     40      -      -
     UTAOS           73     70     74     74     73
     DFKI            69     70     74      -      -
  28. 28. DFKI – SDS - DLCC 28 Conclusion • Many different approaches for both subtasks • Multimodal fusion is important • CNN features of pre-trained models are often used • the training dataset matters! • High accuracies for the retrieval task • Quantify impact of a disaster event? • Good accuracies for segmentation based on satellite information • Taking the temporal dimension into account • More data sources (more satellites, topographical maps) • Fusion of modalities for prediction
