The Multimedia Satellite Task at MediaEval 2017: Emergency Response for Flooding Events (Overview)

DFKI – KM - DLCC
MediaEval 2017 Multimedia Satellite Task
Deep Learning Competence Center &
Smart Data and Knowledge Services
Benjamin Bischke, Patrick Helber, Venkat Srinivasan,
Alan Woodley, Andreas Dengel, Damian Borth
Emergency Response for Flooding Events
ALL RIGHTS RESERVED. No part of this work may be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system without expressed written permission from the authors.
Satellite Analysis - What can be done with it?
• Water Utilisation
• Deforestation
• Climate Change
• Natural & Manmade Disasters
• Infrastructure & Traffic Monitoring
• Wildlife Monitoring
• Pollution Monitoring (Oil Spills, Smoke, Pipelines)
• Agriculture
• Algae Blooms
• Ice Caps
• Carbon Stocks
• Social Events
Current Natural Disasters
Is Satellite Imagery enough?
(Imagery: DigitalGlobe, September 2017)
• Limited Perspective (Clouds, etc.)
• Low Temporal Resolution
Contextual Enrichment of Satellite Imagery (Idea)
(Imagery: DigitalGlobe, September 2017)
Multimedia Satellite Task - Overview
• Goal: Combine Satellite Imagery with Social Multimedia
• Focus on Flooding Events
• Two Subtasks:
  • Disaster Image Retrieval from Social Media (DIRSM)
  • Flood Detection in Satellite Imagery (FDSI)
Disaster Image Retrieval from Social Media
Flood Detection in Satellite Imagery
Run Submissions and Evaluation
• Up to 5 runs for each subtask:
  • DIRSM: three required runs using only the provided dev set (visual, textual, both)
  • FDSI: three required runs using only the provided dev set
• Standard Evaluation Metrics:
  • DIRSM: Average Precision at different cutoffs
  • FDSI: Intersection over Union
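The two metrics above can be sketched as follows. This is a minimal illustration, not the official evaluation script; it assumes one common AP@k definition (precision averaged over the relevant items in the top k) and binary flood masks for IoU:

```python
import numpy as np

def average_precision_at_k(ranked_labels, k):
    """AP@k: precision@i averaged over the relevant items in the top k.

    ranked_labels: binary relevance of the retrieved list, best-ranked first.
    """
    labels = np.asarray(ranked_labels[:k], dtype=float)
    n_relevant = labels.sum()
    if n_relevant == 0:
        return 0.0
    # precision after each position 1..k
    precisions = np.cumsum(labels) / (np.arange(len(labels)) + 1)
    return float((precisions * labels).sum() / n_relevant)

def intersection_over_union(pred_mask, gt_mask):
    """IoU between two binary segmentation masks (1 = flooded pixel)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(pred, gt).sum() / union)
```

The DIRSM result tables below report the mean of AP@k over the cutoffs 50, 100, 150, 240 and 480, plus AP@480 on its own.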
Task Dataset
• DIRSM Dataset:
  • 6.6k images from YFCC100M + metadata (under CC licence)
  • Basic set of precomputed features
  • Two labels (flooding / no flooding)
• FDSI Dataset:
  • High-resolution satellite scenes of seven flooding events provided by Planet Labs
  • Cropped image patches (320x320 px)
  • Segmentation masks for each patch
Ground Truth
• DIRSM:
  • Images rated on Crowdflower (crowdsourcing) according to the strength of the evidence of flooding (1-5)
  • Labelled as flooding if annotators rate 4 or 5, and as non-flooding for 1 or 2
  • Additional distractor images
• FDSI:
  • Segmentation masks extracted by human annotators
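The DIRSM labelling rule can be illustrated with a small helper; `label_from_ratings` is a hypothetical name, and returning `None` for mixed or mid-scale ratings (i.e. excluding those images) is an assumption layered on the 4-5 / 1-2 rule above:

```python
def label_from_ratings(ratings):
    """Binarise crowd ratings (1-5, strength of evidence of flooding).

    Hypothetical helper: flooding (1) when all raters give 4 or 5,
    no flooding (0) when all give 1 or 2; mixed or mid-scale ratings
    return None (treated here as excluded from the ground truth).
    """
    if all(r >= 4 for r in ratings):
        return 1
    if all(r <= 2 for r in ratings):
        return 0
    return None
```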
Task Participation
• 15 teams registered, 11 submitted runs:
  • 11 teams submitted for the DIRSM subtask
  • 6 teams additionally for the FDSI subtask
• In total 63 submissions:
  • 44 submissions for the first subtask
  • 19 submissions for the second subtask
Participant Approaches - DIRSM
• Many different approaches!
• Features:
  • Visual features (CNN features, basic features)
  • Metadata (word embeddings, BoW of text, title, tags)
• Classifiers:
  • Convolutional Neural Networks, Relation Networks, LSTMs
  • SVM, Random Forests, Logistic Regression
• Late Fusion vs. Early Fusion
• Additional data sources (DBpedia Spotlight, YFCC100M)
• Spectral Regression based Kernel Discriminant Analysis
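The late-vs-early fusion distinction above can be sketched as follows; the equal weighting and the simple score-averaging / feature-concatenation choices are illustrative, not any particular team's method:

```python
import numpy as np

def late_fusion(visual_scores, text_scores, weight=0.5):
    """Late fusion: combine per-image scores of two separately
    trained classifiers (here a weighted average, weight assumed)."""
    v = np.asarray(visual_scores, dtype=float)
    t = np.asarray(text_scores, dtype=float)
    return weight * v + (1.0 - weight) * t

def early_fusion(visual_features, text_features):
    """Early fusion: concatenate modality features first,
    then train a single classifier on the joint vector."""
    return np.concatenate(
        [np.asarray(visual_features), np.asarray(text_features)], axis=-1
    )
```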
Participant Approaches - FDSI
• Many different approaches!
• Domain knowledge:
  • Features from remote sensing (NDVI, NDWI) + SVM & K-Means
• Neural-network based:
  • CNN features + SVM
  • Segmentation architectures (FCN, SegNet)
  • Generative Adversarial Networks (V-GAN)
Results - DIRSM - Mean over AP@[50, 100, 150, 240, 480]

Team        | Visual | Metadata | Visual + Metadata | Open Run 1 | Open Run 2
MultiBrasil | 87.88  | 62.53    | 85.63             | 91.59      | 41.13
WISC        | 62.75  | 74.37    | 80.87             | 81.61      | 81.99
CERTH-ITI   | 92.27  | 39.90    | 83.37             | -          | -
BMC         | 19.69  | 12.46    | 11.93             | 11.89      | 11.79
UTAOS       | 95.11  | 31.45    | 68.12             | 89.77      | 82.68
RU-DS       | 64.70  | 75.74    | 85.43             | -          | -
B-CVC       | 70.16  | 66.38    | 83.96             | 75.96      | -
ELEDIA@UTB  | 87.87  | 57.12    | 90.39             | 97.36      | -
MRLDCSE     | 95.73  | 18.23    | 92.55             | -          | -
FAST-NU-DS  | 80.98  | 71.79    | 80.84             | -          | -
DFKI        | 95.71  | 77.64    | 97.40             | 64.50      | -
Results - DIRSM - AP@480

Team        | Visual | Metadata | Visual + Metadata | Open Run 1 | Open Run 2
MultiBrasil | 74.60  | 76.71    | 95.84             | 82.06      | 54.31
WISC        | 50.95  | 66.78    | 72.26             | 71.97      | 72.10
CERTH-ITI   | 87.82  | 36.15    | 68.57             | -          | -
BMC         | 15.55  | 12.37    | 12.20             | 12.23      | 12.16
UTAOS       | 84.94  | 25.88    | 54.74             | 81.11      | 73.83
RU-DS       | 51.46  | 63.70    | 73.16             | -          | -
B-CVC       | 68.40  | 61.58    | 81.60             | 68.40      | -
ELEDIA@UTB  | 77.62  | 57.07    | 85.41             | 90.69      | -
MRLDCSE     | 86.81  | 22.83    | 83.73             | -          | -
FAST-NU-DS  | 64.88  | 65.00    | 64.58             | -          | -
DFKI        | 86.64  | 63.41    | 90.45             | 74.08      | -
Results - FDSI - Same Locations (IoU, %)

Team        | Run 1 | Run 2 | Run 3 | Run 4 | Run 5
MultiBrasil | 87    | 86    | 88    | 78    | 87
WISC        | 80    | 81    | -     | -     | -
CERTH-ITI   | 75    | -     | -     | -     | -
BMC         | 37    | 37    | 37    | -     | -
UTAOS       | 82    | 80    | 83    | 83    | 81
DFKI        | 73    | 84    | 84    | -     | -
Results - FDSI - New Locations (IoU, %)

Team        | Run 1 | Run 2 | Run 3 | Run 4 | Run 5
MultiBrasil | 82    | 80    | 84    | 49    | 84
WISC        | 83    | 77    | -     | -     | -
CERTH-ITI   | 56    | -     | -     | -     | -
BMC         | 40    | 40    | 40    | -     | -
UTAOS       | 73    | 70    | 74    | 74    | 73
DFKI        | 69    | 70    | 74    | -     | -
Conclusion
• Many different approaches for both subtasks
• Multimodal fusion is important
• CNN features of pre-trained models are often used
  • The pre-training dataset matters!
• High accuracies for the retrieval task
  • Quantify the impact of a disaster event?
• Good accuracies for segmentation based on satellite information
• Taking the temporal dimension into account
• More data sources (more satellites, topographical maps)
• Fusion of modalities for prediction