These slides present an online system that leverages social media data in real time to identify landslide-related information automatically using state-of-the-art artificial intelligence techniques. The system can (i) reduce information overload by eliminating duplicate and irrelevant content, (ii) identify landslide images, (iii) infer the geolocation of the images, and (iv) categorize the user type (organization or person) of the account sharing the information. The system was deployed online in February 2020 at https://landslide-aidr.qcri.org/landslide_system.php to monitor the live Twitter data stream and has been running continuously since then to provide time-critical information to partners such as the British Geological Survey and the European Mediterranean Seismological Centre. We believe this system can both contribute to the harvesting of global landslide data for further research and support global landslide mapping to facilitate emergency response and decision making.
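As a rough illustration of the four-stage filtering described above, the hypothetical Python sketch below chains deduplication, relevance filtering, and an organization-vs-person heuristic. The deployed system uses trained AI models for each stage; every function here is a simplified stand-in, and all names and keyword lists are assumptions for illustration only.

```python
# Hypothetical sketch of the four-stage filtering pipeline; real components
# (deduplication, image/user classifiers, geolocation) would be ML models.
import hashlib

def is_duplicate(text, seen_hashes):
    """Stage (i): drop exact duplicates via a content hash (stand-in)."""
    h = hashlib.sha1(text.strip().lower().encode()).hexdigest()
    if h in seen_hashes:
        return True
    seen_hashes.add(h)
    return False

def is_relevant(text, keywords=("landslide", "mudslide", "rockfall")):
    """Stage (i): keep only landslide-related posts (keyword stand-in)."""
    return any(k in text.lower() for k in keywords)

def classify_user(profile_name):
    """Stage (iv): crude organization-vs-person heuristic (stand-in)."""
    org_cues = ("survey", "centre", "center", "agency", "news")
    return "organization" if any(c in profile_name.lower() for c in org_cues) else "person"

def process_stream(posts):
    """Run each post through dedup + relevance, then tag the user type."""
    seen, results = set(), []
    for post in posts:
        if is_duplicate(post["text"], seen) or not is_relevant(post["text"]):
            continue
        results.append({"text": post["text"], "user_type": classify_user(post["user"])})
    return results

posts = [
    {"text": "Landslide blocks road near town", "user": "British Geological Survey"},
    {"text": "Landslide blocks road near town", "user": "someone"},  # duplicate
    {"text": "Lovely weather today", "user": "someone"},             # irrelevant
]
print(process_stream(posts))  # only the first post survives
```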
MediaEval 2018: Multimedia Satellite Task: Emergency Response for Flooding Ev... (multimediaeval)
Paper: http://ceur-ws.org/Vol-2283/MediaEval_18_paper_10.pdf
Youtube: https://youtu.be/yq1nIPc6dWw
Benjamin Bischke, Patrick Helber, Zhengyu Zhao, Jens de Bruijn, Damian Borth, The Multimedia Satellite Task at MediaEval 2018. Proc. of MediaEval 2018, 29-31 October 2018, Sophia Antipolis, France.
Abstract: This paper provides a description of the MediaEval 2018 Multimedia Satellite Task. The primary goal of the task is to extract and fuse content about events that are present in satellite imagery and social media. Establishing a link from satellite imagery to social multimedia can yield a comprehensive event representation that is vital for numerous applications. Focusing on natural disaster events, the main objective of the task is to leverage the combined event representation within the context of emergency response and environmental monitoring. In particular, our task focuses on flooding events and consists of two subtasks. The first subtask, Image Classification from Social Media, requires participants to retrieve images from social media that show direct evidence of road passability during flooding events. The second subtask, Flood Detection from Satellite Images, aims to extract potentially flooded road sections from satellite images. The task seeks to go beyond state-of-the-art flood-map generation by focusing on information about road passability and accessibility to urban infrastructure. Such information shows clear potential to complement social-media imagery with satellite imagery and is of vital importance for emergency management.
Presented by Benjamin Bischke
Advances in Agricultural Remote Sensing (AyanDas644783)
This document summarizes a three-part training program on crop mapping using synthetic aperture radar (SAR) and optical remote sensing. The training will cover crop classification using time series of polarimetric SAR data, monitoring crop growth through SAR-derived crop structural parameters, and classifying crop types using time series of optical and radar data. Attendees will learn how to analyze satellite image time series from sensors like Sentinel-1 and Sentinel-2 for applications such as crop monitoring. The training objectives are to understand polarimetric SAR for crop assessment and to use multitemporal SAR and optical data for crop monitoring and classification.
1) The document describes a new methodology for identifying landslides using synthetic aperture radar (SAR) data on the Google Earth Engine platform.
2) It applies the methodology to map landslides triggered by heavy rainfall in Hiroshima, Japan in 2018, achieving good detection results by combining pre- and post-event SAR images and a digital elevation model.
3) The study finds that using more SAR images and data from multiple satellite flyovers improves landslide identification accuracy, and that rapid landslide mapping is possible within a week of an event.
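The general idea of combining pre- and post-event SAR images with a DEM can be sketched as below. This is an illustrative simplification, not the paper's exact methodology: the log-ratio change measure, the threshold values, and the slope mask are all assumptions chosen for the sketch.

```python
# Illustrative sketch: flag landslide candidates where the pre-/post-event
# SAR backscatter changed strongly AND the terrain is steep enough to slide.
# Thresholds are assumed values, not taken from the study.
import numpy as np

def landslide_mask(pre, post, slope_deg, ratio_thresh=1.5, min_slope=10.0):
    """pre/post: mean backscatter intensity arrays; slope_deg: DEM slope (degrees)."""
    eps = 1e-6  # avoid division by zero / log of zero
    log_ratio = np.abs(np.log((post + eps) / (pre + eps)))  # change magnitude
    return (log_ratio > ratio_thresh) & (slope_deg > min_slope)

# Tiny synthetic scene: one pixel with a 10x backscatter change on a steep slope.
pre = np.ones((4, 4))
post = np.ones((4, 4))
post[0, 0] = 10.0
slope = np.full((4, 4), 20.0)
print(landslide_mask(pre, post, slope))  # True only at [0, 0]
```

Averaging several pre- and several post-event acquisitions before taking the ratio (as the study's multi-image finding suggests) would suppress speckle noise and improve the mask.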
The database will serve to inform users about the consequences of past events, to act as a benchmarking tool for analytical loss models, and to support the development of tools that create vulnerability data appropriate to specific countries, structures, or building classes.
1) Deep learning is being applied to tasks in Earth observation like land cover mapping, vegetation biomass estimation, 3D building reconstruction, anomaly detection, and simulating remote sensing images.
2) There are unique challenges in applying deep learning to Earth observation data including the curved surface of the Earth, different acquisition geometries, sparse and heterogeneous data, and integrating multiple data sources and dimensions.
3) Examples of deep learning applications presented include using convolutional autoencoders to detect anomalies in remote sensing images, incorporating Lidar data to improve biomass estimation from SAR images, and using generative models to simulate SAR images from optical images.
Smartphones and Earthquakes - A System Design Presentation (Pratyush Pandab)
The document presents an overview of the design phase that was carried out during a collaboration between the Natural Disaster Team of Japan and the research team at the University of Oulu.
The document summarizes the role of geospatial information in a hyper-connected society. It discusses how the digital earth and geo big data/internet of things are generating massive amounts of geospatial data. It also describes how web geo services, participatory mapping, and geo crowdsourcing are making this data accessible and enabling new forms of interaction between people, places, and things on the internet.
The document summarizes the role of geospatial information in a hyper-connected society. It discusses how the digital earth utilizes geospatial data and services to create three-dimensional, multi-resolution models of the planet. It also explores how geo big data from satellites, sensors, social media, and the internet of things is creating massive datasets. Web geospatial services allow users to access, analyze and visualize this geospatial data through applications and participatory platforms.
PEARC17: Visual exploration and analysis of time series earthquake dataAmit Chourasia
Earthquake hazard estimation requires systematic investigation of past records as well as of the fundamental processes that cause earthquakes. However, detailed long-term records of earthquakes at all scales (magnitude, space, and time) are not available. Hence, a synthetic method based on first principles can be employed to generate such records and bridge this critical gap of missing data. RSQSim is such a simulator: it generates seismic event catalogs spanning several thousand years at various scales. This synthetic catalog contains rich detail about the earthquake events and their associated properties.
Exploring this data is of vital importance to validate the simulator, to identify features of interest such as quake time histories, and to conduct analyses such as calculating the mean recurrence interval of events on each fault section. This work describes and demonstrates a prototype web-based visual tool that enables domain scientists and students to explore this rich dataset, and discusses refinement and streamlining of data management and analysis that is less error-prone and more scalable.
Community Structure, Interaction and Evolution Analysis of Online Social Netw... (Symeon Papadopoulos)
This document presents a framework for analyzing the structure, interaction, and evolution of online social networks around real-world social phenomena using Twitter data. It describes discretizing interaction data into timeslots, detecting communities with the Louvain method, and identifying evolving communities over time. Key features include influential users/communities, popular hashtags, persistence, stability, and centrality. The framework was applied to a Greek Twitter dataset involving political hashtags, extracting meaningful information about influential discussions. Future work aims to improve similarity search and incorporate retweets.
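The timeslot-plus-communities workflow above can be sketched as follows. To keep the example dependency-free, a simple connected-components pass stands in for the Louvain method; a real system would use a modularity-based implementation such as `networkx.community.louvain_communities`. All function names and the slot size are assumptions for illustration.

```python
# Sketch of timeslot discretization + per-slot community detection.
# Connected components are a stand-in for the Louvain method used in the paper.
from collections import defaultdict

def to_timeslots(interactions, slot_seconds=3600):
    """Discretize (timestamp, user_a, user_b) interactions into hourly slots."""
    slots = defaultdict(list)
    for ts, a, b in interactions:
        slots[ts // slot_seconds].append((a, b))
    return slots

def communities(edges):
    """Connected components of the interaction graph (community stand-in)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comms = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comms.append(comp)
    return comms

slots = to_timeslots([(10, "a", "b"), (20, "b", "c"), (4000, "d", "e")])
print({slot: communities(edges) for slot, edges in slots.items()})
```

Tracking which communities share members across consecutive slots is then the basis for the evolution analysis (persistence, stability) mentioned above.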
Evaluation of NEAMWave12 and discussion on NEAMWave14
ICG/NEAMTWS-X, Rome, Italy, November 19-21, 2013
Event: http://www.ioc-unesco.org/index.php?option=com_oe&task=viewEventRecord&eventID=1376
Related paper: http://www.ioc-unesco.org/index.php?option=com_oe&task=viewDocumentRecord&docID=12084
Efficient and thorough data collection and its timely analysis are critical for disaster response and recovery in order to save people's lives during disasters. However, access to comprehensive data in disaster areas and their quick analysis to transform the data to actionable knowledge are challenging. With the popularity and pervasiveness of mobile devices, crowdsourcing data collection and analysis has emerged as an effective and scalable solution. This paper addresses the problem of crowdsourcing mobile videos for disasters by identifying two unique challenges of 1) prioritizing visual data collection and transmission under bandwidth scarcity caused by damaged communication networks and 2) analyzing the acquired data in a timely manner. We introduce a new crowdsourcing framework for acquiring and analyzing the mobile videos utilizing fine granularity spatial metadata of videos for a rapidly changing disaster situation. We also develop an analytical model to quantify the visual awareness of a video based on its metadata and propose the visual awareness maximization problem for acquiring the most relevant data under bandwidth constraints. The collected videos are evenly distributed to off-site analysts to collectively minimize crowdsourcing efforts for analysis. Our simulation results demonstrate the effectiveness and feasibility of the proposed framework.
Links:
http://infolab.usc.edu/DocsDemos/to_ieeebigdata2015.pdf
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7363814
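The bandwidth-constrained selection described in the abstract resembles a knapsack problem. The sketch below uses a greedy awareness-per-byte heuristic; this is a hedged illustration, not the paper's actual optimization, and the awareness scores are assumed to be precomputed from the videos' spatial metadata.

```python
# Greedy sketch of visual awareness maximization under a bandwidth budget:
# prefer videos with the highest awareness per transmitted unit of size.
# (The paper derives awareness from fine-grained spatial metadata; here each
# video simply carries an assumed precomputed score.)

def select_videos(videos, bandwidth_budget):
    """videos: list of (name, awareness_score, size); returns chosen names."""
    chosen, used = [], 0
    # Sort by awareness density (score per unit of size), best first.
    for name, score, size in sorted(videos, key=lambda v: v[1] / v[2], reverse=True):
        if used + size <= bandwidth_budget:
            chosen.append(name)
            used += size
    return chosen

videos = [("v1", 9.0, 3), ("v2", 4.0, 1), ("v3", 5.0, 4)]
print(select_videos(videos, bandwidth_budget=4))  # picks v2 then v1
```

A greedy density heuristic is a common first approximation for knapsack-style problems; an exact solver would be needed when the awareness model makes scores interdependent (e.g., overlapping fields of view).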
The lecture discusses space surveillance networks and space situational awareness. It introduces the US Space Surveillance Network, which tracks objects 10 cm and larger in low Earth orbit and 1 m and larger in geosynchronous orbit. Commercial networks like LeoLabs and ExoAnalytic Solutions are also discussed. Data from these networks is used for important services like conjunction predictions to ensure space safety. The next lecture will focus on systems used to measure and characterize space objects.
Processing and understanding geo-social media content (foostermann)
This document discusses using geo-social media as sensors for earth observation and disaster response. It provides an overview of the current state-of-the-art in processing and understanding geo-social media content, including examples from forest fire monitoring and crisis management. Specifically, it describes approaches for retrieving, analyzing, geocoding and clustering geo-social media data from Twitter and Flickr, as well as scoring and assessing the quality of the information to support decision making during disasters.
Presentation delivered by James (Jim) Breaux, PE, MSF, Director, Engineering Operations, Centurion Pipeline Co. at the marcus evans Energy Pipeline Management Summit 2017 held in Dallas, TX
Geographic context analysis of volunteered information (foostermann)
This document discusses using geographic context analysis to assess the credibility and relevance of volunteered geographic information (VGI) during crisis events like forest fires. It outlines a workflow for collecting VGI from social media via APIs, analyzing the text for topicality and geocoding the location, then assessing credibility based on geographic context factors like land cover, population density, and proximity to known hotspots. Case studies on 2010-2011 French forest fires demonstrate the volumes of tweets and photos retrieved and geocoded in near real-time to support crisis response.
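A toy credibility score along these lines might combine the geographic-context factors with fixed weights. The weights and thresholds below are illustrative assumptions, not the author's model:

```python
# Illustrative credibility score for a geocoded forest-fire report,
# combining the geographic-context factors named above. All weights and
# thresholds are assumed values for this sketch.

def credibility(land_cover_is_forest, pop_density, km_to_known_hotspot):
    """Return a score in [0, 1]; higher means more credible."""
    score = 0.0
    if land_cover_is_forest:          # plausible fuel for a forest fire
        score += 0.4
    if pop_density > 10:              # someone plausibly present to observe it
        score += 0.3
    if km_to_known_hotspot < 5:       # near an independently detected hotspot
        score += 0.3
    return score

print(credibility(True, 50, 2))   # all factors agree: high credibility
print(credibility(False, 0, 100)) # no supporting context: low credibility
```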
Validation of services, data and metadata (Luis Bermudez)
Nowadays, organizations make data (e.g. vector data, rasters) available via web services that follow open standards and are easier to integrate with other data. Validation of these services is important to guarantee that clients (e.g. web portals, mobile applications) can properly discover and download the data a user needs. Validation can also serve as a curation process to improve discovery in registries [1][2] or for certification purposes [3]. This session will provide an overview and a demo of the Open Geospatial Consortium (OGC) validation tools. Participants will learn how to invoke a test and how to install the tools in their own environment. The validation tools are used to test servers, data, and clients. The tests can be customized to check implementations not only against OGC standards but also against community profiles. The validation engine and the tests are available as open source on GitHub.
[1] ESIP Discovery Cluster Testbed: Validate and Relate Data & Services - Draft - http://commons.esipfed.org/node/406
[2] Community Inventory of EarthCube Resources for Geosciences Interoperability - http://earthcube.org/group/cinergi
[3] OGC Validation Website - http://cite.opengeospatial.org/teamengine/
The slides of the invited talk Maurizio Marchese from the LiquidPub team gave at the Workshop on Automated Experimentation at the e-Science Institute, Edinburgh, February 24th, 2010.
This interdisciplinary tutorial was presented at the 2017 IEEE International Conference on Data Engineering in San Diego.
Reference:
Andreas Züfle, Goce Trajcevski, Dieter Pfoser, Matthew T. Rice, Matthias Renz, Timothy Leslie, Paul Delamater and Tobias Emrich. Handling Uncertainty in Geo-Spatial Data. 33rd International Conference on Data Engineering (ICDE). 2017.
The OpenQuake Platform provides global databases, hazard and risk models, and products. It uses the OpenQuake modeling tools and engine to generate global and local results from global and local models and databases. The platform is built on the open-source GeoNode, which allows management and publication of geospatial data with social features. It enables users to view, filter, and overlay datasets and to contribute data through a web GIS interface. The full platform is scheduled for release in mid-2014.
TraitCapture: NextGen Monitoring and Visualization from seed to ecosystem (TimeScience)
This document discusses next generation software and hardware for plant phenotyping and ecosystem monitoring. It outlines challenges such as processing and managing large datasets, and optimizing open data sharing. Emerging tools discussed for high resolution field phenotyping include gigapixel imaging, drones, LiDAR, virtual/augmented reality, and sensor networks. A case study is presented of a sensor array installed at the Australian National Arboretum to monitor the environment, tree growth phenotypes, and genotypes over time at high precision across the landscape. The goal is to address fundamental ecological questions by capturing data at finer spatial and temporal resolutions than previously possible.
Crowdsourcing Land Cover and Land Use Data: Experiences from IIASA (Louisa Diggs)
Quantifying Error in Training Data for Mapping and Monitoring the Earth System - A Workshop on “Quantifying Error in Training Data for Mapping and Monitoring the Earth System” was held on January 8-9, 2019 at Clark University, with support from Omidyar Network’s Property Rights Initiative, now PlaceFund.
The National Ecological Observatory Network (NEON) will deploy thousands of sensors across the United States to study ecological change. NEON will establish core sites across 20 ecoclimatic domains to collect standardized long-term data on climate, hydrology, soils, vegetation, and aquatic and terrestrial wildlife. Sensors will include phenocams to monitor plant phenology. Data will be made freely available through the NEON data portal to enable scientists to understand ecological responses to climate change, land use change, and invasive species at a continental scale.
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Similar to: A Real-time System for Detecting Landslide Reports on Social Media using Artificial Intelligence
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Astute Business Solutions | Oracle Cloud Partner |
A Real-time System for Detecting Landslide Reports on Social Media using Artificial Intelligence
1. A Real-time System for Detecting Landslide Reports on Social Media using Artificial Intelligence
Ferda Ofli1, Umair Qazi1, Muhammad Imran1, Julien Roch2, Catherine Pennington3, Vanessa Banks3, Remy Bossu2
1Qatar Computing Research Institute
2European-Mediterranean Seismological Centre
3British Geological Survey
ICWE 2022
Bari, Italy
2. Agenda
• Motivation
• System Design
• Model Development
• System Benchmark
• Real-world Deployment
• Conclusion
4. Motivation
Landslide events are often under-reported and insufficiently documented.
Lack of such important data not only hinders humanitarian aid but also impedes scientific research.
Credit: Petley, D. Geology (2012)
6. Existing Approaches – Citizen Science (I)
NASA Landslide Reporter
Juang et al., “Using citizen science to expand the global map of landslides: Introducing the Cooperative Open Online Landslide Repository”, PLOS ONE 2019.
7. Existing Approaches – Citizen Science (II)
Mobile Applications
Kocaman & Gokceoglu, “A CitSci app for landslide data collection”, Landslides 2019.
Sellers et al., “MARLI: a mobile application for regional landslide inventories in Ecuador”, Landslides 2021.
Not easily scalable, as they require the active participation of volunteers who opt in to use a particular application.
27. Duplicate Filter
• Image features extracted from the penultimate layer of a ResNet-50 model pre-trained on the Places dataset
• Duplicates flagged when the Euclidean distance between feature vectors falls below a threshold
• Threshold tuned on 600 labeled image pairs (460 duplicate / 140 non-duplicate)
28. Junk Filter
• Fine-tune a ResNet-50 model, pre-trained on the ImageNet dataset, using a custom dataset introduced by Nguyen et al. [ISCRAM 2017]
Nguyen et al., “Automatic Image Filtering on Social Networks Using Deep Learning and Perceptual Hashing During Crises”, ISCRAM 2017.
31. Collection of Landslide Images
• Downloaded from Google and Twitter using keywords
• Donated by BGS
32. Labeling Methodology
• Manual annotation by three landslide specialists
• Several rounds of discussion to agree on a labeling methodology
• Computer-vision-based interpretation differs from desk- or field-based landslide identification
Pennington et al., “A near-real-time global landslide incident reporting tool demonstrator using social media and artificial intelligence”, IJDRR 2022.
33. Final Dataset
• Inter-annotator agreement
• Fleiss’ Kappa = 0.58 (moderate, bordering on substantial)
• Percent Agreement = 76%
• Imbalanced class distribution
• 23% landslide vs. 77% not-landslide

               Google   Twitter     BGS    Total
Landslide       1,240       598     852    2,690
Not-landslide   5,044       555   3,448    9,047
Total           6,284     1,153   4,300   11,737

Pennington et al., “A near-real-time global landslide incident reporting tool demonstrator using social media and artificial intelligence”, IJDRR 2022.
35. Landslide Model Training
• Fine-tune a ResNet-50 model, pre-trained on the ImageNet dataset, using the home-grown dataset.
Ofli et al., “Landslide Detection in Real-Time Social Media Image Streams”, arXiv preprint arXiv:2110.04080, 2021.
41. Geolocation Tagger
Qazi et al., “GeoCoV19: A Dataset of Hundreds of Millions of Multilingual COVID-19 Tweets with Location Information”, ACM SIGSPATIAL Special, vol. 12, pp. 6-15, 2020.
42. Performance Evaluation & Benchmarking
• Stress-test the system and understand its scalability
• Latency
• time taken by a module to process a given input load
• Throughput
• number of items processed in a unit time (one second) given an input load
• Critical system components
• Duplicate filter
• Junk filter
• Landslide detector
• Geolocation tagger
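Both metrics can be measured with a simple harness like the following sketch, which assumes each module exposes a process(item) callable (a hypothetical interface for illustration, not the system's actual API):

```python
import time

def benchmark(process, items):
    # Run the module over the whole input load and time it.
    start = time.perf_counter()
    for item in items:
        process(item)
    elapsed = time.perf_counter() - start
    return {
        "latency_s": elapsed,                # time taken to process the given load
        "throughput": len(items) / elapsed,  # items processed per second
    }
```

Repeating the measurement at increasing load sizes reveals where a module's throughput saturates, which is what a stress test of the pipeline components is after.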
47. Real-world Deployment
• Online since February 2020 to monitor the live Twitter stream globally
• 339 multilingual keywords in 32 languages
• February 2020 – December 2021
• Collected more than 54 million tweets and 15 million image URLs
• ~2.5 million image URLs deemed unique and downloaded for further analysis
• ~17,000 images classified as relevant, unique, and landslide-related
• Corresponds to <1% of the collected images
• Highlights the challenging nature of the problem
• ~6,500 landslide reports shared by personal accounts and ~4,500 by organizational accounts
49. Real-world Deployment – Verification
• Randomly sampled 3,600 images processed by the system
• Asked experts to label the sampled images
• System-predicted labels compared to expert annotations
                           True    False
Landslide (positive)        123      39
Not-landslide (negative)   3,395     43
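Reading the table above as 123 true positives, 39 false positives, 43 false negatives, and 3,395 true negatives, the standard metrics follow directly:

```python
# Precision/recall from the verification counts, assuming the table
# reads as TP=123, FP=39, FN=43, TN=3395.
tp, fp, fn, tn = 123, 39, 43, 3395

precision = tp / (tp + fp)  # 123/162 ≈ 0.759
recall = tp / (tp + fn)     # 123/166 ≈ 0.741
f1 = 2 * precision * recall / (precision + recall)  # = 0.750
```

That is, roughly three out of four system-flagged landslide images are confirmed by the experts, and the system recovers a similar fraction of the expert-labeled landslide images.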
68. Conclusion
• An interdisciplinary collaboration between computer scientists (QCRI), seismologists (EMSC), and landslide specialists (BGS)
• The system leverages online social media data in real time to identify landslide reports automatically using state-of-the-art AI techniques:
• Reduces the information overload by eliminating duplicate and irrelevant content
• Identifies landslide images
• Infers their geolocation
• Categorizes the user type (organization or person)
• The real-world deployment demonstrates the system’s effectiveness.
69. Conclusion
• We believe our system can contribute to the harvesting of global landslide data and facilitate further landslide research.
• It can support global landslide susceptibility maps to provide situational awareness and improve emergency response and decision making.
• Next steps:
• Historical data analysis with ground truth from other sources, e.g., BGS, NASA, EM-DAT
• Spatiotemporal detection of events