The DSDI Task has been running at TRECVID to benchmark systems that automatically detect damage features after natural disaster events (e.g., earthquakes, flooding) in video footage captured by Civil Air Patrol and other airborne platforms.
Emerging Communications Tech: Lessons from Hurricane Sandy and Super Typhoon ... - Cisco Crisis Response
This presentation takes a look at two recent Cisco TacOps deployments, Hurricane Sandy and Super Typhoon Haiyan, and examines the emerging communications technologies that are bringing innovation to disasters and humanitarian crises.
The database will serve to inform users on consequences from past events, as a benchmarking tool for analytical loss models and to support the development of tools to create vulnerability data appropriate to specific countries, structures, or building classes.
Efficient and thorough data collection and its timely analysis are critical for disaster response and recovery in order to save people's lives during disasters. However, access to comprehensive data in disaster areas and their quick analysis to transform the data to actionable knowledge are challenging. With the popularity and pervasiveness of mobile devices, crowdsourcing data collection and analysis has emerged as an effective and scalable solution. This paper addresses the problem of crowdsourcing mobile videos for disasters by identifying two unique challenges of 1) prioritizing visual data collection and transmission under bandwidth scarcity caused by damaged communication networks and 2) analyzing the acquired data in a timely manner. We introduce a new crowdsourcing framework for acquiring and analyzing the mobile videos utilizing fine granularity spatial metadata of videos for a rapidly changing disaster situation. We also develop an analytical model to quantify the visual awareness of a video based on its metadata and propose the visual awareness maximization problem for acquiring the most relevant data under bandwidth constraints. The collected videos are evenly distributed to off-site analysts to collectively minimize crowdsourcing efforts for analysis. Our simulation results demonstrate the effectiveness and feasibility of the proposed framework.
Links:
http://infolab.usc.edu/DocsDemos/to_ieeebigdata2015.pdf
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7363814
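The visual awareness maximization itself is not specified in this abstract; under a bandwidth budget it resembles a knapsack problem, and a minimal greedy sketch (with hypothetical awareness scores and video sizes, not values from the paper) could look like:

```python
# Hypothetical sketch: greedily pick videos that maximize total "visual
# awareness" per byte transmitted under a bandwidth budget (a knapsack
# heuristic). Scores and sizes are illustrative, not from the paper.

def select_videos(videos, bandwidth_budget):
    """videos: list of (video_id, awareness_score, size_bytes)."""
    # Rank by awareness gained per byte transmitted, best first.
    ranked = sorted(videos, key=lambda v: v[1] / v[2], reverse=True)
    chosen, used = [], 0
    for vid, awareness, size in ranked:
        if used + size <= bandwidth_budget:
            chosen.append(vid)
            used += size
    return chosen

videos = [("a", 9.0, 40), ("b", 4.0, 10), ("c", 6.0, 50), ("d", 2.0, 5)]
print(select_videos(videos, 60))  # → ['b', 'd', 'a']
```

A real system would also weigh spatial coverage overlap between videos, which the paper's metadata-based model is designed to capture.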
Applications of Deep Learning in Construction Industry - Gaurav Verma
These slides are based on the following two research questions:
1) What are the possible areas where deep learning can be applied in the construction industry?
2) What problems are associated with applying deep learning in the construction industry?
Using Data Integration to Deliver Intelligence to Anyone, Anywhere - Safe Software
Data integration makes it possible to deliver intelligence and keep decision makers, first responders, and civilians informed. For over 20 years, FME has been trusted by federal governments to move data from nearly any source to the target destination, while saving time and budget resources.
With FME, federal governments can deliver open data, improve emergency & disaster response, enhance land management, turn public safety and defense into actionable results, and integrate & deliver location intelligence.
geecon 2013 - Standards for the Future of Java Embedded - Werner Keil
This session highlights how Java Embedded can play a role in the Internet of Things and the Distributed Sensor Web, as well as in related technologies like Smart Home or Automotive. We demonstrate how existing Java standards like JSR 256 (Mobile Sensor API) can be modernized and improved towards a new generation of Java Embedded and Mobile. We take technologies like the IEEE 1451 "Smart Sensor" standard into consideration, as well as OGC standards like SensorML and the Unified Code for Units of Measurement (UCUM), which allow type- and context-safe data transfer using various formats and protocols, whether XML, JSON or M2M-specific protocols like MQTT, alongside new JSRs like 360 (CLDC 8) and 361 (Java ME Embedded).
James Lord, Solutions Consultant at USCAD, and Edward Tallmadge, CEO of CyberCAD, Inc., present "UAV - Unmanned Aerial Vehicles" to ASCE OC on December 10, 2015.
Secured Video Watermarking Based On DWT - Editor IJMTER
Copyright protection and claiming digital rights are major problems for content developers. Content like video, images, and audio is prone to protection violations under many circumstances in the digital world. In this paper we propose a method that proves the copyright of a video by embedding a watermark on selected frames and ensures that the frames are not modified, by performing hashing and then modifying the frames based on Discrete Wavelet Transformations. This method protects the video from many types of attacks, such as frame editing.
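The abstract above describes the scheme only at a high level; a toy sketch of the DWT step, using a single-level Haar transform with watermark bits written into the diagonal detail band, might look as follows. The transform choice, the embedding rule, and the strength constant `alpha` are illustrative assumptions, not the paper's actual method, and the hashing step is omitted.

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2D Haar transform of an even-sized grayscale frame.
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # approximation band
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = a; out[0::2, 1::2] = b
    out[1::2, 0::2] = c; out[1::2, 1::2] = d
    return out

def embed_watermark(frame, bits, alpha=4.0):
    # Overwrite the first len(bits) diagonal-detail coefficients with
    # +/- alpha to carry one watermark bit each.
    ll, lh, hl, hh = haar_dwt2(frame)
    flat = hh.flatten()
    for i, bit in enumerate(bits):
        flat[i] = alpha if bit else -alpha
    return haar_idwt2(ll, lh, hl, flat.reshape(hh.shape))

def extract_watermark(frame, n_bits):
    # Recover bits from the signs of the diagonal-detail coefficients.
    _, _, _, hh = haar_dwt2(frame)
    return [1 if v > 0 else 0 for v in hh.flatten()[:n_bits]]
```

A real scheme would embed in selected frames only and combine this with a frame hash, as the abstract indicates, to detect tampering.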
M4M 2 the Rescue of M2M (Eclipse DemoCamp Trondheim) - Werner Keil
M4M, or Measure 4 Measure: ever since Shakespeare's play of the same name we know people can be mistaken for one another. A Duke (like the beloved Java mascot) claims to be a monk; the head of a dead pirate is presented as that of the young hero. Important information like units of measurement can be misinterpreted in the same way. A human reading 10°C, 10 C or 10 Degree Celsius could interpret each of those well enough, but for M2M communication, unless a program is provided with a large glossary of alternate terms, only ONE of these would be acceptable.
This is where the Unified Code for Units of Measurement (UCUM), among similar approaches like UnitsML, SensorML and a few others, is vital for error-free M2M transactions, not just between sensors or measurement devices, but also and especially between vehicles or distributed devices.
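One way to see the problem concretely: before exchange, free-text unit spellings have to be normalized to a single canonical code. A toy normalizer mapping the three Celsius spellings above to the UCUM code `Cel` might look like this (the alias table is an illustrative stand-in, not the real UCUM vocabulary):

```python
import re

# Toy alias table mapping free-text unit spellings to UCUM codes.
# "Cel" and "[degF]" are real UCUM codes; the alias list is illustrative.
UCUM_ALIASES = {
    "°c": "Cel", "c": "Cel", "degree celsius": "Cel", "celsius": "Cel",
    "°f": "[degF]", "f": "[degF]", "degree fahrenheit": "[degF]",
}

def normalize_reading(text):
    """Split e.g. '10°C', '10 C' or '10 Degree Celsius' into (value, UCUM code)."""
    m = re.match(r"\s*(-?\d+(?:\.\d+)?)\s*(.+?)\s*$", text)
    if not m:
        raise ValueError("unparseable reading: " + text)
    value, unit = float(m.group(1)), m.group(2).strip().lower()
    if unit not in UCUM_ALIASES:
        raise ValueError("unknown unit: " + unit)
    return value, UCUM_ALIASES[unit]

for reading in ("10°C", "10 C", "10 Degree Celsius"):
    print(normalize_reading(reading))  # each → (10.0, 'Cel')
```

With every device emitting the canonical code, the receiving program no longer needs the "large glossary of alternate terms" the paragraph above warns about.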
OSGi Measurement has been around for some time (since R3) but never gained as much momentum as many other OSGi bundles did. Except for very few use cases in the Embedded or Automotive sector it is practically unused, and based on statements by its contributors in the OSGi Alliance it is considered legacy, with no plans to continue development.
After a brief overview of common M2M errors from Gimli to Mars, this session provides an overview of OSGi Measurement and Eclipse UOMo, what they have in common and where the differences lie. Most of today's OSGi containers can deal with units of measurement better and more reliably with UOMo, but the two can, where necessary, also exchange information and collaborate, e.g. if legacy devices and code cannot easily be replaced. For this we'll take a look at interoperability between the two systems and with other unit technologies and languages like F#, Fantom, Python or Lua.
MediaEval 2018: Multimedia Satellite Task: Emergency Response for Flooding Ev... - multimediaeval
Paper: http://ceur-ws.org/Vol-2283/MediaEval_18_paper_10.pdf
Youtube: https://youtu.be/yq1nIPc6dWw
Benjamin Bischke, Patrick Helber, Zhengyu Zhao, Jens de Bruijn, Damian Borth, The Multimedia Satellite Task at MediaEval 2018. Proc. of MediaEval 2018, 29-31 October 2018, Sophia Antipolis, France.
Abstract: This paper provides a description of the MediaEval 2018 Multimedia Satellite Task. The primary goal of the task is to extract and fuse content of events which are present in satellite imagery and social media. Establishing a link from satellite imagery to social multimedia can yield a comprehensive event representation which is vital for numerous applications. Focusing on natural disaster events, the main objective of the task is to leverage the combined event representation within the context of emergency response and environmental monitoring. In particular, our task focuses on flooding events and consists of two subtasks. The first subtask, Image Classification from Social Media, requires participants to retrieve images from social media which show direct evidence of road passability during flooding events. The second subtask, Flood Detection from Satellite Images, aims to extract potentially flooded road sections from satellite images. The task seeks to go beyond state-of-the-art flood map generation by focusing on information about road passability and accessibility to urban infrastructure. Such information shows a clear potential to complement social images with satellite imagery and is of vital importance for emergency management.
Presented by Benjamin Bischke
Disaster response involves rapid integration of diverse data sources in order to derive the spatial information critical to an effective response. This presentation will look at a range of use cases where FME has been used in both disaster response scenarios and practical applications. Safe participated in a recent OGC Testbed that involved a flood disaster response scenario for the San Francisco Bay area. A number of national mapping and environment agencies use FME for managing their flood data, weather data and disaster response resources. Typical disaster response workflows and formats will also be reviewed, including how FME works with DEMs, imagery, time series data and live web feeds to integrate real-time data with base map data and provide the results planners need to respond effectively. FME's strong support for open standards related to emergency response will also be discussed.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/pathpartner/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Praveen Nayak, Tech Lead at PathPartner Technology, presents the "Using Deep Learning for Video Event Detection on a Compute Budget" tutorial at the May 2019 Embedded Vision Summit.
Convolutional neural networks (CNNs) have made tremendous strides in object detection and recognition in recent years. However, extending the CNN approach to understanding of video or volumetric data poses tough challenges, including trade-offs between representation quality and computational complexity, which is of particular concern on embedded platforms with tight computational budgets. This presentation explores the use of CNNs for video understanding.
Nayak reviews the evolution of deep representation learning methods involving spatio-temporal fusion from C3D to Conv-LSTMs for vision-based human activity detection. He proposes a decoupled alternative to this fusion, describing an approach that combines a low-complexity predictive temporal segment proposal model and a fine-grained (perhaps high-complexity) inference model. PathPartner Technology finds that this hybrid approach, in addition to reducing computational load with minimal loss of accuracy, enables effective solutions to these high-complexity inference tasks.
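The decoupled proposal-plus-inference idea can be sketched as a simple gating loop. Both models below are stand-in functions (not PathPartner's actual networks), shown only to illustrate how the cheap stage limits calls to the expensive one:

```python
# Sketch of a decoupled two-stage detector: a cheap temporal-proposal
# model scores every segment, and the expensive fine-grained model runs
# only on segments the proposal stage flags. Stand-in models throughout.

def cheap_proposal_score(segment):
    # Stand-in for a low-complexity model, e.g. mean motion energy.
    return sum(abs(x) for x in segment) / len(segment)

def expensive_event_classifier(segment):
    # Stand-in for the high-complexity inference model (e.g. a 3D CNN).
    return "event" if max(segment) > 5 else "background"

def detect_events(segments, threshold=1.0):
    results = []
    for i, seg in enumerate(segments):
        if cheap_proposal_score(seg) >= threshold:  # gate the heavy model
            results.append((i, expensive_event_classifier(seg)))
    return results

segments = [[0, 0, 1], [2, 6, 4], [0, 1, 0], [3, 3, 3]]
print(detect_events(segments))  # → [(1, 'event'), (3, 'background')]
```

The compute saving comes from the gate: the heavy model runs on only a fraction of segments, which is the trade-off the talk targets for embedded budgets.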
The MSUM Task has been running at TRECVID to benchmark systems working on video summarization. In this context, a system should summarize the key events of a given character across a whole movie.
The ActEV Task has been running at TRECVID to benchmark systems working on automatically detecting activities in long surveillance videos. Activities involve objects, humans, and their combinations. Data comes from the MEVA dataset.
More Related Content
Similar to Disaster Scene Indexing at TRECVID 2022
Multimedia Understanding at TRECVID 2022 - George Awad
The DVU Task has been running at TRECVID to benchmark systems working on automatically understanding whole movies and answering queries at the movie and scene levels, covering relationships, interactions, sentiments, and summaries. The task uses real licensed movies.
The AVS Task has been running at TRECVID to benchmark systems working on automatically retrieving relevant video shots to satisfy textual queries composed of persons, actions, locations, and any combinations of them.
The VTT Task has been running at TRECVID to benchmark systems working on automatically describing short videos in one sentence (i.e., video captioning). Videos come from the Vimeo Creative Commons dataset (V3C).
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End-to-end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
Disaster Scene Indexing at TRECVID 2022
1. Disaster Scene Description and
Indexing (DSDI) Task Overview
TRECVID 2022
Asad Anwar Butt, National Institute of Standards and Technology; Johns Hopkins University
George Awad, National Institute of Standards and Technology
Jeffrey Liu, MIT Lincoln Laboratory
William Drew, Office of Homeland Security and Preparedness
2. • Video and imagery data can be extremely helpful for public
safety operations.
• Natural Disasters, e.g.,
• Wildfires
• Hurricanes
• Earthquakes
• Floods
• Man-made Disasters, e.g.,
• Hazardous material spills
• Mining accidents
• Explosions
Introduction - DSDI
* All images are under Creative Commons licenses
3. • Prior knowledge about affected areas can be very useful for
the first responders.
• Oftentimes, the communication systems go down in major
disasters, which makes it very difficult to get any information
regarding the damage.
• Automated systems to gather information before rescue
workers enter the area can be very helpful.
Introduction - DSDI
4. • Computer vision capabilities have rapidly advanced recently
with the popularity of deep learning.
• Research groups have access to large image and video datasets for various tasks.
• However, the capabilities do not meet public safety needs.
• Lack of relevant training data.
• Most current image and video datasets have no public safety
hazard labels.
• State-of-the-art systems trained on such datasets fail to provide helpful labels.
Introduction - DSDI
5. • In response, MIT Lincoln Laboratory developed a dataset of images
collected by the Civil Air Patrol during various natural disasters.
• The Low Altitude Disaster Imagery (LADI) dataset was developed as
part of a larger NIST Public Safety Innovation Accelerator Program
(PSIAP) grant.
• Two key properties of the dataset are:
• Low altitude
• Oblique perspective of the imagery and disaster-related features.
• The DSDI test data and ground truth from 2020 & 2021 are also
available for teams to use as training data.
Training Dataset
6. • LADI Dataset:
• Hosted as part of the AWS Public Dataset program.
• Consists of 20,000+ annotated images.
• The images are from locations with FEMA (Federal Emergency Management
Agency) major disaster declarations for a hurricane or flooding.
• Lower altitude criteria distinguish the LADI dataset from satellite datasets to support
the development of computer vision capabilities with small drones operating at low
altitudes.
• A minimum image size was selected to maximize the efficiency of the crowdsource
workers; lower-resolution images are harder to annotate.
• 2020 - 2021 DSDI Test Set:
• ~ 11 hours of video.
• Segmented into small video clips (shots) of maximum 20 sec.
• Videos are from earthquake, hurricane, and flood affected areas.
• Total number of shots: 4626
Training Dataset
7. • A test dataset of about 6 hours of video was distributed for this task.
• Collected by FEMA as individual images after disaster events.
• Individual images were stitched to form videos with reasonable
speed.
• The test dataset was segmented into small video clips (shots) of a
maximum of 16 sec, with a mean length of 10 sec.
• A subset was selected by NIST, taking diversity into account.
• In addition, a small set of videos was collected from Defense Visual
Information Distribution Service (DVIDS): https://www.dvidshub.net/
• Total number of shots: 2157
Test Dataset
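As a rough illustration of the segmentation step described above (not the organizers' actual tooling), the following sketch splits a video's duration into consecutive shots of at most a given maximum length:

```python
# Illustrative sketch: splitting a video's duration into consecutive
# shot boundaries of at most max_len seconds, as in the DSDI test set
# (shots capped at 16 sec). The function and parameters are hypothetical.

def segment_shots(duration, max_len=16.0):
    """Return (start, end) times covering [0, duration] in chunks of <= max_len sec."""
    shots, start = [], 0.0
    while start < duration:
        end = min(start + max_len, duration)
        shots.append((start, end))
        start = end
    return shots

print(segment_shots(40.0))  # [(0.0, 16.0), (16.0, 32.0), (32.0, 40.0)]
```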
9. • Hierarchical labeling scheme: 5 coarse categories, each with 4 to 9
more specific annotations.
Testing Dataset - Categories
• Damage: Misc. Damage, Flooding/Water Damage, Landslide, Road Washout, Rubble/Debris, Smoke/Fire
• Environment: Dirt, Grass, Lava, Rocks, Sand, Shrubs, Snow/Ice, Trees
• Infrastructure: Bridge, Building, Dam/Levee, Pipes, Utility or Power Lines/Electric Towers, Railway, Wireless/Radio Communication Towers, Water Tower, Road
• Vehicles: Aircraft, Boat, Car, Truck
• Water: Flooding, Lake/Pond, Ocean, Puddle, River/Stream
10. • We had two full-time annotators (the same annotators since 2020) instead
of crowdsourcing.
• For each category, a practice web page was created with multiple
examples and sample test videos.
• This allowed the annotators to become familiarized with the task and
labels before starting a category.
• The annotators worked independently on each category.
• For each coarse category, they marked all the specific labels that
were present in the video.
• To create the final ground truth, for each shot, the union of labels
was used.
Annotation
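The union-of-labels step described above can be sketched as follows. This is a minimal illustration with hypothetical data shapes, not the organizers' annotation pipeline:

```python
# Minimal sketch: combining two annotators' per-shot labels by set union,
# as described for the DSDI ground truth. Shot IDs and labels are made up.

def merge_annotations(annotator_a, annotator_b):
    """Each argument maps shot_id -> set of feature labels; returns their union per shot."""
    merged = {}
    for shot_id in set(annotator_a) | set(annotator_b):
        merged[shot_id] = annotator_a.get(shot_id, set()) | annotator_b.get(shot_id, set())
    return merged

a = {"shot_001": {"flooding", "building"}, "shot_002": {"trees"}}
b = {"shot_001": {"flooding", "road"}}
ground_truth = merge_annotations(a, b)
# ground_truth["shot_001"] == {"flooding", "building", "road"}
```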
11. The annotators watch the video and mark the categories that are visible in the video.
Annotation Tool
12. • Systems are required to return a ranked list of up to 1000 shots for
each of the 32 features.
• Each submitted run specified its training type:
• LADI-based (L): The run only used the supplied LADI dataset for development of its system.
• Non-LADI (N): The run did not use the LADI dataset, but only trained using other dataset(s).
• LADI + Others (O): The run used the LADI dataset in addition to any other dataset(s) for
training purposes.
DSDI System Task
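The required output format above — a ranked list of up to 1000 shots per feature — can be sketched from per-shot confidence scores. The scores and shot IDs here are hypothetical:

```python
# Illustrative sketch: turning per-shot confidence scores for one feature
# into the ranked list of up to 1000 shots that a DSDI run submits.

MAX_SHOTS = 1000

def rank_shots(scores, limit=MAX_SHOTS):
    """scores: dict shot_id -> confidence for one feature; returns top shots, best first."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:limit]

scores = {"shot_3": 0.91, "shot_1": 0.42, "shot_2": 0.77}
print(rank_shots(scores))  # ['shot_3', 'shot_2', 'shot_1']
```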
13. • The following evaluation metrics were used to compare the
submissions:
Evaluation Metrics
• Speed: Clock time per inference (reported by participants).
• Mean Average Precision (MAP): Average precision is calculated for each feature, and the mean average precision is reported for a submission.
• Recall: True positive, true negative, false positive, and false negative rates.
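As a hedged sketch of the MAP metric above — standard non-interpolated average precision over a ranked list, averaged across features; the exact trec_eval variant used by TRECVID may differ in details:

```python
# Sketch of non-interpolated average precision (AP) over a ranked shot list,
# and MAP as its mean over features. Example shot IDs are hypothetical.

def average_precision(ranked, relevant):
    """AP of a ranked list against a set of relevant shot IDs."""
    hits, precision_sum = 0, 0.0
    for i, shot in enumerate(ranked, start=1):
        if shot in relevant:
            hits += 1
            precision_sum += hits / i  # precision at each relevant rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs_by_feature, truth_by_feature):
    """MAP: mean of per-feature AP values."""
    aps = [average_precision(runs_by_feature[f], truth_by_feature[f])
           for f in truth_by_feature]
    return sum(aps) / len(aps)

ranked = ["s1", "s2", "s3", "s4"]
relevant = {"s1", "s3"}
print(round(average_precision(ranked, relevant), 3))  # (1/1 + 2/3)/2 = 0.833
```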
14. Submissions
Run Type Run Id
O PKU_WICT_7
O PKU_WICT_2
O PKU_WICT_4
O PKU_WICT_5
L PKU_WICT_6
L PKU_WICT_3
L PKU_WICT_1
L PKU_WICT_8
L UMKC_1
O UMKC_1
PKU_WICT : Peking University
UMKC : University of Missouri-Kansas City
15. Frequency of Features
• Graph shows number of shots containing each feature.
• Some features (e.g. grass, trees, buildings, roads) occur much more frequently than others.
• 4 features were dropped due to their rare occurrence in the ground truth (lava, snow/ice, landslide, road washout).
[Bar chart: "True Positives per feature" — frequency of shots per feature (damage (misc), flooding, rubble/debris, smoke/fire, dirt, grass, rocks, sand, shrubs, trees, bridge, building, dam/levee, pipes, utility/power lines, railway, wireless communication towers, water tower, aircraft, boat, car, truck, flooding, lake/pond, ocean, puddle, river/stream, road); y-axis: frequency of shots, 0 to 2200.]
16. Results by Teams
Performance across all features (Mean Average Precision):
LADI+Others-based Systems:
PKU_WICT_7: 0.501 | PKU_WICT_2: 0.500 | PKU_WICT_4: 0.482 | PKU_WICT_5: 0.429 | UMKC_1: 0.354
LADI-based Systems:
PKU_WICT_6: 0.468 | PKU_WICT_3: 0.468 | PKU_WICT_1: 0.465 | PKU_WICT_8: 0.423 | UMKC_1: 0.354
18. • Average precision values for each feature, categorized by training type.
• 5 LADI-based runs; 5 LADI+Others-based runs.
Results by Features
[Two charts — "LADI-based runs - by feature" and "LADI+Others-based runs - by feature" — showing min, median, and max average precision (0 to 1.2) per feature: damage, flooding, rubble, smoke, dirt, grass, rocks, sand, shrubs, trees, bridge, building, dam, pipes, utility line, railway, wireless tower, water tower, aircraft, boat, car, truck, flooding, lake, ocean, puddle, river, road.]
19. Efficiency
[Two scatter plots of processing time (sec, 0 to 800) vs. average precision (0 to 1.2), one for LADI-based runs and one for LADI+Others-based runs; labeled points include O_UMKC_1, L_UMKC_1, and L_PKU_WICT_8.]
• LADI-based systems reported less processing time.
• The majority of systems consumed more time without a corresponding gain in performance.
• The lowest processing time was ~30 sec, achieved at maximum performance.
21. • A new test dataset drawn from various event sources was employed,
representing more diversity.
• Performance varies by feature.
• LADI+Others runs performed better than LADI-only runs.
• A few runs/features are both accurate and efficient.
• Challenges include:
• Small dataset and limited resources for annotation.
• Training and testing datasets should come from the same distribution, which is
hard to achieve given the differing nature of calamities.
• The task had less participation compared to the last 2 years.
• Open question to teams about continuation of the task.
Conclusion and Future Work