The document presents ACAV, a project that aims to make videos on the Web more accessible through collaborative annotations. The consortium includes Dailymotion and research groups working on multimedia analysis, the semantic web, and disabilities. The goals are to increase the number of accessible videos through both automatic and manual annotation, and to render annotations in multiple formats depending on users' needs. Preliminary studies explored requirements such as different annotation granularities and outputs, and the use of auditory icons to convey a video's rhythm to blind users. The proposed ACAV system will include annotation schemas, a social network for annotations, integrated speech technologies, and authoring and rendering tools.
Towards Collaborative Annotation for Video Accessibility
1. Towards Collaborative Annotation for Video Accessibility
Pierre-Antoine Champin, Benoît Encelle, Magali O. Beldame, Yannick Prié, Nick Evans and Raphaël Troncy <raphael.troncy@eurecom.fr>
2. The consortium
Dailymotion (Paris, FR): video sharing website; promotes HTML5 using the video tag, http://openvideo.dailymotion.com/
LIRIS (Lyon, FR): CS research group; Silex team: expertise in semantic web, annotation models, video annotation and HCI for disabled people
EURECOM (Sophia Antipolis, FR): research center in communications systems; Multimedia team: expertise in multimedia analysis (speaker diarization/recognition, speech recognition) and semantic web
INS HEA + school (Lyon, FR): experience in physical disabilities: blindness, visual impairment, deafness and hearing loss; blind and deaf high-school students
26/04/2010 - Towards Collaborative Annotations for Video Accessibility - W4A 2010, Raleigh, USA - 2
3. Goals and Motivations
What is required to make video accessible on the Web? How can the number of accessible videos be increased?
Technologies:
Annotating: automatic (speech transcription) and manual (social collaborative annotation tool)
Addressing: pointing to, retrieving and transmitting only parts of media
Rendering: video visualization for the impaired, Braille output
Expected benefits for:
disabled people, getting better access to video
video providers, reaching a wider audience
the Web in general, using semantic annotations
4. Accessibility Features for Visually Impaired and Blind People
[Figure: a video timeline annotated along several tracks - characters (the mother, her son; the son, the man; the man and his friend), the man's actions (puts on his shoes; walks in the street), the son's actions (looks at his mother) and scenery (in the shop; in the street)]
Annotations: multimodal presentation. How annotations are presented depends on the video context and on user preferences: audio track, auditory icons, audio description, Braille.
5. Accessibility Features for Deaf People
[Figure: a video timeline annotated along several tracks - the son's dialogues ("Hi mom"; "Fine and you ?"), the mother's dialogue ("How are you ?") and sounds (car horn)]
Annotation presentation depends on the video context and on user preferences: video track, subtitles, surtitles.
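The preference-dependent presentation described on the last two slides can be sketched as a small selection function. This is a hypothetical illustration: the preference keys and modality names below are our own, not part of ACAV's actual model.

```python
# Hypothetical sketch: choose annotation output channels from user
# preferences, as the slides describe (audio description or Braille for
# blind users, subtitles and surtitles for deaf users). All keys and
# modality names are illustrative assumptions.

def select_modalities(prefs):
    """Map a user's declared needs to annotation rendering channels."""
    outputs = []
    if prefs.get("blind"):
        outputs.append("audio_description")
        if prefs.get("braille_reader"):
            outputs.append("braille")
    if prefs.get("deaf"):
        outputs.extend(["subtitles", "surtitles"])
    return outputs or ["default_video_track"]

# A blind Braille reader gets tactile output alongside audio description:
print(select_modalities({"blind": True, "braille_reader": True}))
```

In a real system the same annotation stream would be rendered once per selected channel (e.g. speech synthesis and a Braille display in parallel), rather than forcing the user into a single modality.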
6. Producing Video Annotations
Automatic annotations:
Speaker diarization: who spoke and when?
Speech recognition: transcription
Social annotations:
Annotation corrections and enhancement
Audio description (for visually impaired)
Example: the automatic transcript (Mother: "How are you ?"; Son: "Ho mom Fine") is corrected and enriched by the crowd into (Mother: "How are you ?"; Son: "Hi mom Fine and you ?"; Sound: car horn).
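The automatic/social pipeline on this slide can be sketched as an overlay of crowd corrections on timed ASR output. This is a minimal sketch under assumed data shapes; the record fields and the merge rule are illustrative, not ACAV's actual annotation schema.

```python
# Sketch: overlay social corrections on automatic timed annotations,
# keyed by (start, end) time span; corrected text replaces the ASR
# output, and new entries (e.g. sound descriptions) are added.
# Field names and merge policy are hypothetical.

def merge_annotations(automatic, corrections):
    """Return automatic annotations with social corrections applied."""
    merged = {(a["start"], a["end"]): dict(a) for a in automatic}
    for c in corrections:
        key = (c["start"], c["end"])
        if key in merged:
            merged[key]["text"] = c["text"]   # fix a recognition error
        else:
            merged[key] = dict(c)             # add a new annotation
    return sorted(merged.values(), key=lambda a: a["start"])

# Automatic output (diarization + speech recognition), with a typical
# ASR error ("Ho mom" instead of "Hi mom"):
asr = [
    {"start": 0.0, "end": 1.2, "speaker": "Son", "text": "Ho mom"},
    {"start": 1.2, "end": 2.5, "speaker": "Mother", "text": "How are you ?"},
]
# Social corrections and enhancements:
fixes = [
    {"start": 0.0, "end": 1.2, "speaker": "Son", "text": "Hi mom"},
    {"start": 2.5, "end": 3.0, "speaker": None, "text": "[Sound] Car horn"},
]
final = merge_annotations(asr, fixes)
```

Keying on the exact time span keeps the example simple; a production system would also have to handle overlapping and partially corrected spans.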
7. Braille Rendering
The Advene prototype emulation views: an enriched media player and a timeline with typed annotations
8. Preliminary study (1/2)
Semi-structured interviews with blind users (n=2):
Participants' habits when watching programs with audio description
Audio description process
Multimodal presentations of descriptions
Requirements:
R1: generate additional descriptions and provide unobtrusive access to descriptions (tactile access for blind Braille readers)
R2: descriptions at various levels of granularity and verbosity
R3: use the system's multimodal output to provide two or more descriptions (e.g. speech synthesis and Braille display)
9. Preliminary study (2/2)
Goal: see whether auditory icons can convey the rhythm of a movie's editing to blind users
e.g. the sound of a locomotive arriving from the right to convey a travelling shot from right to left
Experiment and questionnaires (n=16+9): viewing with headsets of 5 min of Ratatouille, http://www.imdb.com/title/tt0382932/
Results:
Rhythm and movie dynamics better perceived
Auditory icons are useful but must be limited in number (5 max) and be very different from the main soundtrack of the movie
Editing cues: change of scenes, camera movement, flashback (e.g. NCIS)
Audio zoom (e.g. Survivor)
10. ACAV Architecture
Benchmarking of speech recognition engines: Sphinx, HTK, Julius
11. Media Fragments URI
Provide URI-based mechanisms for uniquely identifying fragments of media objects on the Web, such as video, audio, and images.
Photo credit: Robert Freund
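The addressing mechanism above can be illustrated with the temporal dimension of the W3C Media Fragments URI syntax, where `#t=start,end` (times in seconds) identifies a time range of a media resource. The fragment syntax is from the spec; the helper functions below are our own sketch, not part of any library.

```python
# Sketch of W3C Media Fragments URI temporal fragments ("#t=start,end"),
# which let a plain URI address only part of a video. Helper names are
# illustrative; only the "#t=" syntax comes from the specification.

def temporal_fragment(uri, start, end=None):
    """Build a URI identifying the [start, end) time range of a media resource."""
    frag = f"t={start:g}" + (f",{end:g}" if end is not None else "")
    return f"{uri}#{frag}"

def parse_temporal_fragment(uri):
    """Return (start, end) parsed from a '#t=' fragment; end may be None."""
    _, _, frag = uri.partition("#")
    assert frag.startswith("t="), "not a temporal media fragment"
    parts = frag[2:].split(",")
    start = float(parts[0])
    end = float(parts[1]) if len(parts) > 1 else None
    return start, end

# Address seconds 10-20 of a (hypothetical) video resource:
u = temporal_fragment("http://example.com/video.mp4", 10, 20)
# u == "http://example.com/video.mp4#t=10,20"
```

Because the fragment rides on an ordinary URI, an annotation can point at exactly the scene it describes, and a player can retrieve or render just that part.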
13. Conclusion
ACAV will bring:
Dedicated annotation schemas for video accessibility
A social network model for video annotations
Web integration of state-of-the-art speech technologies
GUI models for authoring and rendering video annotations
A Media Fragments reference implementation
An open source Braille plugin for the most used Web browsers
http://www.acavideo.fr/