Web-based semantic browsing of video collections using multimedia ontologies
Marco Bertini, Gianpaolo D'Amico, Andrea Ferracani, Marco Meoni and Giuseppe Serra
Media Integration and Communication Center, University of Florence, Italy
{bertini, damico, ferracani, meoni, serra}@dsi.unifi.it
http://www.micc.unifi.it/
SUBMITTED to ACM MULTIMEDIA 2010 DEMO PROGRAM
ABSTRACT
In this technical demonstration we present a novel web-based tool that allows user-friendly semantic browsing of video collections, based on ontologies, concepts, concept relations and concept clouds. The system is developed as a Rich Internet Application (RIA) to achieve a responsiveness and ease of use that cannot be obtained with other web application paradigms, and uses streaming to access and inspect the videos. Users can also use the tool to browse the content of social and media sharing sites like YouTube, Flickr and Twitter, accessing these external resources through the ontologies used in the system. The tool has won the second prize in the Adobe YouGC contest (http://www.adobeyougc.com/), in the RIA category.
Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Search process; H.3.5 [Information Storage and Retrieval]: Online Information Services - Web-based services
General Terms
Algorithms, Experimentation
Keywords
Video retrieval, browsing, ontologies, web services
MM'10, October 25-29, 2010, Firenze, Italy. Copyright 2010 ACM.

1. INTRODUCTION
Currently, the most common approach to access and inspect a video collection is to use a video search engine. Typically such systems are based on lexicons of semantic concepts, presented as lists or trees, and let users perform keyword-based queries [1]. These systems are generally desktop applications, or have simple web interfaces that show the results of a query as a ranked list of keyframes [2,3]. Video browsing tools are developed aiming more at the summarization of video content, as in [4], where different visual features are used to provide an overview of the content of a single video, or at the suggestion of new query terms, as in [5]. In other approaches, e.g. [6], the content of a video collection is clustered according to some visual features, and users browse the clusters to inspect the various instances of a concept. Like the interfaces of video search engines, these browsing tools are desktop applications or, more rarely, form-based web applications, with relatively limited user interaction and simple presentation of results as lists and tables. Finally, all these systems are designed to work on a single repository of videos, missing the opportunity to exploit the large amount of multimedia data now available on the web from media sharing sites like YouTube and Flickr.
In this demonstration we present a web video browsing system that allows semantic access to video collections of different domains (e.g. broadcast news and cultural heritage documentaries), with advanced visualization techniques derived from the field of Information Visualization [7], with the goal of making large and complex content more accessible and usable to the end-users. The user interface was designed to optimize comprehension of the structure of the ontology used to model a domain, and to integrate diverse information sources within the same presentation. This objective is achieved using a graph representation [8,9], which maximizes data comprehension and relation analysis. The system also uses concept clouds to summarize the content of a collection, a form of data presentation that has now become extremely familiar to web users. Finally, our web system, using the Rich Internet Application (RIA) paradigm, does not require any installation and provides a responsive user interface.
2. THE SYSTEM
The tool provides means to explore archives of different video domains, inspecting the relations between the concepts of the ontology and providing direct access to the video instances of these concepts. The interface aims at bringing some graphical elements typical of web 2.0 interfaces, such as the tag cloud, to the exploration of video archives. The user starts by selecting concepts from a "tag cloud", then inspects the ontology that describes the video domain, shown as a graph with different types of relations, and finally inspects the instances of the concepts that are annotated (see Fig. 1a).
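The "tag cloud" entry point can be sketched as a simple frequency-to-weight mapping; the paper does not specify how concept weights are computed, so the linear scaling, the concept names and the counts below are all illustrative assumptions:

```python
# Sketch of a tag-cloud weighting: the font size grows linearly with the
# number of annotated instances of each concept (hypothetical data).
def tag_cloud_sizes(concept_counts, min_px=12, max_px=36):
    lo, hi = min(concept_counts.values()), max(concept_counts.values())
    span = hi - lo or 1  # avoid division by zero when all counts are equal
    return {
        concept: round(min_px + (count - lo) * (max_px - min_px) / span)
        for concept, count in concept_counts.items()
    }

counts = {"face": 420, "anchorman": 150, "building": 90, "car": 30}
sizes = tag_cloud_sizes(counts)
print(sizes["face"], sizes["car"])  # most frequent concept gets the largest font
```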
Figure 1: Browsing interface: (a) a view of part of the ontology (concepts related to "people"); (b) inspection of some instances of the "face" concept.
Since the presentation of the full graph of the ontology would not be efficient, only the concepts that are nearby the selected one (in terms of number of relations) are shown. Users can select a threshold for this distance, and also different automatic layout algorithms for the spatial positioning of the nodes of the graph, to better understand the concept relations. The "tag cloud" shows the most frequent concepts in the video collection.
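The neighbourhood filter described above can be sketched as a breadth-first search that collects only the concepts within the user-chosen relation distance; the toy graph and threshold below are illustrative, not the actual ontology:

```python
from collections import deque

# Illustrative concept graph: each edge stands for one ontology relation.
RELATIONS = {
    "people":    ["face", "anchorman", "crowd"],
    "face":      ["people"],
    "anchorman": ["people", "studio"],
    "crowd":     ["people"],
    "studio":    ["anchorman", "building"],
    "building":  ["studio"],
}

def neighbourhood(graph, start, threshold):
    """Return the concepts within `threshold` relations of `start` (BFS)."""
    seen = {start: 0}            # concept -> distance from the selected one
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == threshold:
            continue             # do not expand past the distance threshold
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return set(seen)

print(neighbourhood(RELATIONS, "people", 1))  # direct relations only
```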
Results of the browsing are shown in the interface in a compact form, to show both the ontology structure and the video clips. For each video clip that contains an instance of a concept, a thumbnail is shown in a mini video player that lets users immediately watch the video (Fig. 1b). These thumbnails and videos are obtained from the video streaming server, and for each of them programme metadata (such as title, broadcaster, etc.) are also provided. Users can play the video sequence and, if interested, zoom in on each result to watch it in a larger player that provides more details on the video metadata, as shown in Fig. 2. Below every element of the ontology we show buttons to access the related contents of the concept from different sources, distinguished from each other by color. The objective is the enrichment of information with associated material, in order to provide the user with a larger number of information sources, improve information completeness and increase the user's knowledge. Our system uses the publicly available APIs of YouTube as a source of video contents, Flickr for images and pictures, and Twitter for social and real-time content. An example of this functionality is shown in Fig. 3. The interface was designed to let users interact with the presentation of the browsing results, allowing drag & drop of the concepts to correct errors of automatic layout positioning, and delving deeper into the concepts related to a certain element by double clicking on the represented concept. The graph is animated with smooth transitions between different visualizations [10].
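Conceptually, the per-source buttons map a concept label to a query on each external service. The sketch below only builds public search URLs as a stand-in: the actual system calls the YouTube, Flickr and Twitter APIs, and the URL patterns here are assumptions for illustration only:

```python
from urllib.parse import quote_plus

# Hypothetical mapping from a concept label to a search URL per source;
# the real system uses the services' APIs rather than these web URLs.
SOURCE_PATTERNS = {
    "youtube": "https://www.youtube.com/results?search_query={q}",
    "flickr":  "https://www.flickr.com/search/?text={q}",
    "twitter": "https://twitter.com/search?q={q}",
}

def related_content_links(concept):
    """Build one search link per external source for a concept label."""
    q = quote_plus(concept)
    return {name: url.format(q=q) for name, url in SOURCE_PATTERNS.items()}

links = related_content_links("cultural heritage")
print(links["youtube"])
```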
A back-end search engine analyzes the structure of the ontology used to annotate the videos, translating the verbose ontology representation into a more compact and manageable XML representation that can be easily parsed and processed by the user interface, which runs within a browser plugin. The ontology has been created automatically from a flat lexicon, using WordNet to create concept relations (is a, is part of and has part). The co-occurrence relation has been obtained by analysing the ground-truth annotations of the TRECVid 2005 training set. The ontology is modelled following the Dynamic Pictorially Enriched Ontology model [11], which includes both concepts and visual concept prototypes. These prototypes represent the different visual modalities in which a concept can manifest; they can be selected by the users to perform query by example. Concepts, concept relations, video annotations and visual concept prototypes are defined using the standard Web Ontology Language (OWL) so that the ontology can be easily reused and shared. The back-end search engine uses SPARQL, the W3C standard ontology query language, to create simplified views of the ontology structure.
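The co-occurrence relation can be derived roughly as follows: two concepts become related when they are annotated on the same shot often enough. The annotations and the frequency threshold below are made up for illustration; the paper derives the relation from the TRECVid 2005 ground truth:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_relations(shot_annotations, min_count=2):
    """Count concept pairs annotated on the same shot; keep frequent pairs."""
    pairs = Counter()
    for concepts in shot_annotations:
        # sorted() makes the pair order canonical, so (a, b) == (b, a)
        for a, b in combinations(sorted(set(concepts)), 2):
            pairs[(a, b)] += 1
    return {pair for pair, n in pairs.items() if n >= min_count}

# Toy ground truth: each list is the set of concepts annotated on one shot.
shots = [
    ["face", "studio", "anchorman"],
    ["face", "anchorman"],
    ["building", "car"],
    ["face", "crowd"],
]
print(cooccurrence_relations(shots))  # {('anchorman', 'face')}
```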
The system frontend is based on the Rich Internet Application paradigm, using a client-side Flash virtual machine which can execute instructions on the client computer. RIAs can avoid the usually slow and synchronous loop for user interactions, typical of web-based environments that use only the HTML widgets available to standard browsers. This makes it possible to implement a visual querying mechanism that exhibits a look and feel approaching that of a desktop environment, with the fast response that is expected by users. Another advantage of this solution regards the deployment of the application: installation is not required, because the application is updated only on the server; moreover, it can run anywhere, regardless of what operating system is used, provided that a browser with the Flash plugin is available. The user interface is written in the ActionScript 3.0 programming language, using Adobe Flex. The graphical representation of the graph is made using the open source visual analytics framework Birdeye Ravis [12].
The system backend is currently based on open source tools (i.e. Apache Tomcat and the Red5 video streaming server) or freely available commercial tools (Adobe Media Server has a free developer edition). The RTMP video streaming protocol is used. The search engine that provides access to the ontology structure and concept instances is developed in Java and supports multiple ontologies and ontology reasoning services. Audio-visual concepts are automatically annotated using the VidiVideo annotation engine [2] or the automatic annotation tools of the IM3I project (http://www.im3i.eu). To deal with limitations in the number of streaming connections to the streaming server while maintaining a fast interface response, a caching strategy has been adopted. All the modules of the system are connected using HTTP POST, XML and SOAP web services.
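The caching strategy is not detailed in the paper; one plausible sketch is a small LRU cache in front of the streaming server, so repeated thumbnail requests do not open new connections. The capacity and the fetch function below are illustrative assumptions:

```python
from collections import OrderedDict

class ThumbnailCache:
    """Tiny LRU cache: bounds how often the streaming server is contacted."""
    def __init__(self, fetch, capacity=64):
        self.fetch = fetch          # function: keyframe id -> thumbnail data
        self.capacity = capacity
        self.entries = OrderedDict()
        self.misses = 0

    def get(self, keyframe_id):
        if keyframe_id in self.entries:
            self.entries.move_to_end(keyframe_id)   # mark as recently used
            return self.entries[keyframe_id]
        self.misses += 1
        data = self.fetch(keyframe_id)              # hit the streaming server
        self.entries[keyframe_id] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)        # evict least recently used
        return data

cache = ThumbnailCache(fetch=lambda k: f"<thumb {k}>", capacity=2)
cache.get("kf1"); cache.get("kf2"); cache.get("kf1"); cache.get("kf3")
print(cache.misses)  # kf1, kf2 and kf3 each fetched once -> prints 3
```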
Figure 2: Large streaming video player: the user can expand the mini video players to better inspect each instance of the ontology concepts and analyze the video metadata. The video player shows the position of the concept within the whole video.
The system ranked second in the Adobe YouGC contest,
in the Rich Internet Application category.
3. DEMONSTRATION
We demonstrate the browsing functionalities of the system in different video domains: broadcast news and cultural heritage documentaries. We show how to navigate the video collections using the ontology, with its concepts and concept relations, and with the concept clouds. We also demonstrate how the browsing can be expanded from the video collections to include related material from other sources; the same ontology used for video browsing is also used to access videos on YouTube, images on Flickr and tweets on Twitter.
Acknowledgments.
This work was partially supported by the EU IST VidiVideo project (www.vidivideo.info - contract FP6-045547) and the EU IST IM3I project (http://www.im3i.eu/ - contract FP7-222267). The authors thank Nicola Martorana for his help in software development.
4. REFERENCES
[1] A.F. Smeaton, P. Over, and W. Kraaij. High-level feature detection from video in TRECVid: a 5-year retrospective of achievements. Multimedia Content Analysis, Theory and Applications, pages 151-174, 2009.
[2] Cees G. M. Snoek, Koen E. A. van de Sande, Ork de Rooij, Bouke Huurnink, Jasper R. R. Uijlings, Michiel van Liempt, Miguel Bugalho, Isabel Trancoso, Fei Yan, Muhammad A. Tahir, Krystian Mikolajczyk, Josef Kittler, Maarten de Rijke, Jan-Mark Geusebroek, Theo Gevers, Marcel Worring, Dennis C. Koelma, and Arnold W. M. Smeulders. The MediaMill TRECVID 2009 semantic video search engine. In Proceedings of the 7th TRECVID Workshop, Gaithersburg, USA, November 2009.
[3] A. Natsev, J.R. Smith, J. Tešić, L. Xie, R. Yan, W. Jiang, and M. Merler. IBM Research TRECVID-2008 video retrieval system. In Proceedings of the 6th TRECVID Workshop, 2008.

Figure 3: Searching related material in external collections: the user can extend the browsing to other repositories or social sites, seeing the instances of an ontology concept in (a) YouTube, (b) Flickr and (c) Twitter.
[4] Klaus Schoeffmann and Laszlo Boeszoermenyi. Video browsing using interactive navigation summaries. In CBMI '09: Proceedings of the 2009 Seventh International Workshop on Content-Based Multimedia Indexing, pages 243-248, Washington, DC, USA, 2009. IEEE Computer Society.
[5] Thierry Urruty, Frank Hopfgartner, David Hannah, Desmond Elliott, and Joemon M. Jose. Supporting aspect-based video browsing: analysis of a user study. In CIVR '09: Proceedings of the ACM International Conference on Image and Video Retrieval, pages 1-8, New York, NY, USA, 2009. ACM.
[6] W. Bailer, W. Weiss, G. Kienast, G. Thallinger, and W. Haas. A video browsing tool for content management in postproduction. International Journal of Digital Multimedia Broadcasting, 2010.
[7] Stuart K. Card, Jock Mackinlay, and Ben Shneiderman. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann, January 1999.
[8] E. Di Giacomo, W. Didimo, L. Grilli, and G. Liotta. Graph visualization techniques for web clustering engines. IEEE Transactions on Visualization and Computer Graphics, 13(2):294-304, 2007.
[9] Ivan Herman, Guy Melançon, and M. Scott Marshall. Graph visualization and navigation in information visualization: A survey. IEEE Transactions on Visualization and Computer Graphics, 6(1):24-43, 2000.
[10] K. Misue, P. Eades, W. Lai, and K. Sugiyama. Layout adjustment and the mental map. Journal of Visual Languages & Computing, 6(2):183-210, 1995.
[11] Marco Bertini, Alberto Del Bimbo, Giuseppe Serra, Carlo Torniai, Rita Cucchiara, Costantino Grana, and Roberto Vezzani. Dynamic pictorially enriched ontologies for digital video libraries. IEEE MultiMedia, 16(2):42-51, Apr/Jun 2009.
[12] Birdeye information visualization and visual analytics library, http://code.google.com/p/birdeye/.