IBC Futurezone 2012 - ON:meedi:a presents flexible media management and publishing
IBC 2012 took place from 07 to 11 September 2012 in Amsterdam with a conference and exhibition for professionals engaged in the creation, management and delivery of electronic media and entertainment content worldwide.

At the Future Zone we presented the poster "Flexible media management and publishing", showcasing the latest developments of the ON:meedi:a ecosystem.

Published in Technology

Transcript

  • 1. Flexible media management and publishing
    Alexandru Stan (as@in-two.com), George Ioannidis (gi@in-two.com)
    IN2 search interfaces development Ltd, UK

    Abstract
    This poster showcases state-of-the-art research results on flexible media management and publishing tools. The service is based on a service-oriented architecture which, through its distributed design, reduces integration costs and provides better scalability. At the core of the system resides a powerful media-processing framework which not only considers the individual media modalities but also fuses the results of each processor, enabling powerful multimodal analysis and topic detection and tracking, which provide for richer and better annotation of the multimedia content. The interface layer exposes a number of tools for semi-automatic, time-based annotation and semantic-aware search. Content aggregation and smooth multimedia interaction are enabled by a publishing environment that exploits repository content through the authoring of flexible user interfaces, whereby each interface element can be fully configured regarding its layout, the data element it represents, and the functionality it provides.

    Challenges from creative professionals
    ▪ Flexible solutions;
    ▪ Meaningful and fast annotation tools;
    ▪ Semi-automatic systems allowing the editing of automatically inserted annotations;
    ▪ Manage collections from one place;
    ▪ Make publishing collections easy;
    ▪ Provide search beyond text and metadata;
    ▪ Minimise costs;
    ▪ Make it web-based.

    Move away from the traditional monolithic solution and build systems based on Service Oriented Architectures:
    ▪ Build workflows by chaining pipelines
    ▪ Make everything run in a web browser
    For better content-based analysis, make use of multimodal fusion of extracted descriptors. Build a CMS-like authoring environment to publish collections directly from the repository, with full flexibility in terms of layouts and data objects used.

    Admin Components
    Monitor and manage processing pipelines in real time and distribute the computing load to different machines.

    Multimedia handling
    Automatic multimedia workflows are constructed from user-definable processing pipelines. An example of pipeline chaining is shown on the poster.

    Contact: info@in-two.com | @in2_tweet | http://in-two.com
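The poster itself contains no code, but the "build workflows by chaining pipelines" idea can be sketched minimally. The sketch below is an illustration only, not the ON:meedi:a API: the processor names (`segment`, `classify`) and the dict-based annotation record are hypothetical stand-ins for the real media-analysis services.

```python
from typing import Callable, List

# A media processor is any callable that takes an annotation record and
# returns an enriched copy; chaining processors forms a processing pipeline.
Processor = Callable[[dict], dict]

def chain(processors: List[Processor]) -> Processor:
    """Compose processors into a single pipeline, applied left to right."""
    def pipeline(item: dict) -> dict:
        for process in processors:
            item = process(item)
        return item
    return pipeline

# Hypothetical processors standing in for real media-analysis services.
def segment(item: dict) -> dict:
    # Pretend we detected two temporal segments in the media item.
    return {**item, "segments": [(0.0, 4.2), (4.2, 9.7)]}

def classify(item: dict) -> dict:
    # Pretend a classifier labelled the content of those segments.
    return {**item, "labels": ["speech", "music"]}

analyse = chain([segment, classify])
result = analyse({"uri": "clip.mp4"})
```

Because each stage only consumes and returns a plain record, individual processors can be swapped or re-ordered without touching the others, which is the property the service-oriented design relies on.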
  • 2. Video
    Visual content analysis is performed on two levels: syntactic (low-level features, e.g. colour of the pixels, texture, and motion) and semantic (object or event detection from combining low- and mid-level features). Low-level features used: MPEG-7 (SCD, CLD, EHD), video segment, dominant HSV colour, dominant scene motion, camera motion. Semantic annotation of images and videos is performed using a set of concept detectors that have been trained for specific concepts and that can be trained by end users according to their needs and video domains. Concept detectors are selected based on performance and flexibility.
    Semantic analysis is independent from syntactic analysis, and thus the services can be executed in parallel to exploit the distributed SOA architecture.
    Web-based interfaces expose rich functionality:
    ▪ Multi-user support with different provider-defined access levels (reflecting the actors involved);
    ▪ Automatic semantic annotation with support for user validation of generated annotations;
    ▪ Ontology-powered backend that can also handle keywords and free-text annotations;
    ▪ Support for geo-metadata.

    Audio
    ▪ Audio Segmentation (audio segmentation, audio classification, speaker identification);
    ▪ Audio Language Identification Media Processor (EN, DE, ES, PT);
    ▪ Audio Transcription Media Processor using automatic speech recognition.

    Conclusion
    The R&D work shows new possibilities for:
    ▪ Customised, configurable and flexible media processing and analysis thanks to a service-oriented backend;
    ▪ Web-based multi-user annotation of media segments with free text, keywords and formalised knowledge;
    ▪ Fast and easy publishing of media collections on the web, always synced with the repository backend;
    ▪ Multiple presentation views on content collections.

    Searching and Browsing
    Perform complex queries using different query modalities (free text, semantic concepts using Boolean and temporal relations, and query by visual example). Query results stream videos from the time-code where the search criteria were found.

    Authoring and Publishing
    User-friendly interface which enables content holders to import a media collection, extend its characteristics and publish elaborate workflow patterns in order to enrich, search and view the contents of a collection.

    Contact: Alexandru Stan, Programme Manager
    IN2 search interfaces development Ltd
    as@in-two.com | @in2_tweet | http://in-two.com

    ON:meedi:a, the ecosystem where media lives
    info@onmeedia.com | @onmeedia | http://onmeedia.com

    Acknowledgement: The ON:meedi:a ecosystem is a further development of the IM3I project, and its demonstration and deployment is co-financed by the IM3I+ project. The research leading to these results has received funding from the European Union's FP7 Seventh Framework Programme, managed by the REA (Research Executive Agency), under grant agreements No 222267 (IM3I) and No 286838 (IM3I+). RTD work on multimedia fusion and audio analysis has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013), managed by the REA, under grant agreement No 262428 (euTV).
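The point that semantic analysis is independent of syntactic analysis, so the two services can run in parallel, can be sketched with standard-library concurrency. This is an illustrative sketch under assumed names (`syntactic_analysis`, `semantic_analysis` are hypothetical stand-ins, not the actual ON:meedi:a services):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the two independent analysis services.
def syntactic_analysis(uri: str) -> dict:
    # Low-level feature extraction (colour, texture, motion).
    return {"features": ["colour", "texture", "motion"]}

def semantic_analysis(uri: str) -> dict:
    # Concept detection producing semantic labels.
    return {"concepts": ["outdoor", "crowd"]}

def analyse(uri: str) -> dict:
    # The two analyses share no state, so they are dispatched concurrently,
    # mirroring parallel service calls in a distributed SOA deployment.
    with ThreadPoolExecutor(max_workers=2) as pool:
        syntactic = pool.submit(syntactic_analysis, uri)
        semantic = pool.submit(semantic_analysis, uri)
        return {**syntactic.result(), **semantic.result()}

annotations = analyse("clip.mp4")
```

In a real deployment the two submits would be remote service requests rather than local functions, but the fusion step, merging the independently produced annotation sets, has the same shape.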