Authors/Presenters: Vasileios Mezaris and Benoit Huet.
Video hyperlinking is the introduction of links that originate from pieces of video material and point to other relevant content, be it video or any other form of digital content. The tutorial presents the state of the art in video hyperlinking approaches and in relevant enabling technologies, such as video analysis and multimedia indexing and retrieval. Several alternative strategies, based on text, visual and/or audio information are introduced, evaluated and discussed, providing the audience with details on what works and what doesn’t on real broadcast material.
Fast object re-detection and localization in video for spatio-temporal fragment creation (LinkedTV)
Fast object re-detection and localization in video for spatio-temporal fragment creation, Jul. 2013, San Jose, California, USA. Talk provided by Vasileios Mezaris.
Semantics at the multimedia fragment level, SSSW 2013 (Raphael Troncy)
"Semantics at the multimedia fragment level or how enabling the remixing of online media" - Invited Talk given at the Semantic Web Summer School (SSSW), 12 July 2013
An Ad-hoc Smart Gateway Platform for the Web of Things, IEEE iThings 2013 Bes... (Darren Carlson)
The Web of Things (WoT) aims to extend the Web into the physical world by promoting the adoption of Web protocols by situated services and smart objects (ambient artifacts). However, real-world ambient artifacts often adopt proprietary and/or non-Web protocols, making them invisible to Web search engines and inaccessible to conventional Web agents. Smart Gateways have been proposed as a way to “Web-enable” proprietary ambient artifacts through intermediary proxy nodes; however, the requisite infrastructure is difficult to deploy at Web scale. To address such challenges, we are developing Ambient Dynamix (Dynamix): a plug-and-play context framework for mobile devices, which enables Web agents to interoperate with non-Web ambient artifacts – directly from the browser. In this paper, we describe how Dynamix can be used to transform the user’s device into an ad-hoc Smart Gateway in-situ, enabling Web applications (in the device’s browser) to seamlessly interact with non-Web ambient artifacts in the physical environment. We describe an operational prototype implementation, which enables Web apps to discover and control nearby UPnP and AirPlay media devices uniformly. We also present a performance evaluation that indicates the prototype imposes low processing and memory overhead, and is suitable for deployment on many commodity mobile devices.
AV Relations: Search and Contextualisation in LinkedTV and AXES (LinkedTV)
The talk was delivered by Lotte Belice Baltussen (Sound and Vision) at the iMMovator Cross Media Café "Uit het lab", 12 February 2013, at the Media Park in Hilversum, The Netherlands.
For more information, please visit: http://bit.ly/10gYc7L
LinkedTV: Engaging TV viewers with AudioVisual heritage on second screens (EUscreen)
'LinkedTV. Engaging TV viewers with AudioVisual heritage on second screens' by Lyndon Nixon (MODUL University, Vienna) and Lotte Belice Baltussen (Sound and Vision, Hilversum) - a presentation held at EUscreenXL Rome Conference 'From Audience to User: Engaging with Audiovisual Heritage Online' (http://blog.euscreen.eu/conference-programme).
How Open Data Can Enhance Interactive Television (LinkedTV)
The presentation was delivered by Lyndon Nixon, STI International Consulting and Research GmbH, Austria, during the ngnlab.eu Workshop (http://ngnlab.eu/index.php/ngnlabeu-workshop), held in Bratislava on September 20th, 2012. The workshop was co-located with the 5th joint IFIP Wireless and Mobile Networking Conference (WMNC 2012, http://wmnc.fiit.stuba.sk).
The purpose of the workshop was to bring together researchers and experts from academia and industry from Germany, the Netherlands, Spain, Austria and Slovakia.
Remixing Media on the Semantic Web (ISWC 2014 Tutorial), Pt 1: Media Fragment S... (LinkedTV)
In this session we will introduce the W3C Media Fragment URI specification, highlighting how media fragments can be incorporated into known media description schema, with a focus on the W3C Media Ontology and the Open Annotation Model. We will also discuss extensions to these ontologies to more richly link media fragments to the concepts they represent, re-using Linked Data as a Web-wide knowledge graph about concepts. We will briefly demonstrate various approaches to visual, audio and textual analysis in order to generate meaningful media fragments out of a media resource, as well as look at available annotation tools for semantically describing online media. Finally, we show how existing text around media (subtitles, transcripts) can be used for fragment annotation through Named Entity Recognition services (NERD) and a combined approach for generating a semantic description of media from analysis, metadata and entity recognition (TV2RDF).
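As a concrete illustration of the Media Fragment URI specification discussed above, the temporal dimension of a fragment identifier (e.g. #t=10,20) can be parsed with a few lines of code. This is an illustrative sketch, not part of the tutorial material; it handles only the plain-seconds form of Normal Play Time, not the hh:mm:ss variants the specification also allows, and the resource URL is hypothetical.

```python
from urllib.parse import urlparse, parse_qs

def parse_temporal_fragment(uri):
    """Parse the temporal dimension (#t=...) of a W3C Media Fragment URI.

    Returns (start, end) in seconds; a missing start defaults to 0,
    a missing end means "until the end of the media".
    """
    fragment = urlparse(uri).fragment
    params = parse_qs(fragment)
    if "t" not in params:
        return None
    value = params["t"][0]
    if value.startswith("npt:"):          # optional Normal Play Time prefix
        value = value[len("npt:"):]
    start_s, _, end_s = value.partition(",")
    start = float(start_s) if start_s else 0.0   # default: start of media
    end = float(end_s) if end_s else None        # default: end of media
    return (start, end)

# Hypothetical resource; the fragment selects seconds 10-20 of the video.
print(parse_temporal_fragment("http://example.org/video.mp4#t=10,20"))  # (10.0, 20.0)
print(parse_temporal_fragment("http://example.org/video.mp4#t=npt:5"))  # (5.0, None)
```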
Remixing Media on the Semantic Web (ISWC 2014 Tutorial), Pt 2: Linked Media: An... (LinkedTV)
The second session looks at how using Linked Data principles for media fragment annotation publication and retrieval (Linked Media) can enable online media fragment re-use:
- Introducing the Linked Media principles
- Publishing Linked Media using dedicated multimedia RDF repositories
- Retrieval of media resources that illustrate Linked Data concepts
- Using the Linked Data graph to find relevant links between distinct media assets (examples with SPARQL)
- Retrieval of links between annotated media to enable topical browsing (using the TVEnricher service)
- Examples of Linked Media at scale: VideoLyzard and HyperTED
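The SPARQL-based linking of distinct media assets mentioned in the session can be sketched as follows. In a real deployment the annotations would live in an RDF repository and the join would be a SPARQL query (shown in the comment); here the same logic runs over an in-memory list of (media fragment, concept) annotations whose URIs are illustrative, not real LinkedTV data.

```python
# Against an RDF store, the pairing below would be a SPARQL query such as:
#
#   SELECT ?frag1 ?frag2 ?concept WHERE {
#       ?frag1 oa:hasBody ?concept .
#       ?frag2 oa:hasBody ?concept .
#       FILTER (STR(?frag1) < STR(?frag2))
#   }
#
# Hypothetical annotations: media fragments paired with Linked Data concepts.
annotations = [
    ("http://example.org/videoA.mp4#t=10,20", "http://dbpedia.org/resource/Berlin"),
    ("http://example.org/videoB.mp4#t=65,90", "http://dbpedia.org/resource/Berlin"),
    ("http://example.org/videoC.mp4#t=0,15",  "http://dbpedia.org/resource/Hilversum"),
]

def shared_concept_links(annotations):
    """Return (frag1, frag2, concept) triples for distinct media fragments
    annotated with the same Linked Data concept."""
    links = []
    for f1, c1 in annotations:
        for f2, c2 in annotations:
            if c1 == c2 and f1 < f2:    # same concept, fragments ordered once
                links.append((f1, f2, c1))
    return links

for link in shared_concept_links(annotations):
    print(link)
```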
Implementation of Hyperlinks in Videos with HTML5 (LinkedTV)
This presentation was given by Rolf Fricke, Condat AG, during Xinnovations 2012 at Humboldt University in Berlin on September 11th, 2012.
The main objective of the LinkedTV project is the integration of hyperlinks in videos, opening up new possibilities for the interactive, seamless use of video on the Web. One challenge is the placement of tags and hyperlinks above the video layer, which should be closely associated with the underlying media fragments for the persons or objects shown in the video. As the media fragments dynamically appear, move and disappear, precise synchronization of the overlays with their related media fragments is needed. We plan to implement these features, together with further user interface features, on the basis of the HTML5 video element, CSS3 and web workers. As we target WebTV as well as broadcast TV, we plan to provide a restricted HbbTV 1.1 implementation for TV sets, but we ultimately expect to profit from HTML5 integration in upcoming HbbTV releases.
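The synchronization challenge described above reduces to looking up, at each playback instant, which annotations' media fragments cover the current time; in an HTML5 player this lookup would run on the video element's timeupdate event. The following is a minimal language-agnostic sketch of that core logic (the Overlay fields and sample data are hypothetical, not LinkedTV's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Overlay:
    label: str      # e.g. a person or object shown in the video
    start: float    # media fragment start, seconds into the video
    end: float      # media fragment end, seconds
    x: float        # normalised overlay position (0..1), left edge
    y: float        # normalised overlay position (0..1), top edge

def active_overlays(overlays, current_time):
    """Return the overlays whose media fragment covers current_time.

    In an HTML5 player this would run on each 'timeupdate' event of the
    video element, showing/hiding absolutely-positioned elements above it.
    """
    return [o for o in overlays if o.start <= current_time < o.end]

# Hypothetical annotations for one video:
overlays = [
    Overlay("Presenter", 0.0, 12.5, 0.2, 0.3),
    Overlay("Brandenburg Gate", 8.0, 30.0, 0.6, 0.5),
]
print([o.label for o in active_overlays(overlays, 10.0)])  # both fragments cover t=10
```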
Survey of Semantic Media Annotation Tools - towards New Media Applications wi... (LinkedTV)
Semantic annotation of media resources has been a research focus for many years, with the closing of the "semantic gap" seen as key to significant improvements in media retrieval and browsing and to enabling new media applications and services. However, current tools and services exhibit varied approaches which do not easily integrate, and this acts as a barrier to wider uptake of semantic annotation of online multimedia.
In this paper, we outline the Linked Media principles, which can help form a consensus on media annotation approaches, survey current media annotation tools against these principles, and present two emerging toolsets which can support Linked Media conformant annotation. We close with a call for future semantic media annotation tools and services to follow the same principles and so ensure the growth of a Linked Media layer of semantic descriptions of online media, which can enable richer future online media services.
LinkedTV - an added value enrichment solution for AV content providers (LinkedTV)
Linked Television offers audiovisual content owners a solution to semi-automatically enrich media with links to additional information and content related to objects and topics in the programme, and to build client applications which access this data and provide new added-value services to consumers.
A short talk on a concept for a linked television service over HbbTV. The service is being developed and tested within the EU-funded research and development project LinkedTV (www.linkedtv.eu), among others at the broadcaster Rundfunk Berlin-Brandenburg (rbb), in cooperation with international partners.
TV newscasts report on the latest event-related facts occurring in the world. Relying exclusively on them is, however, insufficient to fully grasp the context of the story being reported. In this paper, we propose an approach that retrieves and analyzes related documents from the Web to automatically generate semantic annotations that provide viewers and experts with comprehensive information about the news. Using different Semantic Web and information retrieval techniques, we generate what we call a Semantic Snapshot of a Newscast (NSS).
An introduction to HbbTV (Hybrid Broadcast Broadband TV). What is it? How does it work? Red button and #HbbTV service examples.
Presented at Thailand's Engineering Expo November 29, 2014 and at Thailand's Set-top box Committee's Public Hearing at Thailand's Engineering Institute (EIT) November 20, 2014.
View our MVNO SERVICE PRESENTATION for an easy read on some of the MVNO services we offer.
★ MVNO/MNO NEGOTIATIONS
✓ Negotiation of MVNO wholesale agreement
✓ Negotiation of additional terms
✓ Contract review and advice
★ PRODUCT AND SALES PLANNING
✓ Product/Service Planning
✓ Market Segmentation and data
✓ Market forecasts and modelling
✓ Marketing Planning and costing
★ DEVELOPMENT OF BUSINESS PLAN
✓ Corporate Strategy
✓ Financial planning and modelling
✓ Investment analysis
✓ Operation planning
✓ Product and Marketing
★ LICENSE APPLICATION
✓ Advice and experience in relation to submitting for an MVNO license
✓ Business planning and modelling with a focus on the application
✓ Preparation of License Application documentation as required by the telecom regulator
★ MVNO WORKSHOP
For more information please visit: www.yozzo.com
Yozzo's annual free report and infographic with figures, tables, information and statistics about Thailand's telecom market at the end of 2015 ★
✔ Blended MOU
✔ Blended ARPU
✔ MVNO in Thailand
✔ 4G subscribers in Thailand
✔ Mobile Revenue per/minute
✔ AIS Highlights end of 2015
✔ DTAC Highlights end of 2015
✔ True Move Highlights end of 2015
✔ Mobile Internet usage in Thailand
✔ Thailand’s Mobile Subscriber Growth
✔ Smartphone sales in Thailand 2015
✔ Mobile Operator Market Shares 2015
✔ Amount of smartphone users in Thailand
✔ Types of Internet connections in Thailand
✔ Thailand’s Mobile user’s consumption and more…
¹ MVNO Definition: http://www.yozzo.com/mvno-wiki/mvno-definition
² The History of MVNO: http://www.yozzo.com/mvno-wiki/the-history-of-mvno (August 2016, Yozzo.com)
³ Why MVNOs in Thailand have failed: http://www.yozzo.com/news-and-information/mvno-mobile-operator-s/why-mvnos-in-thailand-have-failed
✔ Mobile market share (%)
✔ Mobile subscribers
✔ Mobile revenue share (%)
✔ Mobile subscriber growth rate (%)
✔ Mobile penetration
✔ Mobile ARPU excluding IC (baht/number/month)
✔ Mobile MOU (minutes/month)
✔ Mobile revenue per minute (RPM)
✔ Mobile non-voice/voice ratio
✔ Fixed-line subscribers
✔ Fixed-line penetration per population (%)
✔ Fixed-line penetration per household (%)
✔ Comparison of subscriber numbers between mobile …
Re-using Media on the Web tutorial: Media Fragment Creation and Annotation (MediaMixerCommunity)
This tutorial explains approaches to visual, audio and textual media analysis for automatically generating meaningful media fragments out of a media resource, and demonstrates the latest results in the areas of video fragmentation, visual concept and event detection, face detection, object re-detection, and the use of speech recognition and keyword extraction from text for supporting multimedia analysis.
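As a toy illustration of the keyword-extraction step mentioned above (not the method actually used in the tutorial), keywords can be pulled from a transcript with stopword-filtered term frequency:

```python
import re
from collections import Counter

# A deliberately tiny stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "for", "on", "with", "was"}

def extract_keywords(transcript, top_n=5):
    """Naive term-frequency keyword extraction over a speech transcript."""
    words = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

# Hypothetical transcript fragment:
transcript = ("the museum curator explains the painting, the painting was "
              "restored in the museum workshop")
print(extract_keywords(transcript))  # 'museum' and 'painting' rank first
```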
Cortana Analytics Workshop: Real-Time Data Processing -- How Do I Choose the ... (MSAdvAnalytics)
Benjamin Wright-Jones, Simon Lidberg. Are you interested in near real-time data processing but confused about Azure capabilities and product positioning? Spark, StreamInsight, Storm (HDInsight) and Stream Analytics offer ways to ingest data but there is uncertainty about when and how we should use these capabilities. For example, what are the differences and key solution design decision points? Come to this session to learn about current and new near real-time data processing engines. Go to https://channel9.msdn.com/ to find the recording of this session.
Sumo Logic QuickStart Webinar - Jan 2016 (Sumo Logic)
QuickStart your Sumo Logic service with this exclusive webinar. At these monthly live events you will learn how to capitalize on critical capabilities that can amplify your log analytics and monitoring experience while providing you with meaningful business and IT insights.
Cloud-native application monitoring powered by Riverbed and Elasticsearch (Richard Juknavorian)
Learn improved performance testing for cloud-native applications by integrating Elasticsearch with Riverbed application performance monitoring (APM). The objective was to create realistic performance testing representative of real-world usage of the application.
ML-based detection of user anomaly activities (20th OWASP Night Tokyo, English) (Yury Leonychev)
These are the English slides of my presentation on a machine learning implementation for a model web application, with some advice for developers who decide to create a similar implementation in a real production environment.
Environment Canada's Data Management Service (Safe Software)
A brief history of time-series data at Environment Canada, and an enterprise view of how FME can be integrated into departmental data management activities.
What is going on? Application Diagnostics on Azure - Copenhagen .NET User Group (Maarten Balliauw)
We all like building and deploying cloud applications. But what happens once that’s done? How do we know if our application behaves like we expect it to behave? Of course, logging! But how do we get that data off of our machines? How do we sift through a bunch of seemingly meaningless diagnostics? In this session, we’ll look at how we can keep track of our Azure application using structured logging, AppInsights and AppInsights analytics to make all that data more meaningful.
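The structured-logging idea described above can be illustrated in miniature: emit each log record as one JSON object so individual fields stay queryable in a log analytics backend, instead of being buried in free text. This sketch uses only the Python standard library and does not reproduce the actual AppInsights SDK API; the field names are hypothetical.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object so a log analytics service
    can index individual fields rather than whole text lines."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            "logger": record.name,
        }
        # Attach structured properties passed via the `extra` argument.
        for key in ("order_id", "duration_ms"):
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)

logger = logging.getLogger("shop")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Each property becomes a queryable field in the analytics backend.
logger.info("order processed", extra={"order_id": "A-42", "duration_ms": 131})
```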
SWORD: Simple Web service Offering Repository Deposit; Open Repositories 2008, Southampton; Julie Allinson
This paper presents an overview of a JISC (Joint Information Systems Committee) activity to scope, define and develop a deposit specification for use across the repositories space, which has come to fruition within the SWORD (Simple Web service Offering Repository Deposit) project. It will look both at the background and how this piece of work came to pass, the movement from informal working group to funded project, the lightweight project construction, and the resulting protocol and technical outputs. The paper will also consider the future of SWORD and look at some of the activity which has already galvanised around the project outputs.
LinkedTV Deliverable 9.3 - Final LinkedTV Project Report (LinkedTV)
This document comprises the final report of LinkedTV. It includes a publishable summary of the project's scientific results and technological outcomes, a plan for use and dissemination of foreground IP, and a list of dissemination activities (publications and events).
LinkedTV Deliverable 6.5 - Final evaluation of the LinkedTV Scenarios (LinkedTV)
The deliverable presents the results of evaluating the final scenario demonstrators, LinkedNews and LinkedCulture, in the LinkedTV project. We specifically tested user satisfaction with the enriched TV experience we enabled for cultural heritage and news TV programs. We also supported the evaluation of other aspects of the LinkedTV technologies in the trials, specifically personalization and content curation.
LinkedTV Deliverable 5.7 - Validation of the LinkedTV Architecture (LinkedTV)
The LinkedTV architecture lays the foundation for the LinkedTV system. It consists of the integrating platform for the end-to-end functionality, the backend components and the supporting client components. Since the architecture of a software system has a fundamental impact on quality attributes, it is important to evaluate its design. The document at hand reports on the validation of the LinkedTV architecture.
LinkedTV Deliverable 4.7 - Contextualisation and personalisation evaluation a... (LinkedTV)
This deliverable covers all aspects of the evaluation of the overall LinkedTV personalization workflow, as well as re-evaluations of techniques where newer technology and/or algorithmic capacity offer new insight into the general performance. The implicit contextualized personalization workflow, the implicit uncontextualized workflow in the premises of the final LinkedTV application, the advances in context tracking given newly emerged technologies, and the outlook of video recommendation beyond LinkedTV are measured and analyzed in this document.
LinkedTV Deliverable 3.8 - Design guideline document for concept-based presen...LinkedTV
This document presents guidelines on how to setup enriched video experiences.
We provide user-centric guidelines on the named entities that should be detected and selected to effectively enrich video news broadcasts. This is presented in the form of a user study.
We selected 5 news videos and manually extracted the
candidate entities from various sources, such as the transcript, visual content and related articles. An expert was asked to also provide interesting entities for the videos. The resulting 99 candidate entities were presented to 50 participants via an online survey. The participants rated the level of interestingness of the entities and the usefulness of
information from Wikipedia about these entities. Analysis of
the results shows that users prefer entities of the type
organization and person and have little interest for entities of the type location. They also indicate that subtitles are not
enough as a source of interesting entities and that the amount of interesting entities can be improved by the combined use of subtitles with entities extracted from related articles or entities suggested by an expert. The expert suggestions showed to be more accurate than any other source of entities. Wikipedia seems to be a suitable source of additional information about the entities in the news, but should be complemented with additional sources.
We provide engineering guidelines on how to present,
aggregate and process content for TV program companion
applications. We describe the content processing pipeline that was developed in WP3 to feed the content for the LinkedNews and Linked Culture demonstrators. This shows how content from the Web can be re-purposed to enrich videos by extracting the core display content and presenting it in a uniform way to the user.
LinkedTV Deliverable 2.7 - Final Linked Media Layer and EvaluationLinkedTV
This deliverable presents the evaluation of content annotation and content enrichment systems that are part of the final tool set developed within the LinkedTV consortium. The evaluations were performed on both the Linked News and Linked Culture trial content, as well as on other content annotated for this purpose. The evaluation spans three languages: German (Linked News), Dutch (Linked
Culture) and English. Selected algorithms and tools were also subject to benchmarking in two international contests: MediaEval 2014 and TAC’14. Additionally, the Microposts 2015 NEEL Challenge is being organized with the support of LinkedTV.
LinkedTV Deliverable 1.6 - Intelligent hypervideo analysis evaluation, final ...LinkedTV
This deliverable describes the conducted evaluation activities for assessing the performance of a number of developed methods for intelligent hypervideo analysis and the usability of the implemented Editor Tool for supporting video annotation and enrichment. Based on the performance evaluations reported in D1.4 regarding a set of LinkedTV analysis components, we extended our experiments for assessing the effectiveness of newer versions of these methods as well as of entirely new techniques, concerning the accuracy and the time efficiency
of the analysis. For this purpose, in-house experiments and participations at international benchmarking activities were made, and the outcomes are reported in this deliverable. Moreover, we present the results of user trials regarding the developed Editor Tool, where groups of experts assessed its usability and the supported functionalities, and
evaluated the usefulness and the accuracy of the implemented video segmentation approaches based on the analysis requirements of the LinkedTV scenarios. By this deliverable we complete the reporting of WP1 evaluations that aimed to assess the efficiency of the developed
multimedia analysis methods throughout the project, according to the analysis requirements of the LinkedTV scenarios.
LinkedTV Deliverable 5.5 - LinkedTV front-end: video player and MediaCanvas A...LinkedTV
The LinkedTV media player and API has evolved from a single player and limited API in version 1 to a toolkit to allow rapid development and creation of different kind of applications within the HTML5 / multiscreen space. The main reason for this transition is that during the course of the Linked TV project different partners had different requirements for their scenarios. Instead of trying to fit all these requirements into one player and, most likely, compromise on the functionalities of the scenarios we wanted to offer something that would allow all partners a satisfiable solution.
Therefore the Springfield Multiscreen Toolkit, or short SMT, has been developed. The aim for the SMT was to allow flexibility for developing multiscreen applications. Also from a commercial point of view a toolkit with examples is more interesting than a pure player as it gives the freedom of developing new ideas with the LinkedTV platform.
LinkedTV tools for Linked Media applications (LIME 2015 workshop talk)LinkedTV
A brief introduction to tools from the LinkedTV project which can be used together to build new media applications based on conceptual linking of media fragments.
LinkedTV Deliverable D4.6 Contextualisation solution and implementationLinkedTV
This deliverable presents the WP4 contextualisation final im-plementation. As contextualization has a high impact on all the other modules of WP4 (especially personalization and recom-mendation), the deliverable intends to provide a picture of the final WP4 workflow implementation.
LinkedTV Deliverable D3.7 User Interfaces selected and refined (version 2)LinkedTV
This report describes the LinkedTV user interfaces. Based on the results user studies and the initial evaluation of the year 2 prototype we selected and refined the interfaces. We selected a single screen application that uses HbbTV technology to provide additional information about a TV program as an overlay on the TV broadcast. In addition, we worked towards TV program companion applications that are tailored for two domains: news and cultural heritage. With these applications we demonstrate different types of interaction modes, such as synchronized content on a second screen, and bookmarking chapters combined with the exploration of related content after the program. The interfaces are built on top of the Multiscreen Toolkit. We created a component-based infrastructure that allows us to quickly create tailored companion applications by reusing and configuring interface components. In the final part of the project we finalize this approach and test it by applying it to a new domain.
LinkedTV Deliverable D2.6 LinkedTV Framework for Generating Video Enrichments...LinkedTV
This deliverable describes the final LinkedTV framework that provides a set of possible enrichment resources for seed video content using techniques such as text and web mining, information extraction and information retrieval technologies. The enrichment content is obtained from four type of sources: a) by crawling and indexing web sites described in a white list specified by the content partners,
b) by querying the API or SPARQL endpoint of the Europeana digital library network which is publicly exposed, c) by querying multiple social networking APIs, d) by hyperlinking to other parts of TV programs within the same collection using a Solr index. This deliverable
also describes an additional content annotation functionality, namely labelling enrichment (as well as seed) content with thematic topics, as well as the process of exposing content annotations to this module and to the filtering services of LinkedTV’s personalization workflow. We illustrate the enrichment workflow for the two main scenarios of LinkedTV which have lead to the development of the LinkedCulture and LinkedNews applications, which respectively use the TVEnricher and TVNewsEnricher enrichment services. The original title of this deliverable from the DoW was Advanced concept labelling by complementary Web mining.
LinkedTV Deliverable D1.5 The Editor Tool, final release LinkedTV
This document reports on the design and implementation of the final version of the editor tool (ET) v2.0, where its purpose is to serve the program editing teams of broadcasters that have adopted LinkedTV’s interactive television solution into their workflow. Two of these teams are currently represented in the LinkedTV project, namely the RBB team and the AVROTROS team (formerly known as AVRO).
The main purpose of the ET is to provide a means to correct and curate automatically generated annotations and hyperlinks created by the audiovisual and textual analysis technologies developed in WP 1 and 2 of the LinkedTV project. Without the intervention of human editors to correct this data, there is a reasonable risk of exposing inappropriate, incorrect or irrelevant information to the viewers of a LinkedTV interactive broadcast.
LinkedTV Deliverable D1.4 Visual, text and audio information analysis for hyp...LinkedTV
Having extensively evaluated the performance of the technologies included in the first release of WP1 multimedia analysis tools, using content from the LinkedTV scenarios and by participating in international benchmarking activities, concrete decisions regarding the
appropriateness and the importance of each individual method or combination of methods were made, which, combined with an updated list of information needs for each scenario, led to a new set of analysis requirements that had to be addressed through the release of the final set of analysis techniques of WP1. To this end, coordinated efforts on three directions, including
(a) the improvement of a number of methods in terms of accuracy and time efficiency,
(b) the development of new technologies and (c) the definition of synergies between methods for obtaining new types of information via multimodal processing, resulted in the final bunch of multimedia analysis methods for video hyperlinking. Moreover, the different developed analysis modules have been integrated into a web-based infrastructure, allowing the fully automatic linking of the multitude of WP1 technologies and the overall LinkedTV platform.
LinkedTV D8.6 Market and Product Survey for LinkedTV Services and TechnologyLinkedTV
D8.6 presents the results of the market analysis for LinkedTV products and services and consists of
two parts: an overall analysis of current and future
developments in the TV and digital video market and a specific market analysis of potential LinkedTV customers and competitors. Based on the market analysis it was possible to provide a first rough estimation of the LinkedTV market potential and to position LinkedTV on the market.
This deliverable presents the LinkedTV Public Demonstrator which will be an online, publicly accessible Website collecting showcases of the key project outputs which form together our LinkedTV solution: the Editor Tool, Platform and Player, complemented by demonstrations of the provision of this solution for the content of two European broadcasters: the LinkedCulture and LinkedNews scenario demonstrators.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Leading Change strategies and insights for effective change management pdf 1.pdf
Video Hyperlinking Tutorial (Part C)
1. Information Technologies Institute
Centre for Research and Technology Hellas
Video Hyperlinking
Part C: Insights into Hyperlinking Video Content
Benoit Huet
EURECOM
(Sophia-Antipolis, France)
IEEE ICIP’14 Tutorial, Oct. 2014 ACM MM’14 Tutorial, Nov. 2014
2. Overview
• Introduction – overall motivation
• The General Framework
• Indexing Video for Hyperlinking
  – Apache Solr
• Evaluation Measures
• Challenge 1: Temporal Granularity
  – Feature Alignment and Index Granularity
• Challenge 2: Crafting the Query
  – Selecting Keywords
  – Selecting Visual Concepts
• Hyperlinking Evaluation: MediaEval S&H
• Hyperlinking Demos and LinkedTV Video
• Conclusion and Outlook
• Additional Reading
3. Motivation
• Why Video Hyperlinking?
  – Linking multimedia documents with related content
  – Automatic hyperlink creation
• Different from search (no user query)
• Query automatically crafted from the source document content
• Outreach
  – Recommendation systems
  – Second-screen applications
4. Insights in Hyperlinking
• Hyperlinking
  – Creating “links” between media
• Video Hyperlinking
  – video to video
  – video fragment to video fragment
5. Characterizing – Video
• Video
  – Title / Episode
  – Cast
  – Synopsis / Summary
  – Broadcast channel
  – Broadcast date
  – URI
  – Named Entities
6. Characterizing – Video Fragment
• Video Fragment
  – Temporal location (start and end)
  – Subtitles / Transcripts
  – Named Entities
  – Visual Concepts
  – Events
  – OCR
  – Character / Person
7. General Framework
• Index creation: Video Dataset → Segmentation → Feature Extraction → Indexing
• Hyperlinking: Video Anchor Fragment → Feature Selection → Retrieval → Personalisation
8. Search and Hyperlinking Framework
[Architecture diagram: broadcast media and its metadata (subtitles, …) pass through content analysis and are stored via Lucene/Solr in a media DB and a Solr index; indexed fields include title, cast, channel, subtitles, transcripts, shots, scenes, OCR and visual concepts.]
9. Indexing Video for Hyperlinking
• Indexing systems:
  – Apache Lucene/Solr
  – Terrier IR
  – Elasticsearch
  – Xapian
  – …
• Popular for text-based indexing/search/retrieval
• How can such an index be used for video hyperlinking?
10. Solr Indexing
• Solr engine (Apache Lucene) for data indexing
  – Index at different temporal granularities (shot, scene, sliding window)
  – Index different features at each temporal granularity (metadata, OCR, transcripts, visual concepts)
• All information stored in a unified, structured way
  – A flexible tool to perform search and hyperlinking
http://lucene.apache.org/solr/
11. Solr Indexing – Sample Schema
• Schema = structure of a document, using fields of different types
• Fields:
  – name
  – type (see next slide)
  – indexed="true|false"
  – stored="true|false"
  – multiValued="true|false"
  – required="true|false"
12. Solr Indexing – Sample Schema
• Field types:
  – text (analysed: stop-word removal, etc.)
  – string (not analysed)
  – date
  – float
  – int
• uniqueKey – unique document id
14. Solr Indexing – Sample Document
<?xml version="1.0" encoding="UTF-8"?>
<add>
<doc>
<field name="videoId">20080506_183000_bbcfour_pop_goes_the_sixties</field>
<field name="subtitle">SCREAMING APPLAUSE Subtitles by Red Bee Media Ltd E-mail subtitling@bbc.co.uk HELICOPTER WHIRRS TRAIN SPEEDS SIREN WAILS ENGINE REVS Your town, your street, your home - it's all in our database. New technology means it's easyto pay your TV licence and impossible to hide if you don't. KNOCKING</field>
<field name="serie_title">Pop Goes the Sixties</field>
<field name="short_synopsis">A colourful nugget of pop by The Shadows, mined from the BBC's archive.</field>
<field name="description">The Shadows play their song Apache in a classic performance from the BBC's archives.</field>
<field name="duration">300</field>
<field name="episode_title">The Shadows</field>
<field name="channel">BBC Four</field>
<field name="cast" />
<field name="synopsis" />
<field name="shots_number">14</field>
<field name="keywords">SCREAMING SPEEDS HELICOPTER WHIRRS REVS KNOCKING WAILS ENGINE SIREN APPLAUSE TV TRAIN Ltd E-mail Bee Subtitles Media Red</field>
</doc>
</add>
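As an aside, a payload like the sample above can be generated programmatically. Below is a minimal sketch using Python's standard library; the field values are taken from the sample document, while the `build_solr_doc` helper is purely illustrative and not part of Solr.

```python
# Sketch: generating a Solr <add><doc> XML payload like the sample above.
# build_solr_doc is our own illustrative helper, not a Solr API.
import xml.etree.ElementTree as ET

def build_solr_doc(fields):
    """Wrap a dict of field name -> value into Solr's <add><doc> format."""
    add = ET.Element("add")
    doc = ET.SubElement(add, "doc")
    for name, value in fields.items():
        field = ET.SubElement(doc, "field", name=name)
        field.text = str(value)
    return ET.tostring(add, encoding="unicode")

xml_payload = build_solr_doc({
    "videoId": "20080506_183000_bbcfour_pop_goes_the_sixties",
    "episode_title": "The Shadows",
    "channel": "BBC Four",
    "duration": 300,
})
print(xml_payload)
```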
15. Solr Indexing
• Analysis step:
  – Dependent on each field type
  – Performed automatically: tokenization, stop-word removal, etc.
  – Creates tokens that are added to the index
• Tokens are stored in an inverted index
• Queries are matched against tokens
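The analysis chain described above (tokenize, drop stop words, index the resulting tokens) can be sketched with a toy inverted index. The tokenizer and stop-word list below are simplified stand-ins for Solr's configurable analysers.

```python
# Toy sketch of Solr's analysis chain: tokenize, drop stop words,
# then add the surviving tokens to an inverted index (token -> doc ids).
from collections import defaultdict

STOP_WORDS = {"the", "a", "of", "in", "on", "by"}  # tiny stand-in list

def analyse(text):
    return [t for t in text.lower().split() if t not in STOP_WORDS]

def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in analyse(text):
            index[token].add(doc_id)
    return index

docs = {
    "scene_1": "Children out on a poetry trip",
    "scene_2": "The Shadows play Apache",
}
index = build_index(docs)
# Queries are matched against tokens, not raw text:
print(index["poetry"])  # {'scene_1'}
```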
18. Solr Query
• Very easy with the web interface
• Queries can also be made through HTTP requests:
  – http://localhost:8983/solr/collection_mediaEval/select?q=text:(Children out on poetry trip Exploration of poetry by school children Poem writing)
20. Evaluation Measures
• Search
  – Mean Reciprocal Rank (MRR): assesses the rank of the relevant segment
  – Mean Generalized Average Precision (mGAP): takes into account the starting time of the segment
  – Mean Average Segment Precision (MASP): measures both the ranking and the segmentation of relevant segments
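As a concrete reference, MRR is the mean over queries of the reciprocal rank of the first relevant segment. A minimal sketch (mGAP and MASP, which additionally account for start times and segmentation, are not shown):

```python
# Sketch: Mean Reciprocal Rank over a set of queries.
# Each entry gives the 1-based rank of the first relevant segment,
# or None if no relevant segment was retrieved.
def mean_reciprocal_rank(first_relevant_ranks):
    scores = [1.0 / r if r is not None else 0.0 for r in first_relevant_ranks]
    return sum(scores) / len(scores)

# Three queries: relevant segment at rank 1, at rank 2, and missed entirely.
print(mean_reciprocal_rank([1, 2, None]))  # 0.5
```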
21. Evaluation Measures
• Hyperlinking
  – Precision at rank n: how many relevant segments appear in the top n results
  – Mean Average Precision (MAP)
  – Variants that take the temporal offset between retrieved and target segments into account
Aly, R., Ordelman, R. J.F., Eskevich, M., Jones, G. J.F., Chen, S. Linking Inside a Video Collection – What and How to Measure? In Proceedings of the ACM WWW International Conference on World Wide Web Companion. ACM, Rio de Janeiro, Brazil, 457–460.
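The two basic measures can be sketched as follows; MAP is simply the mean of the per-query average precision, and the offset-aware variants are omitted here.

```python
# Sketch: precision at rank n and average precision for one query.
# `results` is the ranked list of retrieved segment ids,
# `relevant` the set of relevant segment ids.
def precision_at_n(results, relevant, n):
    return sum(1 for r in results[:n] if r in relevant) / n

def average_precision(results, relevant):
    hits, total = 0, 0.0
    for i, r in enumerate(results, start=1):
        if r in relevant:
            hits += 1
            total += hits / i   # precision at each relevant hit
    return total / len(relevant) if relevant else 0.0

results = ["s3", "s7", "s1", "s9"]
relevant = {"s3", "s1"}
print(precision_at_n(results, relevant, 2))   # 0.5
print(average_precision(results, relevant))   # (1/1 + 2/3) / 2
```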
22. Challenge 1: Temporal Granularity
Features are available at different temporal levels:
• Program level: title, cast, …
• Audio-frame level: transcripts, subtitles, …
• Shot/keyframe level: visual concepts, OCR
23. Challenge 1: Temporal Granularity
• Aligning features with different temporal granularity
  – Shots and scenes
  – Aligned by construction
[Timeline diagram: subtitle, shot and scene segments]
24. Challenge 1: Temporal Granularity
• Aligning features with different temporal granularity
  – Subtitles and scenes
  – CONFLICT! Subtitle boundaries do not coincide with scene boundaries
25. Challenge 1: Temporal Granularity
• Aligning features with different temporal granularity
  – Subtitles and scenes
  – Alignment based on feature start
26. Challenge 1: Temporal Granularity
• Aligning features with different temporal granularity
  – Subtitles and scenes
  – Alignment based on feature end
27. Challenge 1: Temporal Granularity
• Aligning features with different temporal granularity
  – Subtitles and scenes
  – Feature duplication (bias?)
28. Challenge 1: Temporal Granularity
• Aligning features with different temporal granularity
  – Subtitles and scenes
  – Alignment based on temporal overlap
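The alignment strategies above can be sketched for a single subtitle segment. Segments are (start, end) pairs in seconds; the segment values and helper names are illustrative, and scenes are assumed sorted by start time.

```python
# Sketch: assigning a subtitle segment to a scene, by start time
# vs. by maximal temporal overlap.
def overlap(a, b):
    """Length of the temporal intersection of two (start, end) segments."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def align_by_start(subtitle, scenes):
    """Index of the scene whose span contains the subtitle's start time."""
    return max(i for i, s in enumerate(scenes) if s[0] <= subtitle[0])

def align_by_overlap(subtitle, scenes):
    """Index of the scene with the largest temporal overlap."""
    return max(range(len(scenes)), key=lambda i: overlap(subtitle, scenes[i]))

scenes = [(0.0, 10.0), (10.0, 30.0)]
subtitle = (8.0, 16.0)                     # straddles the scene boundary
print(align_by_start(subtitle, scenes))    # 0
print(align_by_overlap(subtitle, scenes))  # 1 (6 s overlap vs. 2 s)
```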
29. Performance Impact – Alignment
[Chart: retrieval performance of the four alignment strategies – Scene-Subtitle-End, Scene-Subtitle-Begin, Scene-Subtitle-Duplicate, Scene-Subtitle-Overlap]
31. Challenge 1: Discussion
• Subtitle-to-scene alignment:
  – Similar performance across approaches
  – Slight advantage when aligning on segment start
• Granularity impact:
  – Shots are too short
  – Scenes better reflect users’ requirements
32. Let’s Hyperlink!
An anchor specifies the source video fragment to link from:
<anchor>
  <anchorId>anchor_1</anchorId>
  <fileName>v20080511_203000_bbctwo_TopGear</fileName>
  <startTime>13.07</startTime>
  <endTime>14.03</endTime>
</anchor>
33. Challenge 2: Crafting the Query
The query is crafted from the anchor:
• Extract text from the subtitles aligned with the anchor
• Identify relevant visual concepts from the subtitles
• Select visual concepts occurring in the anchor
34. Challenge 2a: Keyword Selection
• A long anchor may generate a long text query
• Important keywords (or entities) should be favored
35. Challenge 2a: Keyword Selection
• Keyword extraction based on a term frequency–inverse document frequency (TF-IDF) approach
• IDF computed on English news, with a curated stop-word list (~200 entries)
• Incorporates Snowball stemming (as part of the Lucene project)
• 50 weighted keywords per document; singletons removed
• Keyword gluing for frequencies larger than 2
S. Tschöpel and D. Schneider. A lightweight keyword and tag-cloud retrieval algorithm for automatic speech recognition transcripts. In Proc. ISCA, 2010, Japan.
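A minimal sketch of TF-IDF keyword weighting follows. The real system computes IDF on a large English news corpus and applies Snowball stemming; the corpus and stop-word list here are tiny stand-ins for illustration.

```python
# Sketch of TF-IDF keyword weighting for an anchor's subtitle text.
import math
from collections import Counter

STOP_WORDS = {"the", "a", "of", "on", "out", "by"}  # tiny stand-in list

def idf(term, corpus):
    """Inverse document frequency over a background corpus of token sets."""
    df = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / (1 + df))

def top_keywords(text, corpus, k=5):
    tokens = [t for t in text.lower().split() if t not in STOP_WORDS]
    tf = Counter(tokens)
    weights = {t: tf[t] * idf(t, corpus) for t in tf}
    return sorted(weights, key=weights.get, reverse=True)[:k]

# Background corpus as token sets (stand-in for the news IDF model):
corpus = [{"children", "school"}, {"poetry", "children"}, {"news", "weather"}]
print(top_keywords("Children out on a poetry trip poetry writing", corpus, k=3))
```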
37. Challenge 2b: Visual Concept Generality
• No training data for visual concepts in the target collection
• Use 151 visual concept detectors trained on TRECVid
39. Information Technologies Institute 3.39
Centre for Research and Technology Hellas
Solr Query
• How to include the visual concepts in Solr?
– Using float-typed fields
– <field name="Animal" type="float" indexed="true" stored="true" multiValued="false" required="true"/>
– <field name="Animal">0.74</field>
– <field name="Building">0.12</field>
• Queries can be made through an HTTP request
– http://localhost:8983/solr/collection_mediaEval/select?q=text:(cow+in+a+farm)+Animal:[0.5+TO+1]+Building:[0.2+TO+1]
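A query like the one above, mixing a free-text clause with float range clauses on the concept fields, can be composed programmatically. This is a sketch of the query construction only; the host, core name, and concept ranges are taken from the slide's example:

```python
from urllib.parse import urlencode

def solr_query_url(base, text, concept_ranges):
    """Build a Solr select URL combining a text clause with float
    range clauses on visual-concept fields (e.g. Animal:[0.5 TO 1])."""
    clauses = ["text:(%s)" % text]
    clauses += ["%s:[%s TO %s]" % (field, lo, hi)
                for field, (lo, hi) in sorted(concept_ranges.items())]
    return base + "/select?" + urlencode({"q": " ".join(clauses)})

url = solr_query_url(
    "http://localhost:8983/solr/collection_mediaEval",
    "cow in a farm",
    {"Animal": (0.5, 1), "Building": (0.2, 1)},
)
print(url)
```

The resulting URL is the percent-encoded form of the query shown on the slide.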
Challenge 2b: Visual concept detectors confidence
[Architecture diagram: Broadcast Media → Content Analysis → Metadata (subtitles, ..) → Lucene/Solr → Media DB → Solr Index]
No training data for visual concepts: use 151 visual concept detectors trained on TRECVID, whose performance on this collection is unknown
Challenge 2b: Visual concept detector confidence
• 100 top images for the concept "Animal"
• 58 out of 100 are manually evaluated as valid
• Confidence w = 0.58
Challenge 2c: Map keywords to visual concepts
[WordNet mapping diagram: keywords (Farm, Shells, Exploration, Poem, Animal, House, Memories) are mapped to visual concepts (Animal, Birds, Insect, Cattle, Dogs, Building, School, Church, Flags, Mountain)]
Mapping keywords to visual concepts
• Concepts mapped to the keyword "Castle"
• Semantic similarity computed using the "Lin" distance

Concept     β
Windows     0.4533
Plant       0.4582
Court       0.5115
Church      0.6123
Building    0.701
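Lin similarity scores like the β values above are computed from information content (IC): lin(a, b) = 2·IC(lcs) / (IC(a) + IC(b)), where lcs is the least common subsumer of the two synsets in WordNet. A minimal sketch with hypothetical IC values (in practice these come from an IC-annotated corpus, e.g. via NLTK's WordNet interface):

```python
def lin_similarity(ic_a, ic_b, ic_lcs):
    """Lin similarity: 2 * IC(lcs) / (IC(a) + IC(b)).

    IC is the information content (negative log probability) of a
    synset; lcs is the least common subsumer of the two synsets.
    """
    return 2.0 * ic_lcs / (ic_a + ic_b)

# Hypothetical information-content values, for illustration only
ic = {"castle": 8.2, "building": 5.1, "church": 7.4}

# If "building" subsumes "castle", the lcs of the pair is "building"
print(lin_similarity(ic["castle"], ic["building"], ic["building"]))
```

The more specific the shared ancestor (higher IC of the lcs), the higher the similarity; identical synsets give 1.0.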
Fusing Text and Visual Scores
Text-based scores come from the Lucene index; visual-based scores come from the WordNet similarity over the selected concepts; the two are fused into a single ranking.

One fused score for each scene: f_i = α·t_i + (1 − α)·v_i

One visual score for each scene, computed from the scores of the selected concepts:
v_i^q = Σ_{c ∈ C'_q} w_c · vs_i^c
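The two fusion steps can be sketched as follows: the per-scene visual score is a confidence-weighted sum over the selected concepts, and the final score is a weighted sum of text and visual scores. The scene scores, confidences, and α value below are illustrative, not from the experiments:

```python
def visual_score(concept_scores, selected, confidence):
    """v_i for one scene: confidence-weighted sum of the detection
    scores of the concepts selected for the query."""
    return sum(confidence[c] * concept_scores[c] for c in selected)

def fuse(text_score, vis_score, alpha):
    """Late fusion per scene: f_i = alpha * t_i + (1 - alpha) * v_i."""
    return alpha * text_score + (1.0 - alpha) * vis_score

scene = {"Animal": 0.74, "Building": 0.12}       # detector outputs for one scene
w = {"Animal": 0.58, "Building": 0.61}           # detector confidences (e.g. 58/100 valid)
v = visual_score(scene, ["Animal"], w)
print(fuse(0.30, v, alpha=0.7))
```

With w = 1.0 every detector is trusted equally, which is the comparison run reported on the next slide.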
Challenge 2c: Performance Results
• Low impact of visual concept detector confidence (w): results with w = 1.0 and w = confidence(c) are very close
• Significant improvement can be achieved by combining only mapped concepts with θ ≥ 0.3
• Best performance is obtained when θ ≥ 0.8 (gain ≈ 11-12%)
B. Safadi, M. Sahuguet and B. Huet, When textual and visual information join forces for multimedia retrieval, ICMR 2014, April 1-4, 2014, Glasgow, Scotland
Challenge 2d: Visual Concept Selection
• 151 visual concept scores characterize each shot
• Anchors may refer to one or more shots
• Relevant shots for the anchor are selected using a threshold
• For the selected visual concepts, identify a good search threshold
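The selection step can be sketched as follows: for an anchor spanning several shots, keep the concepts whose detection score exceeds the selection threshold in at least one shot. This is an illustrative helper, not the system's code; the shot scores are hypothetical:

```python
def select_concepts(shots, threshold):
    """Select the visual concepts of an anchor: keep each concept whose
    score reaches `threshold` in some shot, with its maximum score."""
    selected = {}
    for scores in shots:                     # one dict of concept scores per shot
        for concept, s in scores.items():
            if s >= threshold:
                selected[concept] = max(s, selected.get(concept, 0.0))
    return selected

# Hypothetical anchor spanning two shots
anchor_shots = [
    {"Animal": 0.74, "Building": 0.12},
    {"Animal": 0.35, "Mountain": 0.66},
]
print(select_concepts(anchor_shots, threshold=0.5))
```

The selected concepts are then turned into Solr range clauses using the search threshold studied in the tables that follow.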
Visual Concept Selection Performance
• MAP, by concept selection threshold (rows) and Solr query threshold (columns):

       0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9
0.1    0.0892  0.0316  0.0558  0.0842  0.1183  0.1680  0.1914  0.1919  0.1898
0.2    0.1741  0.1366  0.1152  0.1312  0.1503  0.1777  0.1922  0.1919  0.1898
0.3    0.1840  0.1819  0.1806  0.1652  0.1731  0.1848  0.1927  0.1919  0.1898
0.4    0.1874  0.1883  0.1914  0.1868  0.1889  0.1897  0.1937  0.1919  0.1898
0.5    0.1875  0.1874  0.1886  0.1928  0.1937  0.1896  0.1939  0.1919  0.1898
0.6    0.1892  0.1884  0.1886  0.1913  0.1931  0.1946  0.1952  0.1923  0.1898
0.7    0.1901  0.1901  0.1901  0.1910  0.1917  0.1943  0.1948  0.1905  0.1891
0.8    0.1935  0.1935  0.1935  0.1943  0.1947  0.1959  0.1954  0.1964  0.1900
0.9    0.1946  0.1946  0.1946  0.1952  0.1953  0.1962  0.1961  0.1958  0.1945
Visual Concept Selection Performance
• Precision@5, by concept selection threshold (rows) and Solr query threshold (columns):

       0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9
0.1    0.5533  0.2600  0.3133  0.4600  0.5467  0.6600  0.7000  0.7333  0.7333
0.2    0.7200  0.6667  0.5267  0.6267  0.6400  0.7000  0.7067  0.7333  0.7333
0.3    0.6867  0.7200  0.7067  0.6467  0.7000  0.7267  0.7067  0.7333  0.7333
0.4    0.7000  0.7000  0.7267  0.6933  0.7133  0.7467  0.7133  0.7333  0.7333
0.5    0.7133  0.7133  0.7067  0.7200  0.7400  0.7400  0.7133  0.7333  0.7333
0.6    0.7267  0.7267  0.7267  0.7333  0.7333  0.7400  0.7133  0.7333  0.7333
0.7    0.7200  0.7200  0.7200  0.7267  0.7333  0.7333  0.7133  0.7333  0.7333
0.8    0.7400  0.7400  0.7400  0.7400  0.7400  0.7533  0.7467  0.7400  0.7400
0.9    0.7400  0.7400  0.7400  0.7400  0.7400  0.7533  0.7533  0.7533  0.7400
Visual Concept Selection Performance
• Precision@10, by concept selection threshold (rows) and Solr query threshold (columns):

       0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9
0.1    0.4033  0.1667  0.2333  0.3233  0.4367  0.5500  0.6033  0.6167  0.6267
0.2    0.5733  0.5000  0.4300  0.4967  0.5100  0.5733  0.6067  0.6167  0.6267
0.3    0.6033  0.5733  0.5767  0.5700  0.5567  0.5967  0.6067  0.6167  0.6267
0.4    0.5900  0.5867  0.6000  0.5900  0.6000  0.6067  0.6067  0.6167  0.6267
0.5    0.5900  0.5900  0.5967  0.6000  0.5900  0.6000  0.6100  0.6167  0.6267
0.6    0.6100  0.6100  0.6100  0.6100  0.6067  0.5933  0.6100  0.6133  0.6267
0.7    0.6100  0.6100  0.6100  0.6100  0.6100  0.5967  0.6133  0.6133  0.6233
0.8    0.6167  0.6167  0.6167  0.6200  0.6233  0.6133  0.6233  0.6267  0.6233
0.9    0.6300  0.6300  0.6300  0.6333  0.6333  0.6300  0.6367  0.6367  0.6333
Visual Concept Selection Performance
• Precision@20, by concept selection threshold (rows) and Solr query threshold (columns):

       0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9
0.1    0.2683  0.1050  0.1700  0.2267  0.3033  0.4017  0.4400  0.4483  0.4400
0.2    0.4167  0.3450  0.3033  0.3383  0.3933  0.4317  0.4400  0.4483  0.4400
0.3    0.4350  0.4333  0.4317  0.4050  0.4233  0.4417  0.4400  0.4483  0.4400
0.4    0.4433  0.4367  0.4433  0.4433  0.4433  0.4433  0.4417  0.4483  0.4400
0.5    0.4450  0.4417  0.4417  0.4467  0.4583  0.4483  0.4417  0.4483  0.4400
0.6    0.4467  0.4450  0.4450  0.4500  0.4567  0.4483  0.4417  0.4483  0.4400
0.7    0.4533  0.4533  0.4533  0.4550  0.4583  0.4583  0.4417  0.4483  0.4383
0.8    0.4517  0.4517  0.4517  0.4517  0.4533  0.4517  0.4450  0.4483  0.4400
0.9    0.4500  0.4500  0.4500  0.4500  0.4500  0.4483  0.4483  0.4483  0.4483
Challenge 2e: Combining Visual Concept Selection and Fusion
• Logic (AND/OR) vs. fusion (weighted sum)
• Text vs. visual concept weight
• Visual concept selection threshold
Challenge 2e: Combining Visual Concept Selection and Fusion
[Surface plot: MAP as a function of the text vs. visual concept fusion weight (0.1-0.9) and the visual concept selection threshold (0.1-0.9); plotted MAP values range from 0.08 to 0.24]
Challenge 2: Discussion
• Keyword selection is important
• Mapping text to visual concepts isn't straightforward, but can boost performance
• Visual concept detector confidence has limited effect on performance
• Selecting visual concepts from the anchor is easier than mapping from text
Hyperlinking Evaluation
• Evaluate LinkedTV / MediaMixer technologies for analysing video fragments and connecting them with related content
• Relevance to users
• Large-scale video collection
MediaEval Benchmarking Initiative for Multimedia Evaluation The "multi" in multimedia: speech, audio, visual content, tags, users, context
The MediaEval Search and Hyperlinking Task
• Information seeking in a video dataset: retrieving video/media fragments
Eskevich, M., Aly, R., Ordelman, R., Chen, S., Jones, G. J.F. The Search and Hyperlinking Task at MediaEval 2013. In Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, CEUR-WS.org, 1043, ISSN: 1613-0073. Barcelona, Spain, 2013.
The MediaEval Search and Hyperlinking Task
• The 2013 dataset: 2323 BBC videos of different genres (440 programs)
– ~1697 h of video + audio
– Two types of ASR transcripts (LIUM/LIMSI)
– Manual subtitles
– Metadata (channel, cast, synopsis, etc.)
– Shot boundaries and keyframes
– Face detection and similarity information
– Concept detection
The 2013 MediaEval Search and Hyperlinking Task
• Search: find a known segment in the collection given a query (text)
<top>
<itemId>item_18</itemId>
<queryText>What does a ball look like when it hits the wall during Squash</queryText>
<visualCues>ball hitting a wall in slow motion</visualCues>
</top>
• Hyperlinking: find relevant segments relative to an "anchor" segment (± context)
<anchor>
<anchorId>anchor_1</anchorId>
<startTime>13.07</startTime>
<endTime>13.22</endTime>
<item>
<fileName>v20080511_203000_bbcthree_little_britain</fileName>
<startTime>13.07</startTime>
<endTime>14.03</endTime>
</item>
</anchor>
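Anchor definitions like the one above can be read with the standard library's XML parser. A minimal sketch; the field names follow the task's format, and the times are parsed as plain floats purely for illustration:

```python
import xml.etree.ElementTree as ET

ANCHOR_XML = """<anchor>
  <anchorId>anchor_1</anchorId>
  <startTime>13.07</startTime>
  <endTime>13.22</endTime>
  <item>
    <fileName>v20080511_203000_bbcthree_little_britain</fileName>
    <startTime>13.07</startTime>
    <endTime>14.03</endTime>
  </item>
</anchor>"""

def parse_anchor(xml_text):
    """Extract the anchor id, its time span, and the enclosing item's file."""
    root = ET.fromstring(xml_text)
    item = root.find("item")
    return {
        "id": root.findtext("anchorId"),
        "span": (float(root.findtext("startTime")), float(root.findtext("endTime"))),
        "file": item.findtext("fileName"),
    }

print(parse_anchor(ANCHOR_XML))
```

The enclosing `<item>` span is what the context-aware (LC) hyperlinking runs exploit later in the talk.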
The 2013 MediaEval Search and Hyperlinking Task
• Queries are user-generated for both search and hyperlinking
– Search: 50 queries from 29 users (known-item: the target is known to be in the dataset)
– Hyperlinking: 98 anchors
• Evaluation:
– For search, the searched segments are pre-defined
– For hyperlinking, crowd-sourcing (on 30 anchors only)
MediaEval 2013 Submissions
• Search runs:
– scenes-S (-U, -I): scene search using only textual features from subtitles (I and U: transcript type)
– scenes-noC (-C): scene search using textual (and visual) features
– cl10-noC (-C): temporal shot clustering within a video using textual features (and visual cues)
Search Results
• Best performance obtained with scenes
• Impact of visual concepts: smaller than expected

Run          MRR       mGAP      MASP
scenes-C     0.324931  0.187194  0.199647
scenes-noC   0.324603  0.186916  0.199237
scenes-S     0.338594  0.182194  0.210934
scenes-I     0.261996  0.144708  0.158552
scenes-U     0.268045  0.152094  0.164817
cl10-C       0.294770  0.154178  0.181982
cl10-noC     0.286806  0.149530  0.171888
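Of the three metrics in the table, MRR is the simplest: the mean over queries of the reciprocal rank of the first relevant (known-item) result. A minimal sketch with made-up ranks (mGAP and MASP additionally account for how close the returned segment is to the true one, which is not shown here):

```python
def mean_reciprocal_rank(ranks):
    """MRR over a set of queries. `ranks` holds the 1-based rank of the
    first relevant result per query, or None when nothing relevant was
    returned (contributing 0 to the mean)."""
    return sum(1.0 / r for r in ranks if r) / len(ranks)

# Hypothetical per-query ranks, for illustration only
print(mean_reciprocal_rank([1, 3, None, 2]))  # (1 + 1/3 + 0 + 1/2) / 4
```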
Example Search and Result
• Text query: what to cook with everyday ingredients on a budget, denise van outen, john barrowman, ainsley harriot, seabass, asparagus, ostrich, mushrooms, sweet potato, mango, tomatoes
• Visual cues: denise van outen, john barrowman, ainsley harriot, seabass, asparagus, ostrich, mushrooms, sweet potato, mango, tomatoes
Expected anchor: 20080506_153000_bbctwo_ready_steady_cook.webm#t=67,321
Scenes: 20080506_153000_bbctwo_ready_steady_cook.webm#t=48,323
cl10: 20080506_153000_bbctwo_ready_steady_cook.webm#t=1287,1406
MediaEval 2013 Submissions
• Hyperlinking runs:
– LA-scenes (-cl10/-MLT): only information from the anchor is used
– LC-scenes (-cl10/-MLT): a segment containing the anchor is used (context)
2013 Hyperlinking Results
• Scenes offer the best results
• Using context (LC) improves performance
• Precision at rank n decreases with n

Run        MAP     P@5     P@10    P@20
LA cl10    0.0337  0.3467  0.2533  0.1517
LA MLT     0.1201  0.4200  0.4200  0.3217
LA scenes  0.1196  0.6133  0.5133  0.3400
LC cl10    0.0550  0.4600  0.4000  0.2167
LC MLT     0.1820  0.5667  0.5667  0.4300
LC scenes  0.1654  0.6933  0.6367  0.4333
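The P@n columns measure the fraction of the top-n returned links judged relevant (here by crowd workers). A minimal sketch with hypothetical relevance judgements:

```python
def precision_at_n(relevant_flags, n):
    """P@n: fraction of the top-n returned segments judged relevant.
    `relevant_flags` is the ranked list of 0/1 relevance judgements."""
    return sum(relevant_flags[:n]) / float(n)

# e.g. 3 of the top 5 proposed links judged relevant
print(precision_at_n([1, 0, 1, 1, 0, 0, 1], 5))
```

Since later ranks are less likely to be relevant, P@n naturally decreases as n grows, as the table shows.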
The Search and Hyperlinking Demo
[Demo architecture: Broadcast Media → Content Analysis → Metadata (subtitles) → Lucene/Solr → Media DB / Solr Index → Web service (HTML5/AJAX/PHP) → User Interface]
Conclusions and Outlook
• Scenes offer the best temporal granularity
• The current algorithm is based on visual features only
• Future work: including semantic and audio features
• Importance of context
• Visual feature integration is challenging
• Visual concept detectors (accuracy and coverage)
• Combination of multimodal features
• Mapping between text/entities and visual concepts
• Person identification
Contributors
• Mrs Mathilde Sahuguet (EURECOM/DailyMotion)
• Dr. Bahjat Safadi (EURECOM)
• Mr Hoang-An Le (EURECOM)
• Mr Quoc-Minh Bui (EURECOM)
• LinkedTV Partners (CERTH/ITI, UEP, Fraunhofer IAIS)
Additional Reading
• E. Apostolidis, V. Mezaris, M. Sahuguet, B. Huet, B. Cervenkova, D. Stein, S. Eickeler, J.-L. Redondo Garcia, R. Troncy, L. Pikora, "Automatic fine-grained hyperlinking of videos within a closed collection using scene segmentation", Proc. ACM Multimedia (MM'14), Orlando, FL, US, 3-7 Nov. 2014.
• B. Safadi, M. Sahuguet and B. Huet, "When textual and visual information join forces for multimedia retrieval", ICMR 2014, ACM International Conference on Multimedia Retrieval, April 1-4, 2014, Glasgow, Scotland.
• M. Sahuguet and B. Huet, "Mining the Web for Multimedia-based Enriching", MMM 2014, 20th International Conference on MultiMedia Modeling, 8-10 January 2014, Dublin, Ireland.
• M. Sahuguet, B. Huet, B. Cervenkova, E. Apostolidis, V. Mezaris, D. Stein, S. Eickeler, J.-L. Redondo Garcia, R. Troncy, L. Pikora, "LinkedTV at MediaEval 2013 Search and Hyperlinking Task", Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, October 18-19, 2013, Barcelona, Spain.
• D. Stein, A. Öktem, E. Apostolidis, V. Mezaris, J.-L. Redondo García, R. Troncy, M. Sahuguet and B. Huet, "From raw data to semantically enriched hyperlinking: Recent advances in the LinkedTV analysis workflow", NEM Summit 2013, Networked & Electronic Media, 28-30 October 2013, Nantes, France.
• W. Bailer, M. Lokaj, and H. Stiegler, "Context in video search: Is close-by good enough when using linking?", ACM ICMR, Glasgow, UK, April 1-4, 2014.
• C. A. Bhatt, N. Pappas, M. Habibi, et al., "Multimodal reranking of content-based recommendations for hyperlinking video snippets", ACM ICMR, Glasgow, UK, April 1-4, 2014.
• D. Stein, S. Eickeler, R. Bardeli, et al., "Think before you link! Meeting content constraints when linking television to the web", NEM Summit 2013, 28-30 October 2013, Nantes, France.
• P. Over, G. Awad, M. Michel, et al., "TRECVID 2012 - An overview of the goals, tasks, data, evaluation mechanisms and metrics", Proc. of TRECVID 2012, NIST, USA, 2012.
• M. Eskevich, G. Jones, C. Wartena, M. Larson, R. Aly, T. Verschoor, and R. Ordelman, "Comparing retrieval effectiveness of alternative content segmentation methods for Internet video search", Content-Based Multimedia Indexing (CBMI), 2012.
Additional Reading
• Lei Pang, Wei Zhang, Hung-Khoon Tan, and Chong-Wah Ngo, "Video hyperlinking: libraries and tools for threading and visualizing large video collection", Proceedings of the 20th ACM International Conference on Multimedia (MM '12), ACM, New York, NY, USA, 1461-1464, 2012.
• A. Habibian, K. E. van de Sande, and C. G. Snoek, "Recommendations for Video Event Recognition Using Concept Vocabularies", Proceedings of the 3rd ACM International Conference on Multimedia Retrieval (ICMR '13), pages 89-96, Dallas, Texas, USA, April 2013.
• A. Hauptmann, R. Yan, W.-H. Lin, M. Christel, and H. Wactlar, "Can High-Level Concepts Fill the Semantic Gap in Video Retrieval? A Case Study With Broadcast News", IEEE Transactions on Multimedia, 9(5):958-966, 2007.
• A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, "Content-based image retrieval at the end of the early years", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1349-1380, 2000.
• A. Rousseau, F. Bougares, P. Deleglise, H. Schwenk, and Y. Estève, "LIUM's systems for the IWSLT 2011 Speech Translation Tasks", Proceedings of IWSLT 2011, San Francisco, USA, 2011.
• J.-L. Gauvain, L. Lamel and G. Adda, "The LIMSI broadcast news transcription system", Speech Communication 37, 89-108, 2002.
• C. Fellbaum, editor, "WordNet: an electronic lexical database", MIT Press, 1998.
• Carles Ventura, Marcel Tella-Amo, Xavier Giro-i-Nieto, "UPC at MediaEval 2013 Hyperlinking Task", Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18-19, 2013.
• Camille Guinaudeau, Anca-Roxana Simon, Guillaume Gravier, Pascale Sébillot, "HITS and IRISA at MediaEval 2013: Search and Hyperlinking Task", Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18-19, 2013.
• Mathilde Sahuguet, Benoit Huet, Barbora Červenková, Evlampios Apostolidis, Vasileios Mezaris, Daniel Stein, Stefan Eickeler, Jose Luis Redondo Garcia, Lukáš Pikora, "LinkedTV at MediaEval 2013 Search and Hyperlinking Task", Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18-19, 2013.
Additional Reading
• Tom De Nies, Wesley De Neve, Erik Mannens, Rik Van de Walle, "Ghent University-iMinds at MediaEval 2013: An Unsupervised Named Entity-based Similarity Measure for Search and Hyperlinking", Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18-19, 2013.
• Fabrice Souvannavong, Bernard Mérialdo, Benoit Huet, "Video content modeling with latent semantic analysis", CBMI 2003, 3rd International Workshop on Content-Based Multimedia Indexing, September 22-24, 2003, Rennes, France.
• Itheri Yahiaoui, Bernard Merialdo, Benoit Huet, "Comparison of multiepisode video summarization algorithms", EURASIP Journal on Applied Signal Processing, 2003.
• Chidansh Bhatt, Nikolaos Pappas, Maryam Habibi, Andrei Popescu-Belis, "Idiap at MediaEval 2013: Search and Hyperlinking Task", Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18-19, 2013.
• Petra Galuščáková, Pavel Pecina, "CUNI at MediaEval 2013 Search and Hyperlinking Task", Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18-19, 2013.
• Shu Chen, Gareth J.F. Jones, Noel E. O'Connor, "DCU Linking Runs at MediaEval 2013: Search and Hyperlinking Task", Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18-19, 2013.
• Michal Lokaj, Harald Stiegler, Werner Bailer, "TOSCA-MP at Search and Hyperlinking of Television Content Task", Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18-19, 2013.
• Bahjat Safadi, Mathilde Sahuguet, Benoit Huet, "Linking text and visual concepts semantically for cross modal multimedia search", 21st IEEE International Conference on Image Processing, October 27-30, 2014, Paris, France.

Indexing Systems
• http://lucene.apache.org/solr/
• http://terrier.org/
• http://www.elasticsearch.org/
• http://xapian.org

Projects
• LinkedTV: Television linked to the web. http://www.linkedtv.eu/
• MediaMixer: Community set-up and networking for the remixing of online media fragments. http://www.mediamixer.eu/
• Axes: Access to audiovisual archives. http://www.axes-project.eu
Thank you!
More information: http://www.eurecom.fr/~huet benoit.huet@eurecom.fr