This presentation proposes in-video personalisation using object-based media for the insertion of externally sourced video placements into broadcast content (real-time or on-demand). Placement content is selected in accordance with viewer profiles, and may be produced independently of the source video.
Using this technology, user-object interaction is also possible. Use cases include personalised product placement, training, education, and accessibility for hearing- and visually impaired viewers. The technology is specifically designed to fit existing distribution platforms (cable, satellite, DTTV, IPTV/OTT) with minimal infrastructure upgrades.
CE-HTML provides a standardized structure for virtual keyboards and remote user interfaces on consumer electronics. It has been implemented on devices like set-top boxes, smartphones, and smart TVs from manufacturers like Motorola, Philips, and LG. Protocols like CEA-2014 and NotifSocket enable dynamic updates and event handling for remote user interfaces across multiple connected devices.
The document is a presentation by Nick Verkroost of Value Partners on Project Canvas, an initiative in the UK to establish an open internet-connected TV platform. The presentation provides an introduction to Project Canvas, explaining the drivers behind it, an overview of the technical standard and user experience, and findings from a market impact assessment of Canvas. It analyzes how Canvas could positively impact the TV, video-on-demand, and internet service provider markets in the UK while securing investment in digital terrestrial television. The presentation concludes with lessons learned from Project Canvas including the need for public sector involvement to drive industry standards and engagement across the value chain.
On-demand web services for users of Munich's public transport system (Chris Bleuel)
Mobility is a vital part of human life. Mobility needs appear in different forms, e.g. individual traffic, local public transport, or mixed systems such as travelling by train combined with bike- or car-sharing.
Individual traffic in particular causes major environmental problems and traffic congestion, especially in urban areas. Strengthening public transport is often favoured, but this preference is not reflected in the number of local transport users. There is room for greater effectiveness.
This gap can be explained by the fact that a central problem of local public transport usage is its complexity. What is needed is more real-time information, transformed into an accessible on-demand service.
Chris Bleuel presents solutions and strategies, including real-time information and ticketing, for a mobile support system for Munich's local public transport. The presentation shows how to move from public static information to personalised, on-demand information.
See how the buildout of medianets (media-optimized IP networks) unleashes new capabilities and cost savings through every aspect of rich media production, contribution, distribution, and consumption, from the point of ingest all the way to the customer screens.
The document provides an overview of the status of MPEG-4 developments and the AIC Initiative. It discusses the goals, history, and architecture of MPEG-4, which aims to code audio-visual objects and scenes to enable interactivity. MPEG-4 extends existing architectures like MPEG-2 and IP to new environments through tools like an interactive scene description and support for new content types and delivery formats. Profiles and levels are defined to suit different applications. Carriage of MPEG-4 over MPEG-2 and IP is also addressed.
The document discusses using collections like ArrayList in Java. It explains that collections allow storing an arbitrary number of objects and provide functionality to add, remove and iterate over items. The document also demonstrates how to use an ArrayList to organize music files by adding, getting and listing files. Iteration over collections using a for-each loop is described. Generic classes are introduced as a way to specify the type of objects a collection contains.
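The ArrayList usage described above can be sketched as follows; the `MusicOrganizer` class, its method names, and the file names are illustrative assumptions, not taken from the original document.

```java
import java.util.ArrayList;

// Minimal sketch of the ArrayList-based music organizer described above.
// The generic type parameter restricts the collection to String filenames.
class MusicOrganizer {
    private final ArrayList<String> files = new ArrayList<>();

    public void addFile(String filename) { files.add(filename); }

    public String getFile(int index) { return files.get(index); }

    public int getNumberOfFiles() { return files.size(); }

    // Iterate with a for-each loop, as described in the summary.
    public void listAllFiles() {
        for (String filename : files) {
            System.out.println(filename);
        }
    }
}
```

A caller would add files, retrieve them by index, and list them, without ever declaring the collection's size in advance, which is the key advantage over plain arrays.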
MPEG-4 BIFS is a scene description language standardized in 1999 that builds on VRML by adding 2D vector graphics and other capabilities. A BIFS scene is composed of a tree of nodes with graphical, audio, and other types of nodes connected by routes. Properties can be single values or arrays, and different node types are used to describe shapes, materials, and other visual elements in a scene. BIFS supports animation through nodes like timers and interpolators connected by routes, as well as interactivity through listener nodes routing events to targets.
MPEG-4 BIFS and MPEG-2 TS: Latest developments for digital radio services (Cyril Concolato)
This document summarizes recent developments in MPEG-4 BIFS and MPEG-2 transport streams for digital radio services. It describes new nodes added to BIFS like CacheTexture, EnvironmentTest, and KeyNavigator. It also covers extensions to enable carriage of MPEG-4 over MPEG-2 transport streams in a backward compatible way without requiring an object descriptor stream.
The document provides an overview of selected current activities within MPEG, including requirements and timelines. It discusses the Mobile Visual Search work item which aims to enable efficient transmission of local image features for mobile visual search applications. It also outlines the MPEG Media Transport work item which focuses on efficient delivery of media to enable content and network adaptive streaming. Additionally, it summarizes the Advanced IPTV Terminal work item and its goal of defining elementary services and protocols to enable interoperability.
The document provides an overview of MPEG-4, a standard that offers both advanced audio and video codecs as well as tools for combining multimedia such as audio, video, graphics and interactivity. MPEG-4's codecs provide high compression efficiency, with its AVC video codec offering half the bitrate of MPEG-2 for similar quality. Its tools allow for rich interactive media experiences by combining different media types. Manufacturers and operators have adopted MPEG-4 due to its excellent performance, open development process, compatibility between implementations, and ability to encode once and play anywhere.
The document provides an overview of IPTV (Internet Protocol Television), describing what IPTV is compared to internet TV, what VOD (Video On Demand) is compared to IPTV, common middleware and video codecs used, common IPTV/VOD models, and other factors to consider like digital rights management and user experience. Key aspects of IPTV covered include it being digital TV delivered over managed networks using internet protocols, providing a TV-like quality of service, and enabling features like video on demand and personalization not available with traditional broadcast TV.
IPTV delivers television programming over broadband internet using internet protocols. It requires a subscription and set-top box. The number of IPTV subscribers is forecast to hit 93 million worldwide by 2011. IPTV offers advantages for highly targeted interactive ads that can provide comprehensive analytics. It enables a more personalized relationship between consumers and brands. Challenges include infrastructure costs, market demand between urban and suburban areas, and developing sustainable revenue and content models.
iMinds 2009 Future Media, Prof. Rik Van de Walle (IBBT Multimedia Lab, UGent) (imec.archive)
This document discusses key trends and research challenges in future media. It identifies trends such as the growth of video traffic toward the "Zettabyte Era", emerging high-definition user-generated content, and recognition of many object classes in images and video. Research challenges include advanced video coding for high resolutions, distributed video coding for mobile applications, and multimedia data analysis. The research group has delivered several projects, spin-off companies, patents, and standardization contributions in these areas of future media.
This presentation discusses the basics of video compression, such as the DCT, colour space conversion, and motion compensation. It also discusses standards such as H.264, MPEG-2, and MPEG-4.
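Colour space conversion, one of the compression basics named above, can be illustrated with the standard full-range BT.601 RGB-to-YCbCr transform used in JPEG-style pipelines; this is a generic sketch of that well-known formula, not code from the presentation.

```java
// Full-range BT.601 RGB -> YCbCr conversion (JPEG-style), separating
// luma (Y) from chroma (Cb, Cr) so chroma can later be subsampled.
class ColorSpace {
    // Returns {Y, Cb, Cr}, each clamped to the 0..255 range.
    public static int[] rgbToYCbCr(int r, int g, int b) {
        double y  =       0.299    * r + 0.587    * g + 0.114    * b;
        double cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b;
        double cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b;
        return new int[] { clamp(y), clamp(cb), clamp(cr) };
    }

    private static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }
}
```

Black maps to {0, 128, 128} and white to {255, 128, 128}: neutral greys carry no chroma, which is why chroma planes compress so well.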
The document defines multimedia and its key elements. It discusses how multimedia involves various media like text, graphics, audio, video and animation. It also explains how multimedia applications allow nonlinear interactivity for users to navigate content. Common file formats and authoring tools for developing multimedia are also covered.
The document discusses the benefits of exercise for both physical and mental health. It notes that regular exercise can reduce the risk of diseases like heart disease and diabetes, improve mood, and reduce feelings of stress and anxiety. The document recommends that adults get at least 150 minutes of moderate exercise or 75 minutes of vigorous exercise per week to gain these benefits.
This document provides a survey of adaptive 360-degree video streaming solutions, challenges, and opportunities. It discusses current solutions for streaming 360-degree video over dynamic networks in a viewport-independent, viewport-dependent, and tile-based manner. It also analyzes research challenges for on-demand and live 360-degree video streaming and discusses standardization efforts to ensure interoperability and deployment at scale. The document concludes by outlining future research opportunities enabled by 360-degree video streaming.
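The tile-based approach mentioned in the survey summary can be sketched as a viewport-to-tile mapping: given the viewer's yaw, fetch only the tile columns the viewport overlaps at high quality. The 8-column grid and 90-degree field of view below are illustrative assumptions, not values from the survey.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of tile selection for viewport-dependent 360-degree
// streaming: map a yaw angle to the equirectangular tile columns to
// fetch at high quality. Grid size and FOV are assumptions.
class TileSelector {
    static final int COLS = 8;          // tile columns around 360 degrees of yaw
    static final double FOV_DEG = 90;   // assumed horizontal field of view

    // Returns the tile-column indices overlapping the viewport centred at yawDeg.
    public static List<Integer> visibleColumns(double yawDeg) {
        double tileWidth = 360.0 / COLS;
        List<Integer> cols = new ArrayList<>();
        double start = yawDeg - FOV_DEG / 2;
        double end = yawDeg + FOV_DEG / 2;
        for (int c = 0; c < COLS; c++) {
            double tStart = c * tileWidth, tEnd = tStart + tileWidth;
            if (overlaps(tStart, tEnd, start, end)) cols.add(c);
        }
        return cols;
    }

    private static boolean overlaps(double ts, double te, double vs, double ve) {
        // Yaw wraps around; test the viewport interval and its +/-360 shifts.
        for (int k = -1; k <= 1; k++) {
            double s = vs + 360 * k, e = ve + 360 * k;
            if (ts < e && te > s) return true;
        }
        return false;
    }
}
```

With these assumptions, a viewer facing yaw 0 needs only columns 7 and 0 at full quality; the remaining columns can be fetched at a low-bitrate fallback, which is the bandwidth saving tile-based streaming aims for.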
Consumers' expectations are rising for highly visual and engaging content. With a seasoned systems integrator and an Agile methodology, organizations can quickly and effectively test and deploy multiscreen video delivery, meeting those expectations.
High Efficiency of Media Processing (Amos Kohn)
This document discusses challenges with STB-based media personalization for cable operators and proposes a network-based alternative. STB-based personalization is problematic due to the variety of legacy STBs in homes with limited capabilities, the high costs of more powerful STBs, insufficient infrastructure to support advanced features, bandwidth overload on the access network, and threats to retaining subscribers. A network-based approach could address these issues by performing media processing before content reaches STBs, allowing operators to reuse existing infrastructure for a unified experience across devices while lowering costs and retaining customers. The document outlines coding tools like object-based structures and scalable encoding that could enable such a network-based personalization solution.
VRSafety is a solution by Bit Space Development Ltd. that helps businesses build interactive learning experiences using virtual reality. It uses technology such as the HTC Vive and Oculus Rift to give new entrants a virtual sense of presence on the job site for hazard identification and safety training.
The document proposes a Hybrid Layered Video (HLV) encoding scheme for mobile multimedia applications. The scheme has two components: (1) a sketch-based representation that uses parametric curves to represent object outlines, called Generative Sketch-based Video (GSV); and (2) a texture component with three layers - a low-quality base layer, medium-quality mid-layer, and original-quality highest layer. Different combinations of the GSV and texture layers provide varying quality and resource usage profiles. The scheme aims to enable computer vision tasks on mobile devices in a bandwidth- and power-efficient manner.
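The idea of trading quality against resources by combining the GSV sketch stream with different texture layers can be sketched as a simple profile selector. The profile names and bitrate thresholds below are illustrative assumptions, not figures from the HLV paper.

```java
// Hedged sketch of HLV-style layer selection: pick a combination of the
// GSV sketch stream and texture layers that fits a bandwidth budget.
// Profile names and kbit/s thresholds are illustrative assumptions.
class HlvLayerSelector {
    enum Profile { GSV_ONLY, GSV_PLUS_BASE, GSV_PLUS_MID, FULL_QUALITY }

    public static Profile select(int availableKbps) {
        if (availableKbps >= 2000) return Profile.FULL_QUALITY;   // all texture layers
        if (availableKbps >= 800)  return Profile.GSV_PLUS_MID;   // base + mid texture
        if (availableKbps >= 300)  return Profile.GSV_PLUS_BASE;  // base texture only
        return Profile.GSV_ONLY;                                  // sketch outlines only
    }
}
```

The lowest profile still carries the parametric object outlines, which is what lets vision tasks such as object tracking keep running even when bandwidth or battery is scarce.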
3D video coding & streaming real time of HD (Empirix)
1) The document discusses 3D video coding and streaming of HD/3D video content. It covers topics such as stereoscopy, 3D display technologies, 3D video content creation, and 3D video coding methods.
2) The author's objective is to achieve efficient and reliable transmission of 3D videos over noisy channels using joint source and channel coding (JSCC). This considers both redundancy reduction through compression and error protection through channel coding.
3) The author is involved in several ongoing 3D video projects and their work focuses on determining efficient and reliable JSCC techniques for 3D video, analyzing 3D video coding tools, and defining quality parameters for stereoscopic video streaming on mobile.
Digital transformation and customer care (Miguel Mello)
WebRTC and Sippo can enable real-time communications like audio, video, file sharing and screen sharing across multiple devices. Sippo allows these capabilities in any browser or device. It also supports in-app customer service to reduce costs and provide better customer experiences. Sippo can enable use cases like remote assistance, document reviews, identity verification and telehealth through real-time audio and video calls.
Michael Branam has over 30 years of experience in software development, product management, and architecture design for digital cable, IPTV, and home automation technologies. He holds 21 US patents related to innovations in these fields. Currently, he is the Lead Product Strategy Manager at AT&T Digital Life where he is responsible for developing future product visions and roadmaps.
Michael Branam has over 30 years of experience in software development, product management, and business development roles related to interactive television, home automation, and security products. He has 21 US patents granted and experience leading teams that developed applications for platforms like IPTV, Microsoft Mediaroom, and the AT&T Digital Life platform. Currently, he is the Lead Product Strategy Manager at AT&T Digital Life where he is responsible for the future product strategy vision and roadmap.
The document discusses integrating third party applications and virtual reality systems with the Access Grid network for distributed collaboration. It describes the Access Grid architecture and implementation, and provides several options for integration, including: (1) sharing data and URLs through the Access Grid venue system, (2) developing new shared applications, (3) creating new node services, and (4) using existing multicast streams. It also discusses developing a VR Access Grid client and integrating it with the e-science GridSphere system. Future work may include using the ECT framework for node management and creating a more flexible 3D Access Grid client.
A FRAMEWORK FOR MOBILE VIDEO STREAMING AND VIDEO SHARING IN CLOUD (Journal For Research)
Data transmission has grown over the years across all streams of technology, and video and image data play a very important role in communication around the globe. Media usage on mobile devices exploded years ago. However, traditional networking protocols and service providers deliver poor quality of service. As the number of mobile phone users increases day by day, video traffic over the network also increases, causing service disruptions due to low bandwidth. Because of this, wireless networks cannot satisfy users' demand for video streaming, which leads to long buffering times. Leveraging cloud computing to address this issue, we suggest two solutions: (i) Mobile Video Streaming (MoV) and (ii) Social Video Sharing (SoV). MoV creates a private cloud for each mobile user that adjusts the bit rate based on link feedback using scalable video coding, improving scalability and bandwidth utilization. SoV uses an agent to prefetch video data for effective sharing and to reduce buffering time.
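The per-user bit-rate adjustment described in the abstract can be sketched as a scalable-video layer picker driven by measured throughput. The layer bitrates and the 20% safety margin are illustrative assumptions, not values from the paper.

```java
// Hedged sketch of MoV-style rate adaptation: a per-user agent chooses
// how many SVC layers to stream from the measured link throughput.
class SvcRateAdapter {
    // Cumulative bitrates (kbit/s) of the base layer plus each enhancement
    // layer. The values are illustrative assumptions, not from the paper.
    private static final int[] LAYER_KBPS = { 200, 500, 1200, 2500 };

    // Return the highest layer index whose cumulative bitrate fits the
    // measured throughput, keeping a 20% safety margin against buffering.
    // The base layer (index 0) is always sent, even on very poor links.
    public static int chooseLayers(int measuredKbps) {
        int budget = (int) (measuredKbps * 0.8);
        int chosen = 0;
        for (int i = 0; i < LAYER_KBPS.length; i++) {
            if (LAYER_KBPS[i] <= budget) chosen = i;
        }
        return chosen;
    }
}
```

Because SVC layers are cumulative, stepping the layer count up or down changes quality without re-encoding, which is what makes this cheap to run in a per-user cloud agent.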
Video conferencing allows people in different locations to communicate face-to-face in real-time. It works by using microphones, webcams, displays, and software to capture and transmit video and audio streams between participants. There are two types: point-to-point calls between two locations, and multi-point calls between three or more locations. As demand for video conferencing grows, solutions need robust, scalable infrastructure to deliver high quality experiences across networks using standards like H.264 and SIP.
A Set-top-Box (STB) is a very common name heard in the consumer electronics market. It is a device that is attached to a Television for enhancing its functions or the quality of its functions. On the other side, the STB is connected to an external source of signal, like satellite, cable, terrestrial or internet. The STB processes the signal it receives, turns it into content, which is then displayed on the television screen or other display device. There are different types of STBs based on what kind of signals it can receive and what kind of processing it can do. The most widely used STBs are DVB STBs, which receive DVB (Digital Video Broadcast) transmission.
1) Arneb is a web-based video annotation tool that allows multiple human annotators to collaboratively annotate videos with structured semantic annotations using ontologies.
2) The tool was developed for the EU VidiVideo project to generate ground truth annotations for training automatic video annotation systems.
3) Annotations can be exported in MPEG-7 and OWL ontology formats to provide interoperability, and the tool has been used to annotate over 25,000 annotations of broadcast video by professional archivists.
Movico has designed a range of offerings – products, solutions and services to help content owners move their video inventory to digital platforms and publish/monetize the same. Movico believes that as valued trusted partners to content owners in this journey, we will be able to build a scalable business for our stakeholders – investors, employees and partners.
Mobile-Based Video Caching Architecture Based on Billboard Manager csandit
Video streaming services are very popular today. Increasingly, users can now access multimedia applications and video playback wirelessly on their mobile devices. However, a significant challenge remains in ensuring smooth and uninterrupted transmission of almost any
size of video file over a 3G network, and as quickly as possible in order to optimize bandwidth consumption. In this paper, we propose to position our Billboard Manager to provide an optimal transmission rate to enable smooth video playback to a mobile device user connected to
a 3G network. Our work focuses on serving user requests by mobile operators from cached resource managed by Billboard Manager, and transmitting the video files from this pool. The
aim is to reduce the load placed on bandwidth resources of a mobile operator by routing away as much user requests away from the internet for having to search a video and, subsequently, if located, have it transferred back to the user.
How Open Data Can Enhance Interactive TelevisionLinkedTV
The presentation was delivered by Lyndon Nixon, STI International Consulting and Research GmbH, Austria, during the ngnlab.eu Workshop http://ngnlab.eu/index.php/ngnlabeu-workshop, held in Bratislava during September 20th, 2012. The workshop was co-located with the 5th joint IFIP Wireless and Mobile Networking Conference (WMNC 2012 http://wmnc.fiit.stuba.sk.
Purpose of the workshop is bringing together researchers and experts from academia as well as from business which came from Germany, Nederlands, Spain, Austria and Slovakia.
The document discusses enhanced or interactive television (ETV) and the components involved in creating and delivering ETV applications. It describes ETV as video programming with an interactive application bound to it. Key components discussed include ETV application programs and data, signaling, stream events/triggers, media timeline, and application servers. It also outlines the process of authoring ETV applications and distributing them through various networks.
Similar to OBJECT-MEDIA: FROM PERSONALISATION TO A SEAMLESS TV/VR CONVERGENCE (20)
Ready to Unlock the Power of Blockchain!Toptal Tech
Imagine a world where data flows freely, yet remains secure. A world where trust is built into the fabric of every transaction. This is the promise of blockchain, a revolutionary technology poised to reshape our digital landscape.
Toptal Tech is at the forefront of this innovation, connecting you with the brightest minds in blockchain development. Together, we can unlock the potential of this transformative technology, building a future of transparency, security, and endless possibilities.
Discover the benefits of outsourcing SEO to Indiadavidjhones387
"Discover the benefits of outsourcing SEO to India! From cost-effective services and expert professionals to round-the-clock work advantages, learn how your business can achieve digital success with Indian SEO solutions.
Gen Z and the marketplaces - let's translate their needsLaura Szabó
The product workshop focused on exploring the requirements of Generation Z in relation to marketplace dynamics. We delved into their specific needs, examined the specifics in their shopping preferences, and analyzed their preferred methods for accessing information and making purchases within a marketplace. Through the study of real-life cases , we tried to gain valuable insights into enhancing the marketplace experience for Generation Z.
The workshop was held on the DMA Conference in Vienna June 2024.
HijackLoader Evolution: Interactive Process HollowingDonato Onofri
CrowdStrike researchers have identified a HijackLoader (aka IDAT Loader) sample that employs sophisticated evasion techniques to enhance the complexity of the threat. HijackLoader, an increasingly popular tool among adversaries for deploying additional payloads and tooling, continues to evolve as its developers experiment and enhance its capabilities.
In their analysis of a recent HijackLoader sample, CrowdStrike researchers discovered new techniques designed to increase the defense evasion capabilities of the loader. The malware developer used a standard process hollowing technique coupled with an additional trigger that was activated by the parent process writing to a pipe. This new approach, called "Interactive Process Hollowing", has the potential to make defense evasion stealthier.
OBJECT-MEDIA: FROM PERSONALISATION TO A SEAMLESS TV/VR CONVERGENCE
1. This presentation proposes in-video personalisation using object-based media for the insertion of externally sourced video placements into broadcast content (real-time or on-demand). Placement content is selected in accordance with viewer profiles, and may be produced independently of the source video.

FROM PERSONALISATION TO A SEAMLESS TV/VR CONVERGENCE

Jeremy Foss, Birmingham City University, UK
jeremy.foss@bcu.ac.uk
Alexandre Ulisses, MOG Technologies, Portugal
alexandre.ulisses@mog-technologies.com
Nicolas Monnoyer, Big Bad Wolf, Belgium
n.monnoyer@bigbadwolf.be

Using this technology, user-object interaction is also possible. Use cases include personalised product placement, training, education, and accessibility for hearing- and visually impaired viewers. The technology specifically fits existing distribution platforms (cable, satellite, DTTV, IPTV/OTT) with minimal infrastructure upgrade.

Birmingham City University, MOG Technologies and Big Bad Wolf are all involved in the development of the production, imaging and delivery technologies to realise this next generation of a seamless TV and VR convergence.

This presentation was given at the International Broadcasting Conference Futurezone, IBC 2016, Amsterdam, September 2016.
2.
Personalised Interactive TV
This concept has been presented previously: video objects are selected in accordance with a viewer’s profile and delivered to the user for insertion into a broadcast video. The proposal also allows placements to be manipulated by users for additional interaction, including with collaborative groups across the network.
Example applications are:
• advertising, specifically for personalised product placement
• accessibility, where the programme may be augmented with
audio objects to assist visually impaired viewers, or video
objects to assist hearing impaired viewers
• entertainment, including collaborative team interaction for personalised games in the broadcast domain
• education, including personalised documentaries.
Video Objects - selection for the viewer
Personalised placements are achieved with user profiling. Profiles are accrued on the basis of a number of available parameters, including demographics, geography, TV consumption behaviour, and online behaviour (including social network activities, if made available).
In-Video Personalised Interactive TV
The placement insertion is the product of matching the source programme, the user profile and the placement object.
Personalisation of Networked
Video (J D Foss, B Malheiro, J C
Burguillo-Rial); EuroITV, Berlin,
Germany, July 2012
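As a minimal illustration of this three-way matching of source programme, viewer profile and placement object, the sketch below filters candidate placements by the source programme's constraints and then ranks the remainder against the viewer profile. All field names, the scoring rule and the example data are illustrative assumptions, not part of the proposal:

```python
# Minimal sketch of three-way placement matching: source-programme
# constraints filter the candidates, then the viewer profile ranks them.
# All field names and the scoring rule are illustrative assumptions.

def select_placement(source_slot, candidates, profile):
    """Return the best-matching placement object, or None."""
    # 1. Filter: candidates must satisfy the source metadata constraints.
    eligible = [c for c in candidates
                if c["type"] == source_slot["required_type"]
                and c["colour"] not in source_slot["excluded_colours"]]

    # 2. Rank: score remaining candidates against the viewer profile.
    def score(c):
        return sum(1 for interest in profile["interests"]
                   if interest in c["tags"])

    return max(eligible, key=score, default=None)

# Example mirroring the deck's "type = car, colour: not blue, green" slot.
source_slot = {"required_type": "car", "excluded_colours": {"blue", "green"}}
candidates = [
    {"type": "car",   "colour": "bronze", "tags": ["sports", "luxury"]},
    {"type": "car",   "colour": "blue",   "tags": ["family"]},
    {"type": "truck", "colour": "violet", "tags": ["haulage"]},
]
profile = {"interests": ["sports", "travel"]}

print(select_placement(source_slot, candidates, profile)["colour"])  # bronze
```

In practice this correlation would be driven by the descriptive metadata discussed later in the deck rather than by hand-written dictionaries.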
3.
Metadata and Scene Graphing
Descriptive metadata is essential to enable this. Source video content needs descriptions of its placement opportunities; the placements themselves need metadata to enable matching and selection to fit the source video. The selection would typically be made with a three-way correlation between the source metadata, the placement metadata and the viewer profile.
The source video may be delivered over one platform (DVB or IP) whilst the video objects are delivered by IP (over the WWW, c.f. HbbTV). Consequently a scene graph file is delivered with the content to inform the receiving platform (STB, etc.) how the objects and source video are to be rendered into the final personalised video. Placement objects may need processing to seamlessly fit the source video; Birmingham City University is researching this area.
Distribution to the Viewer
The proposal requires minimal changes to existing architectures. The
delivery platform supports all main distribution platforms.
Object based TV and VR
We then extend this proposal: VR objects may be correlated with
object-based TV to lead to a TV / VR seamless convergence for
advertising, entertainment, etc., and so realise a personalized
collaborative broadcast VR experience.
4. Production issues for Broadcast VR
The vision for VR in a broadcast context is that at particular points of the programme the
viewer will be able to immerse themselves deeper into various details of the subject with
an interactive experience in VR, and with a number of collaborators for entertainment
experiences or informative programming.
Clearly one of the main challenges is the capture and management of the number of three-dimensional objects required for a full virtual environment. This won’t be possible for all objects throughout a production. For example, much of the production may remain as a 360º surround playout, with appropriate objects available as 3D interactive objects in the scene. A production challenge here is to maintain the viewer’s focus on the plot and key interest features of the production.
A viewer of the initial 2D video will need a seamless experience in selection of the VR
option and the transition to the VR element of the production.
Workflow processes
The production requires object management within the timeline of the video to maintain
coherence in the video and correlation with any related virtual environment.
Personalisation and interactivity requires scene graphing and metadata technologies.
So metadata generation and management will be a major requirement. Identification of
detail in the source video may be real time or in post-production.
Placements for advertising and product placement will typically be generated and managed by advertising agencies. Metadata is to be derived for items to be promoted and advertised as available for selection, and will require hosting in web-based libraries.
Viewer Profiling
Audio or video objects are selected for placement into the video playout for in-video personalisation. Where more than one candidate object is identified, the user profile is used to make the selection.
Profiles may be built and managed from user attributes, typically from
demographics, geography, social network usage, TV consumption and
interactive behavior.
An example is user location, which can be identified from IP or Wi-Fi access-point identification, particularly in retail outlets, etc. The location context can be utilised for personalised promotional messages. Interactive TV can encourage viewers to impart further information into the TV domain, building an increasingly finer-grained profile of the viewer.
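A viewer profile of this kind can be sketched as a simple record that is refined as new signals arrive. The attribute names and the access-point-to-location lookup below are illustrative assumptions, not part of the proposal:

```python
# Illustrative sketch of accruing a viewer profile from available signals.
# Attribute names and the access-point lookup table are assumptions.

profile = {
    "demographics": {"age_band": "25-34"},
    "location": None,
    "tv_consumption": [],
    "interests": set(),
}

# Location inferred from a known Wi-Fi access point (e.g. a retail outlet).
KNOWN_ACCESS_POINTS = {"ap-retail-042": "Birmingham, Bullring"}

def update_location(profile, ap_id):
    profile["location"] = KNOWN_ACCESS_POINTS.get(ap_id)

# Interactive behaviour feeds back into a finer-grained profile.
def record_interaction(profile, programme, tag):
    profile["tv_consumption"].append(programme)
    profile["interests"].add(tag)

update_location(profile, "ap-retail-042")
record_interaction(profile, "motoring documentary", "sports cars")

print(profile["location"])            # Birmingham, Bullring
print(sorted(profile["interests"]))   # ['sports cars']
```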
Group Profiling
In many cases the programme is viewed by several viewers together; alternatively, a group of colleagues may be watching the same programme across the network. In these cases a group profile may be realised (not least to keep viewing family-friendly). This is an ongoing area of research at Birmingham City University.
5. Selection of Object Placements for the Viewer – Metadata and Scene Graphing
Descriptive metadata for the source content will define the time and spatial requirements for placements to be inserted into the original video. Producers will wish to maintain control and editorial integrity over acceptable placements from external sources into their production. These conditions need to be defined within the source metadata.

Similarly, the placement objects are also to be defined in metadata to enable correlation with (i) the source content requirements and (ii) the viewer profiles.

MPEG-7 is suggested as a suitable metadata solution: it is suitably rich and full-featured (though complex), but may need extensions to support matching external objects into source content.
A scene graph document is required to be sent to the viewer platform so the object can be rendered into the source video at the required time and position, with qualifiers for position, motion, etc. MPEG-4 Part 11 is an example of scene graphing, although a new standard should be considered to support the demands of personalised and interactive broadcast media.
Broadcast Workflow Developments to Support Personalisation and Interaction: workflows will evolve, with enhancements to standards and processes including metadata management, scene graph processing, and coding of objects for delivery.
[Diagram: a source video timeline (Scene 1 | Scene 1 | Scene 2 | Scene 2) with per-viewer placements selected from object placement libraries.]

Source Video – Video Metadata File
Scene 2, Time Code ### = Placement 1
• Placement 1 – type = car
• Colour: not blue, not green
• Manufacture: after 2010
• Position [x, y, z]

Scene Graph File – Viewer #1
Time Code ###
• Placement 1: [Kenworth Truck]
• Position [x, y]
• [Update info for motion …]

Scene Graph File – Viewer #2
Time Code ###
• Placement 1: [Aston Martin]
• Position [x, y]
• [Update info for motion …]

Object Placement Libraries – Placement Metadata Files
• Type: Aston Martin Vantage; Colour: Bronze; Positioning constraints
• Type: Kenworth Truck; Colour: Blue / Violet; Positioning constraints
• Type: Covered Wagon; Colour: Be…; Copyright / terms of usage: 1500 inserts
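In machine-readable form, a per-viewer scene graph entry and its matching placement metadata of the kind shown above might be carried as small structured documents. The JSON field names below are purely illustrative; the proposal does not define a schema:

```python
import json

# Hypothetical serialisation of one per-viewer scene graph entry plus the
# matching placement metadata; all field names are illustrative only.
scene_graph_entry = {
    "viewer": 2,
    "timecode": "00:12:30:00",
    "placements": [{
        "object": "Aston Martin Vantage",
        "position": {"x": 640, "y": 360},
        "motion": {"dx": -2, "dy": 0},   # per-frame update vector
    }],
}

placement_metadata = {
    "type": "Aston Martin Vantage",
    "colour": "Bronze",
    "positioning_constraints": {"min_scale": 0.5, "max_scale": 1.2},
}

# Round-trip through JSON, as a web-delivered scene graph file might be.
doc = json.dumps({"scene_graph": scene_graph_entry,
                  "placement": placement_metadata}, indent=2)
parsed = json.loads(doc)
print(parsed["scene_graph"]["placements"][0]["object"])  # Aston Martin Vantage
```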
6. Viewer Interaction with the Media Objects
Object-based video (and audio) allows placement of individual video and audio items at a specific position and time in the video playout.

Object placement is defined by simple scene-graph vectors, so with simple feedback, e.g. from an app, the user can interact with the vector data to reposition the object. This is detailed in the delivery platform description later in this presentation.
Applications include interactive entertainment and gaming, and
exploring products for immersive advertising.
Online colleagues connected to the same session will also receive the object vectors, and so realise a shared experience.
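The repositioning loop can be sketched as follows: the app sends a small delta vector, the Scene Server applies it to the placement's position, and the updated vector is fanned out to every member of the group. The class and method names here are hypothetical, not part of the proposal:

```python
# Sketch of viewer interaction via scene-graph vectors: an app sends a
# delta, the server updates the placement and shares it with the group.
# Class and method names are illustrative assumptions.

class SceneServer:
    def __init__(self):
        self.position = {"x": 100, "y": 200}   # current placement vector
        self.group = []                        # connected group viewers

    def join(self, viewer):
        self.group.append(viewer)

    def apply_feedback(self, dx, dy):
        """Apply a repositioning delta sent from a viewer's app."""
        self.position["x"] += dx
        self.position["y"] += dy
        # Fan the updated vector out so the group shares the experience.
        for viewer in self.group:
            viewer.append(dict(self.position))

alice_view, bob_view = [], []
server = SceneServer()
server.join(alice_view)
server.join(bob_view)
server.apply_feedback(dx=20, dy=-10)

print(alice_view[-1])                    # {'x': 120, 'y': 190}
print(alice_view[-1] == bob_view[-1])    # True
```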
The Broadcast VR Experience
Object-based video is an intermediate format between TV and VR and
provides a convenient platform for the viewer to transition from a 2D
TV playout to a fully immersive VR experience.
This opens up a path of convergence between personalised, interactive broadcast TV and VR (with the same collaboration, interaction and personalisation). This convergence between TV and VR is discussed further on the following pages.
From Interactive TV … Towards a Seamless TV/VR Convergence
[Figure: interactive scene shared in a social group of viewers]
7. TV/VR Seamless Services
Advertising and Brand Awareness: as discussed in “How webVR is going to change brand experience”, we look towards VR to build brand awareness. With a close relationship with TV, we can deliver a tighter integration of advertising and programme content, and build in shared-viewer advertising.
Social Media – Sharing User Media (Objects): the experience can also allow user-generated media objects to be fed back into the VR world and shared with collaborating colleagues; this could be linked with the collaborators’ social network activities, photo uploads, etc.
The technical vision here allows a tight integration and seamless services
between TV and VR worlds. Shared, interactive and personalised activities can
be experienced by a range of collaborators using both platforms, and for a
variety of programme types - entertainment, education (e.g. personalised
interactive documentaries), accessibility, etc.
TV to VR “Breakout”: one method of achieving this is for the production to allow the viewer to choose to “break out” from a TV programme and initiate the full interactive 3D experience, exploring the virtual world. For example, advertising messages become much more informative when the viewer is allowed to explore, say, the cockpit of a sports car. VR can also add understanding to abstract products, for example by exploring the time and monetary values of a savings plan. (See “How webVR is going to change brand experience” – Nicolas Monnoyer; linked from the online version of this presentation.)
[Figure: linear TV with object placements; the viewer initiates a VR session. Objects may be transitioned from the TV to the VR world.]
8. Delivery Platform
The distribution solution addresses all delivery platforms – DVB cable/satellite/terrestrial and IPTV/OTT. The network equipment requires little additional infrastructure – mainly (i) the object libraries and (ii) the Scene Server; both of these elements are web-based.

The platform is therefore an advanced version of a Hybrid Broadcast Broadband (Hbb) solution. We are currently addressing pre-rendered personalisation from cloud-based playouts.
The proposed architecture supports in-video personalisation with object placements; interaction with those placements (if allowed by the production); and initiation of a VR session relating to the video programme.
9. Delivery Platform – Walkthrough

[Diagram: the headend Video Server delivers the source video over DVB (cable, satellite, DTTV) or IPTV. Over the WWW, the Scene Server delivers the personalised scene graph and selected objects (chosen by object selection against the viewer's profile), the viewer returns feedback and interaction via a phone app, and the VR Server delivers the source virtual world and selected objects for a personalised virtual world, with the viewer initiating the VR session (synchronisation/correlation with the playout).]

Delivery Platform – 1 – Standard Video Playout
This is a standard video playout, using either DVB or IP networks. Now let’s personalise this video for the viewer.

Delivery Platform – 2 – In-Video Personalisation
The viewer’s profile is used to select an appropriate placement object from a library. The object is delivered (via IP/WWW) to the viewer STB for rendering into the source video. The Scene Server calculates and serves the scene graph, which relates the object placements to the source video (in terms of spatial placement, time of entry, etc.).

Delivery Platform – 3 – Interaction
The viewer can use an app to send simple data back to the Scene Server to modify the vectors of the placement objects, and so interact with the placement object. This data can be shared with a number of colleagues online, realising collaborative interactivity with content personalised for the group, within a broadcast video (potentially real-time).

Delivery Platform – 4 – VR Broadcast and Personalisation
At specific points in the playout, the viewer may initiate a download of the associated VR world relating to the broadcast video. The VR world may include personalised objects and interactivity, as described above.
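As an illustrative sketch only (the function names and payloads are assumptions, not part of the proposal), the four delivery-platform steps can be tied together as a simple event sequence:

```python
# Illustrative end-to-end sequence for the four delivery-platform steps.
# Function names and payloads are assumptions, not part of the proposal.

def playout(programme):                       # 1 - standard playout (DVB/IP)
    return {"programme": programme, "events": []}

def personalise(session, obj):                # 2 - object selected via profile
    session["events"].append(("insert", obj))

def interact(session, delta):                 # 3 - app feedback to Scene Server
    session["events"].append(("reposition", delta))

def enter_vr(session):                        # 4 - viewer initiates VR session
    session["events"].append(("vr", "download personalised world"))

session = playout("motoring documentary")
personalise(session, "Aston Martin Vantage")
interact(session, (20, -10))
enter_vr(session)

print([kind for kind, _ in session["events"]])  # ['insert', 'reposition', 'vr']
```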