Broadcast and broadband networks continue to be separate worlds in the video consumption business. Some initiatives, such as HbbTV, have built a bridge between the two worlds, but their application is largely limited to providing links over the broadcast channel to content providers’ applications, such as catch-up TV services. In practice, the user is using either one network or the other.
H2B2VS is a Celtic-Plus project aiming at exploiting the potential of real hybrid networks by implementing efficient synchronization mechanisms and using new video coding standards such as High Efficiency Video Coding (HEVC). The goal is to develop successful hybrid network solutions that enable value-added services with optimum bandwidth usage in each network and with clear commercial applications. An example of the potential of this approach is the transmission of Ultra-HD TV by sending the main content over the broadcast channel and the required complementary information over the broadband network. This technology can also improve the lives of people with disabilities: deaf viewers receive, through the broadband network, a sign language translation of a programme sent over the broadcast channel; the TV set then displays this translation in an inset window.
One of the most important contributions of the project is developing and testing synchronization methods between two different networks that offer unequal qualities of service with significant differences in delay and jitter.
In this paper, the main technological contributions of the project are described, including SHVC, the scalable extension of HEVC, with a special focus on the synchronization solution adopted by MPEG and DVB. The paper also presents some of the implemented practical use cases, such as the sign language translation described above, together with their performance results, so as to evaluate the commercial applicability of this type of solution.
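The core synchronization problem, pairing content that arrives over two paths with very different delay and jitter, can be illustrated with a deliberately simplified sketch. The function name, tolerance value, and data layout below are illustrative and not taken from the H2B2VS specification: access units from the two paths are matched by presentation timestamp, and the broadcast content is presented alone whenever the broadband path has no unit close enough in time.

```python
def align_streams(main, aux, tolerance=0.020):
    """Pair access units from the broadcast (main) and broadband (aux)
    paths by presentation timestamp.

    main, aux -- lists of (pts_seconds, payload), sorted by timestamp.
    Returns a list of (main_payload, aux_payload_or_None).
    """
    paired = []
    j = 0
    for pts, payload in main:
        # Skip auxiliary units that are already too old to match.
        while j < len(aux) and aux[j][0] < pts - tolerance:
            j += 1
        if j < len(aux) and abs(aux[j][0] - pts) <= tolerance:
            # Timestamps agree within tolerance: present both together.
            paired.append((payload, aux[j][1]))
            j += 1
        else:
            # No matching auxiliary unit arrived: show main content alone.
            paired.append((payload, None))
    return paired
```

In a real receiver, the tolerance, buffer depth, and clock recovery would be driven by a common timeline signalled in both streams; the sketch only shows the pairing logic.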
Requiring only half the bitrate of its predecessor, the new standard – HEVC or H.265 – will significantly reduce the need for bandwidth and expensive, limited spectrum. HEVC (H.265) will enable the launch of new video services and in particular ultra HD television (UHDTV).
State-of-the-art video compression techniques – HEVC/H.265 – can reduce the size of raw video by a factor of about 100 without any noticeable reduction in visual quality. With estimates indicating that compressed real-time video accounts for more than 50 percent of current network traffic, and this figure is set to rise to 90 percent within a few years, HEVC/H.265 will be a welcome relief for network operators.
New services, devices and changing viewing patterns are among the factors contributing to the growth in video traffic as people watch more and more traditional TV and video-streaming services on their mobile devices.
Ericsson has been heavily involved in the standardization of HEVC since it began in 2010, and this Ericsson Review article highlights some of the contributions that have led to the compression efficiency offered by HEVC.
Motion Vector Recovery for Real-time H.264 Video Streams – IDES Editor
Among the various network protocols that can be used to stream video data, RTP over UDP is best suited for real-time streaming of H.264-based video. Video transmitted over a communication channel is highly prone to errors, which can become critical when UDP is used. In such cases, real-time error concealment becomes important. A subclass of error concealment is motion vector recovery, which is used to conceal errors at the decoder side. Lagrange interpolation is the fastest and a popular technique for motion vector recovery. This paper proposes a new system architecture that enables RTP/UDP-based real-time video streaming as well as Lagrange-interpolation-based real-time motion vector recovery in H.264-coded video streams. The fully open-source FFmpeg framework, which includes an H.264 codec, is chosen to implement the proposed system. The implementation was tested against standard benchmark video sequences, and the quality of the recovered videos was measured at the decoder side using various quality metrics. Experimental results show that real-time motion vector recovery does not introduce any noticeable difference or latency during display of the recovered video.
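As a rough illustration of the Lagrange-interpolation approach (a generic sketch, not the paper's actual implementation), a lost motion-vector component can be estimated by fitting a Lagrange polynomial through the corresponding components of correctly received neighbouring blocks and evaluating it at the lost block's position:

```python
def lagrange_interpolate(points, x):
    """Evaluate the Lagrange polynomial through `points` at `x`.

    points -- list of (x_i, y_i) samples with distinct x_i.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def recover_mv(neighbor_mvs, lost_index):
    """Estimate one motion-vector component of a lost block.

    neighbor_mvs -- (block_index, mv_component) pairs from neighbours.
    lost_index   -- position of the lost block along the same axis.
    """
    # MV components are integers (or fixed-point), so round the estimate.
    return round(lagrange_interpolate(neighbor_mvs, lost_index))
```

Each of the horizontal and vertical components would be recovered independently; real decoders repeat this per lost macroblock using whichever spatial or temporal neighbours are available.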
The 131st WG 11 (MPEG) meeting was held online, 29 June – 3 July 2020
Table of Contents
WG11 (MPEG) Announces VVC – the Versatile Video Coding Standard
Point Cloud Compression – WG11 (MPEG) promotes a Video-based Point Cloud Compression Technology to the FDIS stage
MPEG-H 3D Audio – WG11 (MPEG) promotes Baseline Profile for 3D Audio to final stage
Call for Proposals on Technologies for MPEG-21 Contracts to Smart Contracts Conversion
WG11 (MPEG) issues a Call for Proposals on extension and improvements to ISO/IEC 23092 standard series
Widening support for storage and delivery of MPEG-5 EVC
Multi-Image Application Format adds support of HDR
Carriage of Geometry-based Point Cloud Data progresses to Committee Draft
MPEG Immersive Video (MIV) progresses to Committee Draft
Neural Network Compression for Multimedia Applications – WG11 (MPEG) progresses to Committee Draft
WG11 (MPEG) issues Committee Draft of Conformance and Reference Software for Essential Video Coding (EVC)
This paper presents an overview of the latest video coding standard, High Efficiency Video Coding (HEVC). It also presents a performance comparison of the two latest video coding standards, H.264/MPEG-AVC and H.265/MPEG-HEVC. According to the experimental results, which were obtained for a whole test set of video sequences using similar encoding configurations, H.265/MPEG-HEVC provides significant average bit-rate savings of around 40%.
Keywords: CABAC, CAVLC, H.264/AVC, HEVC, PSNR, SBAC.
Distributed Adaptation Decision-Taking Framework and Scalable Video Coding Tu... – mgrafl
Existing and future media ecosystems need to cope with the ever-increasing heterogeneity of networks, devices, and user characteristics, collectively referred to as (usage) context. The key to addressing this problem is media adaptation to various and dynamically changing contexts in order to provide a service quality that is regarded as satisfactory by the end user. The adaptation can be performed in many ways and at different locations, e.g., at the edge and within the network, resulting in a substantial number of issues to be integrated within a media ecosystem. This paper describes research challenges, key innovations, target research outcomes, and achievements so far for edge and in-network media adaptation by introducing the concept of Scalable Video Coding (SVC) tunneling.
bitdash - Simple & Easy MPEG-DASH Player for Web and Mobile – Bitmovin Inc
bitdash is an MPEG-DASH player for HTML5 using the Media Source Extensions API, as well as for Flash-based MPEG-DASH playback. Using the HTML5 Encrypted Media Extensions, it is possible to support MPEG-CENC-based DRM. That is the right solution for the next generation of online video services!
Billions of people are watching valuable TV content and advertising on a daily basis.
Different distribution networks transport this content from the content owner to the
consumer. The consumer has the choice to receive a full set of TV channels from many
service providers, be it telco, cable, terrestrial or DTH operators.
Subjective quality evaluation of the upcoming HEVC video compression standard – Touradj Ebrahimi
Slides of my presentation at SPIE Optics+Photonics 2012 Applications of Digital Image Processing XXXV, San Diego, August 12-16, 2012
Paper available at: http://infoscience.epfl.ch/record/180494
A Distributed Delivery Architecture for User Generated Content Live Streaming... – Alpen-Adria-Universität
Live User Generated Content (UGC) has become very popular in today’s video streaming applications, in particular gaming and e-sports. However, streaming UGC presents unique challenges for video delivery. When dealing with the technical complexity of managing hundreds or thousands of geographically distributed concurrent streams, UGC systems are forced to make difficult trade-offs between video quality and latency. To bridge this gap, this paper presents a fully distributed architecture for UGC delivery over the Internet, termed QuaLA (joint Quality-Latency Architecture). The proposed architecture aims to jointly optimize video quality and latency for better user experience and fairness. Using the proximal Jacobi alternating direction method of multipliers (ProxJ-ADMM) technique, QuaLA provides a fully distributed mechanism to achieve an optimal solution. We demonstrate the effectiveness of the proposed architecture through real-world experiments on the CloudLab testbed. Experimental results show that QuaLA outperforms conventional client-driven solutions, achieving high quality with more than 57% improvement while preserving a good level of fairness and respecting a given target latency among all clients.
FLEXIBLE FEC AND FAIR SELECTION OF SCALABLE UNITS FOR HIGH QUALITY VOD STREAM... – IJMIT JOURNAL
Providing high-quality video on demand (VoD) streaming service over wireless networks is very challenging due to the limited capacity and error-proneness of the wireless environment. We propose a flexible forward error correction (FEC) scheme and a fair selection scheme for scalable units that utilize the layered coding structure of H.264/SVC (scalable video coding). Three error-resilient techniques (unequal error protection, FEC, and retransmission) are adapted to minimize the total distortion of the VoD streaming service. For flexible FEC, a rateless FEC code is adopted. The FEC code rates are based on the possible number of retransmissions, the condition of the wireless channel, and the layered coding structure of H.264/SVC for each packet. A theoretical study is performed to show how to utilize the possible number of retransmissions for an adaptive FEC code rate. With fair selection, regular and retransmission-requested packets compete for resources without fixing the retry limit. Thus, excessive retransmission is prevented and the proposed scheme effectively provides capacity-limited and delay-constrained VoD streaming services. For this fair selection of scalable units, we formulate the problem using binary integer programming and propose an effective low-complexity selection algorithm based on a priority index. The proposed algorithm prioritizes packets according to the priority index and the H.264/SVC structure. We show that the proposed scheme can minimize the total video distortion compared to other heuristic procedures. The effects of various other factors on the performance of the new scheme are also considered.
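The interplay between the remaining retransmission budget and the FEC code rate can be sketched with a toy model (the margin constants and the 50 % redundancy cap below are made up for illustration, not taken from the paper): if each transmission attempt is lost independently with probability p, the residual loss probability after r remaining retransmission opportunities is roughly p^(r+1), and the FEC redundancy is sized to cover that residual loss, with extra protection for the SVC base layer.

```python
def fec_code_rate(p_loss, retx_left, base_layer=True):
    """Toy adaptive FEC code rate (fraction of symbols carrying source data).

    p_loss    -- packet loss probability per transmission attempt.
    retx_left -- remaining retransmission opportunities for this packet.
    """
    # Residual loss after exhausting retransmissions, assuming
    # independent losses: initial try plus retx_left retries.
    residual = p_loss ** (retx_left + 1)
    # Protect the SVC base layer more strongly than enhancement layers.
    margin = 2.0 if base_layer else 1.2
    # Cap the redundancy at 50 % of the block.
    overhead = min(0.5, residual * margin)
    return 1.0 - overhead
```

The qualitative behaviour matches the paper's idea: the more retransmission opportunities remain, the weaker the FEC protection needs to be, and base-layer packets get lower code rates (more redundancy) than enhancement-layer packets.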
PERFORMANCE EVALUATION OF H.265/MPEG-HEVC, VP9 AND H.264/MPEG-AVC VIDEO CODING – ijma
This study evaluates the performance of the three latest video codecs: H.265/MPEG-HEVC, H.264/MPEG-AVC and VP9. The evaluation is based on both subjective and objective quality metrics. The Double Stimulus Impairment Scale (DSIS) assessment method is used to evaluate the subjective quality of the compressed video sequences, and the Peak Signal-to-Noise Ratio (PSNR) metric is used for the objective evaluation. Moreover, this work studies the effect of frame rate and resolution on the encoders’ performance. An extensive set of experiments is conducted with similar encoding configurations for the three studied encoders. The evaluation results show that H.265/MPEG-HEVC provides superior bitrate-saving capabilities compared to H.264 and VP9. However, VP9 shows lower encoding time than H.265/MPEG-HEVC but higher encoding time compared to H.264.
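The PSNR metric used for the objective evaluation above is straightforward to compute; a minimal sketch over flat lists of pixel values follows (real evaluations operate on full frames, usually on the luma plane, and average over a sequence):

```python
import math

def psnr(original, decoded, max_val=255):
    """Peak Signal-to-Noise Ratio (dB) between two equal-length
    sequences of pixel values."""
    mse = sum((o - d) ** 2 for o, d in zip(original, decoded)) / len(original)
    if mse == 0:
        # Identical signals: PSNR is unbounded.
        return float('inf')
    return 10 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the decoded video is closer to the original; codec comparisons like the one above typically report PSNR at matched bitrates.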
Excerpts from the HEVC / H265 Hands-on course.
This part of the course explains how to download the reference code (HM), compile it, configure it, and analyze the video output.
Novel Approach for Evaluating Video Transmission using Combined Scalable Vide... – IJECEIAES
One of the main problems in video transmission is bandwidth fluctuation in the wireless channel. Therefore, there is an urgent need for efficient bandwidth utilization methods. This research utilizes Combined Scalable Video Coding (CSVC), which comes from the Joint Scalable Video Model (JSVM). In combined scalable video coding, we implement Coarse Grain Scalability (CGS) and Medium Grain Scalability (MGS). We propose a new scheme that can be implemented in Network Simulator II (NS-2) over a wireless broadband network. The advantages of this new scheme over other schemes are that it is more realistic and based on an open-source program. The results show that the CSVC implementation in MGS mode outperforms CGS mode.
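The bandwidth-adaptation idea behind scalable coding can be sketched as follows (an illustrative helper, not part of JSVM or the paper's scheme): given the per-layer bitrates of a scalable bitstream, a sender or extractor keeps the largest prefix of layers whose cumulative bitrate fits the measured channel bandwidth, so quality degrades gracefully as the channel fluctuates.

```python
def extract_layers(layers, bandwidth_kbps):
    """Select the scalable layers to transmit for a given bandwidth.

    layers -- list of (name, bitrate_kbps), base layer first; each layer
              depends on all layers before it, so only prefixes are valid.
    Returns the names of the layers that fit.
    """
    chosen, total = [], 0
    for name, rate in layers:
        if total + rate > bandwidth_kbps:
            break  # this layer (and everything above it) must be dropped
        chosen.append(name)
        total += rate
    return chosen
```

With CGS or MGS enhancement layers, dropping from the top of the list corresponds to coarser or finer quality steps respectively; MGS's finer granularity is what lets it track bandwidth fluctuations more closely.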
Content distribution to professional users can easily be combined with DTH. As video content is abundant at the video headend, it is the right place for distributing content to, for example, cinema venues via file transfer. This Solution Overview details how ST Engineering iDirect’s M6100 Broadcast Satellite Modulator, the MCX7000 Multi-Carrier Satellite Gateway and the Dialog® platform support these three aspects of DTH.
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
PHP Frameworks: I want to break free (IPC Berlin 2024)
H2B2VS (HEVC HYBRID BROADCAST BROADBAND VIDEO SERVICES) – BUILDING INNOVATIVE SOLUTIONS OVER HYBRID NETWORKS
R. Monnier¹, A. Mourelle², J-P. Bernoux³, C. Alberti⁴, J. Le Feuvre⁵, W. Hamidouche⁶
¹Thomson Video Networks, France; ²Hispasat, Spain; ³TDF, France;
⁴Ecole Polytechnique Fédérale de Lausanne, Switzerland;
⁵Institut Mines-Télécom, France; ⁶IETR/INSA, France
ABSTRACT
Broadcast and broadband networks continue to be separate worlds in the
video consumption business. Some initiatives such as HbbTV have built a
bridge between both worlds, but its application is almost limited to
providing links over the broadcast channel to content providers’
applications such as Catch-up TV services. In practice, the user is using either one
network or the other.
H2B2VS is a Celtic-Plus project aiming at exploiting the potential of real
hybrid networks by implementing efficient synchronization mechanisms
and using new video coding standards such as High Efficiency Video
Coding (HEVC). The goal is to develop successful hybrid network
solutions that enable value added services with an optimum bandwidth
usage in each network and with clear commercial applications. An
example of the potential of this approach is the transmission of Ultra-HD
TV by sending the main content over the broadcast channel and the
required complementary information over the broadband network. This
technology can also be used to improve the life of handicapped persons:
Deaf people receive through the broadband network a sign language
translation of a programme sent over the broadcast channel; the TV set
then displays this translation in an inset window.
One of the most important contributions of the project is developing and
testing synchronization methods between two different networks that offer
unequal qualities of service with significant differences in delay and jitter.
In this paper, the main technological contributions of the project are described,
including SHVC, the scalable extension of HEVC, with a special focus on
the synchronization solution adopted by MPEG and DVB. The paper also
presents some of the implemented practical use cases, such as the sign
language translation described above, and their performance results so as
to evaluate the commercial application of this type of solution.
INTRODUCTION
Started in January 2013, H2B2VS (1) is a Eureka Celtic-Plus project. It aims at
investigating the hybrid distribution of TV programmes and services over broadcast and
broadband networks. The technology used for content compression is the emerging
video coding standard High Efficiency Video Coding (HEVC).
Coordinated by Thomson Video Networks, the consortium is composed of 21 partners.
The project is organized around four main Work Packages (WP). The goal of the first one
is to study use cases and business models to guarantee that the work done by the project
is in line with the market’s expectations. WP2 deals with the technologies to be
adapted in order to allow HEVC hybrid distribution. All these technologies are integrated in
WP3 to build demonstrators which are used by WP4 in public demonstrations as a basis to
disseminate the project’s results throughout the industry. One of the major outcomes of WP4
is also the contribution to the relevant standardization bodies in order to guarantee a
successful commercial deployment of the technology.
MAIN TECHNOLOGICAL CHALLENGES
Among all the challenges the project had to face, two are predominant for the use cases
which are described in this paper. The first one is scalable coding and the second one is
synchronization.
Scalable Video Coding (SHVC)
Scalable High Efficiency Video Coding (SHVC) is the scalable extension (2) of the
HEVC standard. The SHVC extension (3) was finalized in July 2014 by the ITU-T Video Coding
Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) under a
partnership known as Joint Collaborative Team on Video Coding (JCT-VC).
SHVC defines tools to provide spatial, fidelity, bit depth, and color scalability. The SHVC
syntax elements, mostly signalled at the Video Parameter Set (VPS) header, are common
to all HEVC extensions. These syntax elements provide information on the video layers
such as resolution, bit depth and inter-layer dependencies. The SHVC encoder consists of
L HEVC encoders, one per layer, where L is the number of layers. The first
layer is the Base Layer (BL), followed by L-1 Enhancement Layers. In the case of SHVC
spatial scalability, the Base Layer HEVC encoder encodes a down-sampled version of the
original video and feeds the first Enhancement Layer encoder with the decoded picture and its
Motion Vectors (MV). The enhancement layer encoder l (l=2,…, L) encodes a higher
resolution video, using the decoded picture from lower layer as an additional reference
picture. The inter-layer reference picture is up-sampled and its MVs are up-scaled to
match the resolution of the layer being decoded. Figure 1 shows an example of an SHVC
encoder encoding two layers in spatial scalability configuration. In the case of SNR
scalability, the encoding process remains unchanged, except that the picture used for
inter-layer prediction is taken without up-sampling and its MVs without up-scaling. As shown in
Figure 1, the outputs from the two encoders are multiplexed to form a conforming SHVC
bit stream.
Figure 1 – Block diagram of an SHVC encoder (two layers, 2x spatial scalability)
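The two-layer spatial data flow described above can be sketched numerically. This is a toy model: the averaging and nearest-neighbour filters and the pass-through "encode/decode" step stand in for real HEVC coding, and only the layer wiring follows the description.

```python
import numpy as np

def downsample2x(frame):
    # Toy 2x spatial down-sampling by averaging 2x2 blocks
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x(frame):
    # Toy 2x up-sampling (nearest neighbour), standing in for the
    # normative SHVC up-sampling of the inter-layer reference picture
    return np.kron(frame, np.ones((2, 2)))

def encode_two_layer(frame_uhd):
    # Base Layer: encodes a down-sampled version of the original video
    bl_decoded = downsample2x(frame_uhd)   # placeholder encode/decode round trip
    # Enhancement Layer: uses the up-sampled decoded BL picture as an
    # additional (inter-layer) reference
    inter_layer_ref = upsample2x(bl_decoded)
    el_residual = frame_uhd - inter_layer_ref
    return bl_decoded, el_residual         # both layers are multiplexed

frame = np.arange(16.0).reshape(4, 4)      # stand-in for a full-resolution picture
bl, res = encode_two_layer(frame)
reconstructed = upsample2x(bl) + res       # decoder receiving both layers
```

With only the base layer, a receiver gets the lower-resolution picture; with both layers, the full-resolution picture is recovered, which mirrors the hybrid HD/4K use case later in the paper.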
Synchronization
Combining media streams originating from both broadcast and broadband networks
impacts the end-to-end chain in various ways: The most important ones are
synchronization and buffer management. Synchronization of media streams may depend
on the scenario: A second screen advertisement may not require more than one-second
accuracy, while a picture-in-picture alternate view may require an accuracy of around
100 ms or less. In the most demanding cases such as scalable video coding, synchronization has
to be frame accurate, since media from both streams must be re-aggregated before
entering the decoder.
Most if not all deployed broadcast TV networks rely on the MPEG-2 transport stream to
carry media data. The Transport Stream was designed at a time when the clock
precision of TVs and set-top boxes was not reliable enough to ensure proper audio-visual
buffer management and synchronization of the content and, therefore, uses its own clock,
called Program Clock Reference (PCR), to instruct the receiving entity when data shall be
removed from input demultiplexer or decoder buffers and when decoded data shall be
presented. The PCR and the time stamps associated with each audio or video frame are
not related to the original media time of the content, may not be known by the service
packaging the broadband media streams, and may be modified by network equipment such
as transcoders. This is why they cannot be used in a reliable way to synchronize the
broadcast and broadband networks for video services. In order to identify a common,
precise time origin for both broadband and broadcast streams, several approaches have
been used: Audio fingerprinting or audio watermarking tools are commonly used in existing
second-screen applications, but they require additional processing of the media, implying
higher energy consumption. They are furthermore not precise enough to achieve
frame-accurate synchronisation for layer-coded video. A simpler approach is to embed a timeline
in the broadcast stream, different from the PCR and carried along by the various
equipment of the broadcast chain. This timeline is usually a media time code or a network
time such as NTP. The frequency at which this timeline is sent usually depends on the
synchronization requirement, and it may be sent once per video frame for use cases
requiring high precision.
The second issue faced in hybrid delivery scenarios is buffer management: The broadcast
network has a constant latency, while broadband networks suffer from potentially high
jitter and congestion. In order to reduce the effects of congestion, several solutions have
been developed to deliver media streams over either UDP or TCP, most of them relying on
bitrate adaptation of the media. The adaptation can be done at the sender side, as it is
typically the case with RTP streams in VoD or video-conferencing, or at the client side, as
used in video over HTTP (HLS, Smooth Streaming, MPEG-DASH). The bandwidth
adaptation relies on either alternate encoding of the media, or scalable layers of coded
data that can be thinned out of the media stream. Handling of the jitter requires a reception
buffer large enough to smooth latency variations. When synchronizing and de-jittering a
broadcast-broadband setup, the broadcast has to be delayed long enough for the de-jitter
buffer to work. This can be achieved by buffering the broadcast video at the client side, but
this implies an extra start-up delay for the hybrid media session. Alternatively, the broadcast
stream can be buffered before the modulation stage so that broadband streams may be
requested before the corresponding broadcast data arrives at the receiver.
In H2B2VS, the following design choices were made regarding these issues:
- The synchronization is achieved by inserting a media time code in each broadcast video frame.
- Most of the buffering needed for the broadband data is given by the delay induced by the DVB-T2 transmission chain, which is around 2 seconds.
- The adaptive broadband network uses the MPEG-DASH protocol: The media segments are encoded at the same time as the broadcast channel, using a segment length (GOP size) of one or two seconds. When using two seconds, the de-jittering at the client side relies entirely on the client buffer. For 1 s segments, the DASH client is able to fetch media segments 1 s before their presentation is due, allowing for proper de-jittering.
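Under these design choices, the mapping from broadcast media time to the DASH segment to fetch reduces to simple arithmetic. The function names below are illustrative; the 2-second DVB-T2 delay and the 1 s / 2 s segment lengths are the figures given above.

```python
def dash_segment_for(media_time_s, segment_duration_s=1.0):
    # Index of the DASH segment covering a given TEMI media timestamp
    return int(media_time_s // segment_duration_s)

def segment_to_fetch(media_time_s, broadcast_delay_s=2.0, segment_duration_s=1.0):
    # The ~2 s delay of the DVB-T2 chain lets the client request the
    # broadband segment before the matching broadcast frames arrive
    return dash_segment_for(media_time_s + broadcast_delay_s, segment_duration_s)
```

For example, while the broadcast is at media time 3.4 s, a client using 1 s segments can already be fetching segment 5.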
H2B2VS partners did not stop at developing a technology and testing it. They were also very
active in standardization: They proposed the so-called “TEMI” timeline to MPEG, where it
was accepted in MPEG-2 TS and also adopted by DVB and HbbTV.
MPEG started its activity towards the standardization of a synchronization mechanism of
auxiliary media streams for its Systems technologies (MPEG-2 Transport Stream, ISO
Base Media File Format, MPEG-DASH, MPEG Media Transport) at the 102nd MPEG
meeting in 2012. During this meeting, a solution was proposed by the H2B2VS partners to
enable carriage of DASH timeline values in MPEG-2 TS in order to unambiguously recover
media time information, regardless of MPEG-2 TS time discontinuities, with a low
overhead of typically 7 kbps for a 60 Hz video stream, compared to 90 kbps for existing
similar solutions. In the following two years, the H2B2VS partners proposed several
contributions refining the solution, with a strong focus on frame accuracy synchronization.
This activity led to an amendment to the MPEG-2 Systems standard for the introduction of
a new descriptor for the carriage of Timeline and External Media Information (TEMI).
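The roughly 7 kbps overhead quoted above for a 60 Hz stream is consistent with carrying one small descriptor per video frame; the ~15-byte descriptor size used below is an assumed illustrative value, not a figure from the amendment.

```python
frames_per_second = 60
temi_descriptor_bytes = 15          # assumed size of one per-frame TEMI descriptor

overhead_bps = temi_descriptor_bytes * 8 * frames_per_second
print(overhead_bps / 1000.0)        # 7.2 (kbps), in line with the ~7 kbps quoted
```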
Figure 2 shows how this descriptor is carried in a MPEG-2 TS as a “pointer” to external
media carried by a different channel (e.g. broadband). The synchronization among the two
streams is implemented via the insertion of TEMI descriptors carried in adaptation field of
broadcast Transport Stream packets. A TEMI descriptor essentially contains timestamps
and location descriptors providing the URLs of the currently running associated extension,
or the upcoming ones.
Figure 2 – TEMI descriptor in the MPEG-2 Transport Stream
During the work described above, MPEG issued several Liaison statements with DVB in
order to harmonize the synchronization mechanisms used by the two standardization
bodies. The TEMI proposal was positively received and supported by DVB. After the final
publication of the MPEG-2 Systems amendment, the initial synchronization mechanism
where the timeline is carried in the private data bytes of the adaptation field was
superseded by the use of the TEMI descriptor.
Likewise, the HbbTV consortium followed the work done by MPEG and the H2B2VS
partners on the TEMI timeline descriptor, and version 2.0 of the HbbTV standard proposes
it as a means to implement multi-stream synchronization. The section devoted to APIs for
media synchronization provides an example of synchronization of a broadcast channel
with a DASH stream delivered over broadband relying on the use of the TEMI descriptor.
USE CASES
Overview Of The Use Cases Studied By H2B2VS
During the first part of the project, the H2B2VS partners defined twenty use cases where
the use of hybrid and HEVC technologies could lead to new services. The result was the
definition of four categories of use cases:
Picture quality improvement: Basically, the broadcast network carries a standard High
Definition (HD) programme and additional data is sent over the broadband network to
allow the display of an enhanced picture.
Several enhancements are proposed. 4K is the most obvious one. However, studies
done by the project also target High Frame Rate (HFR), High Dynamic Range
(HDR) and Wide Colour Gamut. Where relevant, scalable approaches are considered.
Customized TV: Here, the broadband network is used to enhance the experience of the
end user watching a basic broadcast programme.
Trick modes (see below) or additional audio services – quality improvement or
multi-language – are some examples. Programme personalization is also envisaged: The
end user would be able, for instance, to choose the end of a drama among several
scenarios or to select in a Formula 1 race the car the picture comes from. Personalized
advertising conveyed by the broadband network and inserted in broadcast programs is
another interesting use case because it would allow targeting the advertising as it is
done today on the Web.
Picture in Picture & 2nd screen: Several use cases are dealing with the display of
alternate/additional contents conveyed by the broadband network, either on the main
screen or on the companion screen.
The end user can follow another programme coming through the broadband
network, displayed in a “Picture In Picture” (PIP) way, without the need for a
second tuner. The PIP picture can also be the sign language translation of the main
programme coming through the broadcast network.
Network improvements: Using the broadband network to relieve the broadcast network
is also possible with the technology developed by H2B2VS.
For example, when a severe outage occurs on a satellite reception, the broadband
network can take over to ensure an uninterrupted TV service. To cope with the problem
of spectrum scarcity on terrestrial networks, it can be envisaged to off-load the
terrestrial network by transmitting programmes with a low audience on the broadband
network, on the basis of dynamic audience measurements. Regional variations of a
national programme carried by the broadband network can also be a good solution to
optimize the use of the satellite or terrestrial spectrum.
Not all twenty use cases were implemented by the H2B2VS project. Based on business
criteria, partners defined a subset of use cases for which prototypes were developed.
Some of these use cases are detailed below.
Sign Language Translation
Today, the penetration of sign language video services is slowed down because everyone,
hearing-impaired or not, is forced to watch them. This drawback comes from the way the
sign language video translation is inserted in the main programme: It is done in the control
room. With H2B2VS’ solution, the broadcast programme remains unchanged: The
broadband network is used to carry the sign language translation. Only hearing-impaired
people will watch it, and the PIP insertion is done by the TV set at the end-user’s request.
Of course, the synchronization mechanism is key to ensure that the translation is aligned
with the main programme.
The benefit is twofold: for the end-user, as explained above, and for the TV channel, which
does not have to pay for additional bit rate over the broadcast network to convey the sign
language translation on an extra PID. The solution proposed by the H2B2VS project is
especially interesting in countries where sign language translation is required by law
during a certain amount of time: It allows TV channels to fulfil their duties at a reduced
cost.
Hybrid Distribution Of Scalable 4K
This is a way to offer a 4K (Ultra High Definition) experience to end-users having
connected TVs and watching HD TV programmes on broadcast networks. SHVC is used
to send an HD programme on the broadcast network and an enhancement layer on the
broadband network. The terminal uses these two layers to decode and display a 4K video,
thanks to the picture-accurate synchronization mechanism.
This approach is a way to introduce 4K services on terrestrial networks, which suffer
from spectrum scarcity. Given the high pressure from mobile operators to release
frequencies in the UHF band, in some countries it seems difficult to introduce 4K services
immediately without substantially modifying the way multiplexes are managed. The
solution proposed by H2B2VS makes it possible to carry on broadcasting HD programmes
and to offer a 4K enhancement through the Internet. This is a way to optimize the
bandwidth on both the broadcast and broadband networks.
Compared to a solution where the full 4K programme would be sent over the Internet, bit rate
is saved, thus increasing the number of end-users who can receive 4K programmes. Using
figures of ADSL coverage available for the UK, we can estimate how the number of
households receiving 4K can be increased thanks to hybrid distribution.
Let’s assume the following for the distribution of a scalable 4K programme: The base layer
is broadcast at 3.5 Mbps on the terrestrial network and the enhancement layer is sent at 8
Mbps over the broadband network. To get the same video quality, the bit rate over Internet
should be 11 Mbps to carry a non-scalable 4K programme.
Figure 3 shows the distribution of broadband speeds in the UK (4). Depending on the
population density, the increase in the number of households receiving 4K with hybrid
technology differs. Overall, the percentage increases from 58% to 70%. In urban
areas, the improvement is slightly smaller because bit rates such as 11 Mbps are quite
usual: The reach goes from 68% to 78%. A significantly higher gain is obtained in rural
areas, where the number of subscribers who would be able to receive 4K is doubled, going
from 20% to 40%.
The results may vary from one country to another, but this study gives a first idea of the
potential of hybrid distribution technology to introduce 4K services where spectrum scarcity
may be an obstacle.
Figure 3 – Distribution of broadband speeds, by population density
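The reach comparison boils down to counting households whose downlink exceeds the required bit rate. The sample speeds below are purely illustrative and are not the Ofcom data behind Figure 3.

```python
def reach_pct(speeds_mbps, required_mbps):
    # Share (in %) of households whose broadband speed meets the requirement
    ok = sum(1 for s in speeds_mbps if s >= required_mbps)
    return 100.0 * ok / len(speeds_mbps)

# Hypothetical household downlink speeds (Mbps), for illustration only
speeds = [2, 5, 8, 9, 10, 12, 15, 20, 30, 50]

full_4k_over_ip = reach_pct(speeds, 11)  # non-scalable 4K: 11 Mbps over IP
hybrid_4k = reach_pct(speeds, 8)         # hybrid: only the 8 Mbps enhancement layer
```

With this toy distribution the hybrid scheme reaches 80% of households versus 50% for full 4K over IP; the real UK gains reported above are smaller but follow the same mechanism.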
Trick Mode (TDF)
When defining the use cases, H2B2VS’s partners took into consideration the flexibility
hybrid receivers can bring to TV consumption. Indeed, it is very important that linear TV,
mostly represented by “broadcast” television, follows viewers’ new practices. One of them
is the “TV when I want” concept: The end-user must be able to pause a TV programme
and resume it on the same device or on another one, when available, without missing any
part of the programme. The technology developed by H2B2VS allows this with no external
Hard Disk Drive: Everything is in the network.
This is possible because the receiver is able to switch from broadcast reception to
broadband reception. H2B2VS takes advantage of the possibility of the above-mentioned
TEMI timeline. The TV programme is encoded simultaneously for both networks
(broadcast and broadband) and the MPEG2 Transport Stream (MPEG2-TS) includes the
TEMI information. To ensure the quality of the streaming on the broadband network,
DASH adaptive streaming is used. The TEMI information is used to detect and store the
moment when a broadcast programme was stopped, and to identify the relevant DASH
segment when the receiver has to switch to broadband on resuming the programme.
The synchronization mechanism allows getting the relevant segment for a quasi-seamless
switch. Trick mode keys such as “Fast Forward” and “Rewind” are also available to navigate in
the streams.
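The pause-and-resume flow can be sketched as follows; class and method names are illustrative and do not come from the H2B2VS prototypes.

```python
class HybridResume:
    def __init__(self, segment_duration_s=1.0):
        self.segment_duration_s = segment_duration_s
        self.stop_points = {}            # programme id -> TEMI media time at stop

    def stop(self, programme_id, temi_media_time_s):
        # Store the TEMI timestamp of the last broadcast frame shown
        self.stop_points[programme_id] = temi_media_time_s

    def resume_segment(self, programme_id):
        # DASH segment containing the stop point, so playback can resume
        # over broadband quasi-seamlessly, on this device or another one
        t = self.stop_points[programme_id]
        return int(t // self.segment_duration_s)

receiver = HybridResume()
receiver.stop("evening-news", 123.7)                # viewer pauses mid-programme
segment = receiver.resume_segment("evening-news")   # -> segment 123
```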
CONCLUSIONS AND FUTURE WORK
H2B2VS is close to its end (November 2015). It paved the way to offering new hybrid video
services. Several demonstrators were developed, allowing the demonstration of several
use cases. It is now time to get feedback from the different actors of the ecosystem,
including the final users. Partners will thus spend some time demonstrating these use
cases in order to get this feedback, measure their business value and, hopefully, confirm
the market studies done at the beginning of the project. Based on that, some
developments may be carried out outside the project to turn the H2B2VS prototypes
into products.
REFERENCES
1. http://h2b2vs.epfl.ch/
2. G. J. Sullivan, J. M. Boyce, Y. Chen, J. R. Ohm and A. Vetro, “Standardized
Extensions of High Efficiency Video Coding (HEVC)”, IEEE Journal of Selected
Topics in Signal Processing, vol. 7, no. 6, pp. 1001-1016, December 2013.
3. J. Chen, J. Boyce, Y. Ye, M. Hannuksela, G. J. Sullivan and Y. K. Wang, “HEVC
Scalable Extensions (SHVC) Draft Text 7”, JCTVC-R1008, Sapporo, JP, July 2014.
4. Ofcom, Ofcom’s second full analysis of the UK’s communications infrastructures.
Infrastructure Report 2014, 8 December 2014, page 41.
ACKNOWLEDGEMENTS
This work was supported by the European Celtic-Plus project H2B2VS and was partially
funded by Finland, France, Spain, Switzerland and Turkey.