Christian Timmerer, Víctor H. Ortega, José M. González, and Alberto León, Measuring Quality of Experience for MPEG-21-based Cross-Layer Multimedia Content Adaptation, Proceedings of the 1st ACS/IEEE International Workshop on Wireless Internet Services (WISe'08), Doha, Qatar, April 1-4, 2008.
Recent advances in quality of experience in multimedia communication (IMTC)
Presentation covers various aspects of defining and measuring Quality of Experience in IP multimedia communications, with emphasis on video. Presented at the IMTC 20th Anniversary Forum.
Quality of Experience in Multimedia Systems and Services: A Journey Towards t... (Alpen-Adria-Universität)
In computing and communications systems, quality is often difficult to define. Attempts to understand this concept date back to Aristotle, who included quality as one of his 10 categories of human apprehension. ISO standard 8402:1986 defines quality as “the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs,” which embraces objective as well as subjective parameters. In practice, however, quality can be compared to the elephant in the famous Indian parable about a group of blind men who each feel a different part of the animal and thus disagree as to what it looks like...
Christian Timmerer, Maria Teresa Andrade, Pedro Carvalho, Davide Rogai, and Giovanni Cordara, The Semantics of MPEG-21 Digital Items Revisited, Proceedings of ACM Multimedia 2008 2nd International Workshop on the Many Faces of Multimedia Semantics, Vancouver, Canada, October 27 - November 1, 2008.
Certified ScrumMaster and Product Owner - Over 10 years of project/product management, consulting, web software development, and client relationship management
Roll-out of the NYU HSL Website and Drupal CMS (Chris Evjy)
This is a presentation I made for a class that describes the planning, marketing, and assessment of the new NYU Health Sciences Libraries website. It focuses both on external website users/stakeholders and on the effect of adding web content management to the responsibilities of library staff.
Nowadays, mobile devices implement several transmission technologies that enable access to the Internet and increase the bit rate for data exchange. Despite modern mobile processors and high-resolution displays, mobile devices will never match the capabilities of a powerful notebook or desktop system (for example, due to battery-powered CPUs or small-sized displays). Due to these limitations, the content delivered to these devices should be adapted to their capabilities, which span a variety of aspects (e.g., from terminal to network characteristics). These capabilities should be described in an interoperable way. In practice, however, many standards are available and a common mapping model between them is not in place. Therefore, in this paper we describe such a mapping model and its implementation aspects. In particular, we focus on the whole delivery context (i.e., terminal capabilities, network characteristics, user preferences, etc.) and investigate the two most prominent state-of-the-art description schemes, namely User Agent Profile (UAProf) and Usage Environment Description (UED).
An Integrated Management Supervisor for End-to-End Management of Heterogeneou... (Alpen-Adria-Universität)
Christian Timmerer, et al., An Integrated Management Supervisor for End-to-End Management of Heterogeneous Contents, Networks, and Terminals enabling Quality of Service, Proceedings of the 2nd European Symposium on Mobile Media Delivery (EuMob) 2008, Oulu, Finland, July 9, 2008.
I'm looking for a cofounder for YCombinator Winter 2010. If you are interested email: yc.w10.founder@gmail.com or follow me on Twitter: @YCW10
There are over 1 million paper cutters on desks across America. Unfortunately, the software for these devices is uniformly terrible. This creates a tremendous opportunity for innovation.
An evolving, historic technological revolution is under way, creating new industries, products, and services while unmercifully redefining or even destroying others. It is more powerful, has greater reach, and is growing faster than any other media ecology. Let's try to understand the giant!
The slides provide a brief presentation of the Crisis Response Lab, a research collective located in Gothenburg. Its members represent the University of Gothenburg, the Viktoria Institute, and Chalmers University.
Find the free recorded webinar, which includes a product demo, here: http://www.alfresco.com/about/eventsondemand
This presentation covers:
● The Need for Document Management
● The Two Worlds of Document Management
● The Cost to a Business of Poor Document Management
● Commoditizing and Consumerizing Document Management
● A Day in the Life of a Document
● A Basic Document Model
● Content-as-a-Service
● Really Simple Document Management
Utilizing SharePoint to Improve your business (Robert Crane)
You can't expect to sell Windows SharePoint Services unless you are actually using it within your own business. This presentation takes you through how an IT business like yours can implement SharePoint to improve your own productivity. You will also discover how to take this knowledge to customers to create more revenue for your business. Learn how to work smarter with SharePoint. If you are planning on selling SharePoint services, and you should be, then you must attend this session to discover what works and what doesn't with SharePoint inside a business, and how to leverage that knowledge to produce better business for yourself and your customers. (PDF of slideshow)
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
VEED: Video Encoding Energy and CO2 Emissions Dataset for AWS EC2 instances (Alpen-Adria-Universität)
Video streaming constitutes 65% of global internet traffic, prompting an investigation into its energy consumption and CO2 emissions. Video encoding, a computationally intensive part of streaming, has moved to cloud computing for its scalability and flexibility. However, cloud data centers' energy consumption, especially for video encoding, poses environmental challenges. This paper presents VEED, a FAIR Video Encoding Energy and CO2 Emissions Dataset for Amazon Web Services (AWS) EC2 instances. The dataset also contains the duration, CPU utilization, and cost of the encoding. To prepare this dataset, we introduce a model and conduct a benchmark to estimate the energy and CO2 emissions of different Amazon EC2 instances during the encoding of 500 video segments with various complexities and resolutions using Advanced Video Coding (AVC) and High Efficiency Video Coding (HEVC). VEED and its analysis can provide valuable insights for video researchers and engineers to model energy consumption, manage energy resources, and distribute workloads, contributing to the sustainability and cost-effectiveness of cloud-based video encoding. VEED is available on GitHub.
More Related Content
Similar to Measuring Quality of Experience for MPEG-21-based Cross-Layer Multimedia Content Adaptation
Addressing climate change requires a global decrease in greenhouse gas (GHG) emissions. In today's digital landscape, video streaming significantly influences internet traffic, driven by the widespread use of mobile devices and the rising popularity of streaming platforms. This trend emphasizes the importance of evaluating energy consumption and of developing sustainable, eco-friendly video streaming solutions with a low carbon dioxide (CO2) footprint. We developed a specialized tool, released as an open-source library called GREEM, to address this pressing concern. The tool measures video encoding and decoding energy consumption and facilitates benchmark tests. It monitors the computational impact on hardware resources and offers various analysis cases. GREEM is helpful for developers, researchers, service providers, and policy makers interested in minimizing the energy consumption of video encoding and streaming.
Optimal Quality and Efficiency in Adaptive Live Streaming with JND-Aware Low ... (Alpen-Adria-Universität)
In HTTP adaptive live streaming applications, video segments are encoded at a fixed set of bitrate-resolution pairs known as the bitrate ladder. Live encoders use the fastest available encoding configuration, referred to as a preset, to ensure the minimum possible latency in video encoding. However, an optimized preset and an optimized number of CPU threads for each encoding instance may result in (i) increased quality and (ii) efficient CPU utilization while encoding. For low-latency live encoders, the encoding speed is expected to be greater than or equal to the video framerate. In this light, this paper introduces a Just Noticeable Difference (JND)-Aware Low-latency Encoding Scheme (JALE), which uses random forest-based models to jointly determine the optimized encoder preset and thread count for each representation, based on video complexity features, the target encoding speed, the total number of available CPU threads, and the target encoder. Experimental results show that, on average, JALE yields a quality improvement of 1.32 dB PSNR and 5.38 VMAF points at the same bitrate, compared to fastest-preset encoding of the HTTP Live Streaming (HLS) bitrate ladder using the open-source x265 HEVC encoder with eight CPU threads per representation. These enhancements are achieved while maintaining the desired encoding speed. Furthermore, on average, JALE results in an overall storage reduction of 72.70%, a reduction in the total number of CPU threads used by 63.83%, and a 37.87% reduction in the overall encoding time, considering a JND of six VMAF points.
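The joint preset/thread selection described in the abstract can be sketched as a constrained search: pick the slowest (highest-quality) preset whose predicted encoding speed still meets the target framerate within the thread budget. The sketch below is illustrative only; the stub predictors stand in for the paper's trained random-forest models, and all names and numbers are made up for the example.

```python
# Hypothetical sketch of JALE-style joint preset/thread selection.
# predict_speed/predict_quality are made-up stubs, NOT the paper's models.
from itertools import product

PRESETS = ["ultrafast", "superfast", "veryfast", "faster", "fast", "medium"]

def predict_speed(preset, threads, complexity):
    # Stub: faster presets and more threads -> higher fps (toy model).
    base = {"ultrafast": 240, "superfast": 180, "veryfast": 120,
            "faster": 90, "fast": 60, "medium": 30}[preset]
    return base * min(threads, 8) / 8 / complexity

def predict_quality(preset, complexity):
    # Stub: slower presets -> higher quality score (toy model).
    return 80 + 3 * PRESETS.index(preset) - 5 * (complexity - 1)

def select_config(complexity, target_fps, thread_budget):
    """Pick the configuration with the highest predicted quality whose
    predicted encoding speed still meets the target framerate; break
    quality ties by using fewer threads."""
    best = None
    for preset, threads in product(PRESETS, range(1, thread_budget + 1)):
        if predict_speed(preset, threads, complexity) >= target_fps:
            q = predict_quality(preset, complexity)
            if best is None or (q, -threads) > (best[0], -best[1]):
                best = (q, threads, preset)
    return (best[2], best[1]) if best else ("ultrafast", thread_budget)

print(select_config(1.0, 30, 8))  # -> ('medium', 8) under the toy model
```

The same structure applies per representation of the ladder, with the stubs replaced by models trained on video complexity features and the target encoder.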
In the context of rising environmental concerns, this paper introduces VEEP, an architecture designed to predict energy consumption and CO2 emissions in cloud-based video encoding. VEEP combines video analysis with machine learning (ML)-based energy prediction and real-time carbon intensity, enabling precise estimations of CPU energy usage and CO2 emissions during the encoding process. It is trained on the Video Complexity Dataset (VCD) and encoding results from various AWS EC2 instances. VEEP achieves high accuracy, indicated by an R² score of 0.96, a mean absolute error (MAE) of 2.41 × 10⁻⁵, and a mean squared error (MSE) of 1.67 × 10⁻⁹. An important finding is the potential to reduce emissions by a factor of up to 375 when comparing cloud instances and their locations. These results highlight the importance of considering environmental factors in cloud computing.
In today’s dynamic streaming landscape, where viewers access content on various devices and encounter fluctuating network conditions, optimizing video delivery for each unique scenario is imperative. Video content complexity analysis, content-adaptive video coding, and multi-encoding methods are fundamental for the success of adaptive video streaming, as they serve crucial roles in delivering high-quality video experiences to a diverse audience. Video content complexity analysis allows us to comprehend the video content’s intricacies, such as motion, texture, and detail, providing valuable insights to enhance encoding decisions. By understanding the content’s characteristics, we can efficiently allocate bandwidth and encoding resources, thereby improving compression efficiency without compromising quality. Content-adaptive video coding techniques built upon this analysis involve dynamically adjusting encoding parameters based on the content complexity. This adaptability ensures that the video stream remains visually appealing and artifacts are minimized, even under challenging network conditions. Multi-encoding methods further bolster adaptive streaming by offering faster encoding of multiple representations of the same video at different bitrates. This versatility reduces computational overhead and enables efficient resource allocation on the server side. Collectively, these technologies empower adaptive video streaming to deliver optimal visual quality and uninterrupted viewing experiences, catering to viewers’ diverse needs and preferences across a wide range of devices and network conditions. Embracing video content complexity analysis, content-adaptive video coding, and multi-encoding methods is essential to meet modern video streaming platforms’ evolving demands and create immersive experiences that captivate and engage audiences. In this light, this dissertation proposes contributions categorized into four classes:
Empowerment of Atypical Viewers via Low-Effort Personalized Modeling of Video... (Alpen-Adria-Universität)
Quality of Experience (QoE) and QoE models are of increasing importance to networked systems. Traditional QoE modeling for video streaming applications builds a one-size-fits-all QoE model that underserves atypical viewers who perceive QoE differently. To address the problem of atypical viewers, this paper proposes iQoE (individualized QoE), a method that employs explicit, expressible, and actionable feedback from a viewer to construct a personalized QoE model for this viewer. The iterative iQoE design exercises active learning and combines a novel sampler with a modeler. The chief emphasis of our paper is on making iQoE sample-efficient and accurate.
By leveraging the Microworkers crowdsourcing platform, we conduct studies with 120 subjects who provide 14,400 individual scores. According to the subjective studies, a session of about 22 minutes empowers a viewer to construct a personalized QoE model that, compared to the best of the 10 baseline models, delivers the average accuracy improvement of at least 42% for all viewers and at least 85% for the atypical viewers. The large-scale simulations based on a new technique of synthetic profiling expand the evaluation scope by exploring iQoE design choices, parameter sensitivity, and generalizability.
Optimizing Video Streaming for Sustainability and Quality: The Role of Prese... (Alpen-Adria-Universität)
HTTP Adaptive Streaming (HAS) methods divide a video into smaller segments, encoded at multiple pre-defined bitrates to construct a bitrate ladder. Bitrate ladders are usually optimized per title over several dimensions, such as bitrate, resolution, and framerate. This paper adds a new dimension to the bitrate ladder by considering the energy consumption of the encoding process. Video encoders often have multiple pre-defined presets to balance the trade-off between encoding time, energy consumption, and compression efficiency. Faster presets disable certain coding tools defined by the codec to reduce the encoding time at the cost of reduced compression efficiency. Firstly, this paper evaluates the energy consumption and compression efficiency of different x265 presets for 500 video sequences. Secondly, optimized presets are selected for various representations in a bitrate ladder based on the results to guarantee a minimal drop in video quality while saving energy. Finally, a new per-title model, which optimizes the trade-off between compression efficiency and energy consumption, is proposed. The experimental results show that decreasing the VMAF score by 0.15 and 0.39 while choosing an optimized preset results in encoding energy savings of 70% and 83%, respectively.
Energy-Efficient Multi-Codec Bitrate-Ladder Estimation for Adaptive Video Str... (Alpen-Adria-Universität)
With the emergence of multiple modern video codecs, streaming service providers are forced to encode, store, and transmit bitrate ladders of multiple codecs separately, consequently suffering from additional energy costs for encoding, storage, and transmission.
To tackle this issue, we introduce an online energy-efficient Multi-Codec Bitrate ladder Estimation scheme (MCBE) for adaptive video streaming applications. In MCBE, quality representations within the bitrate ladder of new-generation codecs (e.g., HEVC, AV1) that lie below the predicted rate-distortion curve of the AVC codec are removed. Moreover, perceptual redundancy between representations of the bitrate ladders of the considered codecs is minimized based on a Just Noticeable Difference (JND) threshold. To this end, random forest-based models predict the VMAF of the bitrate ladder representations of each codec. In a live streaming session where all clients support the decoding of AVC, HEVC, and AV1, MCBE achieves impressive results, reducing cumulative encoding energy by 56.45%, storage energy usage by 94.99%, and transmission energy usage by 77.61% (considering a JND of six VMAF points). These energy reductions are in comparison to a baseline bitrate ladder encoding based on current industry practice.
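The two pruning rules described in the abstract can be sketched as a single pass over a quality-annotated ladder: drop a representation if its predicted quality falls below the AVC rate-distortion curve at the same bitrate, or if it is within one JND of a representation already kept. This is an illustrative reconstruction, not the authors' code; the ladder values and AVC curve below are toy numbers.

```python
# Hypothetical sketch of MCBE-style ladder pruning (toy data, not the
# authors' implementation; predicted VMAF values are made up).

def prune_ladder(ladder, avc_rd, jnd=6.0):
    """ladder: list of (bitrate_kbps, predicted_vmaf), sorted by bitrate.
    avc_rd: callable mapping bitrate -> predicted VMAF of AVC at that rate."""
    kept = []
    for bitrate, vmaf in ladder:
        if vmaf <= avc_rd(bitrate):
            continue  # AVC already delivers this quality at this bitrate
        if kept and vmaf - kept[-1][1] < jnd:
            continue  # within one JND of the last kept rung: redundant
        kept.append((bitrate, vmaf))
    return kept

hevc_ladder = [(500, 62), (1000, 74), (1500, 78), (2500, 84), (4000, 92)]
avc_curve = lambda b: 55 + 0.006 * b  # toy linear AVC RD approximation
print(prune_ladder(hevc_ladder, avc_curve))
# -> [(500, 62), (1000, 74), (2500, 84), (4000, 92)]  (1500 kbps pruned)
```

In the paper's setting, the predicted VMAF values would come from the random forest models rather than being given directly.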
Machine Learning Based Resource Utilization Prediction in the Computing Conti... (Alpen-Adria-Universität)
This paper presents UtilML, a novel approach for tackling resource utilization prediction challenges in the computing continuum. UtilML leverages Long Short-Term Memory (LSTM) neural networks, a machine learning technique, to forecast resource utilization accurately. The effectiveness of UtilML is demonstrated through its evaluation on data extracted from a real GPU cluster in a computing continuum infrastructure comprising more than 1,800 computing devices. To assess the performance of UtilML, we compared it with two related approaches that utilize a Baseline-LSTM model. Furthermore, we analyzed the LSTM results against user-predicted allocation values provided by GPU cluster owners at task deployment. The results indicate that UtilML outperformed user predictions by 2% to 27% for CPU utilization prediction. For memory prediction, UtilML variants excelled, showing improvements of 17% to 20% compared to user predictions.
The exponential growth of computer game streaming has led to the development of Quality of Experience (QoE) metrics to evaluate user satisfaction and enjoyment during online gameplay and live streaming. Adaptive Bitrate (ABR) streaming is a recent technology that has been suggested to improve QoE. This method enhances the streaming experience, upholds visual quality, minimizes stall events, and boosts player retention. It achieves this by estimating network bottlenecks and selecting versions of the content that best match the available bandwidth rather than adjusting encoding parameters. To investigate the correlation between quality switching and stall events, a subjective test was conducted separately and comparatively with 71 participants. For a more detailed and in-depth analysis, video games were analyzed with the Video Complexity Analyzer (VCA) tool and divided into three categories by genre, camera view, and temporal complexity heatmap, drawn from two sets of normal and action scenes. This study seeks to shed light on three unresolved issues pertinent to QoE in game streaming: (i) user preferences regarding quality switching and stall events across varied scenes and games, (ii) user inclinations towards either a single, prolonged stall event or multiple, shorter stall events, and (iii) the impact of conspicuous quality switching on the user's QoE. Results from the study provided valuable insights, both qualitatively and quantitatively. The study found a marked preference among users for quality switching over stall events across all types of game streaming, irrespective of the scene's intensity. Furthermore, it was observed that multiple short stall events were generally favored over a single long stall event in streaming first-person shooter games.
Interestingly, approximately half of the participants remained oblivious to quality switching during their game viewing sessions, and among those who noticed a change in quality, the alteration did not significantly impact their perceived QoE.
Network-Assisted Delivery of Adaptive Video Streaming Services through CDN, S... (Alpen-Adria-Universität)
Multimedia applications, mainly video streaming services, are currently the dominant source of network load worldwide. In recent Video-on-Demand (VoD) and live video streaming services, traditional streaming delivery techniques have been replaced by adaptive solutions based on the HTTP protocol. Current trends toward high-resolution (e.g., 8K) and/or low-latency VoD and live video streaming pose new challenges to end-to-end (E2E) bandwidth demand and impose stringent delay requirements. To meet these demands, video providers typically rely on Content Delivery Networks (CDNs) to ensure scalable video streaming services. To support future streaming scenarios involving millions of users, it is necessary to increase the CDNs' efficiency. It is widely agreed that these requirements may be satisfied by adopting emerging networking techniques, giving rise to Network-Assisted Video Streaming (NAVS) methods. Motivated by this, this thesis goes one step beyond traditional purely client-based HAS algorithms by incorporating one or more in-network components with a broader view of the network, presenting completely transparent NAVS solutions for HAS clients.
In recent years, video streaming traffic has become the dominant service over mobile networks. The two main reasons for its growth are the improved capabilities of mobile devices and the emergence of HTTP Adaptive Streaming (HAS). Hence, there is a demand for new technologies to cope with the increasing traffic load while improving clients' Quality of Experience (QoE). The network plays a crucial role in the video streaming process. One of the key technologies on the network side is Multi-access Edge Computing (MEC), which has several key characteristics: computing power, storage, proximity to the clients, and access to network and player metrics. Thus, it is possible to deploy mechanisms at the MEC node that assist video streaming.
This thesis investigates how MEC capabilities can be leveraged to support video streaming delivery, specifically to improve the QoE, reduce latency or increase storage and bandwidth savings.
Over the last decades, video streaming has developed significantly. Among current technologies, HTTP Adaptive Streaming (HAS) is considered the de-facto approach to multimedia transmission over the Internet. In HAS, the video is split into temporal segments of the same duration (e.g., 4 s), each of which is then encoded into different quality versions and stored at servers. The end user sends requests to the server to retrieve segments at specific quality versions determined by an Adaptive Bitrate (ABR) algorithm, in order to adapt to throughput fluctuations. Though the majority of HAS-based media services function well even under throughput restrictions and variations, there are still significant challenges for multimedia systems, especially the tradeoff among increasing content complexity, various time-related requirements, and Quality of Experience (QoE). Content complexity encompasses the increased demands for data, such as high-resolution videos and high frame rates, as well as novel content formats, such as virtual reality (VR) and augmented reality (AR). Time-related requirements include, but are not limited to, start-up delay and end-to-end latency. QoE can be defined as the level of satisfaction or frustration experienced by the user of an application or service. Optimizing for one aspect usually negatively impacts at least one of the other two. This thesis tackles critical open research questions in the context of HAS that significantly impact the QoE at the client side.
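The client-side selection step of HAS described above can be illustrated with a minimal throughput-based ABR rule: pick the highest-bitrate version that fits within a safety margin of the estimated throughput. This is a deliberately simplified sketch; production players (e.g., dash.js or hls.js) use far more elaborate buffer-based or hybrid heuristics, and the ladder values here are illustrative.

```python
# Minimal throughput-based ABR sketch of the HAS selection loop described
# above. Real ABR algorithms also consider buffer level, switching cost,
# and latency targets; all values below are illustrative.

def choose_representation(bitrates, throughput_kbps, safety=0.8):
    """Pick the highest bitrate not exceeding a safety margin of the
    estimated throughput; fall back to the lowest representation."""
    feasible = [b for b in sorted(bitrates) if b <= safety * throughput_kbps]
    return feasible[-1] if feasible else min(bitrates)

ladder = [300, 750, 1500, 3000, 6000]  # kbps, one entry per quality version
print(choose_representation(ladder, 4000))  # 0.8 * 4000 = 3200 -> picks 3000
```

The safety margin is one knob in the latency/quality/stall tradeoff the thesis discusses: a smaller margin requests higher quality but risks stalls when the throughput estimate is optimistic.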
VE-Match: Video Encoding Matching-based Model for Cloud and Edge Computing In...Alpen-Adria-Universität
The considerable surge in energy consumption within data centers can be attributed to the exponential rise in demand for complex computing workflows and storage resources. Video streaming applications are both compute- and storage-intensive and account for the majority of today’s internet services. In this work, we design a video encoding application consisting of a codec, bitrate, and resolution set for encoding a video segment. We then propose VE-Match, a matching-based method to schedule video encoding applications on both Cloud and Edge resources to optimize cost and energy consumption. Evaluation results on a real computing testbed federated between Amazon Web Services (AWS) EC2 Cloud instances and the Alpen-Adria University (AAU) Edge server reveal that VE-Match achieves 17%-78% lower costs in the cost-optimized scenarios compared to the energy-optimized and cost-energy tradeoff scenarios. Moreover, VE-Match improves video encoding energy consumption by 38%-45% and gCO2 emissions by up to 80% in the energy-optimized scenarios compared to the cost-optimized and cost-energy tradeoff scenarios.
Energy Consumption in Video Streaming: Components, Measurements, and StrategiesAlpen-Adria-Universität
The rapid growth of video streaming usage is a significant source of energy consumption, driven by improved internet connections and service offerings, the quick development of video entertainment, the deployment of Ultra High-Definition, Virtual and Augmented Reality, as well as an increasing number of video surveillance and IoT applications. To address this challenge, it is essential to understand the various components involved in energy consumption during video streaming, ranging from video encoding to decoding and displaying the video on the end user’s screen. Then, it is critical to measure energy consumption for each component accurately and conduct an in-depth analysis to develop energy-efficient strategies that optimize video streaming [1, 2, 3]. These components are classified into three categories [4]: (i) data centers, which include encoding, packaging, and storage on cloud data centers; (ii) networks, which include core network and access networks; and (iii) end-user devices which involve decoding, players, hardware, etc.
In addition to identifying the primary components of video streaming that affect energy consumption, it is important to conduct a comprehensive analysis of the entire video streaming pipeline. It is also essential to balance energy optimization and service quality to ensure that energy-efficient strategies are implemented without sacrificing the quality of video streaming services.
This talk aims to provide insights into the components of video streaming that contribute to energy consumption and highlight the challenges associated with measuring their energy usage. I will also introduce the tools that can be used for energy measurements for those components and the possible and associated strategies that lie within energy efficiency. By accurately measuring energy consumption, digital media companies can effectively monitor and control their energy usage, ultimately leading to cost savings and improved sustainability.
Exploring the Energy Consumption of Video Streaming: Components, Challenges, ...Alpen-Adria-Universität
The rapid growth of video streaming usage is a significant source of energy consumption, driven by improved internet connections and service offerings, the quick development of video entertainment, the deployment of Ultra High-Definition, Virtual and Augmented Reality, as well as an increasing number of video surveillance and IoT applications. However, these advancements come at the cost of increased energy consumption. To address this challenge, it is essential to understand the various components involved in energy consumption during video streaming, ranging from video encoding to decoding and displaying the video on the end user’s screen. It is then critical to accurately measure energy consumption for each component and conduct an in-depth analysis to develop energy-efficient strategies that optimize video streaming. I categorize these components into three groups: (i) data centers, (ii) networks, and (iii) end-user devices.
In this talk, my objective is to provide insights into the components of video streaming that contribute to energy consumption and highlight the challenges associated with measuring their energy usage. I will also introduce the tools that can be used for energy measurements for those components and the possible and associated strategies that lie within energy efficiency. By accurately measuring energy consumption, digital media companies can effectively monitor and control their energy usage, ultimately leading to cost savings and improved sustainability.
Video Coding Enhancements for HTTP Adaptive Streaming Using Machine LearningAlpen-Adria-Universität
Video is evolving into a crucial tool as daily lives are increasingly centered around visual communication. The demand for better video content is constantly rising, from entertainment to business meetings. The delivery of video content to users is of utmost significance. HTTP adaptive streaming, in which the video content adjusts to the changing network circumstances, has become the de-facto method for delivering internet video.
As video technology continues to advance, it presents a number of challenges, one of which is the large amount of data required to describe a video accurately. To address this issue, it is necessary to have a powerful video encoding tool. Historically, these efforts have relied on hand-crafted tools and heuristics. However, with the recent advances in machine learning, there has been increasing exploration into using these techniques to enhance video coding performance.
This thesis proposes eight contributions that enhance video coding performance for HTTP adaptive streaming using machine learning.
Optimizing QoE and Latency of Live Video Streaming Using Edge Computing a...Alpen-Adria-Universität
Nowadays, HTTP Adaptive Streaming (HAS) has become the de facto standard for delivering video over the Internet. More users have started generating and delivering high-quality live streams (usually 4K resolution) through popular online streaming platforms, resulting in a rise in live streaming traffic. Typically, the video content is generated by streamers and watched by many audiences, geographically distributed in various locations far away from the streamers. Limited network resources (e.g., bandwidth) make it challenging for network and video providers to meet the users’ requested quality. This dissertation leverages edge computing capabilities and in-network intelligence to design, implement, and evaluate approaches to optimize Quality of Experience (QoE) and end-to-end (E2E) latency of live HAS. In addition, improving transcoding performance and optimizing the cost of running live HAS services and the network’s backhaul utilization are considered. Motivated by these issues, the dissertation proposes five contributions in two classes: optimizing resource utilization and lightweight transcoding.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Measuring Quality of Experience for MPEG-21-based Cross-Layer Multimedia Content Adaptation
1. Measuring Quality of Experience for MPEG-21-based Cross-Layer Multimedia Content Adaptation
Christian Timmerer
Klagenfurt University (UNIKLU), Faculty of Technical Sciences (TEWI)
Department of Information Technology (ITEC), Multimedia Communication (MMC)
http://research.timmerer.com http://blog.timmerer.com
mailto:christian.timmerer@itec.uni-klu.ac.at
Co-Authors: Christian Timmerer (University of Klagenfurt, Austria), Víctor H. Ortega (Tecsidel, Spain), José M. González, and Alberto León (Telefonica, Spain)
Acknowledgment: This work is supported by the European Commission in the context of the ENTHRONE project (IST-1-507637). Further information is available at http://www.ist-enthrone.org.
7. Probing Quality of Service / Experience
Audio
• ETSI/ITU-T's E-model: "Psychological factors on the psychological scale are additive"
• Factors: packet loss, delay, equipment impairment, packet loss robustness factors (depends on codec)
Video
• Profiles/Levels with different resolutions, frames per second, bit rates, etc.
• Peak Signal to Noise Ratio (PSNR), Mean Opinion Score (MOS), Video Quality Metric (VQM)
• Various models exist, e.g., perceptual impression of packet loss, frame rate variation, and synchronization with audio
Need for interoperable solutions
2008/04/03 Christian Timmerer - UNIKLU - WISe'08, Doha, Qatar 7
8. The MPEG-21 Multimedia Framework
MPEG-21 Vision
• ... to enable transparent and augmented use of multimedia resources across a wide range of networks, devices, user preferences, and communities, notably for trading (of bits)
What? – Digital Items (DIs)
• A Digital Item (DI) is a structured digital object with a standard representation, identification, and metadata within the MPEG-21 framework
• Digital Items are "the content"
Who? – Users
• A User is any entity that interacts in the MPEG-21 environment or makes use of a Digital Item
• Users will assume rights and responsibilities according to their interaction with other Users
• All parties that have a requirement within MPEG-21 to interact are categorized equally as Users
9. MPEG-21 Organisation – Parts
Digital Rights Management: Pt. 4: IPMP Components; Pt. 5: Rights Expression Language; Pt. 6: Rights Data Dictionary
Adaptation: Pt. 7: Digital Item Adaptation (Amd.1: Conversions and Permissions; Amd.2: Dynamic and Distributed Adaptation)
Processing: Pt. 10: Digital Item Processing (Amd.1: Add'l C++ bindings)
Systems: Pt. 9: File Format; Pt. 16: Binary Format; Pt. 18: Digital Item Streaming
Misc: Pt. 8: Reference Software; Pt. 11: Persistent Association; Pt. 12: Test Bed; Pt. 14: Conformance; Pt. 15: Event Reporting; Pt. 17: Fragment Identification (Amd.1: DII relationship types)
Vision, Declaration, and Identification: Pt. 1: Vision, Technologies and Strategy; Pt. 2: Digital Item Declaration; Pt. 3: Digital Item Identification
10. MPEG-21 Digital Item Adaptation
• Satisfy transmission, storage, and consumption constraints as well as QoS management
• Enable transparent access to (distributed) advanced multimedia content by shielding users from network and terminal installation issues
Relevant Tools (among others)
• Usage Environment Description (UED) – network, terminal, user, natural environment
• Universal Constraints Description (UCD) – limitation, optimization
• AdaptationQoS – relationship between constraints (i.e., the UED/UCD), feasible adaptation operations (e.g., transcoding, scaling, etc.) satisfying these constraints, and associated utilities (i.e., qualities / PSNR)
11. An Interoperable QoS Model for Video Transmission Exploiting Cross-Layer Interactions
• AVC test content: frame rate [6.25, 25] fps; bit rate [150, 1500] kbps; packet loss [0, 10]%
• Public survey (across EU) – bottom-up approach
Impact of Packet Loss
• Bernoulli model => packet loss randomly distributed over a uniform probability density function (all packets have the same probability of being dropped)
• Real world: packet loss occurs in bursts of random length (1)
• Calculate the quality in short intervals (the packet loss density distribution can be considered uniform even inside a burst)
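The contrast between Bernoulli loss and bursty loss, and the short-interval trick described above, can be sketched in Python. The two-state Gilbert model used here is a common stand-in for bursty loss, not necessarily the model used in the survey; all parameters are illustrative.

```python
import random

def bernoulli_loss(n, p, rng):
    """Each packet dropped independently with probability p (Bernoulli)."""
    return [rng.random() < p for _ in range(n)]

def gilbert_loss(n, p_good_to_bad, p_bad_to_good, rng):
    """Two-state Gilbert model: packets are lost while in the 'bad'
    state, so losses arrive in bursts of random length."""
    bad, trace = False, []
    for _ in range(n):
        if bad:
            bad = rng.random() >= p_bad_to_good   # stay in the burst
        else:
            bad = rng.random() < p_good_to_bad    # start a new burst
        trace.append(bad)
    return trace

def interval_loss_rates(trace, interval):
    """Loss rate per short interval; within one interval the loss
    density can be treated as uniform, even inside a burst."""
    return [sum(trace[i:i + interval]) / interval
            for i in range(0, len(trace), interval)]
```

Evaluating quality per interval rather than over the whole stream is what lets a uniform-loss quality model remain applicable under bursty, real-world loss.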
13. An Interoperable QoS Model for Video Transmission Exploiting Cross-Layer Interactions (cont'd)
Impact of Bandwidth
• Different bandwidth curves studied – relationship between bit rate and packet loss (2)
Impact of Frame Rate
• Classification according to temporal nature + actual audio-visual content: [1..7] (7 is the best)
• Extrapolated for a VoD scenario with high temporal nature (3)
15. An Interoperable QoS Model for Video Transmission Exploiting Cross-Layer Interactions (cont'd)
Proposed Model (4)
• Add interoperability support
16. Adding MPEG-21 Support Enabling Interoperable Cross-Layer Interactions
• Describe functional dependencies of (4)
  – MPEG-21 DIA AdaptationQoS' stack functions
  – Range of possible content frame rate and bit-rate combinations: solution space
• Usage environment: network conditions (bandwidth, packet loss)
  – MPEG-21 DIA Usage Environment Description
• Constraints of the probe (pl, br, fps) + objective function, i.e., maximize the MOS
  – MPEG-21 DIA Universal Constraints Description

AdaptationQoS stack function for MOS (aqos.xml):
  <!-- Stack Function for MOS calculation -->
  <Module xsi:type="StackFunctionType" iOPinRef="MOS">
    <StackFunction>
      <Argument xsi:type="InternalIOPinRefType" iOPinRef="F_FRAMERATE"/>
      <Argument xsi:type="InternalIOPinRefType" iOPinRef="F_PACKETLOSS"/>
      <!-- multiply -->
      <Operation operator=":SFO:18"/>
    </StackFunction>
  </Module>

UED describing the network conditions (ued.xml):
  <Network xsi:type="NetworkType">
    <NetworkCharacteristic xsi:type="NetworkConditionType">
      <AvailableBandwidth average="1500000"/>
      <Error packetLossRate="0.03"/>
    </NetworkCharacteristic>
  </Network>

UCD maximizing the MOS (ucd_provider.xml):
  <OptimizationConstraint optimize="maximize">
    <Argument xsi:type="ExternalIOPinRefType" iOPinRef="aqos.xml#MOS"/>
  </OptimizationConstraint>

UCD for probe constraints (ucd_probe.xml):
  <!-- packet loss <= 0.1 (10%) -->
  <LimitConstraint>
    <Argument xsi:type="SemanticalRefType" semantics=":AQoS:6.6.5.8"/>
    <Argument xsi:type="ConstantDataType">
      <Constant xsi:type="FloatType">
        <Value>0.1</Value>
      </Constant>
    </Argument>
    <Operation operator=":SFO:38"/>
  </LimitConstraint>
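The effect of these descriptions, maximizing the MOS over the AdaptationQoS solution space subject to the UCD packet-loss limit, can be mimicked with a brute-force Python sketch. The MOS function here is a hypothetical placeholder for model (4), not the published model, and the parameter grids are illustrative.

```python
def maximize_mos(mos_fn, frame_rates, bit_rates, packet_loss, max_loss=0.1):
    """Search the (frame rate, bit rate) solution space for the
    combination maximizing MOS, subject to the UCD-style limit
    constraint that packet loss stays at or below max_loss (10%)."""
    if packet_loss > max_loss:
        return None  # LimitConstraint violated: no feasible adaptation
    best = None
    for fps in frame_rates:
        for br in bit_rates:
            score = mos_fn(fps, br, packet_loss)
            if best is None or score > best[0]:
                best = (score, fps, br)
    return best

# Hypothetical, monotone MOS surrogate for illustration only.
toy_mos = lambda fps, br, pl: (fps / 25) * (br / 1500) * (1 - 10 * pl)
print(maximize_mos(toy_mos, [6.25, 12.5, 25], [150, 750, 1500], 0.03))
```

An MPEG-21 DIA engine performs essentially this search, except that the objective and constraints are read from the interoperable AdaptationQoS/UED/UCD descriptions instead of being hard-coded.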
17. Conclusions and Future Work
• QoS/QoE: guarantee the quality of multimedia traffic experienced by the user – translate network issues into user-perceived quality
• New model for evaluating the quality of video streams proposed – extracted from RTP traffic
• Interoperability across layers through MPEG-21
• TODO: evaluation in large-scale pilots featuring inter-connected test-beds across Europe (FP6-IST-ENTHRONE)