UNIFI.DSI.DISIT Lab Distributed Systems and Internet Technologies Lab Paolo Nesi
FP7 DISIT lab profile and interests
Research Areas
Semantic Computing algorithms and tools
Social Media algorithms and tools
Applications: end-2-end, cloud, SDK, mobile
Policy-driven Dynamic HTTP Adaptive Streaming Player Environment
Minh Nguyen
Video streaming services account for the majority of today’s traffic on the Internet. Although data transmission rates have been increasing significantly, the growing number and variety of media and users’ higher quality expectations have led networked media applications to fully, or even over-, utilize the available throughput. HTTP Adaptive Streaming (HAS) has become the predominant technique for multimedia delivery over the Internet today. However, critical challenges remain for multimedia systems, especially the tradeoff between increasing content complexity and various requirements regarding time (latency) and quality (QoE). This thesis covers the main aspects within the end user’s environment, including video consumption and interactivity, collectively referred to as the player environment, which is arguably the most crucial component in today’s multimedia applications and services. We will investigate methods that enable the specification of various policies reflecting the user’s needs in given use cases. We will also work on schemes that efficiently support server-assisted and network-assisted HAS systems. Finally, these approaches will be combined into policies that fit the requirements of each use case (e.g., live streaming, video on demand).
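As a hedged illustration of how such user policies might constrain a player's bitrate decision, the sketch below selects a representation under a hypothetical data-saver policy. The ladder, the policy fields (`safety`, `max_bitrate`), and all numbers are invented for this example, not taken from the thesis.

```python
# Minimal sketch of a policy-driven bitrate selection step in a HAS player.
from dataclasses import dataclass

@dataclass
class Representation:
    bitrate_kbps: int   # encoded bitrate of this quality version
    resolution: str     # e.g. "1920x1080"

def select_representation(reps, throughput_kbps, policy):
    """Pick the highest bitrate that fits the policy-adjusted budget.

    policy["safety"] scales the measured throughput down to hedge against
    fluctuations; policy["max_bitrate"] caps quality, e.g. for a
    data-saver or low-latency use case.
    """
    budget = min(throughput_kbps * policy["safety"], policy["max_bitrate"])
    feasible = [r for r in reps if r.bitrate_kbps <= budget]
    # Fall back to the lowest representation if nothing fits the budget.
    if not feasible:
        return min(reps, key=lambda r: r.bitrate_kbps)
    return max(feasible, key=lambda r: r.bitrate_kbps)

ladder = [Representation(500, "640x360"),
          Representation(1500, "1280x720"),
          Representation(4000, "1920x1080")]
data_saver = {"safety": 0.8, "max_bitrate": 2000}
print(select_representation(ladder, 6000, data_saver).resolution)  # caps at 720p
```

Even with ample throughput (6000 kbps), the data-saver policy holds the player at the 1500 kbps representation; a live-latency policy could instead tighten `safety`.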
rNews: Embedding Metadata in On-line News
From the talk at SemTech
Wednesday, June 8, 2011
09:45 AM - 10:35 AM
Level: Business / Non-Technical
Case Study
Location: Yosemite A
The IPTC, a consortium of the world's major news agencies, news publishers and news industry vendors, recently released rNews, a semantic standard for on-line news. rNews uses RDFa to annotate HTML documents with news-specific metadata, to help with search, ad placement, aggregation and the sharing of on-line news. Jayson Lorenzen, a software engineer with Business Wire and one of the IPTC Member organization delegates working on rNews, will give an overview of the IPTC, the rNews standard, why rNews is needed and how the standard was eventually created. The talk will include use cases and live demonstrations of rNews and will end with a call to action for you to participate; rNews is currently at version 0.5 and the IPTC is looking for feedback on how to improve the standard.
Many HPC applications are massively parallel and can benefit from the spatial parallelism offered by reconfigurable logic. While modern memory technologies can offer high bandwidth, designers must craft advanced communication and memory architectures for efficient data movement and on-chip storage. Addressing these challenges requires combining compiler optimizations, high-level synthesis, and hardware design.
In this talk, I will present challenges, solutions, and trends for generating massively parallel accelerators on FPGA for high-performance computing. These architectures can provide performance comparable to software implementations on high-end processors, and much higher energy efficiency thanks to logic customization.
Nubank is the leading fintech in Latin America. Using bleeding-edge technology, design, and data, the company aims to fight complexity and empower people to take control of their finances. We are disrupting an outdated and bureaucratic system by building a simple, safe and 100% digital environment.
In order to succeed, we need to constantly make better decisions at the speed of insight, and that is what we aim for when building Nubank’s Data Platform. In this talk, we explore and share the guiding principles behind our automated, scalable, declarative, self-service platform, which enables more than 200 contributors, mostly non-technical, to build 8,000 distinct datasets ingesting data from 800 databases, leveraging Apache Spark’s expressiveness and scalability.
The topics we want to explore are:
– Making data ingestion a no-brainer when creating new services
– Reducing the cycle time to deploy new Datasets and Machine Learning models to production
– Closing the loop and leveraging knowledge processed in the analytical environment to make decisions in production
– Providing the perfect level of abstraction to users
You will get from this talk:
– Our love for ‘The Log’ and how we use it to decouple databases from their schemas and distribute the work of keeping schemas up to date across the entire team.
– How we made data ingestion so simple using Kafka Streams that teams stopped using databases for analytical data.
– The huge benefits of relying on the DataFrame API to create datasets, which makes it possible to run end-to-end tests verifying that all 8,000 datasets work without even running a Spark job, and much more.
– The importance of creating the right amount of abstractions and restrictions to have the power to optimize.
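To make the declarative idea above concrete, here is a hypothetical sketch of a dataset registry: transformations are registered but not executed, so the dependency graph can be validated end-to-end without running any job. The names (`dataset`, `REGISTRY`, `validate`) are invented for illustration; the real platform builds on Apache Spark's DataFrame API.

```python
# Toy declarative dataset registry: definitions are data, execution is deferred.
REGISTRY = {}

def dataset(name, inputs=()):
    """Register a dataset's transformation without executing it."""
    def wrap(fn):
        REGISTRY[name] = {"fn": fn, "inputs": tuple(inputs)}
        return fn
    return wrap

@dataset("active_accounts", inputs=["accounts"])
def active_accounts(accounts):
    # The transformation itself: keep only active accounts.
    return [a for a in accounts if a["status"] == "active"]

def validate(registry, available):
    """Return datasets whose inputs cannot be resolved - checkable
    end-to-end without running anything."""
    return [n for n, d in registry.items()
            if not set(d["inputs"]) <= set(available)]

print(validate(REGISTRY, {"accounts"}))  # [] -> all inputs resolvable
```

Because a definition is inspectable data rather than an opaque script, tests can verify the whole graph of datasets statically, which is the spirit of testing 8,000 datasets without launching a Spark job.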
The Industrial Internet of Things (IIoT) is one of today's hottest topics within the automation and manufacturing industries. Individuals and organizations that use variable frequency drives have high expectations that the IIoT ecosystem will deliver on its promises of added value through increased productivity, predictive maintenance, and reduced asset downtime. The idea is to build a prototype of a remote monitoring system for Danfoss VLT FC-302 drives: a portal that interfaces with the cloud server and displays the current state of all connected drives.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
VEED: Video Encoding Energy and CO2 Emissions Dataset for AWS EC2 instances
Alpen-Adria-Universität
Video streaming constitutes 65% of global internet traffic, prompting an investigation into its energy consumption and CO2 emissions. Video encoding, a computationally intensive part of streaming, has moved to cloud computing for its scalability and flexibility. However, cloud data centers’ energy consumption, especially for video encoding, poses environmental challenges. This paper presents VEED, a FAIR Video Encoding Energy and CO2 Emissions Dataset for Amazon Web Services (AWS) EC2 instances. The dataset also contains the duration, CPU utilization, and cost of each encoding. To prepare this dataset, we introduce a model and conduct a benchmark to estimate the energy and CO2 emissions of different Amazon EC2 instances during the encoding of 500 video segments with various complexities and resolutions using Advanced Video Coding (AVC) and High-Efficiency Video Coding (HEVC). VEED and its analysis can provide valuable insights for video researchers and engineers to model energy consumption, manage energy resources, and distribute workloads, contributing to the sustainability and cost-effectiveness of cloud-based video encoding. VEED is available on GitHub.
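A sketch of the kind of query such a dataset enables: given per-instance encoding measurements, pick the instance that minimizes energy (or cost) for a codec. All records below are invented for illustration, not values from VEED.

```python
# Hypothetical per-instance encoding records: (instance, codec, energy_wh,
# duration_s, cost_usd). Figures are made up.
RECORDS = [
    ("c5.2xlarge",  "AVC",   9.5,  41.0, 0.0039),
    ("c5.2xlarge",  "HEVC", 21.0,  96.0, 0.0091),
    ("c6g.2xlarge", "AVC",   7.2,  48.0, 0.0036),
    ("c6g.2xlarge", "HEVC", 16.4, 110.0, 0.0083),
]

def best_instance(records, codec, key_index=2):
    """Return the instance with the smallest value in the chosen column
    (default: energy, index 2) for the given codec."""
    rows = [r for r in records if r[1] == codec]
    return min(rows, key=lambda r: r[key_index])[0]

print(best_instance(RECORDS, "HEVC"))              # lowest-energy HEVC encode
print(best_instance(RECORDS, "AVC", key_index=4))  # cheapest AVC encode
```

The same lookup, keyed on cost or duration instead of energy, is how such a dataset supports workload placement decisions.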
Addressing climate change requires a global decrease in greenhouse gas (GHG) emissions. In today’s digital landscape, video streaming significantly influences internet traffic, driven by the widespread use of mobile devices and the rising popularity of streaming platforms. This trend emphasizes the importance of evaluating energy consumption and of developing sustainable and eco-friendly video streaming solutions with a low carbon dioxide (CO2) footprint. We developed a specialized tool, released as an open-source library called GREEM, to address this pressing concern. The tool measures video encoding and decoding energy consumption and facilitates benchmark tests. It monitors the computational impact on hardware resources and offers various analysis cases. GREEM is helpful for developers, researchers, service providers, and policy makers interested in minimizing the energy consumption of video encoding and streaming.
Optimal Quality and Efficiency in Adaptive Live Streaming with JND-Aware Low ...
Alpen-Adria-Universität
In HTTP adaptive live streaming applications, video segments are encoded at a fixed set of bitrate-resolution pairs known as a bitrate ladder. Live encoders use the fastest available encoding configuration, referred to as a preset, to ensure the minimum possible latency in video encoding. However, an optimized preset and an optimized number of CPU threads for each encoding instance may result in (i) increased quality and (ii) more efficient CPU utilization while encoding. For low-latency live encoders, the encoding speed is expected to be greater than or equal to the video framerate. To this end, this paper introduces a Just Noticeable Difference (JND)-Aware Low latency Encoding Scheme (JALE), which uses random forest-based models to jointly determine the optimized encoder preset and thread count for each representation, based on video complexity features, the target encoding speed, the total number of available CPU threads, and the target encoder. Experimental results show that, on average, JALE yields a quality improvement of 1.32 dB PSNR and 5.38 VMAF points at the same bitrate, compared to the fastest-preset encoding of the HTTP Live Streaming (HLS) bitrate ladder using the x265 open-source HEVC encoder with eight CPU threads per representation. These enhancements are achieved while maintaining the desired encoding speed. Furthermore, on average, JALE results in an overall storage reduction of 72.70%, a 63.83% reduction in the total number of CPU threads used, and a 37.87% reduction in the overall encoding time, considering a JND of six VMAF points.
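A simplified stand-in for JALE's decision step: among (preset, threads) pairs whose predicted encoding speed meets the target framerate, prefer the best-quality preset, then the fewest threads. The speed table below is invented; JALE itself predicts speeds with random forest models from video complexity features.

```python
# Predicted encoding speed (fps) per (preset, threads) - illustrative numbers.
SPEED = {
    ("slow", 4): 18,      ("slow", 8): 32,
    ("medium", 4): 35,    ("medium", 8): 60,
    ("ultrafast", 4): 90, ("ultrafast", 8): 150,
}
QUALITY_RANK = {"slow": 0, "medium": 1, "ultrafast": 2}  # lower = better quality

def pick(target_fps, max_threads):
    """Best-quality preset and fewest threads that still hit target_fps."""
    candidates = [(p, t) for (p, t), fps in SPEED.items()
                  if fps >= target_fps and t <= max_threads]
    if not candidates:
        return None  # no configuration can sustain the target speed
    return min(candidates, key=lambda pt: (QUALITY_RANK[pt[0]], pt[1]))

print(pick(30, 8))  # ('slow', 8): slow preset reaches 32 fps with 8 threads
print(pick(30, 4))  # ('medium', 4): with only 4 threads, medium is needed
```

This captures the idea that a latency constraint does not always force the fastest preset; freeing threads from easy representations lets harder ones keep a slower, better preset.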
In the context of rising environmental concerns, this paper introduces VEEP, an architecture designed to predict energy consumption and CO2 emissions in cloud-based video encoding. VEEP combines video analysis with machine learning (ML)-based energy prediction and real-time carbon intensity, enabling precise estimation of CPU energy usage and CO2 emissions during the encoding process. It is trained on the Video Complexity Dataset (VCD) and encoding results from various AWS EC2 instances. VEEP achieves high accuracy, indicated by an R²-score of 0.96, a mean absolute error (MAE) of 2.41 × 10⁻⁵, and a mean squared error (MSE) of 1.67 × 10⁻⁹. An important finding is the potential to reduce emissions by a factor of up to 375 when comparing cloud instances and their locations. These results highlight the importance of considering environmental factors in cloud computing.
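As a toy stand-in for VEEP's pipeline, the sketch below maps video complexity features to an estimated encoding energy with a fabricated linear model, then combines it with a region's carbon intensity. VEEP actually trains ML models on the Video Complexity Dataset and EC2 measurements; every coefficient here is invented.

```python
def predict_energy_kwh(spatial_complexity, temporal_complexity, duration_s):
    """Fabricated linear model: more complex, longer videos cost more energy."""
    return (0.002 * spatial_complexity + 0.004 * temporal_complexity) * duration_s / 3600

def predict_co2_g(energy_kwh, carbon_intensity_g_per_kwh):
    """CO2 (g) = energy (kWh) x real-time grid carbon intensity (gCO2/kWh)."""
    return energy_kwh * carbon_intensity_g_per_kwh

# One hypothetical 4-second segment, placed in a low- vs high-intensity region.
energy = predict_energy_kwh(spatial_complexity=40, temporal_complexity=15, duration_s=4)
print(predict_co2_g(energy, 30.0), predict_co2_g(energy, 380.0))
```

The large gap between the two regional estimates illustrates why instance location alone can change emissions by orders of magnitude, as the paper's comparison suggests.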
In today’s dynamic streaming landscape, where viewers access content on various devices and encounter fluctuating network conditions, optimizing video delivery for each unique scenario is imperative. Video content complexity analysis, content-adaptive video coding, and multi-encoding methods are fundamental for the success of adaptive video streaming, as they serve crucial roles in delivering high-quality video experiences to a diverse audience. Video content complexity analysis allows us to comprehend the video content’s intricacies, such as motion, texture, and detail, providing valuable insights to enhance encoding decisions. By understanding the content’s characteristics, we can efficiently allocate bandwidth and encoding resources, thereby improving compression efficiency without compromising quality. Content-adaptive video coding techniques built upon this analysis involve dynamically adjusting encoding parameters based on the content complexity. This adaptability ensures that the video stream remains visually appealing and artifacts are minimized, even under challenging network conditions. Multi-encoding methods further bolster adaptive streaming by offering faster encoding of multiple representations of the same video at different bitrates. This versatility reduces computational overhead and enables efficient resource allocation on the server side. Collectively, these technologies empower adaptive video streaming to deliver optimal visual quality and uninterrupted viewing experiences, catering to viewers’ diverse needs and preferences across a wide range of devices and network conditions. Embracing video content complexity analysis, content-adaptive video coding, and multi-encoding methods is essential to meet modern video streaming platforms’ evolving demands and create immersive experiences that captivate and engage audiences. In this light, this dissertation proposes contributions categorized into four classes:
Empowerment of Atypical Viewers via Low-Effort Personalized Modeling of Video...
Alpen-Adria-Universität
Quality of Experience (QoE) and QoE models are of increasing importance to networked systems. Traditional QoE modeling for video streaming applications builds a one-size-fits-all QoE model that underserves atypical viewers who perceive QoE differently. To address the problem of atypical viewers, this paper proposes iQoE (individualized QoE), a method that employs explicit, expressible, and actionable feedback from a viewer to construct a personalized QoE model for this viewer. The iterative iQoE design exercises active learning and combines a novel sampler with a modeler. The chief emphasis of our paper is on making iQoE sample-efficient and accurate.
By leveraging the Microworkers crowdsourcing platform, we conduct studies with 120 subjects who provide 14,400 individual scores. According to the subjective studies, a session of about 22 minutes empowers a viewer to construct a personalized QoE model that, compared to the best of the 10 baseline models, delivers the average accuracy improvement of at least 42% for all viewers and at least 85% for the atypical viewers. The large-scale simulations based on a new technique of synthetic profiling expand the evaluation scope by exploring iQoE design choices, parameter sensitivity, and generalizability.
Optimizing Video Streaming for Sustainability and Quality: The Role of Prese...
Alpen-Adria-Universität
HTTP Adaptive Streaming (HAS) methods divide a video into smaller segments, encoded at multiple pre-defined bitrates to construct a bitrate ladder. Bitrate ladders are usually optimized per title over several dimensions, such as bitrate, resolution, and framerate. This paper adds a new dimension to the bitrate ladder by considering the energy consumption of the encoding process. Video encoders often have multiple pre-defined presets to balance the trade-off between encoding time, energy consumption, and compression efficiency. Faster presets disable certain coding tools defined by the codec to reduce the encoding time at the cost of reduced compression efficiency. Firstly, this paper evaluates the energy consumption and compression efficiency of different x265 presets for 500 video sequences. Secondly, optimized presets are selected for various representations in a bitrate ladder based on the results to guarantee a minimal drop in video quality while saving energy. Finally, a new per-title model, which optimizes the trade-off between compression efficiency and energy consumption, is proposed. The experimental results show that decreasing the VMAF score by 0.15 and 0.39 while choosing an optimized preset results in encoding energy savings of 70% and 83%, respectively.
Energy-Efficient Multi-Codec Bitrate-Ladder Estimation for Adaptive Video Str...
Alpen-Adria-Universität
With the emergence of multiple modern video codecs, streaming service providers are forced to encode, store, and transmit bitrate ladders of multiple codecs separately, consequently suffering from additional energy costs for encoding, storage, and transmission.
To tackle this issue, we introduce an online energy-efficient Multi-Codec Bitrate ladder Estimation scheme (MCBE) for adaptive video streaming applications. In MCBE, quality representations within the bitrate ladder of new-generation codecs (e.g., HEVC, AV1) that lie below the predicted rate-distortion curve of the AVC codec are removed. Moreover, perceptual redundancy between representations of the bitrate ladders of the considered codecs is minimized based on a Just Noticeable Difference (JND) threshold. To this end, random forest-based models predict the VMAF of the bitrate ladder representations of each codec. In a live streaming session where all clients support decoding of AVC, HEVC, and AV1, MCBE achieves impressive results, reducing cumulative encoding energy by 56.45%, storage energy usage by 94.99%, and transmission energy usage by 77.61% (considering a JND of six VMAF points). These energy reductions are relative to a baseline bitrate ladder encoding based on current industry practice.
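The two pruning rules can be sketched as follows: first drop representations at or below the AVC rate-distortion curve, then drop representations within one JND of the previously kept one. The VMAF values and rate-distortion points below are invented for illustration.

```python
def prune_ladder(avc_rd, ladder, jnd):
    """Prune a newer codec's ladder against AVC and a JND threshold.

    avc_rd: {bitrate: vmaf} predicted for the AVC ladder.
    ladder: [(bitrate, vmaf)] of the newer codec, in ascending bitrate order.
    """
    # Rule 1: remove points at or below the AVC rate-distortion curve,
    # i.e. no better than what AVC already achieves at that bitrate or less.
    def avc_vmaf_at(bitrate):
        usable = [v for b, v in avc_rd.items() if b <= bitrate]
        return max(usable, default=0.0)
    kept = [(b, v) for b, v in ladder if v > avc_vmaf_at(b)]
    # Rule 2: remove representations within one JND of the last kept one.
    out = []
    for b, v in kept:
        if not out or v - out[-1][1] >= jnd:
            out.append((b, v))
    return out

avc = {1000: 80.0, 3000: 90.0}
hevc = [(800, 78.0), (1500, 88.0), (2500, 91.0), (4000, 96.0)]
print(prune_ladder(avc, hevc, jnd=6))  # the 2500 kbps point is JND-redundant
```

Every pruned representation is one fewer encode to run, store, and transmit, which is where the reported energy savings come from.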
Machine Learning Based Resource Utilization Prediction in the Computing Conti...
Alpen-Adria-Universität
This paper presents UtilML, a novel approach for tackling resource utilization prediction challenges in the computing continuum. UtilML leverages Long Short-Term Memory (LSTM) neural networks, a machine learning technique, to forecast resource utilization accurately. The effectiveness of UtilML is demonstrated through its evaluation on data extracted from a real GPU cluster in a computing continuum infrastructure comprising more than 1,800 computing devices. To assess the performance of UtilML, we compared it with two related approaches that utilize a Baseline-LSTM model. Furthermore, we analyzed the LSTM results against user-predicted values provided by GPU cluster owners for task deployment with estimated allocation values. The results indicate that UtilML outperformed user predictions by 2% to 27% for CPU utilization prediction. For memory prediction, UtilML variants excelled, showing improvements of 17% to 20% compared to user predictions.
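UtilML itself uses LSTM networks; as a hedged illustration of the forecasting task only, here is a trivial exponential-smoothing baseline that predicts the next utilization sample from a history. The trace values are made up, and this baseline is far simpler than the paper's models.

```python
def forecast_next(history, alpha=0.5):
    """One-step-ahead exponential smoothing over a utilization trace.

    alpha weights recent samples; a learned model like an LSTM would
    replace this hand-set recurrence with learned dynamics.
    """
    s = history[0]
    for x in history[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

cpu_trace = [0.30, 0.35, 0.50, 0.45]  # fraction of CPU used per interval
print(forecast_next(cpu_trace))
```

Comparing such a forecast against a user's static allocation estimate is the same kind of evaluation the paper performs, only with a learned predictor in place of this smoother.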
The exponential growth of computer game streaming has led to the development of Quality of Experience (QoE) metrics to evaluate user satisfaction and enjoyment during online gameplay and live streaming. Adaptive Bitrate (ABR) streaming is a recent technology that has been suggested to improve QoE. This method enhances the streaming experience, upholds visual quality, minimizes stall events, and boosts player retention. It achieves this by estimating network bottlenecks and selecting appropriate versions of the content that best match the available bandwidth rather than adjusting encoding parameters. To investigate the correlation between quality switching and stall events, a subjective test was conducted separately and comparatively with 71 participants. For more detailed and in-depth research, video games were analyzed with the Video Complexity Analyzer (VCA) tool and divided into three categories of different genres, camera view, and temporal complexity heatmap from the two sets of normal and action scenes. This study seeks to shed light on three unresolved issues pertinent to QoE in game streaming: (i) the user preferences towards quality switching and stall events across varied scenes and games, (ii) the user inclinations towards either a single, prolonged stall event or multiple, shorter stall events, and (iii) the impact of conspicuous quality switching on the user’s QoE. Results from the study provided valuable insights, both qualitatively and quantitatively. The study found a marked preference among users for quality switching over stall events across all types of game streaming, irrespective of the scene’s intensity. Furthermore, it was observed that multiple short-stall events were generally favored over a single long-stall event in streaming first-person shooting games. 
Interestingly, approximately half of the participants remained oblivious to quality switching during their game viewing sessions, and among those who noticed a change in quality, the alteration did not significantly impact their perceived QoE.
Network-Assisted Delivery of Adaptive Video Streaming Services through CDN, S...
Alpen-Adria-Universität
Multimedia applications, mainly video streaming services, are currently the dominant source of network load worldwide. In recent Video-on-Demand (VoD) and live video streaming services, traditional streaming delivery techniques have been replaced by adaptive solutions based on the HTTP protocol. Current trends toward high-resolution (e.g., 8K) and/or low-latency VoD and live video streaming pose new challenges to end-to-end (E2E) bandwidth demand and impose stringent delay requirements. To meet these demands, video providers typically rely on Content Delivery Networks (CDNs) to provide scalable video streaming services. To support future streaming scenarios involving millions of users, it is necessary to increase the CDNs’ efficiency. It is widely agreed that these requirements may be satisfied by adopting emerging networking techniques to present Network-Assisted Video Streaming (NAVS) methods. Motivated by this, this thesis goes one step beyond traditional purely client-based HAS algorithms by incorporating one or more in-network components with a broader view of the network, presenting completely transparent NAVS solutions for HAS clients.
Over recent years, video streaming traffic has become the dominant service over mobile networks. The two main reasons for this growth are the improved capabilities of mobile devices and the emergence of HTTP Adaptive Streaming (HAS). Hence, there is a demand for new technologies to cope with the increasing traffic load while improving clients’ Quality of Experience (QoE). The network plays a crucial role in the video streaming process. One of the key technologies on the network side is Multi-access Edge Computing (MEC), which has several key characteristics: computing power, storage, proximity to the clients, and access to network and player metrics. Thus, it is possible to deploy mechanisms at the MEC node that assist video streaming.
This thesis investigates how MEC capabilities can be leveraged to support video streaming delivery, specifically to improve the QoE, reduce latency or increase storage and bandwidth savings.
In recent decades, video streaming has developed significantly. Among current technologies, HTTP Adaptive Streaming (HAS) is considered the de facto approach to multimedia transmission over the internet. In HAS, the video is split into temporal segments of the same duration (e.g., 4 s), each of which is then encoded into different quality versions and stored at servers. The end user sends requests to the server to retrieve segments at specific quality versions determined by an Adaptive Bitrate (ABR) algorithm, in order to adapt to throughput fluctuations. Though the majority of HAS-based media services function well even under throughput restrictions and variations, there are still significant challenges for multimedia systems, especially the tradeoff among increasing content complexity, various time-related requirements, and Quality of Experience (QoE). Content complexity encompasses the increased demands for data, such as high-resolution videos and high frame rates, as well as novel content formats, such as virtual reality (VR) and augmented reality (AR). Time-related requirements include, but are not limited to, start-up delay and end-to-end latency. QoE can be defined as the level of satisfaction or frustration experienced by the user of an application or service. Optimizing for one aspect usually negatively impacts at least one of the other two. This thesis tackles critical open research questions in the context of HAS that significantly impact the QoE at the client side.
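The tradeoff described above is often made concrete in a QoE objective that rewards bitrate while penalizing stalls and quality switches. The weights below are illustrative, not a model from this thesis.

```python
def qoe(bitrates_kbps, stall_s, mu=1.0, lam=3000.0, sigma=1.0):
    """Sum of per-segment bitrates, minus stall and switching penalties.

    mu rewards bitrate, lam penalizes stall seconds, sigma penalizes
    bitrate switches between consecutive segments (all weights invented).
    """
    switches = sum(abs(a - b) for a, b in zip(bitrates_kbps, bitrates_kbps[1:]))
    return mu * sum(bitrates_kbps) - lam * stall_s - sigma * switches

# A steady medium-quality session vs. an oscillating one with a 1 s stall.
steady = qoe([1500, 1500, 1500, 1500], stall_s=0.0)
jumpy = qoe([4000, 500, 4000, 500], stall_s=1.0)
print(steady, jumpy)
```

Even though the oscillating session downloads more bits in total, the stall and switching penalties make its score lower, which is exactly the tension an ABR algorithm must navigate.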
VE-Match: Video Encoding Matching-based Model for Cloud and Edge Computing In...
Alpen-Adria-Universität
The considerable surge in energy consumption within data centers can be attributed to the exponential rise in demand for complex computing workflows and storage resources. Video streaming applications are both compute- and storage-intensive and account for the majority of today’s internet traffic. In this work, we design a video encoding application consisting of a codec, bitrate, and resolution set for encoding a video segment. We then propose VE-Match, a matching-based method to schedule video encoding applications on both cloud and edge resources to optimize costs and energy consumption. Evaluation results on a real computing testbed federated between Amazon Web Services (AWS) EC2 cloud instances and the Alpen-Adria University (AAU) edge server reveal that VE-Match achieves 17%-78% lower costs in the cost-optimized scenario compared to the energy-optimized and cost-energy tradeoff scenarios. Moreover, VE-Match improves video encoding energy consumption by 38%-45% and gCO2 emissions by up to 80% in the energy-optimized scenario compared to the cost-optimized and tradeoff scenarios.
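A greedy stand-in for the assignment step: place each encoding task on the resource that minimizes a weighted cost/energy objective, with the weights selecting the scenario (cost-optimized, energy-optimized, or a tradeoff). The resource figures are invented, and VE-Match's actual matching is more sophisticated than this sketch.

```python
# name -> (cost per task in USD, energy per task in Wh); illustrative only.
RESOURCES = {
    "aws-ec2":  (0.08, 12.0),
    "aau-edge": (0.02, 20.0),
}

def assign(tasks, w_cost, w_energy):
    """Assign every task to the resource minimizing the weighted objective."""
    def score(res):
        cost, energy = RESOURCES[res]
        return w_cost * cost + w_energy * energy
    best = min(RESOURCES, key=score)
    return {task: best for task in tasks}

tasks = ["seg1-hevc-1080p", "seg2-avc-720p"]
print(assign(tasks, w_cost=1.0, w_energy=0.0))  # cost-optimized -> edge
print(assign(tasks, w_cost=0.0, w_energy=1.0))  # energy-optimized -> cloud
```

Flipping the weights flips the placement, which is the essence of the cost-optimized vs. energy-optimized scenarios compared in the evaluation.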
Energy Consumption in Video Streaming: Components, Measurements, and StrategiesAlpen-Adria-Universität
The rapid growth of video streaming usage is a significant source of energy consumption, driven by improved internet connections and service offerings, the quick development of video entertainment, the deployment of Ultra High-Definition, Virtual and Augmented Reality, as well as an increasing number of video surveillance and IoT applications. To address this challenge, it is essential to understand the various components involved in energy consumption during video streaming, ranging from video encoding to decoding and displaying the video on the end user's screen. It is then critical to accurately measure the energy consumption of each component and conduct an in-depth analysis to develop energy-efficient strategies that optimize video streaming [1, 2, 3]. These components are classified into three categories [4]: (i) data centers, which include encoding, packaging, and storage on cloud data centers; (ii) networks, which include the core network and access networks; and (iii) end-user devices, which involve decoding, players, hardware, etc.
In addition to identifying the primary components of video streaming that affect energy consumption, it is important to conduct a comprehensive analysis of the entire video streaming pipeline. It is also essential to balance energy optimization and service quality to ensure that energy-efficient strategies are implemented without sacrificing the quality of video streaming services.
This talk aims to provide insights into the components of video streaming that contribute to energy consumption and highlight the challenges associated with measuring their energy usage. I will also introduce the tools that can be used for energy measurements for those components and the possible and associated strategies that lie within energy efficiency. By accurately measuring energy consumption, digital media companies can effectively monitor and control their energy usage, ultimately leading to cost savings and improved sustainability.
Exploring the Energy Consumption of Video Streaming: Components, Challenges, ...Alpen-Adria-Universität
The rapid growth of video streaming usage is a significant source of energy consumption, driven by improved internet connections and service offerings, the quick development of video entertainment, the deployment of Ultra High-Definition, Virtual and Augmented Reality, as well as an increasing number of video surveillance and IoT applications. However, these advancements come at the cost of energy consumption. To address this challenge, it is necessary to understand the various components involved in energy consumption during video streaming, ranging from video encoding to decoding and displaying the video on the end user's screen. It is then critical to accurately measure the energy consumption of each component and conduct an in-depth analysis to develop energy-efficient strategies that optimize video streaming. I categorize these components into three categories: (i) data centers, (ii) networks, and (iii) end-user devices.
In this talk, my objective is to provide insights into the components of video streaming that contribute to energy consumption and highlight the challenges associated with measuring their energy usage. I will also introduce the tools that can be used for energy measurements for those components and the possible and associated strategies that lie within energy efficiency. By accurately measuring energy consumption, digital media companies can effectively monitor and control their energy usage, ultimately leading to cost savings and improved sustainability.
Video Coding Enhancements for HTTP Adaptive Streaming Using Machine LearningAlpen-Adria-Universität
Video is becoming a crucial tool as our daily lives increasingly center on visual communication. The demand for better video content is constantly rising, from entertainment to business meetings, making the delivery of video content to users of utmost significance. HTTP adaptive streaming, in which the video content adjusts to changing network circumstances, has become the de-facto method for delivering internet video.
As video technology continues to advance, it presents a number of challenges, one of which is the large amount of data required to describe a video accurately. To address this issue, it is necessary to have a powerful video encoding tool. Historically, these efforts have relied on hand-crafted tools and heuristics. However, with the recent advances in machine learning, there has been increasing exploration into using these techniques to enhance video coding performance.
This thesis proposes eight contributions that enhance video coding performance for HTTP adaptive streaming using machine learning.
Optimizing QoE and Latency of Live Video Streaming Using Edge Computing a...Alpen-Adria-Universität
Nowadays, HTTP Adaptive Streaming (HAS) has become the de-facto standard for delivering video over the Internet. More users have started generating and delivering high-quality live streams (usually in 4K resolution) through popular online streaming platforms, resulting in a rise in live streaming traffic. Typically, video content is generated by streamers and watched by large audiences geographically distributed in various locations far away from the streamers. Resource limitations in the network (e.g., bandwidth) make it challenging for network and video providers to meet the users' requested quality. This dissertation leverages edge computing capabilities and in-network intelligence to design, implement, and evaluate approaches that optimize the Quality of Experience (QoE) and end-to-end (E2E) latency of live HAS. In addition, it considers improving transcoding performance and optimizing the cost of running live HAS services and the network's backhaul utilization. Motivated by these issues, the dissertation proposes five contributions in two classes: optimizing resource utilization and lightweight transcoding.
Securing your Kubernetes cluster: a step-by-step guide to success!KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes hard work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure OpenAI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
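The JMeter-to-InfluxDB integration described above ultimately writes metrics in InfluxDB's line protocol. As a small sketch of that format, the helper below builds a line-protocol string; the measurement, tag, and field names are hypothetical, not the exact schema JMeter's Backend Listener uses.

```python
# Format a metric point as an InfluxDB line-protocol string:
#   measurement,tag1=v1,... field1=v1,... timestamp
# Integer fields carry a trailing 'i' per the line-protocol syntax.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# Hypothetical JMeter-style sample: a transaction count and average latency.
line = to_line_protocol(
    "jmeter",
    tags={"application": "demo", "transaction": "HomePage"},
    fields={"count": 42, "avg": 137.5},
    timestamp_ns=1717000000000000000,
)
print(line)
# jmeter,application=demo,transaction=HomePage avg=137.5,count=42i 1717000000000000000
```

Grafana then queries these series from InfluxDB to render the real-time dashboards shown in the webinar.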
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
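To illustrate the kind of computation a power-flow tool automates, here is a toy DC power-flow calculation for a two-bus network. This hand-rolled sketch is an assumption for illustration only; it does not use the PowSyBl/pypowsybl API, which handles full networks with many buses, lines, and remedial actions.

```python
# Toy DC power-flow for a two-bus system: under the DC approximation,
# the active-power flow over a line is (theta1 - theta2) / x, where the
# thetas are bus voltage angles (rad) and x is the line reactance (p.u.).

def dc_power_flow(p_injection_pu, reactance_pu):
    """Bus 2 is the slack bus (angle fixed to 0); bus 1 injects power.

    Returns (angle at bus 1 in radians, line flow in per-unit).
    """
    # With a single line, all injected power flows over it,
    # so theta1 = P * x while theta2 = 0.
    theta1 = p_injection_pu * reactance_pu
    flow = (theta1 - 0.0) / reactance_pu
    return theta1, flow

theta, flow = dc_power_flow(p_injection_pu=0.8, reactance_pu=0.1)
print(round(theta, 3), round(flow, 3))
```

On a real grid this becomes a sparse linear system over hundreds of buses, which is exactly what PowSyBl's load-flow engines solve.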
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis' slides from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The MPEG-21 Multimedia Framework for Integrated Management of Environments enabling Quality of Service
1. The MPEG-21 Multimedia Framework for Integrated Management of Environments enabling Quality of Service. Christian Timmerer, Klagenfurt University (UNIKLU), Faculty of Technical Sciences (TEWI), Department of Information Technology (ITEC), Multimedia Communication (MMC). http://research.timmerer.com | http://blog.timmerer.com | mailto:christian.timmerer@itec.uni-klu.ac.at
3. UMA Challenge and Concept (2008/07/16, Christian Timmerer, Klagenfurt University, Austria). Rich multimedia content meets a diverse set of terminal devices and user preferences over heterogeneous networks with dynamic conditions. Universal Multimedia Access := any content should be available anytime, anywhere. Universal Multimedia Experience := the user should have a worthwhile, informative experience anytime, anywhere. This growing mismatch motivates content adaptation for universal access and the need for scalable content, descriptions, negotiation, and adaptation.
7. MPEG-21 Organisation – Parts (2008/07/16, Christian Timmerer, Klagenfurt University, Austria)
- Vision, Declaration, and Identification: Pt. 1: Vision, Technologies and Strategy; Pt. 2: Digital Item Declaration; Pt. 3: Digital Item Identification (Amd.1: DII relationship types)
- Digital Rights Management: Pt. 4: IPMP Components; Pt. 5: Rights Expression Language; Pt. 6: Rights Data Dictionary
- Adaptation: Pt. 7: Digital Item Adaptation (Amd.1: Conversions and Permissions; Amd.2: Dynamic and Distributed Adaptation)
- Processing: Pt. 10: Digital Item Processing (Amd.1: Add'l C++ bindings)
- Systems: Pt. 9: File Format; Pt. 16: Binary Format; Pt. 18: Digital Item Streaming
- Misc: Pt. 8: Reference Software; Pt. 11: Persistent Association; Pt. 12: Test Bed; Pt. 14: Conformance; Pt. 15: Event Reporting; Pt. 17: Fragment Identification
15. End-to-End QoS through Integrated Management of Content, Networks and Terminals (2008/07/16, Christian Timmerer, Klagenfurt University, Austria): (1) integrated management of content (Digital Items); (2) integrated management of services; (3) content- and context-aware Digital Item service management; (4) integrated management of connectivity services of heterogeneous networks; (5) integrated management of heterogeneous terminals.
16. ENTHRONE System Architecture (2008/07/16, Christian Timmerer, Klagenfurt University, Austria): business level (simplified) with business actors and interfaces; EIMS supervision layer (ENTHRONE Integrated Management Supervisor); delivery layer with adapters. Building blocks: metadata management model, metadata management and search (MATool), enhanced features, and quality of service and adaptation.
17. Quality of service and adaptation: adaptation management and extended functionalities, namely end-to-end (QoS) management, service management (SM), and terminal device management (TDM).
18. Metadata management model: a generic model for metadata management and storage; MATool implementation using MPEG-7/-21, TV-Anytime, ...
19. Enhanced features: multicast management, content caching and CDN management.
20. Business actors: a new entity enabling more open business models.
21. MPEG-21 for End-to-End QoS Management enabling UMA (2008/07/16, Christian Timmerer, Klagenfurt University, Austria):
(1) DI Model/Declaration/Identification; rights expression; basic content description. (2) Enhance with DIA AdaptationQoS/UCD according to the E2E QoS model; additional rights expressions and licenses; service-related metadata; capabilities of adaptation engines. (3) Adaptation Decision-Taking Engine: exploit content- and context-related metadata. (4) Signaling of characteristics and conditions using UED (user characteristics and terminal capabilities). (5) Request and configure the monitoring system through Event Reporting.