Truong Cong Thang, Univ. of Aizu, Aizu-Wakamatsu, Japan
Le, H.T. ; Nguyen, H.X. ; Pham, A.T. ; Jung Won Kang ; Yong Man Ro
Journal of Communications and Networks, Vol. 15, No. 6, Dec. 2013
Labmeeting - 20150831 - Overhead and Performance of Low Latency Live Streamin... (Syuan Wang)
This document summarizes research into reducing latency for live video streaming using MPEG-DASH. It introduces MPEG-DASH and explains how HTTP chunked transfer encoding and Gradual Decoding Refresh encoding can lower latency compared to basic DASH. The paper describes experiments that generate and distribute live content using these techniques and evaluate latency, achieving latency as low as 240 ms.
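The chunked-transfer part of this approach is easy to illustrate: with HTTP/1.1 chunked transfer encoding, a server can push each small media chunk as soon as it is encoded instead of waiting for a full segment. A minimal sketch of the RFC 7230 chunk framing (the payload strings are placeholders, not real ISO-BMFF boxes):

```python
def chunked_encode(chunks):
    """HTTP/1.1 chunked transfer framing (RFC 7230): each chunk is sent as
    a hex length line, CRLF, the payload, CRLF; a zero-length chunk ends
    the body. This lets a live server flush media as it becomes available."""
    out = b""
    for chunk in chunks:
        out += format(len(chunk), "x").encode() + b"\r\n" + chunk + b"\r\n"
    return out + b"0\r\n\r\n"

# A 'moof+mdat' chunk can be pushed as soon as it is encoded,
# without waiting for the rest of the segment:
print(chunked_encode([b"moof+mdat#1", b"moof+mdat#2"]))
```

Because the client can decode each chunk on arrival, end-to-end latency is bounded by chunk duration rather than segment duration.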
FAUST: Fast Per-Scene Encoding Using Entropy-Based Scene Detection and Machin... (Alpen-Adria-Universität)
HTTP adaptive video streaming is a widespread and sought-after technology on the Internet that allows clients to dynamically switch between the stream qualities presented in the bitrate ladder to optimize overall received video quality. Several approaches of different complexity exist for building such a ladder. The simplest is a static bitrate ladder; a more complex one computes a per-title encoding ladder. The main drawback of these approaches is that they do not provide bitrate ladders for scenes of different visual complexity within the video. Moreover, most modern methods require additional computationally intensive test encodings of the entire video to construct the convex hull used to calculate the bitrate ladder. This paper proposes a new fast per-scene encoding approach called FAUST based on 1) quick entropy-based scene detection and 2) prediction of an optimized bitrate ladder for each scene using an artificial neural network. The results show that our model reduces the mean absolute error to 0.15 and the mean square error to 0.08, and reduces the bitrate by 13.5% while increasing the Video Multimethod Assessment Fusion (VMAF) score by 5.6 points.
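The entropy-based scene detection step can be sketched in a few lines: compute the Shannon entropy of each frame's luma histogram and flag a scene boundary wherever the entropy jumps. This is only an illustrative sketch under assumed inputs (flat pixel lists, a hand-picked threshold), not FAUST's actual detector:

```python
import math

def frame_entropy(pixels):
    """Shannon entropy (bits) of an 8-bit luma histogram."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def detect_scene_cuts(frames, threshold=0.5):
    """Flag a scene boundary where entropy jumps between consecutive frames."""
    entropies = [frame_entropy(f) for f in frames]
    return [i for i in range(1, len(frames))
            if abs(entropies[i] - entropies[i - 1]) > threshold]

# Two synthetic "scenes": near-uniform frames, then high-variation frames.
flat = [128] * 64          # entropy 0
noisy = list(range(64))    # entropy log2(64) = 6
print(detect_scene_cuts([flat, flat, noisy, noisy]))  # [2]
```

Entropy is cheap to compute per frame, which is what makes this kind of detection fast compared to test encodings of the whole video.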
ES-HAS: An Edge- and SDN-Assisted Framework for HTTP Adaptive Video Streaming (Alpen-Adria-Universität)
Recently, HTTP Adaptive Streaming (HAS) has become the dominant video delivery technology over the Internet. In HAS, clients have full control over the media streaming and adaptation processes. Lack of coordination among clients and lack of awareness of network conditions may lead to sub-optimal user experience and resource utilization in a pure client-based HAS adaptation scheme. Software-Defined Networking (SDN) has recently been considered to enhance the video streaming process. In this paper, we leverage the capabilities of SDN and Network Function Virtualization (NFV) to introduce an edge- and SDN-assisted video streaming framework called ES-HAS. We employ virtualized edge components to collect HAS clients’ requests and retrieve networking information in a time-slotted manner. These components then run an optimization model to efficiently serve clients’ requests by selecting an optimal cache server (with the shortest fetch time). In case of a cache miss, a client’s request is served (i) by an optimal replacement quality (only better quality levels with minimum deviation) from a cache server, or (ii) by the originally requested quality level from the origin server. This approach is validated through experiments on a large-scale testbed, and the performance of our framework is compared to pure client-based strategies and the SABR system [11]. Although SABR and ES-HAS show (almost) identical performance in the number of quality switches, ES-HAS outperforms SABR in terms of playback bitrate and the number of stalls by at least 70% and 40%, respectively.
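The cache-miss policy described above (serve only a better quality with minimum deviation, else fall back to the origin) can be sketched as follows; the function name and the integer quality levels are illustrative assumptions, not ES-HAS code:

```python
def serve_request(requested, cached_levels):
    """ES-HAS-style cache-miss policy (illustrative sketch):
    serve the requested level if cached; otherwise serve the closest
    *better* cached level (minimum upward deviation); otherwise fall
    back to the origin server at the originally requested level."""
    if requested in cached_levels:
        return ("cache", requested)
    better = [q for q in cached_levels if q > requested]
    if better:
        return ("cache", min(better))  # smallest deviation above the request
    return ("origin", requested)

print(serve_request(3, {1, 2, 5, 7}))  # ('cache', 5)
print(serve_request(8, {1, 2, 5, 7}))  # ('origin', 8)
```

Only upgrades are allowed as replacements, so a cache hit never degrades the quality the client asked for.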
Performance Analysis Of AOMDV In Terms Of Mobility Speed And Pause Time (Akmal)
This document provides information about a final year project to analyze the performance of the AOMDV routing protocol in mobile ad hoc networks (MANETs) by varying node mobility speed and pause time. The objectives are to study AOMDV, modify it by changing speed and pause time, and analyze its performance based on packet delivery ratio, throughput, and end-to-end delay. The framework will use OMNeT++ to simulate AOMDV in a MANET and vary mobility parameters. Expected results include graphs of throughput, end-to-end delay, and packet delivery ratio for analysis.
Streaming media has evolved significantly over the past 20 years. Early systems in the 1990s used proprietary protocols over UDP and later included pre-roll buffers and adaptive bitrate techniques. Standards like RTSP, 3GPP, and ISMA provided interoperability but relied on complex server implementations. The shift to HTTP in the 2000s simplified delivery using progressive download and then adaptive streaming formats like HLS, DASH, and CMAF that divide media into short segments. These standards separate the media format from the delivery method, enabling delivery via HTTP while supporting features like DRM and playback across different devices and networks.
EPIQ'21: Days of Future Past: An Optimization-based Adaptive Bitrate Algorith... (Minh Nguyen)
HTTP Adaptive Streaming (HAS) has become a predominant technique for delivering videos on the Internet. Due to its adaptive behavior under changing network conditions, it may result in video quality variations that negatively impact the Quality of Experience (QoE) of the user. In this paper, we propose Days of Future Past, an optimization-based Adaptive Bitrate (ABR) algorithm over HTTP/3. Days of Future Past takes advantage of an optimization model and HTTP/3 features, including (i) stream multiplexing and (ii) request cancellation. We design a Mixed Integer Linear Programming (MILP) model that determines the optimal video qualities of both the next segment to be requested and the segments currently located in the buffer. If better qualities for buffered segments are found, the client sends corresponding HTTP GET requests to retrieve them. Multiple segments (i.e., retransmitted segments) might be downloaded simultaneously, using the stream multiplexing feature of QUIC, to upgrade buffered but not yet played segments and avoid quality decreases. HTTP/3’s request cancellation is used in case retransmitted segments would arrive at the client after their playout time. The experimental results show that our proposed method improves the QoE by up to 33.9%.
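The buffered-segment upgrade decision can be illustrated with a greatly simplified greedy stand-in (the paper solves a MILP; this sketch only captures the idea of upgrading buffered segments while the estimated refetch time fits a time budget, and all units and numbers are made up):

```python
def plan_upgrades(buffer_levels, bitrates, throughput, time_budget):
    """Greedy stand-in for the paper's MILP (illustrative simplification):
    upgrade each buffered segment by one quality level while the estimated
    refetch time still fits in the remaining time budget before playout.
    bitrates are in Mbit per segment, throughput in Mbit/s."""
    plan, budget = [], time_budget
    for idx, level in enumerate(buffer_levels):
        if level + 1 < len(bitrates):
            refetch_time = bitrates[level + 1] / throughput  # seconds
            if refetch_time <= budget:
                plan.append((idx, level + 1))  # (segment index, new level)
                budget -= refetch_time
    return plan

# Two buffered segments at the lowest level, 4 Mbit/s available, 1 s budget:
print(plan_upgrades([0, 0], bitrates=[1, 2, 4], throughput=4.0, time_budget=1.0))
```

In the actual system, upgrades that would miss their playout deadline are abandoned via HTTP/3 request cancellation rather than planned around a fixed budget as here.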
CSDN: CDN-Aware QoE Optimization in SDN-Assisted HTTP Adaptive Video Streaming (Alpen-Adria-Universität)
With the increasing demand for video streaming applications, HTTP Adaptive Streaming (HAS) technology has become the dominant video delivery technique over the Internet. Current HAS solutions only consider either client- or server-side optimization, which causes many problems in achieving high-quality video, leading to sub-optimal user experience and network resource utilization. Recent studies have revealed that network-assisted HAS techniques, by providing a comprehensive view of the network, can lead to more significant gains in HAS system performance. In this paper, we leverage the capabilities of Software-Defined Networking (SDN), Network Function Virtualization (NFV), and edge computing to introduce a CDN-aware QoE optimization framework for SDN-assisted adaptive video streaming called CSDN. We employ virtualized edge entities to collect various information items (e.g., user-, client-, CDN-, and network-level information) in a time-slotted manner. These components then run an optimization model with a new server/segment selection approach in a time-slotted fashion to serve clients’ requests by selecting optimal cache servers (in terms of fetch and transcoding times). In case of a cache miss, a client’s request is served (i) by an optimal replacement quality (only better quality levels with minimum deviation) from a cache server, (ii) by a quality transcoded from an optimal replacement quality at the edge, or (iii) by the originally requested quality level from the origin server. By means of comprehensive experiments conducted on a real-world large-scale testbed, we demonstrate that CSDN outperforms the state-of-the-art in terms of playback bitrate, the number of quality switches, the number of stalls, and bandwidth usage by at least 7.5%, 19%, 19%, and 63%, respectively.
WISH: User-centric Bitrate Adaptation for HTTP Adaptive Streaming on Mobile D... (Minh Nguyen)
Recently, mobile devices have become paramount in online video streaming. Adaptive bitrate (ABR) algorithms of players responsible for selecting the quality of the videos face critical challenges in providing a high Quality of Experience (QoE) for end users. One open issue is how to ensure the optimal experience for heterogeneous devices in the context of extreme variation of mobile broadband networks. Additionally, end users may have different priorities on video quality and data usage (i.e., the amount of data downloaded to the devices through the mobile networks). A generic mechanism for players that enables specification of various policies to meet end users’ needs is still missing. In this paper, we propose a weighted sum model, namely WISH, that yields high QoE of the video and allows end users to express their preferences among different parameters (i.e., data usage, stall events, and video quality) of video streaming. WISH has been implemented into ExoPlayer, a popular player used in many mobile applications. The experimental results show that WISH improves the QoE by up to 17.6% while saving 36.4% of data usage compared to state-of-the-art ABR algorithms and provides dynamic adaptation to end users’ requirements.
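The weighted sum idea behind WISH can be sketched as follows; the weights, the quality/data/stall proxies, and the numbers are illustrative assumptions, not the paper's actual model:

```python
def wish_select(levels, throughput, w_quality=1.0, w_data=0.5, w_stall=2.0):
    """WISH-style weighted sum model (illustrative sketch): score each
    bitrate level by quality gained, data used, and stall risk, with
    user-chosen weights, and pick the level with the highest score."""
    best, best_score = None, float("-inf")
    for bitrate in levels:
        quality = bitrate                       # proxy: higher bitrate, higher quality
        data = bitrate                          # data usage grows with bitrate
        stall = max(0.0, bitrate - throughput)  # risk once demand exceeds capacity
        score = w_quality * quality - w_data * data - w_stall * stall
        if score > best_score:
            best, best_score = bitrate, score
    return best

# Bitrate ladder in Mbit/s, 4 Mbit/s estimated throughput:
print(wish_select([1, 2.5, 5, 8], throughput=4.0))  # 2.5
```

A data-conscious user would raise `w_data` and the selection shifts toward lower bitrates; this tunability is the point of the weighted sum formulation.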
Quality impact of scalable video coding tunneling for media aware content del... (Alpen-Adria-Universität)
The document discusses a study on using Scalable Video Coding (SVC) tunneling for media-aware content delivery. SVC can save bandwidth for multicast content delivery but many devices do not support SVC natively. The study proposes using SVC tunneling only for delivery, with transcoding at ingress and egress points. The study tests transcoding MPEG-2 video to SVC and back to MPEG-2, finding a peak signal-to-noise ratio drop of 2.1 dB and 43% lower bandwidth requirements compared to MPEG-2 simulcast. Future work could analyze different video formats and implementations of SVC.
ComplexCTTP: Complexity Class Based Transcoding Time Prediction for Video Seq... (Alpen-Adria-Universität)
HTTP Adaptive Streaming of video content is becoming an integral part of the Internet and accounts for the majority of today’s traffic. Although Internet bandwidth is constantly increasing, video compression technology plays an important role and the major challenge is to select and set up multiple video codecs, each with hundreds of transcoding parameters. Additionally, the transcoding speed depends directly on the selected transcoding parameters and the infrastructure used. Predicting transcoding time for multiple transcoding parameters with different codecs and processing units is a challenging task, as it depends on many factors. This paper provides a novel and considerably fast method for transcoding time prediction using video content classification and neural network prediction. Our artificial neural network (ANN) model predicts the transcoding times of video segments for state-of-the-art video codecs based on transcoding parameters and content complexity. We evaluated our method for two video codecs/implementations (AVC/x264 and HEVC/x265) as part of large-scale HTTP Adaptive Streaming services. The ANN model of our method is able to predict the transcoding time by minimizing the mean absolute error (MAE) to 1.37 and 2.67 for x264 and x265 codecs, respectively. For x264, this is an improvement of 22% compared to the state of the art.
Understanding Quality of Experience of Heuristic-based HTTP Adaptive Bitrate ... (Alpen-Adria-Universität)
Adaptive BitRate (ABR) algorithms play a crucial role in delivering the highest possible Quality of Experience (QoE) in HTTP Adaptive Streaming (HAS). Online video streaming service providers use HAS, the dominant video streaming technique on the Internet, to deliver the best QoE for their users. Viewer satisfaction relies heavily on how well the ABR of a media player can adapt the stream’s quality to the current network conditions. QoE for end-to-end video streaming sessions has been evaluated in many research projects to give better insight into the quality metrics. Objective evaluation models such as ITU Telecommunication Standardization Sector (ITU-T) P.1203 allow for the calculation of a Mean Opinion Score (MOS) from various QoE metrics, while subjective evaluation is the best assessment approach for investigating end-user opinion of a video streaming session’s experienced quality. We have conducted subjective evaluations with crowdsourced participants and evaluated the MOS of the sessions using the ITU-T P.1203 quality model. This paper’s main contribution is a comparison of subjective and objective evaluations for well-known heuristic-based ABRs.
A Distributed Delivery Architecture for User Generated Content Live Streaming... (Alpen-Adria-Universität)
Live User Generated Content (UGC) has become very popular in today’s video streaming applications, in particular gaming and e-sports. However, streaming UGC presents unique challenges for video delivery. When dealing with the technical complexity of managing hundreds or thousands of concurrent, geographically distributed streams, UGC systems are forced to make difficult trade-offs between video quality and latency. To bridge this gap, this paper presents a fully distributed architecture for UGC delivery over the Internet, termed QuaLA (joint Quality-Latency Architecture). The proposed architecture aims to jointly optimize video quality and latency for better user experience and fairness. Using the proximal Jacobi alternating direction method of multipliers (ProxJ-ADMM) technique, QuaLA proposes a fully distributed mechanism to achieve an optimal solution. We demonstrate the effectiveness of the proposed architecture through real-world experiments on the CloudLab testbed. Experimental results show that QuaLA achieves high quality, with more than 57% improvement, while preserving a good level of fairness and respecting a given target latency among all clients compared to conventional client-driven solutions.
Machine Learning Based Video Coding Enhancements for HTTP Adaptive Streaming (Alpen-Adria-Universität)
This document discusses machine learning based enhancements for video coding and HTTP adaptive streaming. It introduces the research questions around efficiently providing multi-rate video representations over different resolutions for adaptive streaming, improving video codec performance with machine learning, improving video quality with machine learning, and using machine learning for perceptual quality assessment. It outlines the methodology, design process, and existing results from papers on fast multi-rate encoding using information from reference representations and machine learning models. Ongoing and future work is focused on super-resolution, perceptual quality assessment with machine learning, and improving in-loop filtering with machine learning.
FaME-ML: Fast Multirate Encoding for HTTP Adaptive Streaming Using Machine Le... (Alpen-Adria-Universität)
HTTP Adaptive Streaming (HAS) is the most common approach for delivering video content over the Internet. The requirement to encode the same content at different quality levels (i.e., representations) in HAS is a challenging problem for content providers. Fast multirate encoding approaches try to accelerate this process by reusing information from previously encoded representations. In this paper, we propose to use convolutional neural networks (CNNs) to speed up the encoding of multiple representations, with a specific focus on parallel encoding. In parallel encoding, the overall time-complexity is limited to the maximum time-complexity of the representations that are encoded in parallel. Therefore, instead of reducing the time-complexity for all representations, the highest time-complexities are reduced. Experimental results show that FaME-ML achieves significant time-complexity savings in parallel encoding scenarios (41% on average) with a slight increase in bitrate and quality degradation compared to the HEVC reference software.
Mobile networks equipped with edge computing nodes enable access to information that can be leveraged to assist client-based adaptive bitrate (ABR) algorithms in making better adaptation decisions to improve both Quality of Experience (QoE) and fairness. For this purpose, we propose a novel on-the-fly edge mechanism, named EADAS (Edge Assisted Adaptation Scheme for HTTP Adaptive Streaming), located at the edge node that assists and improves the ABR decisions on-the-fly. EADAS proposes (i) an edge ABR algorithm to improve QoE and fairness for clients and (ii) a segment prefetching scheme. The results show a QoE increase of 4.6%, 23.5%, and 24.4% and a fairness increase of 11%, 3.4%, and 5.8% when using a buffer-based, a throughput-based, and a hybrid ABR algorithm, respectively, at the client compared with client-based algorithms without EADAS. Moreover, QoE and fairness among clients can be prioritized using parameters of the EADAS algorithm according to service providers’ requirements.
High Efficiency Video Coding (HEVC) improves encoding efficiency by utilizing sophisticated tools such as flexible Coding Tree Unit (CTU) partitioning. A Coding Unit (CU) can be split recursively into four equally sized CUs ranging from 64×64 down to 8×8 pixels. At each depth level (or CU size), HEVC performs intra prediction via exhaustive mode search to improve encoding efficiency, resulting in very high encoding time complexity. This paper proposes an Intra CU Depth Prediction (INCEPT) algorithm, which limits Rate-Distortion Optimization (RDO) for each CTU in HEVC by utilizing the spatial correlation with the neighboring CTUs, computed using a DCT energy-based feature. Thus, INCEPT reduces the number of candidate CU sizes that need to be considered for each CTU in HEVC intra coding. Experimental results show that the INCEPT algorithm achieves a better trade-off between encoding efficiency and encoding time saving (i.e., BDR/∆T) than the benchmark algorithms: while BDR/∆T is 12.35% and 9.03% for the benchmark algorithms, it is 5.49% for the proposed algorithm. As a result, INCEPT achieves a 23.34% reduction in encoding time on average while incurring only a 1.67% increase in bit rate compared to the original coding in the x265 HEVC open-source encoder.
Relevance-Based Compression of Cataract Surgery Videos Using Convolutional Ne... (Alpen-Adria-Universität)
The document proposes a relevance-based compression method for cataract surgery videos using convolutional neural networks. The method uses Mask R-CNN to detect relevant regions such as the cornea and instruments. Pixels outside these regions are removed or compressed at lower quality. Testing showed the method achieved up to a 68% reduction in video size while maintaining good quality for the relevant regions.
This document summarizes a study on using Multipath TCP (MPTCP) to tolerate packet reordering and path heterogeneity in wireless networks. The study evaluated the performance of different MPTCP congestion controllers combined with various packet reordering recovery algorithms. The results showed that MPTCP with D-SACK or TCP-DOOR performed best in terms of throughput by increasing path utilization. D-SACK required less memory and was best for asymmetric paths, while TCP-DOOR was best for symmetric paths. In general, packet reordering solutions improved MPTCP performance significantly.
This document summarizes an upcoming presentation on HTTP Adaptive Streaming. The presentation will cover content provisioning, delivery, consumption, and end-to-end aspects of HAS, as well as quality of experience. It will introduce ATHENA, a research center focused on adaptive streaming over HTTP and emerging multimedia technologies. The agenda outlines sections on video encoding for HAS, edge computing, network assistance for clients, bitrate adaptation schemes, and quality of experience models. The presenters are Christian Timmerer and Hermann Hellwagner from Alpen-Adria-Universität Klagenfurt.
Live video streaming is widely embraced in video services, and its applications have attracted much attention in recent years. The increasing number of users demanding high-quality (e.g., 4K resolution) live videos increases the bandwidth utilization in the backhaul network. To decrease bandwidth utilization in HTTP Adaptive Streaming (HAS), on-the-fly transcoding approaches deliver only the highest bitrate representation to the edge and generate the other representations by transcoding at the edge. However, this approach is inefficient due to the high transcoding cost. In this paper, we propose LwTE-Live, a light-weight transcoding-at-the-edge method for live applications, to decrease the bandwidth utilization and the overall live streaming cost. During the encoding processes at the origin server, the optimal encoding decisions are saved as metadata, and the metadata replaces the corresponding representation in the bitrate ladder. The significantly reduced size of the metadata compared to its corresponding representation decreases the bandwidth utilization. The extracted metadata is then utilized at the edge to decrease the transcoding time. We formulate the problem as a Mixed-Binary Linear Programming (MBLP) model to optimize the live streaming cost, including bandwidth and computation costs. We compare the proposed model with state-of-the-art approaches, and the experimental results show that our proposed method reduces cost and backhaul bandwidth utilization by up to 34% and 45%, respectively.
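The underlying trade-off (fetch a representation over the backhaul vs. transcode it at the edge) can be sketched as a per-representation cost decision; the function and the cost figures are illustrative assumptions, not the paper's MBLP model:

```python
def serve_representation(size_mb, transcode_s, bw_cost_per_mb, cpu_cost_per_s):
    """Per-representation decision (illustrative sketch of the trade-off):
    fetch from the origin (bandwidth cost) or transcode at the edge from
    the highest representation (computation cost), whichever is cheaper."""
    fetch_cost = size_mb * bw_cost_per_mb
    transcode_cost = transcode_s * cpu_cost_per_s
    if fetch_cost <= transcode_cost:
        return ("fetch", fetch_cost)
    return ("transcode", transcode_cost)

# 50 MB representation, 5 s of transcoding, costs in hypothetical cents:
print(serve_representation(50, 5, bw_cost_per_mb=2, cpu_cost_per_s=5))  # ('transcode', 25)
```

LwTE-Live's metadata tilts this decision further toward transcoding by shrinking `transcode_s`, since the edge reuses the origin's saved encoding decisions instead of redoing the search.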
On Optimizing Resource Utilization in AVC-based Real-time Video Streaming (Alpen-Adria-Universität)
Real-time video streaming traffic and related applications have witnessed significant growth in recent years. However, this has been accompanied by some challenging issues, predominantly resource utilization. IP multicasting, as a solution to this problem, suffers from many problems. Scalable video coding could not gain wide adoption in the industry due to reduced compression efficiency and additional computational complexity. The emerging software-defined networking (SDN) and network function virtualization (NFV) paradigms enable researchers to cope with IP multicasting issues in novel ways. In this paper, by leveraging the SDN and NFV concepts, we introduce a cost-aware approach to provide advanced video coding (AVC)-based real-time video streaming services in the network. In this study, we use two types of virtualized network functions (VNFs): virtual reverse proxy (VRP) and virtual transcoder (VTF) functions. At the edge of the network, VRPs are responsible for collecting clients’ requests and sending them to an SDN controller. Then, executing a mixed-integer linear program (MILP) determines an optimal multicast tree from an appropriate set of video source servers to the optimal group of transcoders. The desired video is sent over the multicast tree. The VTFs transcode the received video segments and stream them to the requesting VRPs over unicast paths. To mitigate the time complexity of the proposed MILP model, we propose a heuristic algorithm that determines a near-optimal solution in a reasonable amount of time. Using the MiniNet emulator, we evaluate the proposed approach and show that it achieves better performance in terms of cost and resource utilization in comparison with traditional multicast and unicast approaches.
With the recent surge in Internet multimedia traffic, media players, and DASH media players in particular, have been enhanced and improved at an incredible rate. DASH media players adapt a media stream to network fluctuations by continuously monitoring the network and making decisions in near real-time. The performance of the algorithms in charge of making such decisions has often been difficult to evaluate and assess objectively.
CAdViSE provides a Cloud-based Adaptive Video Streaming Evaluation framework for the automated testing of adaptive media players. In this talk, I will introduce the CAdViSE framework and its applications, and present the benefits and advantages it can bring to every web-based media player development pipeline. To demonstrate the power of CAdViSE in evaluating Adaptive Bitrate (ABR) algorithms, I will exhibit its capabilities when combined with objective Quality of Experience (QoE) models. For this talk, my team at Bitmovin/ATHENA selected the ITU-T P.1203 (mode 1) model to execute experiments, calculate the Mean Opinion Score (MOS), and better understand the behavior of a set of well-known ABR algorithms in a real-life setting. The talk will show how we tested and deployed our framework using a modular architecture on a cloud infrastructure. This approach enables a massive increase in the number of concurrent experiments and in the number of media players that can be evaluated and compared at the same time, thus enabling maximum scalability. In my team's most recent experiments, we used Amazon Web Services (AWS) for demonstration purposes. Another awesome feature of CAdViSE that will be discussed here is the ability to shape the test network with endless network profiles. To do so, we used a fluctuating network profile and a real LTE network trace based on the recorded Internet usage of a bicycle commuter in Belgium.
CAdViSE produces comprehensive logs for each media streaming experimental session. These logs can then be used for different goals, such as objective evaluation, or to stitch media segments back together and conduct subjective evaluations afterwards. In addition, startup delays, stall events, and other media streaming defects can be reproduced exactly as they happened during the experimental streaming sessions.
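The network-shaping idea mentioned above can be sketched as a timed bandwidth profile replayed through Linux `tc`. The device name, rates, and token-bucket parameters below are illustrative assumptions; CAdViSE's actual shaping mechanism may differ.

```python
# Sketch: a "network profile" as a timed sequence of bandwidth caps,
# turned into tc(8) token-bucket-filter commands. (Assumed setup: the
# test traffic leaves via interface eth0; run the commands with root
# privileges in a real experiment.)

fluctuation_profile = [
    (30, "5mbit"),   # (duration in seconds, bandwidth cap)
    (30, "1mbit"),
    (30, "3mbit"),
]

def tc_commands(profile, dev="eth0"):
    """One tc command per profile step: the first adds the qdisc,
    subsequent steps change its rate."""
    first_rate = profile[0][1]
    cmds = [f"tc qdisc add dev {dev} root tbf rate {first_rate} "
            f"burst 32kbit latency 400ms"]
    for _, rate in profile[1:]:
        cmds.append(f"tc qdisc change dev {dev} root tbf rate {rate} "
                    f"burst 32kbit latency 400ms")
    return cmds

for cmd in tc_commands(fluctuation_profile):
    print(cmd)
```

A driver loop would sleep for each step's duration between commands; replaying a recorded LTE trace is the same mechanism with a much longer profile list.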
Generic and Automatic Specman Based Verification Environment - DVClub
This document describes a generic and automatic verification environment for image signal processing IPs. It uses configurable verification components (eVCs) to model the register interface and video data interfaces of an IP. A register model and memory model interface with the DUT. 'C'/Python models are used for output checking. Test cases and coverage are generated automatically from IP-XACT files describing the IP interfaces and registers. The environment supports verifying IPs individually and connected in an image processing pipeline at the subsystem level.
Scalable High Efficiency Video Coding based HTTP Adaptive Streaming over QUIC... - Alpen-Adria-Universität
HTTP/2 has been explored widely for video streaming, but it still suffers from Head-of-Line blocking and three-way handshake delay due to TCP. Meanwhile, QUIC, running on top of UDP, can tackle these issues. In addition, although many adaptive bitrate (ABR) algorithms have been proposed for scalable and non-scalable video streaming, the literature lacks an algorithm designed for both types of video streaming approaches. In this paper, we investigate the impact of QUIC and HTTP/2 on the performance of ABR algorithms in terms of different metrics. Moreover, we propose an efficient approach for utilizing scalable video coding formats for adaptive video streaming that combines a traditional video streaming approach (based on non-scalable video coding formats) and a retransmission technique. The experimental results show that QUIC benefits significantly from our proposed method in the context of packet loss and retransmission. Compared to HTTP/2, it improves the average video quality and also provides smoother adaptation behavior. Finally, we demonstrate that our proposed method, originally designed for non-scalable video codecs, also works efficiently for scalable videos such as Scalable High Efficiency Video Coding (SHVC).
MiPSO: Multi-Period Per-Scene Optimization For HTTP Adaptive Streaming - Alpen-Adria-Universität
Video delivery over the Internet has become more and more established in recent years due to the widespread use of Dynamic Adaptive Streaming over HTTP (DASH). The current DASH specification defines a hierarchical data model for Media Presentation Descriptions (MPDs) in terms of periods, adaptation sets, representations, and segments. Although multi-period MPDs are widely used in live streaming scenarios, they are not fully utilized in Video-on-Demand (VoD) HTTP adaptive streaming (HAS) scenarios. In this paper, we introduce MiPSO, a framework for Multi-Period per-Scene Optimization, to examine multiple periods in VoD HAS scenarios. MiPSO provides different encoded representations of a video at either (i) maximum possible quality or (ii) minimum possible bitrate, beneficial to both service providers and subscribers. In each period, the proposed framework adjusts the video representations (resolution-bitrate pairs) by taking into account the complexities of the video content, with the aim of achieving streams at either higher qualities or lower bitrates. The experimental evaluation with a test video data set shows that MiPSO reduces the average bitrate of streams with the same visual quality by approximately 10% or increases the visual quality of streams by at least 1 dB in terms of Peak Signal-to-Noise Ratio (PSNR) at the same bitrate compared to conventional approaches.
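The per-scene selection idea can be sketched in a few lines: for each scene, pick the cheapest representation that still meets a quality target. The PSNR numbers below are invented for illustration; MiPSO derives such values from actual encodings of the content.

```python
# Minimal sketch of per-scene ladder selection in the spirit of MiPSO's
# "minimum possible bitrate" mode. Candidate triples are
# (vertical resolution, bitrate in kbps, PSNR in dB) per scene;
# all numbers are hypothetical.

scenes = {
    "low_motion":  [(1080, 3000, 42.0), (720, 1500, 41.5), (480, 800, 38.0)],
    "high_motion": [(1080, 3000, 36.0), (720, 1500, 35.8), (480, 800, 33.0)],
}

def pick_min_bitrate(candidates, target_psnr):
    """Cheapest representation that still reaches the quality target;
    fall back to the best-quality candidate if none does."""
    ok = [c for c in candidates if c[2] >= target_psnr]
    if not ok:
        return max(candidates, key=lambda c: c[2])
    return min(ok, key=lambda c: c[1])

for name, cands in scenes.items():
    print(name, pick_min_bitrate(cands, target_psnr=35.0))
```

Note how content complexity drives the choice: the easy scene reaches the target at 800 kbps, while the complex scene needs 1500 kbps, which is exactly why a single ladder for the whole video wastes bits.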
The document summarizes a presentation about the UDT high performance data transfer protocol. It describes the UDT team members, provides an overview of UDT including its capabilities and design, compares UDT to TCP, and lists some technical implementations of UDT by various organizations for tasks like file transfer and distributed visualization of large datasets.
Accelerating Networked Applications with Flexible Packet Processing - Open-NFP
The recent surge of network I/O performance has put enormous pressure on memory and software I/O processing subsystems for many cloud and data center applications, such as key-value stores and real-time analytics frameworks. A major reason for the high memory and processing overheads is the inefficient use of these resources by network interface cards. Offloading functionality to a programmable NIC can help, but what to offload needs to be carefully chosen.
This presentation will cover a number of reusable offloading mechanisms that can help data center software processing efficiency. It will show how to implement these mechanisms in the P4 programming language and discuss their efficiency using experiments run on the Netronome Agilio-CX NIC.
RootGuard is a system that provides fine-grained control of root access on Android phones to enhance security. It defends against threats like silent installation of malware, termination of antivirus tools, and creation of backdoors. RootGuard adds a SuperuserEx component, policy storage, and kernel module to the standard root management model. It was shown to effectively prevent attacks from malware like RootSmart and DKFBootKit while introducing minimal performance overhead.
WISH: User-centric Bitrate Adaptation for HTTP Adaptive Streaming on Mobile D... - Minh Nguyen
Recently, mobile devices have become paramount in online video streaming. Adaptive bitrate (ABR) algorithms of players responsible for selecting the quality of the videos face critical challenges in providing a high Quality of Experience (QoE) for end users. One open issue is how to ensure the optimal experience for heterogeneous devices in the context of extreme variation of mobile broadband networks. Additionally, end users may have different priorities on video quality and data usage (i.e., the amount of data downloaded to the devices through the mobile networks). A generic mechanism for players that enables specification of various policies to meet end users’ needs is still missing. In this paper, we propose a weighted sum model, namely WISH, that yields high QoE of the video and allows end users to express their preferences among different parameters (i.e., data usage, stall events, and video quality) of video streaming. WISH has been implemented into ExoPlayer, a popular player used in many mobile applications. The experimental results show that WISH improves the QoE by up to 17.6% while saving 36.4% of data usage compared to state-of-the-art ABR algorithms and provides dynamic adaptation to end users’ requirements.
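The weighted-sum idea behind WISH can be sketched as scoring each candidate bitrate by a user-weighted mix of quality, data usage, and stall risk, then picking the best-scoring one. The scoring formula, normalization, and weights below are illustrative assumptions, not the paper's exact model.

```python
# Sketch of a weighted-sum bitrate selector: end users express
# preferences via weights on quality, data usage, and stall risk.
# (Hypothetical formula; WISH's actual model differs in detail.)

def wish_score(bitrate_kbps, throughput_kbps,
               w_quality=0.5, w_data=0.3, w_stall=0.2, top_kbps=4000):
    quality = (bitrate_kbps / top_kbps) ** 0.5   # diminishing returns on quality
    data = bitrate_kbps / top_kbps               # data usage grows linearly
    stall = max(0.0, (bitrate_kbps - throughput_kbps) / top_kbps)  # rebuffer risk
    return w_quality * quality - w_data * data - w_stall * stall

def select_bitrate(ladder_kbps, throughput_kbps, **weights):
    """Pick the ladder rung with the highest user-weighted score."""
    return max(ladder_kbps, key=lambda b: wish_score(b, throughput_kbps, **weights))

ladder = [400, 800, 1500, 3000, 4000]
print(select_bitrate(ladder, 2000))  # balanced user -> 1500
print(select_bitrate(ladder, 2000,
                     w_quality=0.2, w_data=0.7, w_stall=0.1))  # data saver -> 400
```

Changing the weights is all a "policy" amounts to here: the data-saving user drops to the lowest rung, while the balanced user settles just under the measured throughput.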
Quality impact of scalable video coding tunneling for media aware content del... - Alpen-Adria-Universität
The document discusses a study on using Scalable Video Coding (SVC) tunneling for media-aware content delivery. SVC can save bandwidth for multicast content delivery but many devices do not support SVC natively. The study proposes using SVC tunneling only for delivery, with transcoding at ingress and egress points. The study tests transcoding MPEG-2 video to SVC and back to MPEG-2, finding a peak signal-to-noise ratio drop of 2.1 dB and 43% lower bandwidth requirements compared to MPEG-2 simulcast. Future work could analyze different video formats and implementations of SVC.
ComplexCTTP: Complexity Class Based Transcoding Time Prediction for Video Seq... - Alpen-Adria-Universität
HTTP Adaptive Streaming of video content is becoming an integral part of the Internet and accounts for the majority of today’s traffic. Although Internet bandwidth is constantly increasing, video compression technology plays an important role and the major challenge is to select and set up multiple video codecs, each with hundreds of transcoding parameters. Additionally, the transcoding speed depends directly on the selected transcoding parameters and the infrastructure used. Predicting transcoding time for multiple transcoding parameters with different codecs and processing units is a challenging task, as it depends on many factors. This paper provides a novel and considerably fast method for transcoding time prediction using video content classification and neural network prediction. Our artificial neural network (ANN) model predicts the transcoding times of video segments for state-of-the-art video codecs based on transcoding parameters and content complexity. We evaluated our method for two video codecs/implementations (AVC/x264 and HEVC/x265) as part of large-scale HTTP Adaptive Streaming services. The ANN model of our method is able to predict the transcoding time by minimizing the mean absolute error (MAE) to 1.37 and 2.67 for x264 and x265 codecs, respectively. For x264, this is an improvement of 22% compared to the state of the art.
Understanding Quality of Experience of Heuristic-based HTTP Adaptive Bitrate ... - Alpen-Adria-Universität
Adaptive BitRate (ABR) algorithms play a crucial role in delivering the highest possible Quality of Experience (QoE) in HTTP Adaptive Streaming (HAS). Online video streaming service providers use HAS, the dominant video streaming technique on the Internet, to deliver the best QoE for their users. Viewer satisfaction relies heavily on how well the ABR algorithm of a media player can adapt the stream's quality to the current network conditions. QoE for end-to-end video streaming sessions has been evaluated in many research projects to give better insight into quality metrics. Objective evaluation models such as ITU Telecommunication Standardization Sector (ITU-T) P.1203 allow for the calculation of a Mean Opinion Score (MOS) by considering various QoE metrics, while subjective evaluation is the best assessment approach for investigating end-user opinion of the quality experienced during a video streaming session. We have conducted subjective evaluations with crowdsourced participants and evaluated the MOS of the sessions using the ITU-T P.1203 quality model. This paper's main contribution is a comparison of subjective evaluation with objective evaluation for well-known heuristic-based ABRs.
A Distributed Delivery Architecture for User Generated Content Live Streaming... - Alpen-Adria-Universität
Live User-Generated Content (UGC) has become very popular in today's video streaming applications, in particular gaming and e-sports. However, streaming UGC presents unique challenges for video delivery. When dealing with the technical complexity of managing hundreds or thousands of concurrent, geographically distributed streams, UGC systems are forced to make difficult trade-offs between video quality and latency. To bridge this gap, this paper presents a fully distributed architecture for UGC delivery over the Internet, termed QuaLA (joint Quality-Latency Architecture). The proposed architecture jointly optimizes video quality and latency for better user experience and fairness. Using the proximal Jacobi alternating direction method of multipliers (ProxJ-ADMM) technique, QuaLA achieves an optimal solution through a fully distributed mechanism. We demonstrate the effectiveness of the proposed architecture through real-world experiments on the CloudLab testbed. Experimental results show that QuaLA achieves high quality, with more than 57% improvement, while preserving a good level of fairness and respecting a given target latency among all clients compared to conventional client-driven solutions.
Machine Learning Based Video Coding Enhancements for HTTP Adaptive Streaming - Alpen-Adria-Universität
This document discusses machine learning based enhancements for video coding and HTTP adaptive streaming. It introduces the research questions around efficiently providing multi-rate video representations over different resolutions for adaptive streaming, improving video codec performance with machine learning, improving video quality with machine learning, and using machine learning for perceptual quality assessment. It outlines the methodology, design process, and existing results from papers on fast multi-rate encoding using information from reference representations and machine learning models. Ongoing and future work is focused on super-resolution, perceptual quality assessment with machine learning, and improving in-loop filtering with machine learning.
FaME-ML: Fast Multirate Encoding for HTTP Adaptive Streaming Using Machine Le... - Alpen-Adria-Universität
HTTP Adaptive Streaming (HAS) is the most common approach for delivering video content over the Internet. The requirement to encode the same content at different quality levels (i.e., representations) in HAS is a challenging problem for content providers. Fast multirate encoding approaches try to accelerate this process by reusing information from previously encoded representations. In this paper, we propose to use convolutional neural networks (CNNs) to speed up the encoding of multiple representations with a specific focus on parallel encoding. In parallel encoding, the overall time-complexity is limited to the maximum time-complexity of one of the representations that are encoded in parallel. Therefore, instead of reducing the time-complexity for all representations, the highest time-complexities are reduced. Experimental results show that FaME-ML achieves significant time-complexity savings in parallel encoding scenarios (41% on average) with a slight increase in bitrate and quality degradation compared to the HEVC reference software.
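The parallel-encoding argument above can be illustrated in a few lines: with one worker per representation, wall-clock time equals the slowest encode, so only shrinking the maximum helps. The encode times below are made up for illustration.

```python
# Why FaME-ML targets the *highest* per-representation complexities:
# in fully parallel encoding, wall-clock time is the max over the
# representations, so speeding up a fast rung changes nothing.

def parallel_wall_clock(encode_times_s):
    """Wall-clock time with one worker per representation."""
    return max(encode_times_s)

baseline = {"1080p": 100.0, "720p": 60.0, "480p": 30.0, "360p": 15.0}
accelerated = {**baseline, "1080p": 59.0}  # speed up only the heaviest rung

print(parallel_wall_clock(baseline.values()))     # 100.0
print(parallel_wall_clock(accelerated.values()))  # 60.0, now bounded by 720p
```

Speeding up the 480p or 360p encodes in this example would leave the wall-clock time at 100 s, which is why the paper concentrates its CNN-guided savings on the most expensive representations.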
Mobile networks equipped with edge computing nodes enable access to information that can be leveraged to assist client-based adaptive bitrate (ABR) algorithms in making better adaptation decisions to improve both Quality of Experience (QoE) and fairness. For this purpose, we propose a novel on-the-fly edge mechanism, named EADAS (Edge Assisted Adaptation Scheme for HTTP Adaptive Streaming), located at the edge node that assists and improves the ABR decisions on-the-fly. EADAS proposes (i) an edge ABR algorithm to improve QoE and fairness for clients and (ii) a segment prefetching scheme. The results show a QoE increase of 4.6%, 23.5%, and 24.4% and a fairness increase of 11%, 3.4%, and 5.8% when using a buffer-based, a throughput-based, and a hybrid ABR algorithm, respectively, at the client compared with client-based algorithms without EADAS. Moreover, QoE and fairness among clients can be prioritized using parameters of the EADAS algorithm according to service providers’ requirements.
High Efficiency Video Coding (HEVC) improves encoding efficiency by utilizing sophisticated tools such as flexible Coding Tree Unit (CTU) partitioning. A Coding Unit (CU) can be split recursively into four equally sized CUs ranging from 64×64 down to 8×8 pixels. At each depth level (or CU size), HEVC performs intra prediction via an exhaustive mode search to improve encoding efficiency, which results in very high encoding time complexity. This paper proposes an Intra CU Depth Prediction (INCEPT) algorithm, which limits Rate-Distortion Optimization (RDO) for each CTU in HEVC by exploiting the spatial correlation with neighboring CTUs, computed using a DCT energy-based feature. Thus, INCEPT reduces the number of candidate CU sizes that must be considered for each CTU in HEVC intra coding. Experimental results show that the INCEPT algorithm achieves a better trade-off between encoding efficiency and encoding time saving (i.e., BDR/∆T) than the benchmark algorithms: while BDR/∆T is 12.35% and 9.03% for the benchmark algorithms, it is 5.49% for the proposed algorithm. As a result, INCEPT achieves a 23.34% reduction in encoding time on average while incurring only a 1.67% increase in bitrate compared to the original coding in the x265 HEVC open-source encoder.
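A DCT-energy texture feature of the kind INCEPT relies on can be sketched as follows. This is our own simplified variant, not the paper's exact definition: flat blocks have near-zero AC energy and need little partitioning, while textured blocks score high and suggest deeper CU splits.

```python
import numpy as np

# Toy DCT-energy feature for a square pixel block: transform, drop the
# DC term, and sum the squared AC coefficients as a texture measure.

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def ac_energy(block):
    """Sum of squared AC coefficients: ~0 for flat blocks, large for texture."""
    d = dct_matrix(block.shape[0])
    coeffs = d @ block @ d.T   # separable 2-D DCT
    coeffs[0, 0] = 0.0         # discard the DC term
    return float(np.sum(coeffs ** 2))

flat = np.full((8, 8), 128.0)
noisy = np.random.default_rng(0).uniform(0, 255, (8, 8))
print(ac_energy(flat))   # ~0: a flat block needs no deep splitting
print(ac_energy(noisy))  # large: complex texture, deeper partitioning likely
```

A depth predictor would compare such per-CTU energies against those of the already-coded neighbors to prune CU sizes before RDO; the thresholding logic is omitted here.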
Relevance-Based Compression of Cataract Surgery Videos Using Convolutional Ne... - Alpen-Adria-Universität
The document proposes a relevance-based compression method for cataract surgery videos using convolutional neural networks. The method uses Mask R-CNN to detect relevant regions like the cornea and instruments. Pixels outside these regions are removed or compressed at lower quality. Testing showed the method achieved up to 68% reduction in video size while maintaining good quality for relevant regions.
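The pixel-removal step can be illustrated with a toy mask: zero everything outside a detected region so the encoder spends almost no bits there. The ROI box below is hypothetical; the paper obtains its regions from Mask R-CNN segmentation.

```python
import numpy as np

# Sketch of relevance-based masking: keep only the region of interest
# (e.g., the cornea area) and zero the rest of the frame before
# encoding. Large uniform zero areas compress extremely cheaply.

def mask_irrelevant(frame, roi):
    """Keep only pixels inside roi=(y0, y1, x0, x1); zero the rest."""
    y0, y1, x0, x1 = roi
    out = np.zeros_like(frame)
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out

# a fake 1080p grayscale frame and a hypothetical detected ROI
frame = np.random.default_rng(1).integers(0, 256, (1080, 1920), dtype=np.uint8)
masked = mask_irrelevant(frame, roi=(300, 800, 600, 1400))
print(masked.sum() < frame.sum())  # most of the frame is now zeros
```

The alternative mentioned in the summary, compressing the outside region at lower quality rather than removing it, would replace the zeroing with a blur or a coarser quantization of the same region.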
This document summarizes a study on using Multipath TCP (MPTCP) to tolerate packet reordering and path heterogeneity in wireless networks. The study evaluated the performance of different MPTCP congestion controllers combined with various packet reordering recovery algorithms. The results showed that MPTCP with D-SACK or TCP-DOOR performed best in terms of throughput by increasing path utilization. D-SACK required less memory and was best for asymmetric paths, while TCP-DOOR was best for symmetric paths. In general, packet reordering solutions improved MPTCP performance significantly.
This document summarizes an upcoming presentation on HTTP Adaptive Streaming. The presentation will cover content provisioning, delivery, consumption, and end-to-end aspects of HAS, as well as quality of experience. It will introduce ATHENA, a research center focused on adaptive streaming over HTTP and emerging multimedia technologies. The agenda outlines sections on video encoding for HAS, edge computing, network assistance for clients, bitrate adaptation schemes, and quality of experience models. The presenters are Christian Timmerer and Hermann Hellwagner from Alpen-Adria-Universität Klagenfurt.
Wetland restoration in San Francisco Bay aims to reverse decades of habitat loss. San Francisco Bay once hosted over 350,000 acres of tidal wetlands but land conversion reduced this to around 115,000 acres by the 1990s. In 2003, over 16,500 acres of former salt ponds were acquired for restoration, representing the largest wetland restoration effort on the west coast. The restoration seeks to reintroduce tidal flooding and sedimentation to support target species like clapper rails and harvest mice while providing habitat, flood protection, and other benefits. Careful planning is needed to balance ecological and public access goals.
An App That Gathers Card Information and Campaigns in One Place: Bonus Flaş Is Live!
As Userspots, we launched 'Bonus Flaş' toward the end of 2015. It is the product we have spent the most effort on, and reaching more than 500,000 users in return has made us proud.
During the design process of Bonus Flaş, we carried out experience design to bring users' shopping habits and the product's KPIs together at a common middle ground.
This presentation covers drilling: its necessity, its types, precautions during drilling, selection of drilling method and equipment, and factors affecting the optimum drilling pressure.
Iranian art has one of the deepest art heritages, embodying many disciplines from architecture to music. The Iranian art world has been crafted by modern and contemporary elements that make its cultural community a unique collaboration.
Rajendra D Kori is a senior software engineer and project leader with over 9 years of experience developing applications for banking, financial services, and automotive clients. He has strong technical skills in Microsoft technologies like C#, ASP.NET, and SQL Server and has experience managing teams of up to 20 people on projects. Currently he is seeking a team leader position and is available for new opportunities.
This document discusses research at the intersection of human and computer vision, with a focus on objects in context. It provides background on visual perception and challenges in object and scene recognition. Context is important for human vision but difficult for computers. Representative work by Renninger and Malik shows that early scene identification can be explained by a simple texture model, demonstrating the value of interdisciplinary research between human and computer vision. The document concludes by discussing the author's experiences with interdisciplinary collaboration between psychology and computer science.
Drill-off Test: Selection of Weight on Bit and Rotary Speed - Aby Saxen
This test was first proposed by Lubinski in 1958 and is the quickest method of finding the WOB and RPM combination that maximizes ROP. To get the best combination, it is desirable that the test be carried out.
1) Companies are looking to revitalize existing oil and gas fields as new discoveries decline, through methods like infill drilling, recompletions, waterflooding, and enhanced oil recovery. Only 13% of fields have been fully abandoned.
2) Unconventional techniques originally developed for shale like horizontal drilling and hydraulic fracturing are now being applied to conventional fields. In the Delaware Sandstones, horizontal wells offset production declines and increased output by 60%.
3) IHS estimates these techniques could unlock an additional 141 billion barrels globally from existing low-productivity fields. The techniques are already extending the lives of fields and increasing recovery, such as adding 11% to France's Saint Martin du Bossenay field.
Labmeeting - 20150512 - New Secure Routing Method & Applications Facing MitM ... - Syuan Wang
This document proposes a new secure routing method using graph theory to route network traffic across multiple paths to mitigate man-in-the-middle attacks. It represents computer networks as graphs and develops an algorithm called pathFinder to choose secure path combinations based on criteria like safety, speed and buffer size. The method finds two paths between a source and destination with equal weight or calculates a ratio of traffic loads across two unequal weight paths to balance security and performance. A simulation confirmed the approach does not significantly impact router performance. Further optimization is needed to scale to larger networks and select only the most secure paths.
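The two-path idea can be sketched with a greedy edge-disjoint search (a simplification; the paper's `pathFinder` also weighs safety, speed, and buffer size, and the function names here are illustrative, not from the paper):

```python
from collections import deque

def bfs_path(adj, src, dst, banned):
    """Shortest path by hop count, avoiding edges listed in `banned`."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev and (node, nxt) not in banned and (nxt, node) not in banned:
                prev[nxt] = node
                queue.append(nxt)
    return None

def two_paths(adj, src, dst):
    """Find one path, then a second one that reuses none of its edges."""
    first = bfs_path(adj, src, dst, set())
    if first is None:
        return None
    used = set(zip(first, first[1:]))
    return first, bfs_path(adj, src, dst, used)
```

Splitting traffic across the two returned paths denies a single on-path attacker access to the full stream, which is the intuition behind the multi-path mitigation.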
This is the "Online ve Offline Entegre Pazarlama" (Online and Offline Integrated Marketing) presentation I prepared on 12 July 2012 for the second term of the Social Media Expertise certificate program at Bilgi Üniversitesi.
It offers a current assessment of online and offline marketing and predictions about the future of marketing.
This document provides information on constructing block walling. It discusses different types of blocks including their sizes, weights, and strengths. It describes how blocks are manufactured and the various materials that can be used. Special blocks are highlighted for bonding, openings, and maintaining courses. Guidance is given for below-grade construction, foundations, and installing insulation. Bonding techniques, building with blocks, and closing cavities are also summarized.
Advanced Blowout and Well Control, Robert D. Grace - Thần Chết Nụ Hôn
Nomenclature:
- Critical velocity, ft/sec
- Constant, dimensionless
- Fluid density, lb/ft³
- Pressure, psia
- Universal gas constant, ft-lbf/lb-°R
- Temperature, °R
- Particle diameter, ft
- Specific gravity of particles
This chapter discusses important well control equipment such as the blowout preventer stack, choke line, choke manifold, separator, and stabbing valve. It notes that while blowout preventers themselves are generally reliable, auxiliary equipment often has problems that can exacerbate well control issues. Issues discussed include leaking or non-functioning equipment and poor design of choke lines that are not resistant to erosion from abrasive flow.
This document discusses different types of multimedia streaming including RTP/RTSP streaming, HTTP progressive download, and adaptive HTTP streaming. It provides details on each type such as protocols used, advantages, and requirements. Adaptive HTTP streaming generates multiple versions of content at different bitrates and resolutions, chops them into segments, and allows the client to adaptively switch between versions based on available bandwidth. The document also discusses quality of service (QoS) versus quality of experience (QoE) for streaming and how adaptation techniques can help improve streaming quality.
Policy-Driven Dynamic HTTP Adaptive Streaming Player Environment - Minh Nguyen
Video streaming services account for the majority of today's traffic on the Internet. Although data transmission rates have been increasing significantly, the growing number and variety of media and users' higher quality expectations have led networked media applications to fully or even over-utilize the available throughput. HTTP Adaptive Streaming (HAS) has become a predominant technique for multimedia delivery over the Internet today. However, there are critical challenges for multimedia systems, especially the tradeoff between increasing content complexity and various requirements regarding time (latency) and quality (QoE). This thesis covers the main aspects within the end user's environment, including video consumption and interactivity, collectively referred to as the player environment, which is probably the most crucial component in today's multimedia applications and services. We will investigate methods that enable the specification of various policies reflecting the user's needs in given use cases. In addition, we will work on schemes that allow efficient support for server-assisted and network-assisted HAS systems. Finally, those approaches will be combined into policies that fit the requirements of all use cases (e.g., live streaming, video on demand, etc.).
This document proposes a hybrid P2P-CDN architecture called RICHTER for live video streaming. RICHTER leverages NFV and edge computing to employ virtual transcoding servers that optimize content delivery by intelligently selecting whether to fetch or transcode content from peers, CDNs or the origin server. An online learning approach is used to solve the NP-hard optimization problem. Evaluation on a large-scale testbed shows RICHTER improves QoE, latency and network utilization compared to baseline schemes. Future work includes extending the action classification tree.
Policy-Driven Dynamic HTTP Adaptive Streaming Player Environment - Minh Nguyen
In the last decades, video streaming has been developing significantly. Among current technologies, HTTP Adaptive Streaming (HAS) is considered the de-facto approach in multimedia transmission over the internet. Though the majority of HAS-based media services function well even under throughput restrictions and variations, there are still significant challenges for multimedia systems, especially the tradeoff among the increasing content complexity, various time-related requirements, and Quality of Experience (QoE). Optimizing for one aspect usually negatively impacts at least one of the other two aspects. This thesis tackles critical open research questions in the context of HAS that significantly impact the QoE at the client side. The main contributions of this thesis are four-fold:
- We propose the Days of Future Past Plus (DoFP+) approach, which leverages HTTP/3's features to upgrade low-quality segments while downloading others.
- This thesis proposes a weighted sum model, namely WISH, to provide a high QoE of the video and allow end users to express their preferences among different parameters, including data usage, stall events, and video quality.
- To improve segment qualities on high-end mobile devices, this thesis introduces an ABR scheme called WISH-SR that integrates a lightweight Convolutional Neural Network (CNN) to enhance low-resolution/low-quality videos at the client side.
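The WISH idea of weighing user preferences can be illustrated with a generic weighted-sum selector (the field names, cost scales, and weights below are hypothetical; the actual WISH model and its parameters are defined in the thesis):

```python
def wish_score(choice, weights):
    """Weighted sum of (hypothetical) per-choice costs; lower is better."""
    return (weights['data'] * choice['data_mb']
            + weights['stall'] * choice['stall_risk']
            + weights['quality'] * choice['quality_loss'])

def select_bitrate(choices, weights):
    """Pick the candidate quality whose weighted cost is minimal."""
    return min(choices, key=lambda c: wish_score(c, weights))
```

A data-sensitive user would set a high `data` weight and end up with lower bitrates, while a quality-sensitive user weighting `quality` highly would receive higher bitrates: the weights encode the preference.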
ACM NOSSDAV'21 - ES-HAS: An Edge- and SDN-Assisted Framework for HTTP Adaptive ... - Reza Farahani
The document presents ES-HAS, an edge- and SDN-assisted framework for HTTP adaptive video streaming. ES-HAS leverages SDN and NFV paradigms to provide network assistance for video streaming. It introduces virtual reverse proxy servers at the network edge that employ a novel server/segment selection policy. An evaluation on a large-scale cloud testbed with 60 clients shows that ES-HAS outperforms state-of-the-art approaches in terms of playback bitrate and number of stalls by at least 70% and 40% respectively. Future work directions include extending edge caching and collaboration as well as improving the proposed optimization model.
In the last decades, video streaming has been developing significantly. Among current technologies, HTTP Adaptive Streaming (HAS) is considered the de-facto approach in multimedia transmission over the internet. In HAS, the video is split into temporal segments with the same duration (e.g., 4s), each of which is then encoded into different quality versions and stored at servers. The end user sends requests to the server to retrieve segments with specific quality versions determined by an Adaptive Bitrate (ABR) algorithm for the purpose of adapting to the throughput fluctuation. Though the majority of HAS-based media services function well even under throughput restrictions and variations, there are still significant challenges for multimedia systems, especially the tradeoff among the increasing content complexity, various time-related requirements, and Quality of Experience (QoE). Content complexity encompasses the increased demands for data, such as high-resolution videos and high frame rates, as well as novel content formats, such as virtual reality (VR) and augmented reality (AR). Time-related requirements include, but are not limited to, start-up delay and end-to-end latency. QoE can be defined as the level of satisfaction or frustration experienced by the user of an application or service. Optimizing for one aspect usually negatively impacts at least one of the other two aspects. This thesis tackles critical open research questions in the context of HAS that significantly impact the QoE at the client side.
IEEE ICC'22_ LEADER_ A Collaborative Edge- and SDN-Assisted Framework for HTT...Reza Farahani
1) The document proposes LEADER, a collaborative edge- and SDN-assisted framework for HTTP adaptive video streaming. LEADER employs virtual network functions with transcoding capabilities at network edges to optimize video streaming quality of experience and network utilization.
2) An SDN controller runs an optimization model to determine the optimal location, action, and approach for fetching client-requested video qualities. A lightweight heuristic approach is also proposed.
3) An evaluation using a large-scale testbed of 250 clients, edge servers, and an SDN controller shows that LEADER improves average video bitrate, reduces quality switches and stalls, and increases perceived quality of experience over non-collaborative and default edge approaches.
ABR Algorithms Explained (from Streaming Media East 2016).pptx - AliEdan2
The document discusses adaptive bitrate algorithms used in streaming media. It begins with an introduction to why adaptive bitrate (ABR) algorithms are used and the basic goals and constraints in their design. Examples are then provided of the ABR algorithms used in HLS.js and DASH.js players, including how they estimate bandwidth, handle constraints like CPU and screen size, and make bitrate switching decisions. The document outlines some potential improvements to basic ABR algorithms, such as using smoothing, quantization, and scheduling techniques. It concludes by emphasizing the importance of testing and iteration in optimizing ABR algorithm performance.
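The bandwidth smoothing and quantization steps mentioned above can be sketched as follows (a generic illustration, not the actual hls.js/dash.js logic; `alpha` and `safety` are assumed tuning values):

```python
def ewma_update(estimate, sample, alpha=0.8):
    """Exponentially weighted moving average: keep `alpha` of the old
    estimate and blend in the newest throughput sample."""
    return alpha * estimate + (1 - alpha) * sample

def quantize_to_ladder(throughput_kbps, ladder_kbps, safety=0.9):
    """Snap the estimate to the highest bitrate rung that fits within a
    safety fraction of the measured throughput."""
    budget = throughput_kbps * safety
    eligible = [rung for rung in ladder_kbps if rung <= budget]
    return max(eligible) if eligible else min(ladder_kbps)
```

Smoothing damps reaction to single noisy samples, and the safety margin biases the switching decision toward fewer stalls at the cost of some unused bandwidth.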
ABR Algorithms Explained (from Streaming Media East 2016) Erica Beavers
Adaptive bitrate algorithms have become paramount in ensuring quality video delivery on every device and across varying network conditions. This presentation looks at the design goals and the inner workings of ABR logic, how it is used in the open-source players hls.js and dash.js, and what broadcasters can do to improve and optimize their own stack.
The document discusses using software-defined networking (SDN) to improve quality of experience (QoE) for video streaming applications. It presents three research scenarios:
1) An SDN testbed that routes YouTube video traffic to minimize buffering times and maintain high video quality. Application-aware path selection achieved full bandwidth utilization while keeping buffer levels high.
2) A GENI testbed provides a scalable live video streaming service between university campuses. It demonstrated stable 1080p video transmission with minimal bandwidth usage.
3) An SDN solution for multi-party video conferencing that constructs multicast trees. It achieved higher average video rates and lower delays compared to traditional MCU-based approaches.
CSDN: CDN-Aware QoE Optimization in SDN-Assisted HTTP Adaptive Video Streaming... - Reza Farahani
1) The document presents CSDN, a framework that leverages SDN and NFV to provide network assistance for HTTP adaptive video streaming. It proposes using SDN virtual routers equipped with transcoding capabilities to optimize quality of experience (QoE) based on network conditions and user preferences.
2) An evaluation of CSDN on a testbed with 100 clients showed it improved playback bitrate by 7.5% and reduced quality switches and stalls by 19% compared to state-of-the-art approaches, enhancing user QoE and network utilization.
3) Future work directions include improving edge caching strategies, developing learning-based approaches, and extending an MILP model to optimize transcoding.
A Two-Tiered On-Line Server-Side Bandwidth Reservation Framework for the Real... - white paper
This document summarizes a two-tiered bandwidth reservation framework for delivering multiple video streams from servers in real-time. The framework uses a combination of per-stream reservations and a shared aggregate reservation across all streams. Each stream is allocated a guaranteed reservation equal to the p percentile of its bandwidth distribution. An additional shared reservation provides statistical multiplexing of peak bandwidth demands. This enables delivery of streams with less total bandwidth than deterministic approaches while bounding frame drop probabilities based on system parameters. The document proposes an online admission control algorithm that uses three pre-computed parameters per stream and has linear complexity in the number of servers.
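A minimal sketch of the per-stream percentile reservation, assuming a nearest-rank percentile and treating the shared reservation as simple peak-minus-percentile headroom (the paper's statistical-multiplexing analysis is more involved than this):

```python
import math

def percentile_reservation(samples, p):
    """Per-stream guaranteed reservation: the p-th percentile
    (nearest-rank) of the stream's observed bandwidth samples."""
    ordered = sorted(samples)
    n = len(ordered)
    k = min(n - 1, max(0, math.ceil(p * n / 100) - 1))
    return ordered[k]

def shared_headroom(streams, p):
    """Crude stand-in for the shared aggregate reservation: the sum of
    each stream's demand above its guaranteed percentile."""
    return sum(max(s) - percentile_reservation(s, p) for s in streams)
```

In the paper's scheme the shared tier is sized statistically rather than as a plain sum, which is exactly where the bandwidth savings over deterministic peak-rate reservation come from.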
Video services are evolving from traditional two-dimensional video to virtual reality and holograms, which offer six degrees of freedom to users, enabling them to freely move around in a scene and change focus as desired. However, this increase in freedom translates into stringent requirements in terms of ultra-high bandwidth (in the order of Gigabits per second) and minimal latency (in the order of milliseconds). To realize such immersive services, the network transport, as well as the video representation and encoding, have to be fundamentally enhanced. The purpose of this tutorial article is to provide an elaborate introduction to the creation, streaming, and evaluation of immersive video. Moreover, it aims to provide lessons learned and to point at promising research paths to enable truly interactive immersive video applications toward holography.
(Slides) P2P Video Broadcast based on Per-Peer Transcoding and its Evaluation on PlanetLab - Naoki Shibata
Shibata, N., Yasumoto, K., and Mori, M.: P2P Video Broadcast based on Per-Peer Transcoding and its Evaluation on PlanetLab, Proc. of 19th IASTED Int'l. Conf. on Parallel and Distributed Computing and Systems (PDCS2007), (November 2007).
http://ito-lab.naist.jp/themes/pdffiles/071121.shibata.pdcs2007.pdf
Adaptive Surveillance System using HTTP Streaming - Duc Nguyen
This document proposes an adaptive home surveillance system using HTTP streaming. It discusses using intelligent cameras and dynamic adaptive streaming over HTTP (DASH) to develop a system that satisfies requirements for high automation, scalability and limited bandwidth. The system architecture divides video content into segments that are encoded into multiple quality levels. A resource allocation problem is formulated to determine the optimal video quality levels from different cameras under throughput and delay constraints. Experiments test the system's ability to dynamically adapt based on bandwidth changes and surveillance context. Future work aims to extend this approach to larger networks.
This document discusses evaluating the performance of video streaming over a simulated mobile WiMAX network using different segment durations. The experiment used OPNET to simulate a mobile client moving near and far from a base station. Video was streamed from a physical server to a client using DASH with segment sizes of 4, 8, and 12 seconds. Analysis found that smaller segments provided smoother streaming with less buffering, while larger segments consumed more CPU power at the client. Proper segment size selection can improve mobile video streaming quality.
Past research has shown that concurrent HTTP adaptive streaming (HAS) players behave selfishly and the resulting competition for shared resources leads to underutilization or oversubscription of the network, presentation quality instability, and unfairness among the players, all of which adversely impact the viewer experience. While coordination among the players, as opposed to all being selfish, has its merits and may alleviate some of these issues, a fully distributed architecture is still desirable in many deployments and better reflects the design spirit of HAS. In this study, we focus on and propose a distributed bitrate adaptation scheme for HAS that borrows ideas from consensus and game theory frameworks. Experimental results show that the proposed distributed approach provides significant improvements in terms of viewer experience, presentation quality stability, fairness and network utilization, without using any explicit communication between the players.
This document summarizes an overview presentation on over-the-top content delivery and HTTP adaptive streaming. It discusses example services like Netflix, HBO Go, and BBC iPlayer. It also covers media delivery over the Internet, including the differences between managed IPTV delivery and unmanaged over-the-top delivery. The presentation also provides an overview of HTTP adaptive streaming building blocks and workflows for content generation, distribution, and consumption.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
This document proposes a software/hardware co-design framework called an SDSoC (system on a chip) to enable real-time computer vision processing at the edge for applications in the "Internet of Eyes". The framework uses a Xilinx Zynq chip containing an ARM processor and programmable logic. A prototype application for variable speed limit control on a motorway splits processing between the processor and programmable logic. Evaluation results found the framework can provide real-time processing with response times under 50ms while keeping power consumption under 2.5 watts.
Similar to Labmeeting - 20151013 - Adaptive Video Streaming over HTTP with Dynamic Resource Estimation (20)
A High-Speed Communication System based on the Design of a Bi-NoC Router, ... - DharmaBanothu
The Network on Chip (NoC) has emerged as an effective solution for the intercommunication infrastructure within System on Chip (SoC) designs, overcoming the limitations of traditional methods that face significant bottlenecks. However, the complexity of NoC design presents numerous challenges related to performance metrics such as scalability, latency, power consumption, and signal integrity. This project addresses issues within the router's memory unit and proposes an enhanced memory structure. To achieve efficient data transfer, FIFO buffers are implemented in distributed RAM and virtual channels for FPGA-based NoC. The project introduces advanced FIFO-based memory units within the NoC router, assessing their performance in a bi-directional NoC (Bi-NoC) configuration. The primary objective is to reduce the router's workload while enhancing the FIFO internal structure. To further improve data transfer speed, a Bi-NoC with a self-configurable intercommunication channel is suggested. Simulation and synthesis results demonstrate guaranteed throughput, predictable latency, and equitable network access, showing significant improvement over previous designs.
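The FIFO buffering at the heart of the router can be illustrated in software with a fixed-capacity ring buffer (a conceptual analogue only; the actual design is FPGA hardware implemented in distributed RAM):

```python
class RingFIFO:
    """Fixed-capacity FIFO ring buffer: a software analogue of the
    distributed-RAM FIFOs used in the router's virtual channels."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = self.tail = self.count = 0

    def push(self, flit):
        if self.count == len(self.buf):
            return False  # buffer full: signal back-pressure upstream
        self.buf[self.tail] = flit
        self.tail = (self.tail + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):
        if self.count == 0:
            return None  # buffer empty
        flit = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return flit
```

The full/empty conditions here correspond to the flow-control signals a hardware FIFO raises toward the upstream and downstream router stages.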
Build the Next Generation of Apps with the Einstein 1 Platform.
Join Philippe Ozil for a workshop session that walks you through the details of the Einstein 1 platform, the importance of data for building artificial intelligence applications, and the various tools and technologies Salesforce offers to bring you all the benefits of AI.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
# Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
- Allows a user to pass a specific IAM role to an AWS service (EC2), typically used for service-access delegation; the PassRole misconfiguration is then exploited to grant unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
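A least-privilege policy for the S3 scenario above might look like the following (the bucket name is hypothetical; scope the actions to exactly what the user needs rather than granting `s3:*`):

```python
import json

# Hypothetical bucket name used for illustration only.
BUCKET = "demo-least-privilege-bucket"

# Grant only the two object-level actions the user needs, scoped to one
# bucket, instead of a broad "s3:*" on "*".
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": [f"arn:aws:s3:::{BUCKET}/*"],
    }],
}

print(json.dumps(least_privilege_policy, indent=2))
```

Attached to the IAM user, this allows reading and writing objects in the one bucket and nothing else, which is the validation step in the scenario.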
Accident detection system project report.pdf - Kamal Acharya
The rapid growth of technology and infrastructure has made our lives easier. The advent of technology has also increased traffic hazards, and road accidents take place frequently, causing huge loss of life and property because of poor emergency facilities. Many lives could have been saved if emergency services could get accident information and reach the scene in time. Our project provides an optimal solution to this drawback. A piezoelectric sensor can be used as a crash or rollover detector of the vehicle during and after a crash, and a severe accident can be recognized from its signals. In this project, when a vehicle meets with an accident or rolls over, the piezoelectric sensor immediately detects the signal. Then, with the help of GSM and GPS modules, the location is sent to the emergency contact, and after confirming the location, the necessary action is taken. If the person meets with a small accident, or there is no serious threat to anyone's life, the alert message can be terminated by the driver using a provided switch, to avoid wasting the valuable time of the medical rescue team.
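The alert logic described above can be sketched as a small decision function (the g-force threshold and return values are assumptions for illustration; the real system reads the piezoelectric sensor and drives the GSM/GPS modules):

```python
def classify_impact(piezo_g, threshold_g=4.0):
    """Gate on the piezoelectric reading; the 4 g threshold is an
    assumed calibration value, not taken from the report."""
    return "severe" if piezo_g >= threshold_g else "minor"

def handle_event(piezo_g, driver_cancelled):
    """Send the GSM/GPS alert only for severe, uncancelled impacts."""
    if classify_impact(piezo_g) == "severe" and not driver_cancelled:
        return "ALERT_SENT"
    return "NO_ALERT"
```

The `driver_cancelled` flag models the cancel switch that suppresses the alert for minor incidents, sparing the rescue team a false callout.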
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ... - Transcat
Join us for this solutions-based webinar on the tools and techniques for commissioning and maintaining PV Systems. In this session, we'll review the process of building and maintaining a solar array, starting with installation and commissioning, then reviewing operations and maintenance of the system. This course will review insulation resistance testing, I-V curve testing, earth-bond continuity, ground resistance testing, performance tests, visual inspections, ground and arc fault testing procedures, and power quality analysis.
Fluke Solar Application Specialist Will White is presenting on this engaging topic:
Will has worked in the renewable energy industry since 2005, first as an installer for a small east coast solar integrator before adding sales, design, and project management to his skillset. In 2022, Will joined Fluke as a solar application specialist, where he supports their renewable energy testing equipment like IV-curve tracers, electrical meters, and thermal imaging cameras. Experienced in wind power, solar thermal, energy storage, and all scales of PV, Will has primarily focused on residential and small commercial systems. He is passionate about implementing high-quality, code-compliant installation techniques.
We design and manufacture the Lubi Valves LBF series of butterfly valves for general utility water applications as well as for HVAC applications.
Digital Twins Computer Networking Paper Presentation.pptx - aryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Labmeeting - 20151013 - Adaptive Video Streaming over HTTP with Dynamic Resource Estimation
1. NTUST - Mobilizing Information Technology Lab
Adaptive Video Streaming over HTTP with Dynamic Resource Estimation
Truong Cong Thang Univ. of Aizu, Aizu-Wakamatsu, Japan
Le, H.T. ; Nguyen, H.X. ; Pham, A.T. ; Jung Won Kang ; Yong Man Ro
Journal of Communications and Networks, Vol. 15, No. 6, Dec. 2013
Advisor: Jenq-Shiou Leu
Student: Bing-Syuan Wang
Date: 2015/10/13
National Taiwan University of Science and Technology
2. Outline
• Introduction
• Overview of Adaptive HTTP Streaming
• Throughput Estimation
• Experiments
• Video Bitrate Estimation
• Experiments
• Conclusion
3. Introduction
• Hypertext transfer protocol (HTTP) streaming has become a cost-effective means for multimedia delivery.
• Client-based approach.
• Bandwidth estimation.
• Bitrate estimation.
• Constant bitrate (CBR) video/Variable bitrate (VBR) video.
• This solution may enable CBR-like streaming even though the video is encoded in VBR mode.
4. Overview of Adaptive HTTP Streaming
• Dynamic adaptive streaming over HTTP (DASH)
• Metadata: media presentation description (MPD)
5. Throughput Estimation
• Estimation is based on the throughputs of previous segments, or only on the last segment (the aggressive method).
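The difference between the two estimators can be sketched as follows (a generic illustration; the paper's proposed method additionally uses buffer information, and `delta` here is an assumed smoothing factor):

```python
def aggressive_estimate(history_kbps):
    """Aggressive method: trust only the last segment's throughput."""
    return history_kbps[-1]

def smoothed_estimate(history_kbps, delta=0.5):
    """Running-average sketch: blend the previous estimate with each new
    segment throughput; `delta` is an assumed smoothing factor."""
    estimate = history_kbps[0]
    for sample in history_kbps[1:]:
        estimate = (1 - delta) * estimate + delta * sample
    return estimate
```

After a single throughput dip, the aggressive estimator drops straight to the dip value while the smoothed one reacts more gradually, which is why the aggressive method causes larger bitrate oscillations.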
6. Throughput Estimation
• T_d^e(i): the estimated download throughput of segment i
• D^e(i): the estimated download duration of segment i
7. Experiments
• Segment duration 6s; target buffer level 12s.
• Client behavior with segment duration of 6 s and throughput estimation using: (a) the aggressive method and (b) the proposed method.
8. Experiments
• Comparison of the aggressive method and the proposed method with segment duration of 6 s using: (a) CDF of bitrate and (b) CDF of buffer level.
9. Video Bitrate Estimation
• The bitrate is not constant.
• The client will be able to dynamically select the highest possible bitrate.
• Quantization parameter (QP): a 1-unit increase in QP reduces the bitrate by about 12%.
• A. Inter-Stream Bitrate Estimation
• B. Intra-Stream Bitrate Estimation
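The inter-stream idea can be sketched using the rule of thumb quoted on the slide, that a 1-unit QP increase cuts the bitrate by roughly 12% (the function name and default factor are illustrative, not from the paper):

```python
def estimate_bitrate(ref_bitrate_kbps, ref_qp, target_qp, drop_per_qp=0.12):
    """Inter-stream sketch: each +1 in QP cuts the bitrate by roughly
    12% (the rule of thumb on the slide), so scale the reference
    bitrate by that factor per QP step."""
    return ref_bitrate_kbps * (1 - drop_per_qp) ** (target_qp - ref_qp)
```

This lets the client estimate the current bitrate of the other VBR alternatives from the one it is downloading, so it can dynamically select the highest version that fits the throughput.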
10. Experiments
• Segment duration 2s; target buffer level 4s.
• The video alternatives are prepared in VBR mode.
11. Experiments
• Based on average bitrate (ABR), highest bitrate (HBR), and estimated bitrate (EBR).
12. Conclusion
• The proposed method can quickly capture changes in throughput and then adjust the video bitrate accordingly.
• The estimation methods add negligible overhead (less than 1 ms) and essentially do not affect playback quality.
• The proposed solutions were effective in maintaining a stable buffer under fluctuations of bandwidth and video bitrate.
Editor's Notes
Feature extraction: parameter
Controller: decides whether to adjust the computation model