This document summarizes a two-tiered bandwidth reservation framework for delivering multiple video streams from servers in real-time. The framework uses a combination of per-stream reservations and a shared aggregate reservation across all streams. Each stream is allocated a guaranteed reservation equal to the p percentile of its bandwidth distribution. An additional shared reservation provides statistical multiplexing of peak bandwidth demands. This enables delivery of streams with less total bandwidth than deterministic approaches while bounding frame drop probabilities based on system parameters. The document proposes an online admission control algorithm that uses three pre-computed parameters per stream and has linear complexity in the number of servers.
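The two-tier idea can be sketched in a few lines of Python (function and parameter names are hypothetical; the paper's actual pre-computed admission-control parameters are not reproduced here). Each stream is reserved its p-th percentile, and a shared reservation is sized from a high percentile of the aggregate excess above those per-stream reservations:

```python
import numpy as np

def reservations(streams, p=95, q=99):
    """Per-stream reservation = p-th percentile of that stream's bandwidth
    samples; shared reservation covers the q-th percentile of the
    aggregate excess demand above those per-stream reservations."""
    per_stream = {s: np.percentile(bw, p) for s, bw in streams.items()}
    # Excess demand of each stream above its own reservation, per time step
    excess = sum(np.maximum(np.asarray(bw) - per_stream[s], 0.0)
                 for s, bw in streams.items())
    shared = np.percentile(excess, q)
    return per_stream, shared

rng = np.random.default_rng(0)
streams = {f"s{i}": rng.gamma(4.0, 1.0, size=1000) for i in range(10)}
per_stream, shared = reservations(streams)
total = sum(per_stream.values()) + shared
peak_sum = sum(max(bw) for bw in streams.values())
# Statistical multiplexing: total reservation stays below the sum of peaks
assert total < peak_sum
```

The gap between `total` and `peak_sum` is the bandwidth saved relative to a deterministic peak-rate reservation, at the cost of a bounded drop probability.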
Multihop Routing In Camera Sensor Networks (Chuka Okoye)
This poster abstract summarizes an experimental study of multihop routing in camera sensor networks. The experiments tested the Collection Tree Protocol (CTP) using CITRIC camera motes and TelosB motes. The experiments varied payload size and delay between packet transmissions to evaluate data rate, reception rate, and latency over different hop counts. The results show that there is a tradeoff between reception rate and latency. Adding a delay between transmissions can improve both data rate and reception rate compared to best effort transmission. The optimal delay depends on the network density and hop count.
Broadcasting protocols can improve the efficiency of VOD service by minimizing the bandwidth required to transfer videos to clients on request. Harmonic broadcasting is among the most efficient of these protocols. This paper presents the characteristics and functionality of the Poly-harmonic Broadcasting Protocol. It concludes with a hypothesis for modifying the Poly-harmonic Protocol so that clients can receive the requested data with less waiting time and less buffer storage.
This document discusses three methods for reducing the bit-rate of transmitted video streams: 1) Time-shifting of MPEG-2 packets, which smooths out variable bit-rates without changing individual encoding rates; 2) Open loop transrating, which uses encoding tools to recompress streams at lower rates in a non-reversible way; 3) Closed loop transrating, which iteratively adjusts rates using feedback to maintain quality. These techniques help network operators optimize bandwidth usage and revenues by controlling streaming rates to match infrastructure limits and service pricing models.
This document provides an introduction to digital television. It discusses analog TV standards and the conversion to digital with ITU-BT.601 and BT.709 defining digital video formats. It describes MPEG-2 transport streams and tables for encoding digital TV signals. Standards for digital terrestrial, satellite and cable broadcasting networks are also summarized.
This document provides a European standard for a second generation digital transmission system for cable systems, known as DVB-C2. It defines the system architecture, input processing, bit-interleaved coding and modulation, data slice packet generation, layer 1 part 2 signalling generation and coding, frame builder functions, and OFDM generation for the DVB-C2 system. The standard specifies the frame structure, coding, modulation, and other technical aspects to enable digital video and audio broadcasting over cable networks.
This document discusses bandwidth utilization techniques including multiplexing and spreading. It describes multiplexing as a way to share bandwidth across a link when the bandwidth of the medium is greater than what is needed by a single device. Specific multiplexing techniques covered include frequency-division multiplexing, wavelength-division multiplexing, synchronous time-division multiplexing, and statistical time-division multiplexing. The document also discusses spreading techniques including frequency hopping spread spectrum and direct sequence spread spectrum as ways to prevent eavesdropping and jamming by adding redundancy. It provides examples and diagrams to illustrate key concepts.
This document analyzes the RC4 encryption algorithm and examines how its performance is affected by changing parameters like encryption key length and file size. Experimental tests were conducted to measure encryption time for different key lengths and file types. The results show encryption time increases with longer keys and larger files, and are modeled mathematically. The document also provides background on encryption methods, how RC4 works, and compares stream and block ciphers.
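For background, RC4 is short enough to show in full. This Python sketch implements the standard key-scheduling algorithm (KSA) and pseudo-random generation algorithm (PRGA); because encryption is an XOR with the keystream, running the same function again decrypts:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S using the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with data
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ct) == b"Plaintext"   # same operation decrypts
```

The KSA loop length is fixed at 256 regardless of key size, which is why encryption time in such measurements is dominated by file size rather than key length once the keystream loop starts.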
A Statistical Approach to Adaptive Playout Scheduling in Voice Over Internet ... (IJECEIAES)
This document summarizes a proposed statistical approach to adaptive playout scheduling in Voice over Internet Protocol (VoIP) communication. The approach estimates the optimal buffer delay for each packet based on network statistics, packet loss rate, and buffer availability. It uses a window-based method to track recent network conditions and estimate delay for the current packet. Buffer delay is calculated based on estimated jitter, a delay factor accounting for surrounding packet arrival, and late packet loss rate. Experimental results show this approach allocates buffer delay with the lowest late packet loss rate compared to other algorithms.
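A minimal sketch of the general idea, assuming an EWMA-style delay and jitter estimator in the spirit of classic adaptive playout algorithms (the constants and the k-multiplier below are illustrative, not the paper's):

```python
def playout_delay(delays, alpha=0.998, beta=0.75, k=4.0):
    """Estimate per-talkspurt playout delay from observed one-way
    network delays: EWMA of delay (d_hat) and of its variation
    (v_hat, a jitter proxy); playout delay = d_hat + k * v_hat."""
    d_hat = float(delays[0])
    v_hat = 0.0
    for d in delays[1:]:
        d_hat = alpha * d_hat + (1 - alpha) * d
        v_hat = beta * v_hat + (1 - beta) * abs(d - d_hat)
    return d_hat + k * v_hat

steady = playout_delay([50.0] * 100)          # no jitter: ~50 ms
jittery = playout_delay([50.0, 60.0] * 50)    # jitter inflates the budget
assert jittery > steady
```

A larger `k` trades extra buffering delay for a lower late-packet loss rate, which is exactly the trade-off the window-based approach in the paper tunes adaptively.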
Microsoft PowerPoint - WirelessCluster_Pres (Videoguy)
This document analyzes delays in unicast video streaming over IEEE 802.11 WLAN networks. It describes conducting an experiment using a testbed with a Darwin Streaming Server and WLAN probe to capture packets. The analysis found that video bitrate variations, packetization scheme, bandwidth load, and frame-based nature of video all impacted mean delay. Bursts of packets from video frames caused per-packet delay to increase in a sawtooth pattern. Increasing uplink load was also found to affect delay variations.
This document discusses network provisioning for multimedia services using traffic aggregation. It covers topics like network provisioning, packet aggregation, traffic engineering, dimensioning, traffic analysis and aggregation. Methods are proposed for optimizing network resource reservations to guarantee delay bounds for aggregated multimedia traffic, including using real video traces and generating synthetic aggregates. Network provisioning scenarios are described for provisioning using real traces, dynamic aggregates, traffic patterns, and optimizing bandwidth utilization.
This document discusses different techniques for bandwidth utilization, including multiplexing and spreading. It describes multiplexing techniques such as frequency division multiplexing (FDM), time division multiplexing (TDM), and statistical TDM. FDM combines analog signals by modulating them to different carrier frequencies. TDM combines digital channels by assigning each a time slot in a frame. Statistical TDM improves efficiency over synchronous TDM by removing empty slots. The document also discusses applications of these techniques such as telephone line multiplexing using T-1 and E-1 lines.
This document discusses traditional communication architectures for multiprocessor systems and proposes that Active Messages is a better communication architecture. It analyzes three traditional low-level communication layers - message passing, message driven, and shared memory - and argues that they are best viewed as communication models implemented on top of a general-purpose communication architecture like Active Messages, rather than as architectures themselves. The document provides an example implementation of the send and receive communication model using Active Messages on the CM-5 to demonstrate how it can be implemented efficiently while gaining flexibility.
Here we study the channel capacity of analog and digital communication signals. We also study data-rate limits, the noisy-channel coding theorem, and the Shannon capacity theorem.
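Both limits are one-line formulas, shown here as a small Python sketch (the telephone-line figures are the standard textbook example):

```python
import math

def nyquist_limit(bandwidth_hz, levels):
    """Noiseless channel: maximum data rate = 2 * B * log2(M) bps."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Noisy channel (Shannon-Hartley): C = B * log2(1 + SNR) bps."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Binary signaling over a 3000 Hz noiseless line
print(nyquist_limit(3000, 2))        # 6000 bps

# Same line with 30 dB SNR (linear ratio 1000): about 29.9 kbps
print(round(shannon_capacity(3000, 1000)))
```

Raising the number of signal levels M increases the Nyquist limit, but noise bounds how many levels can be distinguished, which is what the Shannon formula captures.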
The document discusses the application layer in computer networking. It describes the client-server model where clients send queries to servers which respond with answers. It also discusses name resolution, where hostnames are translated to IP addresses, and protocols like TCP and UDP which provide transport services. Common applications like email, the web, and peer-to-peer are briefly mentioned as examples.
The document discusses multimedia networking technologies. Chapter 1 covers RTP and RTCP for multimedia transmission over IP networks. It describes RTP packet formats, RTCP packet types including SR, RR and SDES, and how RTP implements voice and video streaming. It also discusses quality of service techniques for multimedia networking including scheduling, policing, packet classification, and call admission control.
Vehicular Ad hoc Networks (VANETs), a subclass of Mobile Ad hoc Networks (MANETs), provide wireless communication among vehicles and between vehicles and roadside equipment. A VANET allows vehicles to form a self-organized network without the need for permanent infrastructure. With a high number of nodes and high mobility, ensuring Quality of Service (QoS) in a VANET is a challenging task. QoS is essential to improve communication efficiency in vehicular networks, so a study of QoS in VANETs is a useful foundation for constructing an effective vehicular network. In this paper, we propose a network coding technique to improve bandwidth utilization in VANETs. When two sources broadcast in the same area at the same time, the relay uses network coding to reduce bandwidth consumption. On receiving a packet, a relay must decide whether to forward it directly to reduce delay or to apply network coding for more effective bandwidth utilization. To manage this trade-off, we introduce two protocols, the Buffer Size Control Scheme (BSCS) and the Time Control Scheme (TCS). With these two protocols, we aim to reduce the delay experienced by each packet while achieving better bandwidth utilization.
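The relay-side XOR operation at the heart of such network coding can be sketched as follows (a generic two-source relay example, not the paper's BSCS/TCS decision logic):

```python
def xor_code(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length packets into one coded packet."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a, pkt_b = b"hello-A1", b"hello-B2"

# The relay broadcasts ONE coded packet instead of two plain ones.
coded = xor_code(pkt_a, pkt_b)

# Each destination already overheard the packet from its own side,
# so it recovers the missing packet by XORing again:
assert xor_code(coded, pkt_a) == pkt_b
assert xor_code(coded, pkt_b) == pkt_a
```

One transmission replaces two, which is the bandwidth saving; the cost is that the relay may have to wait for both packets to arrive, which is the delay the BSCS/TCS trade-off addresses.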
The low efficiency caused by the large number of small packets present in the network can be alleviated by means of packet aggregation.
There are some situations in which multiplexing a number of small packets into a bigger one is desirable. For example, a number of small packets can be sent together between a pair of machines if they share a common network path. Thus, the traffic profile can be shifted from small to larger packets, reducing the network overhead and the number of packets per second to be managed by intermediate routers.
This presentation describes Simplemux, a protocol able to encapsulate a number of packets belonging to different protocols into a single packet. It includes the "Protocol" field on each multiplexing header, thus allowing the inclusion of a number of packets belonging to different protocols (multiplexed packets) on a packet of another protocol (tunneling protocol).
In order to reduce the overhead, the size of the multiplexing headers is kept very low (it may be a single byte when multiplexing small packets).
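A toy illustration of the idea, using a single length byte as the multiplexing header (the real Simplemux header additionally carries a Protocol field and a compact length/flag encoding):

```python
def mux(packets):
    """Pack several small packets into one payload, prefixing each
    with a one-byte length (so each packet must be under 256 bytes)."""
    out = bytearray()
    for p in packets:
        assert len(p) < 256
        out.append(len(p))
        out += p
    return bytes(out)

def demux(blob):
    """Walk the length-prefixed payload and recover the packets."""
    packets, i = [], 0
    while i < len(blob):
        n = blob[i]
        packets.append(blob[i + 1:i + 1 + n])
        i += 1 + n
    return packets

pkts = [b"voip", b"ack", b"telemetry"]
assert demux(mux(pkts)) == pkts
```

With a 4-byte packet, a one-byte header is 25% overhead, versus the 20+ bytes an uncompressed IP header would cost per packet, which is the saving the presentation quantifies.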
Header compression and multiplexing in LISP (Jose Saldana)
When small payloads are transmitted through a packet-switched network, the resulting overhead may be significant. This is accentuated in the case of LISP, where several headers are prepended to each packet.
This presentation proposes sending together, as a single packet, a number of small packets that sit in the buffer of an ITR and have the same ETR as destination. They then share a single LISP header, so bandwidth savings are obtained and the overall number of packets sent to the network is reduced.
There are three main types of multiplexing: frequency division multiplexing (FDM), time division multiplexing (TDM), and wavelength division multiplexing (WDM). FDM assigns different frequency bands to different signals, TDM divides the transmission medium into time slots and assigns each signal to a time slot, and WDM assigns different wavelength bands to different signals. Pulse code modulation (PCM) is commonly used for digital voice transmission. In PCM, the analog voice signal is sampled, quantized into digital code, and transmitted over the channel in a frame structure consisting of time slots.
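The PCM steps (sample, quantize, frame) can be sketched as follows; this is a simplified uniform quantizer, whereas real telephony systems use A-law or mu-law companding:

```python
import math

def pcm_encode(duration_s=0.001, fs=8000, bits=8, f=1000.0):
    """Sample a 1 kHz tone at 8 kHz and quantize each sample
    uniformly to 8 bits (values 0..255)."""
    levels = 2 ** bits
    samples = []
    for n in range(int(duration_s * fs)):
        x = math.sin(2 * math.pi * f * n / fs)          # analog value in [-1, 1]
        q = min(levels - 1, int((x + 1) / 2 * levels))  # uniform quantizer
        samples.append(q)
    return samples

codes = pcm_encode()
assert len(codes) == 8          # 8 samples per millisecond at 8 kHz
assert all(0 <= c < 256 for c in codes)
```

At 8000 samples/s and 8 bits/sample this yields the familiar 64 kbps voice channel, which is exactly one time slot in a TDM frame such as T-1 or E-1.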
Linear Programming Case Study - Maximizing Audio Quality (Sharad Srivastava)
This document presents a linear programming problem to maximize audio quality for real-time multimedia applications under bandwidth and delay constraints. It formulates the problem of optimizing codec selection to maximize MOS score given limitations of available bandwidth and delay. It provides sample codec data and implements the linear program for different network conditions, finding optimal mixes of codecs to achieve the best possible MOS within each set of constraints.
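A simplified version of the selection problem can be sketched by picking, per network condition, the feasible codec with the highest MOS. The codec figures below are illustrative sample values, and the study itself solves a full linear program over codec mixes rather than a single discrete choice:

```python
# Illustrative codec table: (name, bandwidth kbps, codec delay ms, MOS)
CODECS = [
    ("G.711",   64.0,  0.125, 4.4),
    ("G.729",    8.0, 15.0,   3.9),
    ("G.723.1",  5.3, 37.5,   3.6),
]

def best_codec(bw_kbps, max_delay_ms):
    """Return the highest-MOS codec that fits both constraints,
    or None if nothing is feasible."""
    feasible = [c for c in CODECS
                if c[1] <= bw_kbps and c[2] <= max_delay_ms]
    return max(feasible, key=lambda c: c[3], default=None)

assert best_codec(100, 50)[0] == "G.711"   # ample bandwidth: best quality
assert best_codec(10, 50)[0] == "G.729"    # tight bandwidth: compromise
assert best_codec(4, 50) is None           # no codec fits
```

The LP generalizes this by allowing fractional mixes of codecs across calls, so the aggregate MOS is maximized subject to the total bandwidth and delay budgets.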
Engineering Research Publication
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
The document discusses various topics related to congestion control and quality of service in computer networks. It defines congestion and explains congestion control techniques like open-loop prevention using policies around retransmission, windows, acknowledgements, and admission. It also covers closed-loop removal techniques like back pressure, choke points, and implicit/explicit signaling. Quality of service techniques like scheduling, shaping, and reservation are explained. Integrated services and differentiated services models for providing QoS in IP networks are summarized.
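As one concrete shaping mechanism from that toolbox, a token bucket can be sketched in a few lines (parameter values are illustrative):

```python
class TokenBucket:
    """Traffic shaper: tokens arrive at `rate` bytes/s, the burst is
    capped at `capacity` bytes; a packet passes only if enough tokens
    have accumulated."""

    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.t = capacity, 0.0

    def allow(self, size, now):
        # Refill tokens for the elapsed time, capped at the bucket size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.t) * self.rate)
        self.t = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=1000, capacity=1500)  # 1000 B/s, 1500 B burst
assert tb.allow(1500, 0.0)       # initial burst passes
assert not tb.allow(1500, 0.5)   # only 500 tokens accumulated so far
assert tb.allow(1500, 2.0)       # bucket refilled after enough time
```

Unlike a leaky bucket, which emits at a fixed rate, the token bucket permits bursts up to the bucket size while still bounding the long-term average rate.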
This document discusses multiplexing and spreading techniques for bandwidth utilization and privacy/anti-jamming. It covers frequency division multiplexing (FDM), wavelength division multiplexing (WDM), time division multiplexing (TDM), statistical TDM, inverse multiplexing, frequency hopping spread spectrum (FHSS), and direct sequence spread spectrum (DSSS). Examples are provided for combining voice channels using FDM, optical signal multiplexing with WDM, and modulating data streams for transmission using TDM, FHSS, and DSSS. Common applications discussed include radio/TV broadcasting, fiber optic networks, telephone systems, and digital subscriber lines.
104623 time division multiplexing (transmitter, receiver, commutator) (Devyani Gera)
Time Division Multiplexing (TDM) is a technique that transmits multiple message signals over a single communication channel by dividing the time frame into time slots, with one time slot for each message signal. TDM transmits samples of each signal in sequential time slots using techniques like PAM. There are two main types of TDM: synchronous TDM where all signals use the same sampling rate, and asynchronous TDM where different signals can use different sampling rates. The document then provides examples and problems demonstrating the application of TDM techniques.
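The synchronous commutator/decommutator pair can be sketched as simple interleaving (this assumes equal sampling rates across channels, i.e. the synchronous TDM case):

```python
def tdm_transmit(signals):
    """Commutator: take one sample from each input channel per frame;
    each frame is a tuple with one slot per signal."""
    return list(zip(*signals))

def tdm_receive(frames):
    """Decommutator: redistribute the slots back to their channels."""
    return [list(channel) for channel in zip(*frames)]

a, b, c = [1, 2, 3], [10, 20, 30], [100, 200, 300]
frames = tdm_transmit([a, b, c])
assert frames[0] == (1, 10, 100)       # first frame: one slot per channel
assert tdm_receive(frames) == [a, b, c]
```

Asynchronous (statistical) TDM drops the fixed slot-per-channel mapping and instead tags each slot with a channel identifier, so idle channels consume no slots.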
RESOURCE ALLOCATION ALGORITHMS FOR QOS OPTIMIZATION IN MOBILE WIMAX NETWORKSijwmn
This document summarizes research on resource allocation algorithms for quality of service (QoS) optimization in mobile WiMAX networks. It discusses the Swapping Min-Max (SWIM) algorithm and Cooperative Multicast Scheduling (CMS) technique. SWIM performs scheduling for real-time polling service to meet QoS criteria like optimal throughput, latency guarantees, minimal delay jitter and number of bursts. CMS enhances throughput for multicast video by dividing transmission bursts into two phases where selected stations retransmit to nearby members for cooperation. Simulation results show SWIM has less bursts, zero jitter and optimal throughput, while CMS further improves throughput for each multicast group member.
Mathematical Explanation of Channel Capacity: here we can see that the channel capacity is obtained as the product of the number of pulses per second and the information carried per pulse. This is how we can measure the channel capacity.
This document discusses multiple-input multiple-output (MIMO) systems. It begins by outlining the motivations and aspirations for developing MIMO, including achieving high data rates near 1 Gbps while maintaining quality of service. It then covers MIMO system modeling and capacity studies. Different MIMO designs are presented that aim to achieve spatial multiplexing gain or diversity gain. Practical MIMO systems and architectures like V-BLAST are described. Networking applications of MIMO including MAC protocols are also discussed.
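The headline capacity result can be sketched numerically. With equal power allocation across transmit antennas, a common form is C = log2 det(I + (SNR/Nt) H H^H) bits/s/Hz; the Monte Carlo comparison below (illustrative, not from the document) shows capacity growing with the antenna count:

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity of one MIMO channel realization with equal power
    allocation: C = log2 det(I + (snr / Nt) * H H^H) bits/s/Hz."""
    nr, nt = H.shape
    G = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
    return np.log2(np.linalg.det(G).real)

rng = np.random.default_rng(1)
snr = 100.0                                   # 20 dB

# Single-antenna baseline vs. average over 4x4 Rayleigh channels
c_siso = mimo_capacity(rng.normal(size=(1, 1)) + 0j, snr)
c_mimo = float(np.mean([
    mimo_capacity(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)),
                  snr)
    for _ in range(200)]))
assert c_mimo > c_siso   # spatial multiplexing gain
```

At high SNR the capacity grows roughly linearly in min(Nt, Nr), which is the spatial multiplexing gain that motivates architectures like V-BLAST.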
This document discusses how we build our lives and the importance of doing so carefully and intentionally. It tells a story about a carpenter who was ready to retire but agreed to build one last house as a favor. However, his heart was not in his work and he took shortcuts, using inferior materials and shoddy workmanship. This led to an unfortunate end to his career. The moral is that we must build our lives with care and focus, as we will have to live with the "house" we create. We should avoid distractions and act thoughtfully rather than merely react.
The “direct site,” or the main company Web site, is the cornerstone of your sales and distribution system. It not only supports “standard pricing” but also reinforces company messaging and positioning as well as providing access to customer, partner, and vendor support systems.
This is a copy of a review session given to a group of manufacturers, publishers, and technology firms in late 2010. All of these types of Companies have the unique challenge of balancing customer and reseller requirements. Contact Yeoman at 800-667-6098 if you have any questions
Microsoft PowerPoint - WirelessCluster_PresVideoguy
This document analyzes delays in unicast video streaming over IEEE 802.11 WLAN networks. It describes conducting an experiment using a testbed with a Darwin Streaming Server and WLAN probe to capture packets. The analysis found that video bitrate variations, packetization scheme, bandwidth load, and frame-based nature of video all impacted mean delay. Bursts of packets from video frames caused per-packet delay to increase in a sawtooth pattern. Increasing uplink load was also found to affect delay variations.
This document discusses network provisioning for multimedia services using traffic aggregation. It covers topics like network provisioning, packet aggregation, traffic engineering, dimensioning, traffic analysis and aggregation. Methods are proposed for optimizing network resource reservations to guarantee delay bounds for aggregated multimedia traffic, including using real video traces and generating synthetic aggregates. Network provisioning scenarios are described for provisioning using real traces, dynamic aggregates, traffic patterns, and optimizing bandwidth utilization.
This document discusses different techniques for bandwidth utilization, including multiplexing and spreading. It describes multiplexing techniques such as frequency division multiplexing (FDM), time division multiplexing (TDM), and statistical TDM. FDM combines analog signals by modulating them to different carrier frequencies. TDM combines digital channels by assigning each a time slot in a frame. Statistical TDM improves efficiency over synchronous TDM by removing empty slots. The document also discusses applications of these techniques such as telephone line multiplexing using T-1 and E-1 lines.
This document discusses traditional communication architectures for multiprocessor systems and proposes that Active Messages is a better communication architecture. It analyzes three traditional low-level communication layers - message passing, message driven, and shared memory - and argues that they are best viewed as communication models implemented on top of a general-purpose communication architecture like Active Messages, rather than as architectures themselves. The document provides an example implementation of the send and receive communication model using Active Messages on the CM-5 to demonstrate how it can be implemented efficiently while gaining flexibility.
Here we study the channel capacity of the signal from analog and digital communication signals. Also study data rates limit , Noisy-channel coding theorem, Shannon capacity theorem.
The document discusses the application layer in computer networking. It describes the client-server model where clients send queries to servers which respond with answers. It also discusses name resolution, where hostnames are translated to IP addresses, and protocols like TCP and UDP which provide transport services. Common applications like email, the web, and peer-to-peer are briefly mentioned as examples.
The document discusses multimedia networking technologies. Chapter 1 covers RTP and RTCP for multimedia transmission over IP networks. It describes RTP packet formats, RTCP packet types including SR, RR and SDES, and how RTP implements voice and video streaming. It also discusses quality of service techniques for multimedia networking including scheduling, policing, packet classification, and call admission control.
Vehicular Ad hoc Networks -VANET as a sub
class of Mobile Ad hoc Networks -MANET provides a
wireless communication among vehicles and vehicle to road
side equipment. VANET allows vehicles to form a selforganized network without the need for permanent
infrastructure. With high number of nodes and mobility,
ensuring the Quality of Service- QoS in VANET is a
challenging task. QoS is essential to improve the
communication efficiency in vehicular networks. Thus a
study of QoS in VANET is useful as a fundamental for
constructing an effective vehicular network. In this paper,
we propose Network coding Technique to improve
Bandwidth utilization on VANET. When two sources are
involved in broadcasting in the same area and at same time
, the relay will make use of Network coding to reduce the
Bandwidth consumption. While receiving the packet, a
relay has to decide whether to send the packet directly to
reduce the delay or use Network coding for effective
Bandwidth utilization. In order to make trade off, we
introduce two kind of protocols named Buffer Size Control
Scheme -BSCS and Time Control Scheme -TCS. By this
two protocols, we aim to reduce the delay that is
experienced by each packet and achieving better
bandwidth utilization.
The low efficiency caused by the high amount of small packets present in the network can be alleviated by means of packet aggregation.
There are some situations in which multiplexing a number of small packets into a bigger one is desirable. For example, a number of small packets can be sent together between a pair of machines if they share a common network path. Thus, the traffic profile can be shifted from small to larger packets, reducing the network overhead and the number of packets per second to be managed by intermediaterouters.
This presentation describes Simplemux, a protocol able to encapsulate a number of packets belonging to different protocols into a single packet. It includes the "Protocol" field on each multiplexing header, thus allowing the inclusion of a number of packets belonging to different protocols (multiplexed packets) on a packet of another protocol (tunneling protocol).
In order to reduce the overhead, the size of the multiplexing headers is kept very low (it may be a single byte when multiplexing small packets).
Header compression and multiplexing in LISPJose Saldana
When small payloads are transmitted through a packet-switched network, the resulting overhead may result significant. This is stressed in the case of LISP, where a number of headers are prepended to a packet, as new headers have to be added to each packet.
This presentation proposes to send together a number of small packets, which are in the buffer of a ITR, having the same ETR as destination, into a single packet. Therefore, they will share a single LISP header, and therefore bandwidth savings can be obtained, and a reduction in the overall number of packets sent to the network can be achieved.
There are three main types of multiplexing: frequency division multiplexing (FDM), time division multiplexing (TDM), and wavelength division multiplexing (WDM). FDM assigns different frequency bands to different signals, TDM divides the transmission medium into time slots and assigns each signal to a time slot, and WDM assigns different wavelength bands to different signals. Pulse code modulation (PCM) is commonly used for digital voice transmission. In PCM, the analog voice signal is sampled, quantized into digital code, and transmitted over the channel in a frame structure consisting of time slots.
Linear Programming Case Study - Maximizing Audio QualitySharad Srivastava
This document presents a linear programming problem to maximize audio quality for real-time multimedia applications under bandwidth and delay constraints. It formulates the problem of optimizing codec selection to maximize MOS score given limitations of available bandwidth and delay. It provides sample codec data and implements the linear program for different network conditions, finding optimal mixes of codecs to achieve the best possible MOS within each set of constraints.
Engineering Research Publication
Best International Journals, High Impact Journals,
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
The document discusses various topics related to congestion control and quality of service in computer networks. It defines congestion and explains congestion control techniques like open-loop prevention using policies around retransmission, windows, acknowledgements, and admission. It also covers closed-loop removal techniques like back pressure, choke points, and implicit/explicit signaling. Quality of service techniques like scheduling, shaping, and reservation are explained. Integrated services and differentiated services models for providing QoS in IP networks are summarized.
This document discusses multiplexing and spreading techniques for bandwidth utilization and privacy/anti-jamming. It covers frequency division multiplexing (FDM), wavelength division multiplexing (WDM), time division multiplexing (TDM), statistical TDM, inverse multiplexing, frequency hopping spread spectrum (FHSS), and direct sequence spread spectrum (DSSS). Examples are provided for combining voice channels using FDM, optical signal multiplexing with WDM, and modulating data streams for transmission using TDM, FHSS, and DSSS. Common applications discussed include radio/TV broadcasting, fiber optic networks, telephone systems, and digital subscriber lines.
104623 time division multiplexing (transmitter, receiver,commutator)Devyani Gera
Time Division Multiplexing (TDM) is a technique that transmits multiple message signals over a single communication channel by dividing the time frame into time slots, with one time slot for each message signal. TDM transmits samples of each signal in sequential time slots using techniques like PAM. There are two main types of TDM: synchronous TDM where all signals use the same sampling rate, and asynchronous TDM where different signals can use different sampling rates. The document then provides examples and problems demonstrating the application of TDM techniques.
RESOURCE ALLOCATION ALGORITHMS FOR QOS OPTIMIZATION IN MOBILE WIMAX NETWORKSijwmn
This document summarizes research on resource allocation algorithms for quality of service (QoS) optimization in mobile WiMAX networks. It discusses the Swapping Min-Max (SWIM) algorithm and Cooperative Multicast Scheduling (CMS) technique. SWIM performs scheduling for real-time polling service to meet QoS criteria like optimal throughput, latency guarantees, minimal delay jitter and number of bursts. CMS enhances throughput for multicast video by dividing transmission bursts into two phases where selected stations retransmit to nearby members for cooperation. Simulation results show SWIM has less bursts, zero jitter and optimal throughput, while CMS further improves throughput for each multicast group member.
Mathematical Explanation of Channel Capacity. Here the channel capacity is computed as the product of the pulse rate (pulses per second) and the information carried per pulse; this is how the channel capacity is measured.
This document discusses multiple-input multiple-output (MIMO) systems. It begins by outlining the motivations and aspirations for developing MIMO, including achieving high data rates near 1 Gbps while maintaining quality of service. It then covers MIMO system modeling and capacity studies. Different MIMO designs are presented that aim to achieve spatial multiplexing gain or diversity gain. Practical MIMO systems and architectures like V-BLAST are described. Networking applications of MIMO including MAC protocols are also discussed.
This document discusses how we build our lives and the importance of doing so carefully and intentionally. It tells a story about a carpenter who was ready to retire but agreed to build one last house as a favor. However, his heart was not in his work and he took shortcuts, using inferior materials and shoddy workmanship. This led to an unfortunate end to his career. The moral is that we must build our lives with care and focus, as we will have to live with the "house" we create. We should avoid distractions and reacting instead of acting thoughtfully.
The “direct site,” or the main company Web site, is the cornerstone of your sales and distribution system. It not only supports “standard pricing” but also reinforces company messaging and positioning as well as providing access to customer, partner, and vendor support systems.
This is a copy of a review session given to a group of manufacturers, publishers, and technology firms in late 2010. All of these types of companies have the unique challenge of balancing customer and reseller requirements. Contact Yeoman at 800-667-6098 if you have any questions.
This document compares and contrasts Costco and Walmart. It shows that while Walmart has higher total sales and profits, Costco pays employees more on average at $16 per hour compared to $9.68 at Walmart. Costco also has lower employee turnover at 24% versus 50% at Walmart. Costco focuses on providing value to members through low prices and high quality goods, while maintaining higher wages and benefits for employees.
Presentation by Helen Spandler at Sociology of Mental Health Study Group symposium: What does sociology need to contribute towards or against the wellbeing agenda? on 10 June 2013.
This document provides an overview of new performance and scalability improvements in Java SE 6, including runtime optimizations like biased locking and lock coarsening, garbage collection enhancements like parallel compaction, and client-side improvements such as reduced application startup time. Benchmark results demonstrate performance gains of 10-20% on SPECjbb2005, I/O tests, and VolanoMark compared to Java SE 5. The document discusses the new features and their impact in detail over several sections.
The document discusses celebrating festivals in memory of the narrator's grandfather who recently passed away. The narrator, a child, wants to celebrate the festivals as usual but others in the family feel celebrations should be skipped this year due to the grandfather's death. The narrator's father understands his perspective and explains that celebrations can take a different form, focusing more on togetherness and traditions than decorations and gifts. The father feels this approach would honor the grandfather's memory better. The narrator is convinced and agrees celebrations should happen to make his grandfather happy.
This document provides an overview and teaching ideas for a sociology update on various topics relating to education and technology. It includes international comparisons of education systems using PISA test results and videos. Other topics covered include cybercrime, surveillance, international students in the UK, the impact of Brexit on university research, and cyberbullying. Resources like websites, documentaries and TED talks are provided for each topic.
Video streaming using light-weight transcoding and in-network intelligence (Minh Nguyen)
In this paper, we introduce a novel approach, LwTE, which reduces streaming costs in HTTP Adaptive Streaming (HAS) by enabling light-weight transcoding at the edge. In LwTE, during encoding of a video segment at the origin server, metadata is generated that stores the optimal encoding decisions. LwTE enables us to store only the highest bitrate plus the corresponding metadata (of very small size) for unpopular video segments/bitrates. Since the metadata is very small, replacing unpopular video segments/bitrates with their metadata results in considerable savings in storage costs. The metadata is reused at the edge servers to reduce the time and computational resources required for on-the-fly transcoding.
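The storage-saving idea can be illustrated with a small sketch: popular segments keep every rendition, while unpopular ones keep only the highest bitrate plus tiny metadata. The sizes and popularity flags below are assumptions for illustration, not figures from the paper.

```python
# Illustrative storage accounting under the LwTE-style policy described
# above. All sizes are in MB and invented for the example.

def storage_needed(segments, bitrate_sizes_mb, metadata_mb):
    total = 0.0
    for seg in segments:
        if seg["popular"]:
            total += sum(bitrate_sizes_mb)                # all renditions
        else:
            total += max(bitrate_sizes_mb) + metadata_mb  # highest + metadata
    return total

sizes = [1.0, 2.0, 4.0]   # three renditions of one segment, in MB
segs = [{"popular": True}, {"popular": False}, {"popular": False}]
# All renditions for the popular segment, highest + metadata for the rest.
print(storage_needed(segs, sizes, 0.01))
```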
The document discusses distributed multimedia systems. It describes characteristics of multimedia data including being time-based and bulky. It also covers quality of service (QoS) management which involves resource scheduling, admission control, and traffic shaping algorithms. Stream adaptation techniques like scaling and filtering allow applications to adapt to changing resource availability. The case study describes the Tiger video file server system which uses striping, mirroring and a distributed scheduling algorithm to deliver video on demand with high performance and scalability.
Multihop Routing In Camera Sensor Networks (Chuka Okoye)
This poster abstract summarizes an experimental study of multihop routing in camera sensor networks. The experiments tested the Collection Tree Protocol (CTP) using CITRIC camera motes and TelosB motes. The experiments varied payload size and delay between packet transmissions to evaluate data rate, reception rate, and latency over different hop counts. The results show that there is a tradeoff between reception rate and latency. Adding a delay between transmissions can improve both data rate and reception rate compared to best effort transmission. The optimal delay depends on the network density and hop count.
1. The document describes a hierarchical framework for allocating network resources for robust home video streaming.
2. Key aspects of the framework include using TCP for transport, scalable video coding with temporal and SNR layers to adapt to bandwidth fluctuations, and prioritizing frame types to minimize quality impacts when frames must be dropped.
3. The framework aims to provide a recognizable video stream even during periods of network overload by gracefully degrading quality instead of causing stalls or failures.
The document discusses key concepts for engineering quality of service (QoS) on the Internet, including QoS frameworks, traffic source types, traffic parameters, QoS parameters, signalling, resource reservation, admission control, policing, shaping, queuing, scheduling, and congestion control. It provides examples and explanations of how these concepts work together to provide QoS guarantees for different types of network traffic.
QOS - LIQUIDSTREAM: SCALABLE MONITORING AND BANDWIDTH CONTROL IN PEER TO PEER... (ijp2p)
The vast majority of research in P2P live streaming systems focuses on system architectures that offer participating peers high upload bandwidth utilization, low delays during video stream diffusion, and robustness and stability under dynamic network conditions and peer behavior. On the other hand, to guarantee complete and on-time video distribution to every participating peer, the average upload bandwidth of the participating peers must always exceed the playback rate of the video stream. Most approaches do not take this requirement into consideration. Thus, in this paper we propose a very scalable monitoring mechanism for the total upload bandwidth of the participating peers, which is dynamic, accurate, and low-overhead. Moreover, by exploiting this monitoring mechanism we present and evaluate an algorithm that allows accurate and on-time estimation of the minimal additional bandwidth that an external set of resources (e.g. auxiliary peers) must contribute. In this way we guarantee uninterrupted stream delivery and provide high Quality of Service (QoS) in live streaming.
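The feasibility condition stated in this abstract can be sketched directly: the swarm's total upload capacity must cover every peer's playback rate, and any shortfall is what auxiliary resources must contribute. The rates below (in kbps) are illustrative, not from the paper.

```python
# Sketch of the minimal-additional-bandwidth estimate described above.

def additional_bandwidth_needed(peer_uploads_kbps, playback_rate_kbps):
    """Total extra upload capacity auxiliary resources must contribute."""
    n = len(peer_uploads_kbps)
    required = n * playback_rate_kbps   # every peer must receive the stream
    available = sum(peer_uploads_kbps)  # total upload the swarm supplies
    return max(0, required - available)

peers = [800, 500, 300, 200]   # upload capacities of four peers, kbps
print(additional_bandwidth_needed(peers, 600))  # 2400 - 1800 = 600 kbps
```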
ENHANCEMENT OF TCP FAIRNESS IN IEEE 802.11 NETWORKS (cscpconf)
The usage of fixed buffers in 802.11 networks has a number of disadvantages, including high delay, reduced throughput, and inefficient channel utilisation. To overcome this, a dynamic buffer sizing algorithm, the A* algorithm, has been implemented at the access point. In this algorithm the buffer size is dynamically adjusted depending on current channel conditions, so delay is reduced while throughput is maintained. However, in 802.11 networks with the DCF collision avoidance mechanism, this creates a significant amount of unfairness between upstream and downstream TCP flows, with clusters of upstream ACKs blocking downstream data at the access point. Thus a variation of the Explicit Window Adaptation (EWA) scheme is used to regulate the queuing time of the upload clients by calculating a feedback value at the access point. This creates fairness and increases the number of transmission opportunities for the downstream traffic.
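The dynamic-buffer idea can be sketched with a generic delay-based heuristic: size the buffer so that queuing delay stays near a target as the measured service rate changes. This is only an illustration of the principle, not the A* algorithm from the paper; all figures are assumptions.

```python
# Generic delay-targeting buffer sizing sketch (not the A* algorithm).

def buffer_size_pkts(service_rate_pps, target_delay_s, min_pkts=2):
    """Buffer (in packets) that keeps queuing delay near the target."""
    return max(min_pkts, round(service_rate_pps * target_delay_s))

print(buffer_size_pkts(1000, 0.05))  # 50 packets at 1000 pkt/s
print(buffer_size_pkts(200, 0.05))   # rate drops -> buffer shrinks to 10
```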
Delay jitter control for real-time communication (Masud Rana)
This document proposes a method for controlling delay jitter for real-time communication channels in a packet-switching network. It extends an existing scheme that provides bounds on maximum delay. The key aspects are:
1) Each network node contains "regulators" for each channel that reconstruct the original packet arrival pattern to preserve jitter, and a scheduler that ensures low distortion of patterns between nodes.
2) Clients specify maximum delay and jitter bounds when establishing a channel. The establishment procedure sets local jitter bounds at each node to ensure the end-to-end bound is met.
3) Regulators at each node attempt to faithfully preserve the original packet arrival pattern, so that the last node sees essentially the original pattern.
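The regulator behaviour described above can be sketched simply: each packet is held until its reconstructed eligibility time, so early arrivals are delayed back onto the original pattern while late arrivals pass through immediately. Times (in ms) are illustrative.

```python
# Sketch of a per-channel jitter regulator: release each packet at the
# later of its actual arrival and its expected (original-pattern) time.

def regulate(arrivals_ms, expected_ms):
    """Release times that restore the original arrival pattern."""
    return [max(a, e) for a, e in zip(arrivals_ms, expected_ms)]

expected = [0, 10, 20, 30]   # original (source) arrival pattern
arrived = [0, 14, 19, 33]    # distorted arrivals at this node
print(regulate(arrived, expected))  # [0, 14, 20, 33]
```

Note how the early third packet (19 ms) is held until 20 ms, restoring the spacing the next node expects.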
This document discusses using network coding to improve live video streaming over peer-to-peer mesh networks. It begins by introducing live video streaming and its challenges. It then discusses peer-to-peer and wireless mesh networks as infrastructures for video distribution. Network coding is presented as a technique to increase bandwidth utilization, robustness, and video quality by allowing intermediate nodes to combine packets before forwarding. The results showed that network coding can reduce delay and jitter, increase data localization, and improve bandwidth utilization and network scalability.
Server-based and Network-assisted Solutions for Adaptive Video Streaming (Eswar Publications)
This document discusses server-based and network-assisted solutions for adaptive video streaming. It begins with an abstract outlining that server-based adaptive streaming is gaining popularity because clients and network devices are not powerful enough to run advanced adaptation algorithms. The document then provides a taxonomy that categorizes adaptive video streaming solutions, focusing on server-based and network-assisted ones. It discusses classical computing approaches relevant to server-based solutions such as traffic shaping, video pacing, and rate limiting, proposes a taxonomy of server-based approaches, and discusses state-of-the-art solutions in the traffic-management and protocol/parameter-centric categories. Finally, it discusses network-assisted solutions and recent approaches that show the advantages of using network-assisted techniques.
This document discusses the interaction between application layer multicast (ALM) trees and MPEG-4 video streaming. It examines how different coding parameters and tree structures affect end-user video quality as measured by PSNR. The paper presents a simulation system to test various combinations of NICE ALM trees and MPEG-4 parameters. Results show that coding choices and tree organization depend on network characteristics like packet loss and bandwidth distribution. Large GOP sizes and many B-frames optimize quality with rare losses, while small GOPs and fewer B-frames work best with frequent losses. Uniform bandwidth favors small clusters and long paths, while varied bandwidth prefers larger clusters and shorter paths.
In the last few years, video streaming services over TCP or UDP, such as YouTube, FaceTime, Dailymotion, and mobile video calling, have become more and more popular. The key challenge in streaming over the Internet is to deliver the highest possible quality, adhere to the stream's playout time constraints, and efficiently and fairly share the available bandwidth with TCP, UDP, and other traffic types. This work introduces the Streaming Media Data Congestion Control protocol (SMDCC), a new adaptive streaming congestion control protocol in which a connection's packet transmission rate is adjusted according to the connection's dynamic bandwidth share. Using SMDCC, the bandwidth share of a connection is estimated using algorithms similar to those introduced in TCP Westwood. SMDCC avoids TCP's Slow Start phase and, as a result, does not show the pronounced rate oscillations characteristic of modern TCP, providing congestion control that is more appropriate for streaming applications. Besides, SMDCC is fair, sharing bandwidth equitably among a set of SMDCC connections. Its main benefit is robustness when packet losses are due to random errors, which is typical of wireless links and is becoming an increasing concern with the emergence of wireless Internet access. In the presence of random errors, SMDCC also remains friendly to TCP Tahoe and Reno (TTR). We provide simulation results using the ns3 simulator for our protocol running together with TCP Tahoe and Reno.
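Westwood-style bandwidth estimation, which this abstract says SMDCC adapts, can be sketched as a low-pass filter over per-ACK delivery-rate samples. The ACK sizes, intervals, and filter gain below are illustrative assumptions, not SMDCC's actual parameters.

```python
# Sketch of Westwood-style bandwidth-share estimation: exponentially
# weighted moving average of the rate at which data is acknowledged.

def estimate_bandwidth(ack_bytes, ack_intervals_s, alpha=0.9):
    """EWMA of per-ACK delivery-rate samples (bytes/s)."""
    est = 0.0
    for size, dt in zip(ack_bytes, ack_intervals_s):
        sample = size / dt
        est = alpha * est + (1 - alpha) * sample
    return est

acks = [1500, 1500, 1500, 1500]    # bytes acknowledged per ACK
gaps = [0.01, 0.01, 0.02, 0.01]    # seconds between ACKs
print(estimate_bandwidth(acks, gaps))
```

The filter smooths out the transient dip caused by the slower third ACK, which is the property that avoids TCP-like rate oscillations.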
This document summarizes key topics related to data link control and protocols. It discusses framing methods like fixed-size and variable-size framing. It also covers flow control, error control, and protocols for both noiseless and noisy channels. Specific protocols described include the Simplest Protocol, Stop-and-Wait Protocol, Stop-and-Wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. The document provides details on their design, algorithms, and flow diagrams to illustrate how each protocol handles framing, flow control, and error control.
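One of the protocols summarized above, Stop-and-Wait ARQ, can be sketched in a few lines: the sender transmits a single frame and retransmits until the matching ACK arrives. The lossy channel here is a scripted list of outcomes rather than a real link, and the 1-bit sequence number follows the usual alternating-bit scheme.

```python
# Minimal Stop-and-Wait ARQ sketch over a scripted lossy channel.

def stop_and_wait(frames, delivery_outcomes):
    """Send frames one at a time; False in outcomes = frame/ACK lost."""
    outcomes = iter(delivery_outcomes)
    transmissions, delivered = 0, []
    for seq, frame in enumerate(frames):
        while True:
            transmissions += 1
            if next(outcomes):       # frame and its ACK both got through
                delivered.append((seq % 2, frame))  # 1-bit sequence number
                break                # sender may move to the next frame
    return transmissions, delivered

tx, rx = stop_and_wait(["a", "b"], [True, False, True])
print(tx, rx)   # 3 transmissions, both frames delivered
```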
This document discusses a novel receiver-based traffic redundancy elimination (TRE) technique called Prediction-based Cloud Bandwidth and Cost Reduction (PACK) for cloud computing environments. PACK aims to reduce bandwidth costs and server load by having receivers detect redundant data and send predictions to senders about future chunks. This avoids sending redundant data and allows receivers to locally store chunks instead of downloading them. The document evaluates PACK using video traces and estimates its potential for cost savings compared to traditional sender-based TRE and no TRE.
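The receiver-driven idea behind PACK as summarized above can be sketched as follows: the receiver keeps hashes of chunks it has already seen and, when the incoming stream matches a stored chain, predicts the next chunk so the sender can skip transmitting it. The chunk contents and hashing details are illustrative assumptions, not PACK's actual format.

```python
import hashlib

# Sketch of prediction-based redundancy elimination at the receiver.

def chunk_hash(data):
    return hashlib.sha256(data).hexdigest()[:16]

def build_chain_store(chunks):
    """Map each chunk's hash to the chunk the receiver saw after it."""
    store = {}
    for cur, nxt in zip(chunks, chunks[1:]):
        store[chunk_hash(cur)] = nxt
    return store

def predict_next(store, last_chunk):
    """Receiver's prediction for the chunk that will follow."""
    return store.get(chunk_hash(last_chunk))

past = [b"hdr", b"frame1", b"frame2"]   # chunks already stored locally
store = build_chain_store(past)
print(predict_next(store, b"frame1"))   # b'frame2'
print(predict_next(store, b"new"))      # None -> no prediction sent
```

A correct prediction lets the receiver reuse its local copy instead of downloading the chunk, which is where the bandwidth saving comes from.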
Providing Controlled Quality Assurance in Video Streaming ... (Videoguy)
The document discusses providing quality assurance for video streaming across the internet using a proxy server system. It proposes a staggered two-flow streaming approach where an unreliable flow for enhanced video data is one segment ahead of a reliable flow for essential data. This allows the reliable flow to be prefetched and cached at the proxy server to ensure quality even with bandwidth limitations in the best-effort network. Experimental results show the approach can provide stable performance with low packet losses compared to using standard TCP. Future work areas include improving scalability and implementing application-aware bandwidth management and admission control.
Performance evaluation of bandwidth optimization algorithm (BOA) in ATM network (Editor Jacotech)
domains: none of them are suitable, alone, for the wide range of traffic services expected in ATM-based networks. Therefore, some integration of these basic schemes should be considered. In this paper, we propose a new traffic control algorithm, called the Bandwidth Optimization Algorithm (BOA). BOA is a multi-level control algorithm that attempts to optimally manage network resources and perform traffic control among a wide range of traffic services in ATM-based networks. The basic objective of BOA is to meet the quality of service requirements for different traffic sources, while making the best possible use of network bandwidth. In addition, BOA attempts to minimize network congestion in a preventive way.
The document describes a system for optimizing media streaming in the cloud. It involves predicting future demand for streaming capacity and allocating cloud resources accordingly using an algorithm. The algorithm aims to minimize costs by reserving resources over time periods that provide discounts, while ensuring sufficient resources are available. It does this by varying the reservation time window size based on predicted demand and pricing tariffs. The system uses demand forecasting to continuously update predictions and improve resource allocation decisions over time. The goal is to improve quality of service for media streaming systems while reducing operational costs through efficient cloud resource utilization.
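The reservation trade-off described above can be illustrated with a toy cost model: reserving over a longer window earns a discount but charges for the window's peak demand, so the best window size depends on the demand forecast. The pricing and demand figures are assumptions for illustration, not the paper's algorithm.

```python
# Toy cost model for time-windowed cloud capacity reservation.

def reservation_cost(demand, window, unit_price, discount_per_step):
    """Cost when capacity is reserved at each window's peak demand."""
    price = unit_price * (1 - discount_per_step * (window - 1))
    total = 0.0
    for start in range(0, len(demand), window):
        peak = max(demand[start:start + window])   # reserve for the peak
        total += peak * window * price
    return total

demand = [4, 5, 9, 8, 3, 4]    # predicted streaming demand per period
for w in (1, 2, 3):
    print(w, reservation_cost(demand, w, unit_price=1.0, discount_per_step=0.1))
```

With these figures the two-period window is cheapest: the discount outweighs the over-provisioning, while the three-period window pays too long for the demand spike.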
This document summarizes a survey on congestion control mechanisms. It discusses how congestion control plays an important role in computer networks and modern telecommunications to prevent network congestion. It categorizes major congestion control mechanisms and algorithms, including black box approaches like TCP that rely only on binary feedback and grey/green box approaches that use more network information. Common goals for congestion control algorithms are discussed like efficiency, fairness and smooth convergence.
Similar to A Two-Tiered On-Line Server-Side Bandwidth Reservation Framework for the Real-Time Delivery of Multiple Video Streams (20)
Java provides strong security features that are built into its design and well-suited for distributed computing. Its security model uses sandboxes, class loaders, bytecode verification, and security managers to prevent untrusted applications from accessing system resources. Java also supports protected domains that extend security through flexible user-defined permissions for applications. Effective security requires ongoing diligence through techniques, training, and adapting to new threats.
This document provides an overview of security in the Java platform, covering topics like the Java language's security features, bytecode verification, the basic security architecture including security providers and file locations, cryptography, public key infrastructure (PKI), authentication, secure communication techniques, access control including permissions and policy, and built-in security providers. It describes the key principles of implementation independence, interoperability, and extensibility that the Java security APIs are designed around.
This guide helps developers migrate Java applications from version 1.3 to 5.0. While many 1.3 applications run without changes, some compatibility issues exist as described in this guide. The guide covers runtime issues, deployment issues, and tooling issues to assist with the migration. Developers should consult this guide and release notes for any late-breaking issues when migrating applications to Java 5.0.
The document summarizes performance enhancements and new features in J2SE 5.0, including ergonomics in the Java Virtual Machine to automatically select optimal settings, improved string handling with StringBuilder, enhancements to Java 2D and image I/O, and reduced startup time and memory footprint through class data sharing. Benchmark results show significant performance improvements over J2SE 1.4.2 in SPECjbb2000 and VolanoMark, as well as up to 22% faster startup for applications. Memory footprint is also reduced for applications on various platforms including Windows XP and Linux.
This document provides an overview of new performance and scalability improvements in Java SE 6, including runtime optimizations for locking, compilation, and garbage collection. It discusses features like biased locking, lock coarsening, parallel compaction collection, improved ergonomics, and reduced application startup times. Benchmark results demonstrate performance gains of 10-20% for SPECjbb2005, I/O tests, and VolanoMark compared to Java SE 5.
This document provides an overview of new performance and scalability improvements in Java Standard Edition 6, including runtime optimizations like biased locking and lock coarsening, garbage collection enhancements like parallel compaction, and compiler optimizations. It discusses these changes and provides references for more detailed information.
This document provides an overview of new performance and scalability improvements in Java SE 6, including runtime optimizations, garbage collection enhancements, and client-side improvements. Key changes include biased locking for faster uncontended synchronization, parallel compaction for faster major garbage collections, background compilation for improved multicore utilization, and boot class loader optimizations for faster application startup times. Benchmark results demonstrate performance gains of 10-20% for SPECjbb2005, I/O tests, and VolanoMark compared to Java SE 5.
Memory Management in the Java HotSpot Virtual Machine (white paper)
This document provides an overview of memory management and garbage collection in the Java HotSpot Virtual Machine. It describes the different garbage collectors available, including the serial, parallel, parallel compacting, and concurrent mark-sweep collectors. It also discusses generational memory organization, garbage collection concepts, and tools for evaluating performance.
This document discusses the version numbering for Java SE 6. It states that the platform name has changed from J2SETM to JavaTM SE and the official name is JavaTM Platform, Standard Edition 6. It also discusses that version 6 is used for both the product and developer versions to reflect the maturity of Java SE.
1. Java Web Start allows users to launch Java applications from a web page with a single click without complicated installation. It manages installation of the required Java Runtime Environment version and updates applications automatically.
2. When a user clicks to launch the Notepad application, Java Web Start installs the needed Java version if missing and runs the application, caching it for future use.
3. The application is integrated with the user's desktop and can be launched from the Java Application Cache viewer or a desktop shortcut in the future without returning to the web page. Java Web Start handles running multiple applications requiring different Java versions.
This document provides a summary of tuning techniques for Java applications. It begins with best practices like using the latest Java version and release. It emphasizes the importance of making decisions based on data through statistical analysis and benchmarks. The document then gives tuning ideas covering JVM settings like garbage collection policies and heap sizing. It aims to provide guidance on performance optimization in a methodical, data-driven manner.
Java APIs For Imaging Enterprise-Scale, Distributed 2D Applications (white paper)
The document discusses the Java 2D and Java Advanced Imaging APIs. It provides an overview of the Java 2D API and its capabilities for 2D graphics and imaging. It then describes the Java Advanced Imaging API, which builds on the Java 2D API to provide more sophisticated image processing capabilities. Key features of the Java Advanced Imaging API include tiling of large images, resolution-independent processing, and support for distributed and network-based imaging applications. The APIs allow for cross-platform development of imaging applications.
Introduction to the Java(TM) Advanced Imaging API (white paper)
The document introduces the Java Advanced Imaging (JAI) API, which provides advanced image processing capabilities for Java applications. It describes key JAI functionality like tiled images, lazy evaluation, multi-resolution imaging, and network imaging. The course will cover pixel-based and resolution-independent imaging, writing JAI extensions, and an example application.
Evaluation of Java Advanced Imaging (1.0.2) as a Basis for Image Proce... (white paper)
This document describes a project between Sun Microsystems and Utah State University to evaluate Java Advanced Imaging (JAI) as a basis for image processing applications in earth sciences. The project's primary objective was to determine if JAI provided adequate functionality for earth sciences applications by assessing it against a matrix of requirements. Some demonstration software was also developed using JAI. The results found that JAI satisfied most requirements directly or through extensibility, and the remaining requirements were outside the project scope. The demonstration software showed JAI's ability to handle typical earth sciences data formats and generate classification maps. The project helped Sun understand how well JAI meets earth sciences application needs.
Java 2D API: Enhanced Graphics and Imaging for the Java Platform (white paper)
The document discusses the Java 2D API, which provides a powerful framework for device- and resolution-independent 2D graphics in Java programs. The Java 2D API extends the graphics and imaging classes defined by java.awt while maintaining compatibility. It enables developers to easily incorporate high-quality 2D graphics, text, and images. Key features of the Java 2D API include support for images, fonts, layout, paths, transformations, strokes, fills, and rendering.
The Java 2 platform includes a new package of concurrency utilities that are designed to simplify building concurrent applications. The concurrency utilities provide commonly used building blocks such as thread pools, asynchronous task execution frameworks, concurrent collections, atomic variables, locks, and condition variables. Using these utilities reduces programming effort, improves performance, reliability, maintainability and productivity when building concurrent applications compared to developing these components from scratch. The concurrency utilities package aims to make concurrent programs clearer, shorter, faster, more reliable, scalable, easier to write, read and maintain.
Defining a Summative Usability Test for Voting Systems (white paper)
This document outlines a proposed approach for conducting a summative usability test of a voting system to evaluate whether it meets specified usability requirements. The test is intended to identify failures in effectiveness, efficiency, and satisfaction rather than diagnose their causes. Key elements that must be defined include the purpose of the test, voting system, user tasks, data collection, and how results will be analyzed. Gaps are noted in establishing specific usability benchmarks and criteria for voting systems to pass or fail the test.
This document summarizes research conducted to develop usability performance benchmarks for voting systems to be included in the Voluntary Voting System Guidelines (VVSG). The research established a standardized testing methodology to measure usability across different voting system technologies. Over 450 test participants used four different voting systems and their interactions were analyzed to determine accuracy, completion rates, and error rates. The results were used to propose benchmark values for three usability measures that all systems must meet: a Total Completion Score of 98%, a Voter Inclusion Index of 0.35, and a Perfect Ballot Index of 2.33. The benchmarks and testing methodology are intended to improve the usability of future voting systems.
This document summarizes a study comparing the usability perceptions and performance of Taiwanese and North American users of an MP3 player. Surveys showed North American users had lower satisfaction and perceptions of effectiveness and efficiency than Taiwanese users. However, performance results were unclear, with similar effectiveness but conflicting results on efficiency between the groups. The study involved surveys and task observations with 23 Taiwanese and North American subjects to measure the impact of culture on usability factors like satisfaction, effectiveness and efficiency.
[To download this presentation, visit:
https://www.oeconsulting.com.sg/training-presentations]
This PowerPoint compilation offers a comprehensive overview of 20 leading innovation management frameworks and methodologies, selected for their broad applicability across various industries and organizational contexts. These frameworks are valuable resources for a wide range of users, including business professionals, educators, and consultants.
Each framework is presented with visually engaging diagrams and templates, ensuring the content is both informative and appealing. While this compilation is thorough, please note that the slides are intended as supplementary resources and may not be sufficient for standalone instructional purposes.
This compilation is ideal for anyone looking to enhance their understanding of innovation management and drive meaningful change within their organization. Whether you aim to improve product development processes, enhance customer experiences, or drive digital transformation, these frameworks offer valuable insights and tools to help you achieve your goals.
INCLUDED FRAMEWORKS/MODELS:
1. Stanford’s Design Thinking
2. IDEO’s Human-Centered Design
3. Strategyzer’s Business Model Innovation
4. Lean Startup Methodology
5. Agile Innovation Framework
6. Doblin’s Ten Types of Innovation
7. McKinsey’s Three Horizons of Growth
8. Customer Journey Map
9. Christensen’s Disruptive Innovation Theory
10. Blue Ocean Strategy
11. Strategyn’s Jobs-To-Be-Done (JTBD) Framework with Job Map
12. Design Sprint Framework
13. The Double Diamond
14. Lean Six Sigma DMAIC
15. TRIZ Problem-Solving Framework
16. Edward de Bono’s Six Thinking Hats
17. Stage-Gate Model
18. Toyota’s Six Steps of Kaizen
19. Microsoft’s Digital Transformation Framework
20. Design for Six Sigma (DFSS)
To download this presentation, visit:
https://www.oeconsulting.com.sg/training-presentations
The Genesis of BriansClub.cm Famous Dark WEb PlatformSabaaSudozai
BriansClub.cm, a famous platform on the dark web, has become one of the most infamous carding marketplaces, specializing in the sale of stolen credit card data.
Starting a business is like embarking on an unpredictable adventure. It’s a journey filled with highs and lows, victories and defeats. But what if I told you that those setbacks and failures could be the very stepping stones that lead you to fortune? Let’s explore how resilience, adaptability, and strategic thinking can transform adversity into opportunity.
Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.AnnySerafinaLove
This letter, written by Kellen Harkins, Course Director at Full Sail University, commends Anny Love's exemplary performance in the Video Sharing Platforms class. It highlights her dedication, willingness to challenge herself, and exceptional skills in production, editing, and marketing across various video platforms like YouTube, TikTok, and Instagram.
Easily Verify Compliance and Security with Binance KYCAny kyc Account
Use our simple KYC verification guide to make sure your Binance account is safe and compliant. Discover the fundamentals, appreciate the significance of KYC, and trade on one of the biggest cryptocurrency exchanges with confidence.
Navigating the world of forex trading can be challenging, especially for beginners. To help you make an informed decision, we have comprehensively compared the best forex brokers in India for 2024. This article, reviewed by Top Forex Brokers Review, will cover featured award winners, the best forex brokers, featured offers, the best copy trading platforms, the best forex brokers for beginners, the best MetaTrader brokers, and recently updated reviews. We will focus on FP Markets, Black Bull, EightCap, IC Markets, and Octa.
Industrial Tech SW: Category Renewal and CreationChristian Dahlen
Every industrial revolution has created a new set of categories and a new set of players.
Multiple new technologies have emerged, but Samsara and C3.ai are only two companies which have gone public so far.
Manufacturing startups constitute the largest pipeline share of unicorns and IPO candidates in the SF Bay Area, and software startups dominate in Germany.
Understanding User Needs and Satisfying ThemAggregage
https://www.productmanagementtoday.com/frs/26903918/understanding-user-needs-and-satisfying-them
We know we want to create products which our customers find to be valuable. Whether we label it as customer-centric or product-led depends on how long we've been doing product management. There are three challenges we face when doing this. The obvious challenge is figuring out what our users need; the non-obvious challenges are in creating a shared understanding of those needs and in sensing if what we're doing is meeting those needs.
In this webinar, we won't focus on the research methods for discovering user-needs. We will focus on synthesis of the needs we discover, communication and alignment tools, and how we operationalize addressing those needs.
Industry expert Scott Sehlhorst will:
• Introduce a taxonomy for user goals with real world examples
• Present the Onion Diagram, a tool for contextualizing task-level goals
• Illustrate how customer journey maps capture activity-level and task-level goals
• Demonstrate the best approach to selection and prioritization of user-goals to address
• Highlight the crucial benchmarks, observable changes, in ensuring fulfillment of customer needs
The APCO Geopolitical Radar - Q3 2024 The Global Operating Environment for Bu...APCO
The Radar reflects input from APCO’s teams located around the world. It distils a host of interconnected events and trends into insights to inform operational and strategic decisions. Issues covered in this edition include:
Building Your Employer Brand with Social MediaLuanWise
Presented at The Global HR Summit, 6th June 2024
In this keynote, Luan Wise will provide invaluable insights to elevate your employer brand on social media platforms including LinkedIn, Facebook, Instagram, X (formerly Twitter) and TikTok. You'll learn how compelling content can authentically showcase your company culture, values, and employee experiences to support your talent acquisition and retention objectives. Additionally, you'll understand the power of employee advocacy to amplify reach and engagement – helping to position your organization as an employer of choice in today's competitive talent landscape.
At Techbox Square, in Singapore, we're not just creative web designers and developers, we're the driving force behind your brand identity. Contact us today.
How MJ Global Leads the Packaging Industry.pdfMJ Global
MJ Global's success in staying ahead of the curve in the packaging industry is a testament to its dedication to innovation, sustainability, and customer-centricity. By embracing technological advancements, leading in eco-friendly solutions, collaborating with industry leaders, and adapting to evolving consumer preferences, MJ Global continues to set new standards in the packaging sector.
Top mailing list providers in the USA.pptxJeremyPeirce1
Discover the top mailing list providers in the USA, offering targeted lists, segmentation, and analytics to optimize your marketing campaigns and drive engagement.
A Two-Tiered On-Line Server-Side Bandwidth Reservation Framework for the Real-Time Delivery of Multiple Video Streams

Jorge M. Londoño and Azer Bestavros
Computer Science Department, Boston University, Boston, MA, USA
July 1st, 2008
BUCS-TR-2008-012
ABSTRACT
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for
the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand
(VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number
of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper,
we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation
provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop
probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered
bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher
link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in
an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme
especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings,
typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
Keywords: Video on demand, quality of service, admission control, streaming media
1. INTRODUCTION
Motivation: On-demand video streaming over the Internet is here to stay. Over the years much research has been devoted
to resource management issues of Internet video streaming from the perspectives of the client and the network. From the
client’s perspective, packets traversing the network may suffer from losses and high delay variability. Delay variability
is typically managed by buffering a short period of the stream on the client side1 to avoid underflow conditions which
would lead to interruptions of the playback. Proper sizing of the client-side buffer is important because if underestimated,
it would lead to overflow conditions whereby packets are lost with great detriment to playback quality. From the network’s
perspective, the high variability of the video streaming rates makes it very difficult to provision adequate bandwidth, while
achieving either high utilization or low delay and buffer requirements. To that end, smoothing techniques2 provide the
means to achieve higher utilization by trading off a small amount of buffering/delay.
While significant attention has been devoted to the allocation of client and network resources in support of Internet
video streaming applications, less attention has been given to the allocation of server-side resources. This is primarily
due to the fact that, traditionally, such resources were assumed to be dedicated, well-provisioned (if not over-provisioned)
server farms, for example. However, the advent of virtualization and cloud computing technologies requires us to revisit
such assumptions, and consequently to develop effective mechanisms for the estimation and reservation of the server-side
resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud.
In this paper, we focus on the (cloud/virtual) server-side upload bandwidth needs necessary to service a large number of
Jorge Londoño / E-mail: jmlon@cs.bu.edu
Azer Bestavros / E-mail: best@cs.bu.edu
VOD streams subject to minimal QoS constraints. Moreover, since cloud resources are likely to be acquired (and paid
for) dynamically in response to variable demand patterns, we focus our attention on approaches that are amenable to on-line/real-time use.
Clearly, the bandwidth allocation approaches used to provision network paths1, 3–6 can be used to provision the server
bandwidth needs as well, but this would be inefficient resource-wise. In particular, the variability of VBR traffic patterns
creates the opportunity for achieving better utilizations while guaranteeing the bandwidth demanded by individual streams,
as proposed by Krunz et al.7 Other proposals8, 9 advocate the use of adaptive encoding schemes so that the system may trade off quality for bandwidth and CPU cycles.
In this paper we present an alternative scheme: by exploiting the opportunities for statistically multiplexing VBR
streams at the server, we are able to achieve higher utilization while providing probabilistic bounds on dropped frames and
deterministic bounds on delay and buffer requirements.
Scope and Contributions: We consider a VOD distribution system whereby a pool of acquired (cloud) servers must
handle requests coming from a large population of clients. The system we envision has the following attributes:
1. On-line: The system has no a priori knowledge of service requests.
2. Real-Time: The system must make admission/allocation decisions within a bounded amount of time.
3. Quality-of-Service: The system must satisfy per-stream bandwidth, delay and loss requirements, which are necessary
to meet users' perceived quality expectations.
4. Efficient and Scalable: The system must be able to handle a large number of streams and must ensure high bandwidth
utilization.
To that end, we present a server-side bandwidth and buffer management scheme which gives deterministic bounds on
delays, and provisions bandwidth at rates that are typically much lower than other well known techniques. In addition, we
present an extension to our scheme which trades off a small loss rate for a highly effective statistical multiplexing of peak
bandwidth requirements of streams that are co-located on the server. Our scheme can be seen as providing a framework that
enables a spectrum of design choices ranging from purely best-effort to exclusive reservations, such that the loss probabil-
ities are bounded according to the system parameters. Within this framework, we present an on-line admission/allocation
algorithm which uses three pre-computed parameters per stream to characterize the stream’s bandwidth needs. Upon arrival
of a request for video stream delivery, our admission/allocation algorithm makes its decision in time linear in the number
of servers.
We present an extensive evaluation of our proposed approach using trace-driven simulations over a large collection
of H.264 and MPEG4 video streams. The results we obtain confirm the efficiency and scalability of our scheme and
underscore the significant realizable savings, typically yielding losses that are an order of magnitude or more below our
analytically derived bounds.
2. A MULTI-STREAM BANDWIDTH ALLOCATION FRAMEWORK
2.1 Background and Basic Approaches
Video encoding standards, such as MPEG-2/4 or H.264, make use of three types of frame encodings: I-frames (intra-coded), which contain a complete encoded frame; P-frames (predictive), which depend on the information of a previous I or P frame to be decoded; and B-frames (bidirectional), which require information from preceding and future frames in order to be
decoded. The exact sequence of I, P and B frames used by a stream is called the pattern, and their transmission order is not
necessarily the playback order. Typically, the pattern contains a single I frame and many P and B frames. The set of frames
corresponding to one iteration of the pattern is called the Group of Pictures (GoP).∗ Figure 1 illustrates the cumulative
number of bits, A(t), for a sample GoP from a real VBR stream; the pattern is also indicated at the top.
∗ For a review of video encoding techniques, see ref. 10.
Figure 1. Temporal representation of a Group of Pictures (GoP)
In current standards the frame rate is constant, and two important parameters describing the stream are the mean bitrate
and the peak bitrate, defined as follows:

    Mean Rate = a(tf) / tf                                        (1)

    Peak Rate = max_i [ a(ti+1) − a(ti) ] / (ti+1 − ti)           (2)

where ti indicates the time of the ith frame and tf is the time of the last frame. (It is assumed the stream starts at t1 = 0.)
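As a concrete illustration, equations (1) and (2) can be computed directly from the frame timestamps and the cumulative-bits function. The sketch below is ours, not part of the original system; function and variable names are illustrative:

```python
def mean_and_peak_rate(times, cum_bits):
    """Compute the mean and peak bitrates of a stream.

    times[i]    -- arrival time t_i of the (i+1)-th frame, with t_1 = 0
    cum_bits[i] -- cumulative bits a(t_i) up to and including that frame
    """
    # Mean rate, eq. (1): total bits over total duration.
    mean_rate = cum_bits[-1] / times[-1]
    # Peak rate, eq. (2): largest rate over any single inter-frame interval.
    peak_rate = max(
        (cum_bits[i + 1] - cum_bits[i]) / (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    )
    return mean_rate, peak_rate
```

For a toy trace with frames at 0, 1, 2, and 3 seconds accumulating 100, 300, 350, and 400 bits, the mean rate is 400/3 ≈ 133 bit/s while the peak rate is 200 bit/s, illustrating the gap between the two parameters that the discussion below exploits.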
Using the stream’s mean rate for reserving resources is not practical as it does not provide a small enough bound on
the playback delay and, potentially, may require a very large buffer to avoid buffer underruns at the receiver. On the other
hand, while using the peak rate would give the minimum playback delay and would minimize the amount of buffering
required, it is also wasteful of resources as bandwidth utilization will be very low, making it impossible to scale the system
to a large number of streams.
Previous work3 uses the concept of effective bandwidth as a way to characterize (using a tight constant rate envelope)
any time interval of the stream, so that buffering delay experienced by any packet during this interval is bounded. Together,
the rate and the time bound determine the maximum buffer space needed. Although this effective bandwidth envelope
method provides a deterministic, tight characterization of the stream, its direct application to reserve resources at the server
imposes an exclusive reservation mechanism that undermines the possibility of improved scaling that could be achieved by
statistically multiplexing many streams. A secondary drawback of this technique is that it is computationally expensive as
it is quadratic in the number of frames in the stream.
2.2 Efficient Characterization of a Single VBR Video Stream
We propose a variation of the effective bandwidth model that allows computing the bandwidth and required buffer space
in a single pass, i.e., linear in the number of frames in the stream, while still providing guaranteed bounds on the delay
experienced by packets in the stream. We assume that time is discretized at instants tk = k · TGoP and we discretize the cumulative-bits function as ak = a(tk+1), i.e. presenting all the GoP's bits at the beginning of the interval.
Figure 2a illustrates this discretized version of the stream.
Let the smoothing period T indicate the length of the time intervals we will smooth. In practice, it is convenient to make T = sTGoP, i.e. an integral multiple of the GoP period, but this is not required by the analysis that follows. By grouping the
bits in the interval T, it is possible to smooth the peaks present in the interval. We define the smoothed bandwidth ε as the maximum bandwidth required over all discrete groups of length T:

    ε = max_k [ A(tk + T) − A(tk) ] / T.                          (3)

Figure 2. (a) Discretized representation of the stream; (b) CDF of GoP sizes, P[X < x] versus x (bits ×10^6).
Figure 2a illustrates two such groups for T = 3TGoP (the shadowed triangles). An important consequence of this definition is that at times tk = nT (multiples of T), it is guaranteed that the transmission buffer will be empty, and therefore this gives an upper bound of T on the delay experienced by any packet belonging to this interval. For the same reason, the maximum buffer space required is

    β = εT.                                                       (4)

Clearly, equation (3) can be computed in a single pass over the stream, and equation (4) is then a single operation. Therefore, the computation is linear in the number of frames.
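A minimal sketch of this single-pass computation (our illustration; the per-GoP bit counts and names are hypothetical):

```python
def smoothed_bandwidth(gop_bits, s, t_gop):
    """Single-pass computation of the smoothed bandwidth (eq. 3) and
    the buffer bound (eq. 4) over disjoint groups of s GoPs, T = s * t_gop.

    gop_bits -- bits contributed by each GoP
    s        -- smoothing period in GoP units
    t_gop    -- duration of one GoP in seconds
    """
    T = s * t_gop
    eps = 0.0
    # One pass over the stream: track the maximum rate over groups of length T.
    for start in range(0, len(gop_bits), s):
        eps = max(eps, sum(gop_bits[start:start + s]) / T)
    beta = eps * T  # maximum buffer space, eq. (4)
    return eps, beta
```

Because each GoP is visited exactly once, the cost is linear in the stream length, in contrast with the quadratic effective-bandwidth computation discussed above.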
2.3 A Two-Tiered Bandwidth Allocation Framework
Our second step is to relax the deterministic guarantee in equation (3). It is well known10–12 that the distribution of frame
sizes exhibits long range dependence. In other words, there is a non-negligible probability of having very large frames,
although the distribution itself is skewed towards small frame sizes. Consequently, smoothing out a large frame may require a relatively long period. Even worse, large frames do not occur independently; they tend to appear clustered in time, as
evidenced by the autocorrelation of GoP sizes.10 Figure 2b illustrates the CDF of some actual video traces providing
evidence of the large mass of small frame sizes as well as the long tails.
Our proposed approach is not to use the maximum bandwidth for reservation, as this value is very unlikely to occur.
Instead, we propose the two-tiered bandwidth reservation approach detailed below:
(a) Each stream has a guaranteed reservation for an amount equal to ε(p) , which denotes the p percentile of the distribu-
tion of the values (A(tk + T ) − A(tk ))/T .
(b) Each set of co-located streams is assigned an additional slack reservation for an amount equal to η. This reserved
bandwidth is shared by all the streams. When an instantaneous burst requirement of a stream exceeds its reservation,
that stream may use the shared slack in η to accommodate that burst. This enables the statistical multiplexing of the
peaks of various streams within the shared slack, effectively providing a cushion to insure against individual streams
exceeding their reservations.
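Under the same disjoint-group discretization used for ε, the per-stream guarantee ε(p) is just a percentile of the group rates. A sketch (ours; the nearest-rank percentile rule is an assumption, as the text does not specify one):

```python
import math

def epsilon_p(gop_bits, s, t_gop, p):
    """p-percentile of the distribution of the group rates
    (A(t_k + T) - A(t_k)) / T; p = 1 recovers the maximum eps."""
    T = s * t_gop
    rates = sorted(
        sum(gop_bits[i:i + s]) / T
        for i in range(0, len(gop_bits), s)
    )
    # Nearest-rank percentile: smallest rate with at least a fraction p
    # of the groups at or below it.
    idx = max(0, math.ceil(p * len(rates)) - 1)
    return rates[idx]
```

With the skewed, long-tailed GoP-size distributions described above, ε(p) for moderate p sits far below the maximum, which is exactly where the bandwidth savings come from.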
Figure 3. Architecture of the video streaming system
2.4 Architecture of the Video Streaming System
Figure 3 illustrates the architecture of a video streaming system based on the two-tiered bandwidth allocation framework presented above. The system reserves a fixed amount εi(p) of bandwidth per stream, as well as an additional shared bandwidth of η, which can be arbitrarily distributed during each smoothing period T, so that each stream receives additional capacity ηi, preserving the total Σi ηi ≤ η. The buffer capacity is also dynamically adjusted for each smoothing period: a stream may use up to (εi(p) + ηi)T buffer bits. As a consequence of these settings, all the bits in the buffer will be delivered by the end of the smoothing period if the output line is never idle. In other words, the maximum bit latency is T.
Special care must be given to ensure that the output line is never idle. Our subsequent analysis depends on having an
empty buffer at the end of each smoothing cycle. Of course, this condition can be easily enforced in practice. In the case
of stored media, all the system has to do is to prefetch all the frames belonging to the smoothing period. In the case of a
live-feed, this condition may be ensured by delaying the stream by T .
The exclusively reserved portion of the bandwidth guarantees the timely delivery of a fraction p of the GoPs. It is
also possible to give a probabilistic guarantee for the remaining 1 − p fraction as follows: Let Xi be the random variable
representing the total number of bits in an interval of size T of the ith stream at some point in time tk . If Xi < ε(p) · T ,
then the reserved bandwidth is enough, otherwise the excess amount will have to be handled using the shared slack. Let us
call Yi the excess amount per stream, so that

    Yi = Xi − ε(p) · T   if Xi > ε(p) · T,
    Yi = 0               otherwise.
The total slack η will be sufficient if Σi Yi ≤ η · T. Otherwise, we simply drop as many frames as necessary during this interval. Without any additional knowledge of the distribution of sizes and without any assumptions on the independence of the streams, Markov's inequality allows us to bound the loss probability as follows:

    P[ Σi Yi > η · T ] ≤ E[ Σi Yi ] / (η · T)
                       = Σi E[Yi] / (η · T),                      (5)
where the values E[Yi], which depend only on the value of p, can be easily precomputed for each stream. Although this expression could be used to estimate the slack η given a bound on the loss probability, the value thus obtained is too large for practical purposes. Instead, we will use this equation to give a guaranteed bound, but leave open the actual computation of η to heuristics that may perform well in practice. In particular, for the video streaming application presented in the next section, the maximum rule gives much lower losses in practice (as we will show in the experimental section).
It is important to note that our framework enables a wide spectrum of possibilities. On the one hand, by setting p = 0,
there would be no per-stream reservations (ε(p) = 0), corresponding to a best-effort system with capacity η. On the other hand, by setting p = 1, the per-stream reservation would be ε(p) = ε, yielding E[Yi] = 0, meaning that there are no losses at the source, and of course no need for slack reservation.
Alternatively, our framework can be seen as enabling an optimization problem, whereby the valuable resource is the bandwidth, and the total bandwidth η + Σi εi(p) may be minimized through an appropriate choice of p, subject to constraints (e.g., on the maximum loss probability or other metrics).†
2.5 Admissibility Test
We conclude this section by presenting a strategy for admitting a new video stream i to one of the servers in the system. We assume that the system consists of a farm (cloud) of n servers, with bandwidth capacity Cj for server j. We
assume that the target upper bound on the loss probability per stream is l. Also, we assume that the values ε, ε(p) , and
E [Yi ] have been precomputed in advance for each one of the streams in the system.
1. For the new stream i, compute the slack

       ηi = εi − εi(p).
2. Using the maximum rule, the new slack of server j (η′j) is the maximum of its current slack (ηj) and the slack of stream i:

       η′j = max{ηi, ηj}.
3. Stream i is admissible to (i.e., could be served by) server j if

       εi(p) + η′j + Σk∈j εk(p) ≤ Cj

   and

       ( E[Yi] + Σk∈j E[Yk] ) / (η′j · T) ≤ l.
4. If stream i is not admissible to any one of the n servers already in the system, then the system must acquire (from the cloud) a new server n + 1, or else stream i is rejected.
Clearly, the above is an on-line strategy. In particular, it is only necessary to keep three state variables per server: the current maximum ηj, the sum Σk εk(p), and the sum Σk E[Yk]. Upon arrival of a stream, the new maximum and the sums can be easily updated to check feasibility. For this reason, the total cost of admitting a new stream is linear in the number of servers.‡
Notice that using the above admission test, there may be more than one server that is able to “host” the incoming stream
i. This introduces a “server selection” problem. In particular, if the farm is composed of heterogeneous servers, each with a different associated cost, then minimizing the total cost accrued by the system is akin to an extension of the weighted-bin-packing problem, and approximation techniques such as first-fit or best-fit could be used to select the server.
3. EXPERIMENTAL EVALUATION
In our experimental evaluation we used a collection of traces from the literature.10, 13–17 These traces provide information about the
frames of a large collection of video streams encoded with H.264 and MPEG-4 encoders, under a wide range of encoder
configurations. We conducted our experiments with a subset of 95 streams, all of them with durations of half an hour or
longer.
† The impact of the parameter p is considered later in Section 3.3.
‡ If we assume that streams leave (terminate), then keeping track of the maximum per server would require keeping the list of individual stream slacks; finding the maximum after each departure/arrival would then be O(log m) (using a heap), where m is the number of streams on a server.
Figure 4. CDF of the ratios of smoothed to effective bandwidth: (a) ratio of ε to effective bandwidth, with medians 1.33, 1.44, and 1.82 for s = 1, 2, 5; (b) ratio of ε(p) to effective bandwidth for p = 0.75, with medians 0.43, 0.51, and 0.69 for s = 1, 2, 5.
3.1 Comparison with Effective Bandwidth Allocation Strategy
To give a fair comparison with respect to the effective bandwidth metric, we set the delay bound to be the same in both
cases and equal to an integral number s of GoP periods. We then compute the effective bandwidth using the algorithm
from ref. 3, and then compute the smoothed bandwidth ε using eq. (3). By computing the ratio of the smoothed bandwidth to the
effective bandwidth for each stream, we obtain the results shown in Figure 4a.
In almost all cases this ratio is well below two, but greater than one. This is expected as the smoothing technique does
not provide an envelope as tight as the effective bandwidth approach. As the smoothing period (sTGoP) increases, the ratio grows, but the distribution also becomes steeper, and the transition between the main body and the tail of the distribution becomes sharper at about 80%, indicating that for the majority of streams the ratio is below 1.85, while for the remaining fraction it can go as high as 2.05.
Similarly, Figure 4b shows the ratios of ε(p) to the effective bandwidth, computed for the case of p = 0.75. For the large
majority of streams this ratio is well below one, indicating the possibility of bandwidth savings when using the p-percentile
of the distribution (and of course, the 1 − p fraction will be handled using the shared slack later). Observe also that the
ratio is smaller for small values of s, i.e. when the bound on the delay is tighter, as in these cases the effective bandwidth technique needs more bandwidth to maintain the bound on the delay. This highlights the potential of our technique for
tightly delay-constrained applications.
3.2 Effect of the Smoothing Period
The next question we target in our experimental evaluation is the setting of an appropriate smoothing period T. In particular, large values of T would be undesirable, as they would result in larger buffers and hence longer set-up times and, in the case of live feeds, longer delays. Figure 5 shows the behavior of ε and ε(p) as functions of T for two illustrative
streams. For other streams the results we obtained were very similar. In general ε is very sensitive to small values of T ,
slowly converging to the mean. On the other hand, ε(p) is essentially independent of T (for T ≥ TGoP ). This means that
by choosing the reservation that guarantees the delivery of a p fraction of the groups of frames, we can ensure a low delay and get the additional benefit of requiring a small buffer, all with just a small overhead over the stream's mean rate.
3.3 Effect of the Parameter p
As mentioned earlier, p is the parameter of the system that allows it to span the spectrum from being a best-effort system
(when p = 0) to a system with full reservations (when p = 1). Small values of p lead to higher link utilizations, but also to
larger values of η if we intend to maintain a bound on the number of dropped frames as given by equation (5). Therefore,
a full understanding of the choices of p requires consideration of a specific slack assignment rule and of multiple streams sharing this capacity. For the following experiments, we set up a trace-driven simulation of the actual streaming process (as described in Section 2.4), with arbitrary subsets of streams, using the maximum rule for the slack assignment.
Figure 5. Relation of ε and ε(p) to the smoothing period, for two illustrative streams (streams 5 and 75, p = 0.75); the mean and peak rates are shown for reference.
Figure 6 shows the relationship between the total allocated bandwidth and the utilization as p varies for two different
choices of the smoothing period, T = TGoP in part (a) and T = 5TGoP in part (b). Correspondingly, Figure 7 shows the
experimental losses (dots) and the Markov bound on the losses (continuous line) as obtained from equation (5). In many
cases the losses were zero and thus are not registered on the logarithmic scale of the plot.
The total bandwidth allocated increases with p, i.e., the closer the system gets to a full reservation system, the more bandwidth it claims. Note also that for large values of p the maximum utilization stays below the total allocated bandwidth. This supports the idea of using the p-percentile instead of the maximum. It is also worth noting that the differences between the two choices of smoothing periods (one versus five TGoP) are very small, as expected from the analysis underlying Figure 5.
As expected, increasing the value of the parameter p decreases the actual loss rate of the system. Notice that the
Markov bound for the loss rate follows the same trend, but it is always one or more orders of magnitude larger than the
actual loss rate. Also, unlike the results for the total allocated bandwidth, the choice of the smoothing period has a
significant impact on the loss rates: larger smoothing periods (and correspondingly larger buffers) cause a significant
reduction in the probability of incurring losses. In fact, for many of the s = 5 cases the losses were zero.
These two sets of graphs also underscore the trade-off between bandwidth efficiency and losses. The smaller the value
of p (and the closer to a best-effort system), the higher the bandwidth efficiency, but also the larger the losses. Therefore, the
optimization problem of minimizing the total bandwidth given QoS constraints on the delay and the losses is the problem
of finding the smallest p such that delay and loss constraints are satisfied.
3.4 Effect of the Number of Streams
The main objective of our two-tiered bandwidth allocation scheme is to pack as many streams as possible per server, while
satisfying the constraints imposed on both the maximum delay and loss rates. Therefore, in this last set of experiments we
consider the effect of the multiplexing level (m) for a fixed value of the parameter p. In Figure 8, for each value of m, ten
different random combinations of m streams give an equal number of datapoints for both the actual losses and the bound
on the losses. We conducted these experiments with a smoothing period T = TGoP and p = 0.75. In all cases the Markov
bound holds and the system scales to large numbers of streams. For large m, and particularly for H.264 streams, which
exhibit more variability, the losses may go slightly above 1%; this could be compensated for by provisioning a larger η.
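The bound checked in Figure 8 decomposes per stream by the linearity of expectation. The sketch below is consistent with the Markov inequality P[ΣYi > ηT] ≤ ΣE[Yi]/(ηT); the exact form of equation (5) is not reproduced here, so treat the function signature as illustrative.

```python
def markov_loss_bound(mean_excess_per_stream, eta, T):
    """Markov bound on the loss probability for a set of streams.

    P[sum Y_i > eta*T] <= sum E[Y_i] / (eta * T): each stream
    contributes an independent, additive term, which is what makes
    on-line admission control cheap -- admitting a stream only adds
    its E[Y_i] term to a running sum.
    """
    bound = sum(mean_excess_per_stream) / (eta * T)
    return min(1.0, bound)  # a probability bound is capped at 1
```

This linear separability is the property revisited in the conclusion: the bound for a collection of streams is just the sum of per-stream terms.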
Finally, it is interesting to compare our probabilistic two-tiered reservation approach to a deterministic approach in
terms of the total bandwidth utilization. As already mentioned, using an effective bandwidth envelope provides the tightest
deterministic method to allocate bandwidth to a VBR stream. Figures 9 and 10 show a comparison of the total bandwidth
used by our approach to that achievable using an effective bandwidth envelope. Figure 9 shows the actual bandwidth values
for increasing levels of concurrency. For each value of m there are 10 samples using different combinations of streams. In
the case of the H.264 traces, there were not enough streams to allow for large enough values of m, but the MPEG-4 traces
clearly indicate the tendency to achieve bandwidth savings as m increases. Figure 10 shows the tendency to reach close to
50% savings (for the parameters used, s = 1, p = 0.75) as m gets large enough.
Figure 6. Bandwidth allocated and utilization as function of p. (Panels (a) s=1 and (b) s=5 cover three stream sets, [1,2,6,7,10,13], [14,15,16,17,21,22], and [90,91,92,93,94,95], each showing η+Σε(p) together with the maximum, minimum, and mean utilization.)
Figure 7. Losses and the bound on the losses as function of p. (Panels (a) s=1 and (b) s=5, same three stream sets as Figure 6; dots show the experimental P[loss] and continuous lines show the bound P[ΣYi>ηT], on a logarithmic scale.)
Figure 8. Losses as function of the number of concurrent streams. (s=1, p=0.75; panel (a) H.264 traces, panel (b) MPEG-4 traces; both P[loss] and the bound P[ΣYi>ηT] are plotted against m on logarithmic axes.)
Figure 9. Comparison of effective bandwidth against total bandwidth as function of the number of concurrent streams. (s=1, p=0.75; panel (a) H.264 traces, panel (b) MPEG-4 traces; curves TotalBw and EffBw in bps ×10⁶.)
Figure 10. Ratio of total bandwidth to effective bandwidth as function of the number of concurrent streams. (s=1, p=0.75; panel (a) H.264 traces, panel (b) MPEG-4 traces.)
4. RELATED WORK
There is a long history of research in bandwidth management for video and multimedia applications, a significant portion
of which is devoted to the problem of bandwidth allocation and buffer sizing at the client side. For example, Sen et al.1
analyzed the problem of determining the minimum buffer size and the corresponding rate and startup latency required
by the client. In doing so, they show there is one such minimum, and present an O(N log N ) algorithm to find it. More
recently, Li et al.18 showed how to characterize the stream’s envelope using the corresponding parameters of a leaky bucket,
and they give a geometric approach to compute these parameters. In our experimental evaluation, we used the effective
bandwidth envelope as defined by Mokhtar et al.3 The advantage of this approach is that it gives a tighter bound than
leaky-bucket models, but it is also computationally more expensive.
Notice that none of the above-mentioned approaches considers the problem from the server side, and thus none of these
approaches is able to capitalize on the economy of scale resulting from the delivery of multiple (potentially large number of)
streams from a single server. Moreover, these models are deterministic, lossless and implicitly impose a hard-reservations
model.
When looking at the problem from the server side, an important related work is that by Park and van der Schaar,8
which considers the issue of introducing a brokering system whereby agents bid for the resources they need. In doing
so, the problem is modelled as a congestion game which gives the guarantee that there will always be a Nash equilibrium
and that the bidding process assures a notion of fairness among players. The principal drawback of this model is that it
relies on an adjustable-quality video encoding scheme. From a practical standpoint, on-line video-encoding schemes
do not scale, as they involve highly CPU-intensive processing. In practice, large content providers keep their
videos pre-encoded, possibly at a small discrete number of quality settings, and at delivery time the server’s only concern
is to deliver the stream under the desired QoS settings.
Su and Wu9 present another relevant multi-user streaming system that adjusts the encoding rate of each stream while
minimizing the mean-square-error and the mean-absolute-difference distortion metrics for each stream. This scheme makes
use of the Fine Granularity Scalability (FGS) and Fine Granularity Scalability Temporal (FGST) extensions of the MPEG-4
standard as mechanisms that allow real-time control of the stream’s rate. The system then divides the total capacity so that
each stream receives the same quality, thus providing for a notion of quality-fairness. As before, we argue that adaptive
encoding schemes do not scale well enough to serve a large number of concurrent streams.
Yet another server-side approach is that of Krunz and Tripathi,7, 19 in which a technique for allocating VBR streams
while minimizing the total bandwidth is proposed. The technique exploits the regularity of the streams’ pattern
and the maximum sizes of I, P, and B frames; using these maximum sizes, it is possible to define a periodic
envelope. By adjusting the temporal shift (phase) between the streams, it is possible to minimize the allocated bandwidth.
As with our approach, this technique also incorporates an easy admission control scheme for on-line allocation. The
principal drawback of this technique, however, is that it relies on maximum sizes to define the periodic envelope. Due
to the very large variations in frame sizes, this envelope is not as tight as that obtained by using the effective bandwidth
envelope, or other smoothing approaches, including ours.
There are several techniques4–6 that find a feasible schedule, i.e., an envelope curve, and make use of a reservation
mechanism along the path to adjust the reserved bandwidth as dictated by the schedule. For example, McManus and
Ross4 define periodic constant-rate intervals, whereas Knightly and Zhang5 generalize the model by allowing arbitrary
constant-rate intervals. Lai et al.6 present an allocation scheme for VBR streams that is monotonically decreasing, i.e.,
it dynamically adjusts the allocation of a stream, always going downwards. This stands in contrast to other schemes that
require either increasing or decreasing adjustments of the allocated bandwidth in the network, where the increasing
changes may not always be feasible. Feng and Rexford2 provide a survey of smoothing techniques and present a detailed
evaluation using traces of Motion-JPEG encoded videos.
5. CONCLUSION
In this paper, we presented a novel two-tiered approach to the allocation of streaming jobs to (virtual/cloud) servers,
such that the allocation is performed in an on-line fashion and satisfies the QoS parameters of each allocated
stream. To do so, our technique requires knowledge of three static parameters per stream, and needs to keep track of
three state variables per server. This makes our proposed approach scalable and very lightweight. As a bandwidth
management scheme, our two-tiered allocation strategy spans the full spectrum of design choices, ranging from a
best-effort model to exclusive per-stream reservations. Our experimental evaluation confirmed that having an allocation that
is shared across a number of streams allows the system to cope quite well with peak traffic demands. In addition, the
incurred overhead diminishes as the number of concurrent streams increases. Moreover, the incurred losses can be easily
kept within permissible bounds by appropriately provisioning the slack space.
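The on-line admission test implied by this summary can be illustrated as follows. The three per-server state variables and the Markov-style feasibility check are our plausible reading of the scheme; all names, and the simple capacity test, are illustrative assumptions rather than the paper's exact algorithm.

```python
class ServerState:
    """Per-server state for on-line admission control (sketch)."""

    def __init__(self, capacity, slack, loss_target):
        self.capacity = capacity        # total link bandwidth of the server
        self.slack = slack              # shared reservation (eta * T)
        self.loss_target = loss_target  # permissible value of the Markov bound
        self.reserved = 0.0             # running sum of per-stream epsilon(p)
        self.mean_excess = 0.0          # running sum of E[Y_i] terms

    def try_admit(self, reservation, mean_excess):
        """Admit a stream iff both the capacity and the loss bound hold.

        The per-stream terms are additive, so the test is O(1) per
        server and O(n) across n servers.
        """
        if self.reserved + reservation + self.slack > self.capacity:
            return False
        if (self.mean_excess + mean_excess) / self.slack > self.loss_target:
            return False
        self.reserved += reservation
        self.mean_excess += mean_excess
        return True
```

Because each stream contributes only additive terms, rejecting a stream leaves the server state untouched and the test can be repeated against the next server.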
The Markov inequality we used in this paper provides an upper bound on losses in such a way that the bound for a
collection of streams is linearly separable. However, this bound is loose. Indeed, our experimental evaluation revealed
that losses are typically below this bound by an order of magnitude or more. This opens the possibility of experimenting
with many shared slack management techniques, one of which (the maximum rule) was explored in depth during our
experimental evaluation. Our current and future work involves the investigation of alternative, and potentially more efficient
techniques.
ACKNOWLEDGMENTS
This work is supported in part by a number of NSF awards, including CISE/CSR Award #0720604, ENG/EFRI Award
#0735974, CISE/CNS Award #0524477, CNS/NeTS Award #0520166, CNS/ITR Award #0205294, and CISE/EIA RI
Award #0202067. Jorge Londoño is supported in part by the Universidad Pontificia Bolivariana and COLCIENCIAS–
Instituto Colombiano para el Desarrollo de la Ciencia y la Tecnología “Francisco José de Caldas”.
REFERENCES
[1] Sen, S., Dey, J., Kurose, J., Stankovic, J., and Towsley, D., “Streaming CBR transmission of VBR stored video,” in
[Proc. SPIE Symposium on Voice Video and Data Communications], (1997).
[2] Feng, W. and Rexford, J., “A comparison of bandwidth smoothing techniques for the transmission of prerecorded
compressed video,” in [Proceedings of INFOCOM ’97 ], 00, 58, IEEE Computer Society, Los Alamitos, CA, USA
(1997).
[3] Mokhtar, H. M., Pereira, R., and Merabti, M., “An effective bandwidth model for deterministic QoS guarantees of
VBR traffic,” in [Proceedings of the Eighth IEEE International Symposium on Computers and Communications ISCC
’03], 1318, IEEE Computer Society, Washington, DC, USA (2003).
[4] McManus, J. and Ross, K., “Video on demand over ATM: constant-rate transmission and transport,” in [Proceedings
of the Fifteenth Annual Joint Conference of the IEEE Computer Societies. Networking the Next Generation. IEEE
INFOCOM ’96.], 3, 1357–1362 (March 1996).
[5] Knightly, E. W. and Zhang, H., “D-BIND: an accurate traffic model for providing QoS guarantees to VBR traffic,”
IEEE/ACM Trans. Netw. 5, 219–231 (April 1997).
[6] Lai, H., Lee, J. Y., and Chen, L., “A monotonic-decreasing rate scheduler for variable-bit-rate video streaming,” IEEE
Transactions on Circuits and Systems for Video Technology 15 (February 2005).
[7] Krunz, M., Apostolopoulos, G., and Tripathi, S. K., “Bandwidth allocation and admission control schemes for the
distribution of MPEG streams in VOD systems,” International Journal of Parallel and Distributed Systems and
Networks 3, 108–121 (April 2000).
[8] Park, H. and van der Schaar, M., “Congestion game modeling for brokerage based multimedia resource management,”
Packet Video 2007 , 18–25 (November 2007).
[9] Su, G. and Wu, M., “Efficient bandwidth resource allocation for low-delay multiuser video streaming,” IEEE Trans-
actions on Circuits and Systems for Video Technology 15, 1124–1137 (September 2005).
[10] Seeling, P., Reisslein, M., and Kulapala, B., “Network performance evaluation using frame size and quality traces of
single-layer and two-layer video: A tutorial,” IEEE Communications Surveys and Tutorials 6, 58–78 (2004).
[11] Garrett, M. W. and Willinger, W., “Analysis, modeling and generation of self-similar VBR video traffic,” SIGCOMM
Computer Communications Review 24, 269–280 (October 1994).
[12] Krunz, M. and Tripathi, S. K., “On the characterization of VBR MPEG streams,” in [Proceedings of the 1997 ACM
SIGMETRICS international conference on Measurement and modeling of computer systems SIGMETRICS ’97 ], 192–
202, ACM, New York, NY, USA (1997).
[13] Fitzek, F. H. P. and Reisslein, M., “MPEG-4 and H.263 video traces for network performance evaluation,” IEEE
Network , 40–54 (2001).
[14] Fitzek, F. H. P. and Reisslein, M., “MPEG-4 and H.263 video traces for network performance evaluation,” tech. rep.,
Technical University Berlin (2000).
[15] Seeling, P., Fitzek, F. H., and Reisslein, M., [Video Traces for Network Performance Evaluation], Springer (2007).
[16] “MPEG-4 and H.263 video traces for network performance evaluation.” http://www.tkn.tu-berlin.de/research/trace/trace.html (2008).
[17] “Video traces for network performance evaluation.” http://trace.eas.asu.edu/tracemain.html.
[18] Li, P., Lin, W., Rahardja, S., Lin, X., Yang, X., and Li, Z., “Geometrically determining the leaky bucket parameters
for video streaming over constant bit-rate channels,” in [Proc. of ICASSP04 ], 20, 193–204 (February 2005).
[19] Krunz, M. and Tripathi, S. K., “Exploiting the temporal structure of MPEG video for the reduction of bandwidth
requirements,” in [Proceedings of INFOCOM ’97 ], 67, IEEE Computer Society, Washington, DC, USA (April 1997).