LNCS 2012
Content Delivery Networks (CDNs) are widely used to deliver web content on today's Internet. Live streaming, which has gained tremendous popularity, is also increasingly delivered by CDNs. Compared to conventional static or dynamic web content, live streaming exhibits unique characteristics that pose challenges to the underlying CDN infrastructure. Unlike traditional web-object fetching, where Edge Servers can cache content and thus typically handle delivery alone, live streaming requires real-time, full CDN streaming paths that span Ingest Servers, Origin Servers, and Edge Servers.
DNS is the standard practice for enabling dynamic assignment of servers. GeoDNS, a specialized DNS system, provides DNS resolution that takes into account the geographical locations of end-users and CDN servers. Although GeoDNS effectively redirects users to the nearest CDN Edge Servers, it may fail to select the optimal Origin Server for relaying a live stream to Edge Servers, owing to the unique characteristics of live streaming. In this work, we examine the requirements of delivering live streaming over a CDN and propose an advanced design for selecting optimal Origin Streaming Servers that reduces network transit cost and improves viewers' experience. We further propose a live-streaming-specific GeoDNS design for selecting optimal Origin Servers to serve Edge Servers.
Optimized Selection of Streaming Servers with GeoDNS for CDN Delivered Live S... (Zhenyun Zhuang)
This document proposes a new DNS design called Sticky-DNS to optimize server selection for CDN-delivered live streaming. Sticky-DNS aims to minimize CDN transit costs while maintaining good viewer experience. Unlike traditional GeoDNS, which selects the origin server nearest to an edge server, Sticky-DNS considers the full ingest-origin and origin-edge paths to potentially select a non-nearest origin server that results in lower overall transit costs. It does this by maintaining cost values for all server pairs and selecting origins to serve edges in a way that minimizes total path costs. For less popular streams, origins are chosen based on end-to-end path lengths, while for popular streams Sticky-DNS adapts to encourage reuse.
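The selection rule summarized above can be sketched as follows. This is an illustrative sketch only; the names (`pick_origin`, `cost`) and the simple additive cost model are assumptions, not the paper's actual implementation.

```python
def pick_origin(ingest, edge, origins, cost, active=None):
    """Pick the origin minimizing the full ingest->origin->edge path cost.

    cost:   dict mapping (upstream, downstream) server pairs to transit cost
    active: origins already relaying this stream; reusing one adds no new
            ingest->origin transit (the reuse case for popular streams).
    """
    active = active or set()

    def path_cost(origin):
        ingest_leg = 0 if origin in active else cost[(ingest, origin)]
        return ingest_leg + cost[(origin, edge)]

    return min(origins, key=path_cost)

# Toy topology: the origin nearest to the edge ("o1") loses once the
# ingest->origin leg is counted.
cost = {("ing", "o1"): 10, ("ing", "o2"): 2,
        ("o1", "edge"): 1, ("o2", "edge"): 3}
print(pick_origin("ing", "edge", ["o1", "o2"], cost))  # o2 (2+3 < 10+1)
```

Passing an `active` set models the sticky behavior: an origin already carrying the stream incurs no extra ingest-leg cost, so it tends to be reused.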
Optimizing CDN Infrastructure for Live Streaming with Constrained Server Chai... (Zhenyun Zhuang)
This document proposes a method called Constrained Server Chaining (CSC) to optimize CDN infrastructure for live streaming. CSC allows CDN streaming servers to dynamically select upstream servers to pull live streams from, rather than only pulling from fixed ingest servers. This allows streaming servers to form constrained chains to minimize total transit costs for the CDN provider while ensuring end user experience is not compromised by capping delivery path lengths. The document outlines the problem definition, design overview, and software architecture of CSC and provides an example to motivate how CSC can reduce costs compared to traditional layered CDN structures.
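The chaining idea with a capped path length can be sketched under a simple greedy rule. The names (`build_chains`, `MAX_HOPS`) and the join-order assumption are illustrative, not the paper's actual algorithm.

```python
MAX_HOPS = 3  # cap on delivery-path length so viewer experience is not hurt

def build_chains(ingest, servers, cost):
    """Each server pulls from the cheapest upstream already carrying the
    stream, unless chaining would exceed MAX_HOPS."""
    hops = {ingest: 0}   # path length from the ingest server
    upstream = {}        # server -> chosen upstream
    for s in servers:    # e.g., in the order servers join the stream
        candidates = [u for u in hops if hops[u] + 1 <= MAX_HOPS]
        best = min(candidates, key=lambda u: cost[(u, s)])
        upstream[s] = best
        hops[s] = hops[best] + 1
    return upstream

# "s2" chains off "s1" (cost 1) instead of pulling from ingest (cost 9):
cost = {("ing", "s1"): 5, ("ing", "s2"): 9, ("s1", "s2"): 1}
print(build_chains("ing", ["s1", "s2"], cost))  # {'s1': 'ing', 's2': 's1'}
```

The `MAX_HOPS` cap is what makes the chaining "constrained": without it, greedy cost minimization could build arbitrarily long chains that hurt end-user latency.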
Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy... (ijwmn)
As various architectures and protocols have been implemented, a true VoD system is in great demand among global users, and traditional VoD systems do not meet that demand. The major problem in a traditional VoD system is that the video stream is duplicated and streamed separately to different clients, which consumes more server bandwidth, leaves client uplink bandwidth unused, and degrades system performance. Our objective in this paper is to make one server stream sufficient to serve many clients without duplicating it. We therefore propose a protocol and algorithm that chain the proxy servers and subscribed clients and utilize clients' uplink bandwidth so that the load on the server is reduced. We also show a lower client rejection ratio and better utilization of buffer and bandwidth for the entire VoD system.
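The chaining idea in the summary above can be sketched minimally: each new client is fed from an earlier node's spare uplink, so the server emits each stream only once. The names (`Node`, `admit`) and the slot-based capacity model are assumptions for illustration.

```python
class Node:
    def __init__(self, name, uplink_slots):
        self.name = name
        self.uplink_slots = uplink_slots  # downstream clients it can feed

def admit(server, clients, new_name, client_uplink=1):
    """Chain the new client to the first node with a free uplink slot;
    return the feeder's name, or None if the client must be rejected."""
    for node in [server] + clients:
        if node.uplink_slots > 0:
            node.uplink_slots -= 1
            clients.append(Node(new_name, client_uplink))
            return node.name
    return None  # rejection: no spare uplink anywhere in the chain

srv, clients = Node("server", 1), []
print(admit(srv, clients, "c1"), admit(srv, clients, "c2"))  # server c1
```

With one server slot and one uplink slot per client, each admitted client extends the chain, so server load stays constant as the audience grows.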
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it continues to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
Optimal Streaming Protocol for VoD Using Clients' Residual Bandwidth (IDES Editor)
A true VoD system is in tremendous demand in the market, yet existing VoD systems do not cater to that demand. The major problem is that serving clients with the expected QoS is difficult. In this paper, we propose a protocol and algorithm that chain the proxy servers and subscribed clients. Our objective is to send one server stream and serve it to N asynchronous clients. Server bandwidth is scarce while client uplink bandwidth is underutilized; the protocol therefore uses clients' residual bandwidth so that the load on the server is reduced. We show optimal utilization of buffer and bandwidth for the entire VoD system and a lower client rejection ratio.
This document provides an overview of managing message transport in Exchange Server. It covers configuring hub transport servers, accepted and remote domains, SMTP connectors, and back pressure. The lab exercises troubleshoot message transport issues and configure routing to ensure all internet messages flow through the main Vancouver site.
Survey paper on Virtualized cloud based IPTV System (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Integrated services and RSVP - Protocol (Pradnya Saval)
- MPLS can be used to create virtual private networks (VPNs) that provide wide-area connectivity between sites of a large organization through dedicated label switched paths. This gives the appearance of a dedicated network while transmitting through a public or shared MPLS network.
- The integrated services model was developed by IETF to provide different levels of quality of service in the Internet. It uses resource reservation, packet classification, and scheduling to ensure applications receive their requested QoS.
- The Resource Reservation Protocol (RSVP) is used by the integrated services model to signal resource requirements and set up flows with a requested QoS across a network. RSVP uses soft state and receiver-initiated reservations.
Inlet Technologies - Powering Smooth Streaming (Sematron UK Ltd)
The white paper discusses Microsoft's Smooth Streaming technology which allows for adaptive bitrate streaming of video over HTTP. It works by encoding video into small chunks at multiple bitrates, allowing clients to switch between chunks to maintain high quality playback based on bandwidth. Inlet provides products that help content creators generate and deliver Smooth Streaming video assets for live and on-demand streaming.
Enrich multi-channel P2P VoD streaming based on dynamic replication strategy (IJAAS Team)
Peer-to-Peer Video-on-Demand (VoD) is a favorable solution that provides thousands of videos to millions of users with fully interactive video watching. Most of the commercial P2P streaming systems, such as PPLive, PPStream, and UUSee, have introduced multi-channel P2P VoD that allows a user to view more than one channel at a time. Present multi-channel P2P VoD systems deliver video at a low streaming rate due to channel resource imbalance and channel churn. To increase streaming capacity, this paper presents an effective helper-based resource balancing scheme that actively recognizes the supply-and-demand imbalance across channels; peers in a surplus channel lend their unused bandwidth to peers in a shortage channel, which minimizes server bandwidth consumption. To provide the desired replication ratio for optimal caching, it develops a dynamic replication strategy that optimally tunes the number of replicas based on dynamic popularity in a distributed, dynamic manner. The work forecasts varying popularity over time using the Auto-Regressive Integrated Moving Average (ARIMA) model, an effective time-series forecasting technique suited to dynamic environments. Experimental evaluation shows that the proposed dynamic replication strategy achieves high streaming capacity under reduced server workload compared to existing replication algorithms.
Impact of Video Encoding Parameters on Dynamic Video Transcoding (Videoguy)
This document summarizes research on the impact of different video encoding parameters on dynamic video transcoding. It describes how a transcoding proxy server can dynamically change encoding parameters like frame size, quantization scale (Q-scale), and color depth in response to varying client devices, network bandwidth, and content demand. Experiments were conducted to measure how these encoding parameters affect network bandwidth usage, CPU load, and energy consumption on the transcoding server. The results provide insight into designing algorithms for dynamic transcoding control.
Fast Near-Optimal Delivery of Live Streams in CDN (Gwendal Simon)
CDNs are confronted with a sharp increase in traffic related to live video (channel) streaming. Previous theoretical models that deal with streaming capacity problems do not capture the emerging reality faced by today's CDNs, in particular rate-adaptive streaming. In this presentation, we identify a new, discretized streaming model for live video delivery in CDNs. For this model we formulate a general optimization problem. We then study a practical scenario that occurs in real CDNs and present a fast, easy-to-implement, near-optimal algorithm whose approximation ratios become negligible for large networks.
More details in:
http://enstb.org/~gsimon/Resources/algotel13.pdf
http://enstb.org/~gsimon/Resources/icccn13.pdf
Fast Distribution of Replicated Content to Multi-Homed Clients (IDES Editor)
Clients nowadays can potentially access more than one communication network due to the availability of a wide variety of access technologies. At the same time, service replication has become a common approach in overlay networks to provide high availability of data and better QoS. In this paper, we consider such a multi-homed client seeking a replicated service in an overlay network (e.g., CDN, peer-to-peer). Our aim is to improve content distribution by proposing a new model applied at the application level in a fully distributed way. Basically, our model determines the best mirror server reachable through each client network interface based on an application utility function, then downloads the requested content from the chosen best servers simultaneously through their associated interfaces. Each best server delivers a specific estimated range of bytes (i.e., a content chunk) to an independent TCP socket opened at the client side, and the chunks are finally aggregated at the application level. Our real experiments show that our model considerably improves the QoS (e.g., content transfer time) perceived by the client compared to traditional content distribution techniques.
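The parallel range-download idea can be sketched as follows: one byte range per (interface, best-server) pair, fetched concurrently and reassembled in order. `fetch` here is a local stand-in for an HTTP Range request; all names are illustrative, not the paper's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def split_ranges(size, n):
    """Split [0, size) into n contiguous byte ranges."""
    step = -(-size // n)  # ceiling division
    return [(i, min(i + step, size)) for i in range(0, size, step)]

def parallel_download(size, fetchers):
    """fetchers: one fetch(start, end) callable per interface."""
    ranges = split_ranges(size, len(fetchers))
    with ThreadPoolExecutor(max_workers=len(fetchers)) as pool:
        chunks = pool.map(lambda fr: fr[0](*fr[1]), zip(fetchers, ranges))
        return b"".join(chunks)  # aggregate at the application level

# Two simulated mirror servers holding the same replicated content:
content = bytes(range(100))
fetch = lambda start, end: content[start:end]
assert parallel_download(len(content), [fetch, fetch]) == content
```

Because each range goes to an independent connection, a slow interface only delays its own chunk, and the client-side join recovers the original byte order.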
Supporting Real-time Traffic: Preparing Your IP Network for ... (Videoguy)
This document provides guidance on preparing an IP network to support real-time video conferencing traffic. It discusses how real-time traffic differs from typical data traffic in terms of bandwidth utilization and sensitivity to delay and packet loss. It recommends implementing Quality of Service (QoS) using Differentiated Services across the network to prioritize real-time traffic. The document also covers classifying and managing bandwidth demand, and testing and monitoring the network to support video conferencing.
This is a study of a research paper investigating the problem of seamless VM migrations in the data center network (DCN). Leveraging the benefit of decoupling a service from its physical location in the emerging technique of Named Data Networking (NDN), the authors propose a named service framework to support seamless VM migrations. Compared with other approaches, their approach has the following advantages: 1) VM migration is interruption-free; 2) the overhead of maintaining routing information is lower than that of classic NDN; 3) the routing protocol is robust to both link and node failures; 4) the framework inherently supports a distributed load balancing algorithm, via which requests are distributed to VMs evenly. The analysis and simulation results verify these benefits.
This document discusses using the IEEE 802.11e standard to ensure quality of service for multimedia streaming over wireless networks. It proposes a Hop-Based Priority technique that assigns higher priority to packets after each hop of transmission to minimize contention between packets. The technique uses 802.11e's Enhanced Distributed Channel Access to control channel access based on packet priority. Simulation results show the Hop-Based Priority technique can reduce delay and improve throughput for multimedia streaming over wireless mesh networks.
A3: application-aware acceleration for wireless data networks (Zhenyun Zhuang)
This document discusses application-aware acceleration (A3) for improving application performance over wireless networks. It presents results showing that while enhanced transport protocols improve performance for FTP, they provide little benefit for other popular applications like CIFS, SMTP, and HTTP. This is because the behavior of these applications, designed for reliable LANs, negatively impacts their performance over lossy wireless links. The document proposes A3 as a middleware solution that offsets these behavioral problems through application-specific design principles, while remaining transparent to applications.
AOTO: Adaptive overlay topology optimization in unstructured P2P systems (Zhenyun Zhuang)
IEEE GLOBECOM 2003
Peer-to-Peer (P2P) systems are self-organized and decentralized. However, the mechanism of a peer randomly joining and leaving a P2P network causes topology mismatching between the P2P logical overlay network and the physical underlying network. The topology mismatching problem brings great stress on the Internet infrastructure and seriously limits the performance gain from various search or routing techniques. We propose the Adaptive Overlay Topology Optimization (AOTO) technique, an algorithm that builds an overlay multicast tree among each source node and its direct logical neighbors so as to alleviate the mismatching problem by choosing closer nodes as logical neighbors, while providing a larger query coverage range. AOTO is scalable and completely distributed in the sense that it does not require global knowledge of the whole overlay network when each node is optimizing the organization of its logical neighbors. Simulations show that AOTO can effectively solve the mismatching problem and reduce more than 55% of the traffic generated by the P2P system itself.
On the Impact of Mobile Hosts in Peer-to-Peer Data Networks (Zhenyun Zhuang)
This document analyzes the performance issues faced by mobile hosts participating in peer-to-peer (P2P) data networks like BitTorrent. It finds that the design of P2P networks is incompatible with the characteristics of wireless networks, causing poor performance for mobile users. It then presents a solution called wireless P2P (wP2P) that addresses these issues through techniques only applied on mobile hosts, improving performance for both mobile and fixed peers. An evaluation shows wP2P provides significant gains over existing P2P applications on mobile networks.
PAIDS: A Proximity-Assisted Intrusion Detection System for Unidentified Worms (Zhenyun Zhuang)
This document proposes a new intrusion detection system called PAIDS (Proximity-Assisted Intrusion Detection System) to identify unknown worms. Existing signature-based and anomaly-based detection systems are ineffective against new worms that spread quickly. PAIDS takes advantage of the clustered spread of worms among nearby hosts, especially in the early stages, rather than relying on signatures. It aims to detect worm outbreaks when they first begin spreading to limit their propagation. Preliminary simulations show PAIDS has a high detection rate and low false positive rate.
Hazard avoidance in wireless sensor and actor networks (Zhenyun Zhuang)
This document discusses hazards that can occur in wireless sensor and actor networks due to out-of-order execution of queries and commands. It identifies three types of hazards:
1) Command-after-command (CAC) hazard occurs when the order of two sequential commands is reversed.
2) Query-after-command (QAC) hazard occurs when a query is executed before the corresponding command.
3) Command-after-query (CAQ) hazard is the reverse of QAC, where a command is executed before its preceding query.
The document uses an example of a fire detection and suppression system to illustrate these hazards and their undesirable consequences. It also discusses challenges in addressing hazards such as parallel
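One toy way to avoid the CAC, QAC, and CAQ hazards above is to stamp each command or query with a sequence number and have the actor execute strictly in order, buffering early arrivals. This ordering scheme is an illustration, not the protocol proposed in the document.

```python
import heapq

class OrderedActor:
    def __init__(self):
        self.next_seq = 0
        self.pending = []  # min-heap of (seq, op) that arrived early
        self.log = []      # operations actually executed, in order

    def receive(self, seq, op):
        heapq.heappush(self.pending, (seq, op))
        # Drain every operation that is now next in sequence.
        while self.pending and self.pending[0][0] == self.next_seq:
            _, ready = heapq.heappop(self.pending)
            self.log.append(ready)  # "execute" the command or query
            self.next_seq += 1

# The query arrives before the command it depends on, yet runs after it:
actor = OrderedActor()
for seq, op in [(1, "query:smoke-level"), (0, "cmd:start-sprinkler")]:
    actor.receive(seq, op)
print(actor.log)  # ['cmd:start-sprinkler', 'query:smoke-level']
```

Buffering out-of-order arrivals removes all three hazard types at once, since no operation can execute before its predecessor in issue order.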
Client-side web acceleration for low-bandwidth hosts (Zhenyun Zhuang)
This document proposes client-side optimizations to reduce web page load times for users on low-bandwidth networks. It analyzes problems with how current web browsers fetch entire pages greedily without prioritizing visible content. This wastes bandwidth and increases load times. The document proposes three browser-side mechanisms: 1) prioritizing the fetching of objects visible on the initial screen over other objects, 2) reordering object fetching to better utilize bandwidth, and 3) improving connection management. Simulations show these techniques can significantly reduce user-perceived response times compared to current browsers for low-bandwidth conditions.
Ensuring High-performance of Mission-critical Java Applications in Multi-tena... (Zhenyun Zhuang)
The document discusses problems with ensuring high performance of mission-critical Java applications in multi-tenant cloud environments. It identifies issues caused by resource sharing between applications on the same platform, such as memory pressure triggering page swapping and direct reclaiming, which can severely degrade Java application performance through increased garbage collection pauses and reduced throughput. The authors investigate two scenarios in a production environment and determine that transparent huge pages, memory pressure from other applications, and interactions between the JVM and Linux memory management are key factors impacting Java application performance in multi-tenant cloud setups.
Inlet Technologies - Powering Smooth StreamingSematron UK Ltd
The white paper discusses Microsoft's Smooth Streaming technology which allows for adaptive bitrate streaming of video over HTTP. It works by encoding video into small chunks at multiple bitrates, allowing clients to switch between chunks to maintain high quality playback based on bandwidth. Inlet provides products that help content creators generate and deliver Smooth Streaming video assets for live and on-demand streaming.
Enrich multi-channel P2P VoD streaming based on dynamic replication strategyIJAAS Team
Peer-to-Peer Video-on-Demand (VoD) is a favorable solution which compromises thousands of videos to millions of users with completeinteractive video watching stream. Most of the profitable P2P streaming groupsPPLive, PPStream and UUSee have announced a multichannel P2P VoD system that approvals user to view extra one channel at a time. The present multiple channel P2P VoD system resonant a video at a low streaming rate due to the channel resource inequity and channel churn. In order to growth the streaming capacity, this paper highlights completely different effective helpers created resource balancing scheme that actively recognizes the supply-and-demand inequity in multiple channels. Moreover, peers in an extra channel help its unused bandwidth resources to peers in a shortage channel that minimizes the server bandwidth consumption. To provide a desired replication ratio for optimal caching, it develops a dynamic replication strategy that optimally tunes the number of replicas based on dynamic popularity in a distributed and dynamic routine. This work accurately forecasts the varying popularity over time using Auto-Regressive Integrated Moving Average (ARIMA) model, an effective time-series forecasting technique that supports dynamic environment. Experimental assessment displays that the offered dynamic replication strategy which should achieves high streaming capacity under reduced server workload when associated to existing replication algorithms.
Impact of Video Encoding Parameters on Dynamic Video TranscodingVideoguy
This document summarizes research on the impact of different video encoding parameters on dynamic video transcoding. It describes how a transcoding proxy server can dynamically change encoding parameters like frame size, quantization scale (Q-scale), and color depth in response to varying client devices, network bandwidth, and content demand. Experiments were conducted to measure how these encoding parameters affect network bandwidth usage, CPU load, and energy consumption on the transcoding server. The results provide insight into designing algorithms for dynamic transcoding control.
Fast Near-Optimal Delivery of Live Streams in CDNGwendal Simon
CDNs are confronted with a sharp increase in traffic related to live video (channel) streaming. Previous theoretical models that deal with streaming
capacity problems do not capture the emerging reality faced by today’s CDNs, in particular rate-adaptive streaming. In this presentation, we identify a new, discretized streaming model for live video delivery in CDNs. For this model we formulate a general optimization problem. Then we study a practical scenario that occurs in real CDNs. We present a fast, easy to implement, and near-optimal algorithm with performance approximation ratios that are negligible for large network.
More details in:
http://enstb.org/~gsimon/Resources/algotel13.pdf
http://enstb.org/~gsimon/Resources/icccn13.pdf
Fast Distribution of Replicated Content to Multi- Homed ClientsIDES Editor
Clients can potentially have access to more than
one communication network nowadays due to the availability
of a wide variety of access technologies. On the other hand,
service replication has become a trivial approach in overlay
networks to provide a high availability of data and better QoS.
In this paper, we consider such a multi-homed client seeking
a replicated service in overlay network (e.g., CDN, peer-topeer).
Our aim is to improve the content distribution by
proposing a new model for being applied at the applicationlevel
and in a fully distributed way. Basically, our model
proposes to determine the best mirror server that could be
reached through each client’s network interface based on
application utility function. Then, it consists of downloading
the requested content from the determined best servers
simultaneously through their associated interfaces. Each best
server should deliver a specific estimated range of bytes (i.e.,
content chunk) to an independent TCP socket opened at the
client side for being finally aggregated at the applicationlevel.
Our real experiments show that our model is able to
considerably improve the QoS (e.g., content transfer time)
perceived by the client comparing to the traditional content
distribution techniques.
Supporting Real-time Traffic: Preparing Your IP Network for ...Videoguy
This document provides guidance on preparing an IP network to support real-time video conferencing traffic. It discusses how real-time traffic differs from typical data traffic in terms of bandwidth utilization and sensitivity to delay and packet loss. It recommends implementing Quality of Service (QoS) using Differentiated Services across the network to prioritize real-time traffic. The document also covers classifying and managing bandwidth demand, and testing and monitoring the network to support video conferencing.
A study of reserach paper they investigate the problem of seamless VM migrations in the DCN. Leveraging the benefit of decoupling a service from its physical location in the emerging technique named data networking; we propose a named service framework to support seamless VM migrations. In comparison with other approaches, their approach has following advantages: 1) the VM migration is interruption free; 2) the overhead to maintain the routing information is less than that caused by classic NDN; 3) the routing protocol is robust to both link and node failures; 4) the framework inherently supports the implementation of a distributed load balancing algorithm, via which requests are distributed to VMs in balance. The analysis and simulation results verify these benefits.
This document discusses using the IEEE 802.11e standard to ensure quality of service for multimedia streaming over wireless networks. It proposes a Hop-Based Priority technique that assigns higher priority to packets after each hop of transmission to minimize contention between packets. The technique uses 802.11e's Enhanced Distributed Channel Access to control channel access based on packet priority. Simulation results show the Hop-Based Priority technique can reduce delay and improve throughput for multimedia streaming over wireless mesh networks.
A3: application-aware acceleration for wireless data networksZhenyun Zhuang
This document discusses application-aware acceleration (A3) for improving application performance over wireless networks. It presents results showing that while enhanced transport protocols improve performance for FTP, they provide little benefit for other popular applications like CIFS, SMTP, and HTTP. This is because the behavior of these applications, designed for reliable LANs, negatively impacts their performance over lossy wireless links. The document proposes A3 as a middleware solution that offsets these behavioral problems through application-specific design principles, while remaining transparent to applications.
AOTO: Adaptive overlay topology optimization in unstructured P2P systemsZhenyun Zhuang
IEEE GLOBECOM 2003
Peer-to-Peer (P2P) systems are self-organized and decentralized. However, the mechanism of a peer randomly joining and leaving a P2P network causes topology mismatching between the P2P logical overlay network and the physical underlying network. The topology mismatching problem puts great stress on the Internet infrastructure and seriously limits the performance gain from various search or routing techniques. We propose the Adaptive Overlay Topology Optimization (AOTO) technique, an algorithm that builds an overlay multicast tree among each source node and its direct logical neighbors so as to alleviate the mismatching problem by choosing closer nodes as logical neighbors, while providing a larger query coverage range. AOTO is scalable and completely distributed in the sense that it does not require global knowledge of the whole overlay network when each node is optimizing the organization of its logical neighbors. Simulations show that AOTO can effectively solve the mismatching problem and reduce more than 55% of the traffic generated by the P2P system itself.
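The neighbor-optimization step can be sketched as follows: a node measures distances to its direct logical neighbors, builds a spanning tree over the small group (source plus neighbors) with Prim's algorithm, and keeps only tree edges as logical connections, so closer nodes become neighbors. This is a minimal sketch of the idea, not AOTO itself; the node names and distance matrix are assumptions.

```python
def prim_mst(nodes, dist):
    """Return spanning-tree edges over `nodes` given a symmetric distance dict."""
    in_tree = {nodes[0]}
    edges = []
    while len(in_tree) < len(nodes):
        # Pick the cheapest edge crossing the cut between tree and non-tree nodes.
        u, v = min(
            ((a, b) for a in in_tree for b in nodes if b not in in_tree),
            key=lambda e: dist[e],
        )
        edges.append((u, v))
        in_tree.add(v)
    return edges

if __name__ == "__main__":
    nodes = ["S", "A", "B", "C"]  # source node S and its logical neighbors
    d = {}
    for (x, y, w) in [("S", "A", 1), ("S", "B", 5), ("S", "C", 4),
                      ("A", "B", 2), ("A", "C", 6), ("B", "C", 3)]:
        d[(x, y)] = d[(y, x)] = w
    print(prim_mst(nodes, d))  # S keeps only A directly; B and C are reached via the tree
```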
On the Impact of Mobile Hosts in Peer-to-Peer Data NetworksZhenyun Zhuang
This document analyzes the performance issues faced by mobile hosts participating in peer-to-peer (P2P) data networks like BitTorrent. It finds that the design of P2P networks is incompatible with the characteristics of wireless networks, causing poor performance for mobile users. It then presents a solution called wireless P2P (wP2P) that addresses these issues through techniques only applied on mobile hosts, improving performance for both mobile and fixed peers. An evaluation shows wP2P provides significant gains over existing P2P applications on mobile networks.
PAIDS: A Proximity-Assisted Intrusion Detection System for Unidentified WormsZhenyun Zhuang
This document proposes a new intrusion detection system called PAIDS (Proximity-Assisted Intrusion Detection System) to identify unknown worms. Existing signature-based and anomaly-based detection systems are ineffective against new worms that spread quickly. PAIDS takes advantage of the clustered spread of worms among nearby hosts, especially in the early stages, rather than relying on signatures. It aims to detect worm outbreaks when they first begin spreading to limit their propagation. Preliminary simulations show PAIDS has a high detection rate and low false positive rate.
Hazard avoidance in wireless sensor and actor networksZhenyun Zhuang
This document discusses hazards that can occur in wireless sensor and actor networks due to out-of-order execution of queries and commands. It identifies three types of hazards:
1) Command-after-command (CAC) hazard occurs when the order of two sequential commands is reversed.
2) Query-after-command (QAC) hazard occurs when a query is executed before the corresponding command.
3) Command-after-query (CAQ) hazard is the reverse of QAC, where a command is executed before its preceding query.
The document uses an example of a fire detection and suppression system to illustrate these hazards and their undesirable consequences. It also discusses challenges in addressing hazards such as parallel
Client-side web acceleration for low-bandwidth hostsZhenyun Zhuang
This document proposes client-side optimizations to reduce web page load times for users on low-bandwidth networks. It analyzes problems with how current web browsers fetch entire pages greedily without prioritizing visible content. This wastes bandwidth and increases load times. The document proposes three browser-side mechanisms: 1) prioritizing the fetching of objects visible on the initial screen over other objects, 2) reordering object fetching to better utilize bandwidth, and 3) improving connection management. Simulations show these techniques can significantly reduce user-perceived response times compared to current browsers for low-bandwidth conditions.
Ensuring High-performance of Mission-critical Java Applications in Multi-tena...Zhenyun Zhuang
The document discusses problems with ensuring high performance of mission-critical Java applications in multi-tenant cloud environments. It identifies issues caused by resource sharing between applications on the same platform, such as memory pressure triggering page swapping and direct reclaiming, which can severely degrade Java application performance through increased garbage collection pauses and reduced throughput. The authors investigate two scenarios in a production environment and determine that transparent huge pages, memory pressure from other applications, and interactions between the JVM and Linux memory management are key factors impacting Java application performance in multi-tenant cloud setups.
Programmatic Right Here, Right Now ( English Version )Xavier Garrido
Presentation given at the II Programmatic Forum on March 1st, 2016 in Madrid.
In this presentation, I tried to bring programmatic advertising closer to the audience in a simple way, emphasizing that we are facing a paradigm shift in the advertising industry.
Legal aspects of religion in the workplaceRonald Brown
1. Managing religion in the workplace is challenging as people have strong spiritual beliefs but companies must also avoid religious discrimination. While some firms welcome religious expressions, others keep religion out of the workplace.
2. A survey found 20% of workers reported experiencing religious prejudice or knew of others facing discriminatory treatment. This has led to lawsuits against companies over religious discrimination or failure to accommodate religious practices.
3. Employers must strive to balance religious expression with inclusion of all beliefs to avoid lawsuits. They should provide religious accommodations when possible without causing undue hardship. Anti-harassment policies and training can help prevent problems related to religion in the workplace.
The document discusses definitions of leadership and what makes a good leader. It provides several definitions:
1) "Leadership [is] creating the conditions in organizational systems so that people can do their best work" and "Leaders define or clarify goals for a group, which can be as small as a seminar or as large as a nation-state and mobilize the energies of members of the group to pursue those goals."
2) "Problem solving is the core of leadership" and "the art of accomplishing more than the science of management says is possible."
3) Leadership "is about coping with change", about "motivating and inspiring---keeping people moving in the right direction, despite major obstacles
What makes a leader and what is leadershpRonald Brown
Leadership is defined in multiple ways in the document. It involves creating conditions for people to do their best work, defining goals for a group, and motivating people to pursue those goals. It also involves problem solving and accomplishing more than what seems possible. Effective leadership requires vision, managing change, and having a clear sense of direction.
The document discusses biometrics, which uses physiological or behavioral human characteristics to identify individuals. It defines biometrics and describes a generic biometric system involving enrollment, sensors, feature extraction, and matching. The document outlines several types of biometrics including face recognition, fingerprints, hand geometry, iris/retina scans, DNA, keystrokes, and voice. It also discusses vulnerabilities in biometric systems such as spoofing attacks, template database leaks, and intrinsic limitations like false matches. The document proposes security approaches like feature transformations and cryptosystems to enhance biometric security.
Mutual Exclusion in Wireless Sensor and Actor NetworksZhenyun Zhuang
This document discusses mutual exclusion in wireless sensor and actor networks. It begins by introducing wireless sensor networks and how they have evolved into wireless sensor and actor networks which can both sense and act on their environments. This introduces new challenges around resource utilization that must be addressed. Specifically, the document identifies the problem of mutual exclusion - ensuring only a minimum necessary subset of actors take action for a given event to avoid issues like inefficient resource usage. It defines different types of mutual exclusion and proposes both a greedy centralized approach and a distributed localized approach to address this problem efficiently while meeting application-specific delay bounds and fully covering the event region.
Mobile Hosts Participating in Peer-to-Peer Data Networks: Challenges and Solu...Zhenyun Zhuang
Wireless Networks (2010)
http://dl.acm.org/citation.cfm?id=1873504
Peer-to-peer (P2P) data networks dominate Internet traffic, accounting for over 60% of the overall traffic in a recent study. In this work, we study the problems that arise when mobile hosts participate in P2P networks. We primarily focus on the performance issues as experienced by the mobile host, but also study the impact on other fixed peers. Using BitTorrent as a key example, we identify several unique problems that arise due to the design aspects of P2P networks being incompatible with typical characteristics of wireless and mobile environments. Using the insights gained through our study, we present a wireless P2P (wP2P) client application that is backward compatible with existing fixed-peer client applications, but when used on mobile hosts can provide significant performance improvements.
Eliminating OS-caused Large JVM Pauses for Latency-sensitive Java-based Cloud...Zhenyun Zhuang
For PaaS-deployed (Platform as a Service) customer-facing applications (e.g., online gaming and online chatting), ensuring low latencies is not just a preferred feature, but a must-have. Given the popularity and power of Java platforms, a significant portion of today's PaaS platforms run Java. The JVM (Java Virtual Machine) manages a heap space to hold application objects. The heap space is frequently GC-ed (Garbage Collected), and applications can occasionally be stopped for a long time during some GC and JVM activities.
In this work, we investigated the JVM pause problem. We found that some large JVM STW (stop-the-world) pauses cannot be explained by application-level activities or JVM activities during GC; instead, they are caused by OS mechanisms. We successfully reproduced such problems and root-caused the reasons. The findings can be used to enhance JVM implementations. We also propose a set of solutions to mitigate and eliminate these large STW pauses. We share our knowledge and experience in this writing.
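One way to surface such pauses, independent of GC logs, is a low-overhead "jitter meter": repeatedly record timestamps at a fixed interval and flag gaps much larger than the interval, which indicate stalls that application activity cannot explain. This is a hedged Python analogue of the measurement idea, not the paper's tool; the interval and threshold values are assumptions.

```python
import time

def find_pauses(timestamps, interval, threshold):
    """Return (start, gap) pairs where consecutive samples are > threshold apart."""
    return [
        (t0, t1 - t0)
        for t0, t1 in zip(timestamps, timestamps[1:])
        if (t1 - t0) - interval > threshold
    ]

def sample(n=100, interval=0.001):
    """Record timestamps while sleeping `interval` seconds between samples."""
    ts = []
    for _ in range(n):
        ts.append(time.monotonic())
        time.sleep(interval)
    return ts

if __name__ == "__main__":
    # Report any stall longer than ~50 ms observed during sampling.
    print(find_pauses(sample(), 0.001, 0.050))
```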
Hybrid Periodical Flooding in Unstructured Peer-to-Peer NetworksZhenyun Zhuang
This document proposes a new search mechanism called Hybrid Periodical Flooding (HPF) for unstructured peer-to-peer networks. HPF aims to reduce unnecessary traffic like blind flooding while also addressing the "partial coverage problem" of some statistics-based search mechanisms. It introduces the concept of Periodical Flooding (PF), which controls the number of neighbors a query is forwarded to based on the time-to-live value. This allows the forwarding behavior to change periodically over the query's lifetime. HPF then combines PF with weighted selection of neighbors based on multiple metrics to guide queries towards potentially relevant results while exploring more of the network.
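The Periodical Flooding idea can be sketched as a forwarding policy whose fan-out depends on the query's remaining TTL, combined with weighted neighbor selection. The linear schedule, floor value, and relevance weights below are illustrative assumptions, not HPF's actual parameters.

```python
def select_neighbors(neighbors, ttl, max_ttl):
    """Forward to a TTL-dependent number of the highest-weighted neighbors.

    neighbors: list of (peer, weight) pairs; higher weight = more promising.
    """
    # The forwarded fraction shrinks linearly as the query travels (ttl drops),
    # but never below a floor, avoiding the partial coverage problem.
    fraction = max(ttl / max_ttl, 0.25)
    count = max(1, round(len(neighbors) * fraction))
    ranked = sorted(neighbors, key=lambda nw: nw[1], reverse=True)
    return [peer for peer, _ in ranked[:count]]

if __name__ == "__main__":
    ns = [("a", 3), ("b", 1), ("c", 2), ("d", 5)]
    print(select_neighbors(ns, ttl=7, max_ttl=7))  # near the source: flood all
    print(select_neighbors(ns, ttl=1, max_ttl=7))  # far away: only the best peer
```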
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...Zhenyun Zhuang
GLOBECOM 2012
Video streaming traffic, for both VoD (Video on Demand) and live, is exploding. Various types of businesses and many people rely on video streaming to attract customers/users and for other purposes. Given the vast number of video stream formats (e.g., MP4, FLV) and transmission protocols (e.g., HTTP, RTMP, RTSP) for supporting varying types of playback terminals (particularly mobile devices such as iPhone/iPad and Android phones), video content providers often need to transcode videos to multiple formats in order to stream to different types of users.
Being time-sensitive and requiring high bandwidth, video streaming exerts high pressure on underlying delivery networks. Content Delivery Network (CDN) providers can help their customers quickly and reliably distribute stream contents to end users. In addition to distributing video streams, CDN providers typically allow their customers to perform video transcoding on CDN platforms. With the high volume of video streams and the bursty transcoding workload, CDN providers are eager to deploy elastic and optimized cloud-based transcoding platforms.
OCPA: An Algorithm for Fast and Effective Virtual Machine Placement and Assig...Zhenyun Zhuang
This document proposes a method called Constrained Server Chaining (CSC) to optimize CDN infrastructure for live streaming. CSC allows CDN streaming servers to dynamically select upstream servers to pull live streams from, rather than only pulling from fixed ingest servers. This can reduce transit costs for CDN providers by creating more direct paths between servers. However, CSC also imposes a delivery length cap to avoid compromising end user experience with longer paths. The document describes the problem CSC addresses, an illustrative example of how CSC works, and the key components of CSC including cost determination, length cap determination, and server connection monitoring.
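The cost determination under a delivery length cap can be sketched as a shortest-path computation restricted to a maximum hop count: find the cheapest ingest-to-edge server chain using at most `cap` hops, via a Bellman-Ford-style relaxation over hop counts. This is a sketch of the constraint, not CSC's actual algorithm; the server graph and transit costs are assumptions.

```python
def cheapest_chain(edges, src, dst, cap):
    """edges: dict {(u, v): cost}. Returns min cost within `cap` hops, or None."""
    INF = float("inf")
    nodes = {n for e in edges for n in e}
    best = {n: INF for n in nodes}
    best[src] = 0
    for _ in range(cap):
        nxt = dict(best)  # relax every edge once per allowed hop
        for (u, v), c in edges.items():
            if best[u] + c < nxt[v]:
                nxt[v] = best[u] + c
        best = nxt
    return None if best[dst] == INF else best[dst]

if __name__ == "__main__":
    costs = {("ingest", "originA"): 4, ("ingest", "originB"): 1,
             ("originB", "originA"): 1, ("originA", "edge"): 1,
             ("originB", "edge"): 6}
    # The cheapest chain needs 3 hops; capping at 2 forces a pricier path,
    # trading transit cost for a shorter delivery length.
    print(cheapest_chain(costs, "ingest", "edge", 3))  # 3
    print(cheapest_chain(costs, "ingest", "edge", 2))  # 5
```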
A Survey on CDN Vulnerability to DoS AttacksIJCNCJournal
This document summarizes research on vulnerabilities in content delivery networks (CDNs) to denial-of-service (DoS) attacks, specifically forwarding loop attacks. CDNs improve website performance but expose nodes to disproportionate traffic from requests circulating endlessly between nodes. The paper analyzes vulnerabilities in commercial CDNs and proposes defensive strategies. Key findings include that adding random search strings to URLs forces edge servers to retrieve content from origin servers, rather than caches. This can be exploited to amplify attacks by overloading origin servers with traffic even if the attacking client reduces its own bandwidth.
A Survey on CDN Vulnerability to DoS AttacksIJCNCJournal
Content Delivery Networks (CDN), or ”content distribution networks” have been introduced to improve performance, scalability, and security of data distributed through the web. To reduce the response time of a web page when certain content is requested, the CDN redirects requests from users’ browsers to geographically distributed surrogate nodes, thus having a positive impact on the response time and network load. As a side effect, the surrogate servers manage possible attacks, especially denial of service attacks, by distributing the considerable amount of traffic generated by malicious activities among different data centers. Some CDNs provide additional services to normalize traffic and filter intrusion attacks, thus further mitigating the effects of possible unpleasant scenarios. Despite the presence of these native protective mechanisms, a malicious user can undermine the stability of a CDN by generating a disproportionate amount of traffic within a CDN thanks to endless cycles of requests circulating between nodes of the same network or between several distinct networks. We refer in particular to Forwarding Loops Attacks, a collection of techniques that can alter the regular forwarding process inside CDNs. In this paper, we analyze the vulnerability of some commercial CDNs to this type of attacks and then propose some possible useful defensive strategies.
This document discusses overlay networks, with a focus on content distribution networks (CDNs). It defines what an overlay network is and provides examples like MBone, 6Bone, and peer-to-peer networks. It then describes the motivations for content networks like reducing congestion and improving performance. Finally, it explains the components of a typical CDN including request routing, content distribution, and accounting infrastructures.
RICHTER is a hybrid P2P-CDN architecture for low latency live video streaming that employs virtualized edge servers. It addresses challenges in CDN- and HAS-based streaming by leveraging characteristics of P2P networks and CDN systems. RICHTER utilizes peers' resources through a distributed transcoding approach in addition to video transmission. Virtual tracker servers located near base stations direct clients' requests and respond based on fetching content from peers, edge servers, CDN servers or origin server depending on latency. An optimization problem and heuristic approach are proposed to guide system operation and answer research questions on optimal placement, response approach, sufficient resources and seeder replacement.
RICHTER: hybrid P2P-CDN architecture for low latency live video streamingMinh Nguyen
Content Distribution Networks (CDN) and HTTP Adaptive Streaming (HAS) are considered the principal video delivery technologies over the Internet. Despite the wide usage of these technologies, designing cost-effective, scalable, and flexible architectures that support low latency and high quality live video streaming is still a challenge. To address this issue, we leverage existing works that have combined the characteristics of Peer-to-Peer (P2P) networks and CDN-based systems and introduce a hybrid CDN-P2P live streaming architecture. When dealing with the technical complexity of managing hundreds or thousands of concurrent streams, such hybrid systems can provide low latency and high quality streams by enabling the delivery architecture to switch between the CDN and the P2P modes. However, modern networking paradigms such as Edge Computing, Network Function Virtualization (NFV), and distributed video transcoding have not been extensively employed to design hybrid P2P-CDN streaming systems. To bridge the aforementioned gaps, we introduce a hybRId P2P-CDN arcHiTecture for low LatEncy live video stReaming (RICHTER), discuss the details of its design, and finally give a few directions of the future work.
A content delivery network (CDN) is a system of distributed servers that deliver web content to users based on their location. CDNs work by copying website content to multiple servers located around the world. When a user requests a page, the request is redirected to the closest server to improve page load times. Businesses use CDNs to accelerate load times and reduce bandwidth usage globally. The main benefits of CDNs are improved performance, availability of content, security against attacks, and providing intelligence about users.
Load balancing in Content Delivery Networks in Novel Distributed EquilibriumIJMER
In today's world, to provide netizens with good availability of data, content delivery networks (CDNs) must balance requests between servers while assigning clients to the closest servers. In this paper, we describe a new CDN design that associates artificial load-aware coordinates with clients and data servers and uses them to direct content requests to cached data. This approach helps achieve good accuracy and service when request workloads and resource availability in the CDN are dynamic. A deployment and evaluation of our system on PlanetLab demonstrates how it achieves low request times with high cache hit ratios compared to other CDN approaches.
This document describes a new content delivery network (CDN) design that uses load-aware network coordinates (LANCs) to direct client requests to appropriate content servers. LANCs incorporate both the network location and load of servers, allowing the CDN to balance load while still assigning clients to nearby servers. The CDN maps clients to servers with the closest LANCs, selecting servers with good network performance. An evaluation on PlanetLab found this LANC-based CDN achieved much lower request times than approaches without load balancing, especially for locality-heavy workloads.
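The load-aware coordinate idea can be sketched as follows: each server advertises a network coordinate plus a load term, and a client is mapped to the server whose load-adjusted coordinate is closest. The coordinates, loads, and load weight below are illustrative assumptions, not the LANC scheme's actual model.

```python
import math

def pick_server(client, servers, load_weight=1.0):
    """servers: {name: ((x, y), load)}; returns the name of the best server."""
    def score(name):
        (x, y), load = servers[name]
        dist = math.hypot(client[0] - x, client[1] - y)
        return dist + load_weight * load  # load inflates the effective distance
    return min(servers, key=score)

if __name__ == "__main__":
    servers = {"near-busy": ((1.0, 0.0), 5.0), "far-idle": ((4.0, 0.0), 0.5)}
    # The nearest server is heavily loaded, so the client is steered away.
    print(pick_server((0.0, 0.0), servers))
```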
A content delivery network (CDN) distributes content across multiple servers located in different geographic locations. This improves performance by serving content from the server closest to the user. CDNs provide advantages like decreasing server load, faster content delivery, 100% availability even during outages, increased concurrent users, and more control over asset delivery. While CDNs improve performance, they also have disadvantages to consider like increased complexity and upfront costs.
Provide a cost-effective and easy to use feature in order to distribute load across multiple servers, located in different geographic regions, according to their availability, resources and client's proximity

This document discusses various initiatives for content localization to improve user experience on the internet. It describes how users connect to the internet and how content is delivered. Content localization involves optimizing how data is accessed and delivered from the closest servers to end users. This improves speed and performance. The document recommends implementing caching servers on premises and using a content delivery network to localize content closer to users. It provides guidelines on caching best practices and considerations for choosing a CDN. The overall goal is to focus on content localization as soon as possible to optimize performance for users.
A content delivery network or content distribution network (CDN) is a large distributed system of servers deployed in multiple data centers across the Internet. The goal of a CDN is to serve content to end-users with high availability and high performance. CDNs serve a large fraction of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social networks.
Live multimedia streaming and video on demand issues and challengeseSAT Journals
Live streaming and video on demand face challenges in providing quality service over the internet. The key issues are quality of service due to network instability and degraded performance when serving large numbers of users. While peer-to-peer architectures can help solve issues of limited server resources, they introduce challenges such as ensuring sufficient upload bandwidth from peers and maintaining stable connectivity. Additional issues include firewalls and internet service provider throttling impacting streaming quality, and challenges in peer discovery and storage overhead in peer-to-peer video on demand systems. Addressing these architectural and performance issues is important for providing the best experience of streaming and video on demand services.
This document discusses content distribution networks (CDNs) and ActiveCDN, a proposed improvement using programmable network capabilities. CDNs distribute content requests to nearby nodes to improve performance. ActiveCDN would allow dynamic deployment of in-network CDN nodes based on traffic and optimal placement. The demo at GEC9 showed on-path instantiation of NetServ nodes for content redirection and processing, including initial requests served directly and subsequent requests redirected and served from the node. Background details are provided on technologies used like watermarking, weather data retrieval, and mapping private IP addresses to locations.
Cedexis provides services to help companies optimize the delivery of video content globally to end users. Cedexis Radar monitors CDN performance data and Cedexis Openmix uses this data to select the best or most cost-effective CDN for initial page loads and subsequent video requests. Openmix can direct traffic based on performance, geography, availability, or costs with no required client changes. Cedexis helps deliver improved video experiences through real-time CDN performance scoring and load balancing across multiple CDNs.
The document discusses how content delivery networks (CDNs) use traffic engineering and DNS-based mapping to direct user requests to optimal edge servers. It provides examples of Akamai's global infrastructure and mapping techniques. Key points made include that CDNs have no backbone and operate as independent clusters, and that BGP configurations between networks may not achieve the intended traffic steering results for CDNs. Cooperation between networks and CDNs is suggested to optimize traffic patterns.
CDNs improve response times and enable streaming of audio/video by distributing content across multiple servers located close to users. DNS-based CDNs select the nearest surrogate server using DNS lookups, but this has delays. Service-oriented routers select servers based on packet contents and server loads, improving performance. NetServ routers can dynamically deploy "MicroCDN" modules to cache content on nearby routers for future user requests.
This document provides an overview of content delivery networks (CDNs). It discusses CDN architecture and components, including origin servers, distribution systems, request routing systems, and account systems. The document also covers CDN request routing mechanisms, content delivery management, pricing models, common CDN services and functionalities, and suitable versus unsuitable content for CDNs. In summary, a CDN is a system of distributed servers that deliver web content to users based on their geographic location for improved performance.
A content delivery network (CDN) distributes content across a global network of servers to improve page load times. It works by caching content such as website assets, videos, and images on edge servers located close to users. When a user requests content, the request is directed to the closest edge server that can deliver the content quickly from its cache, rather than sending all requests to the original site server. This reduces bandwidth usage and improves performance. CDNs are widely used for websites, apps, online advertising, media streaming, online gaming, e-commerce, and more.
Similar to Optimizing Streaming Server Selection for CDN-delivered Live Streaming (20)
Designing SSD-friendly Applications for Better Application Performance and Hi...Zhenyun Zhuang
This document discusses how applications can be designed to take advantage of the unique characteristics of solid state drives (SSDs) in order to improve application performance, storage input/output (IO) efficiency, and SSD lifespan. It proposes nine SSD-friendly application design changes and explains how they can result in better application performance by fully utilizing SSDs' internal parallelism, more efficient storage IO by reducing write amplification, and longer SSD lifespan by decreasing write amplification.
Application-Aware Acceleration for Wireless Data Networks: Design Elements an...Zhenyun Zhuang
This document discusses an approach called Application-Aware Acceleration (A3) to improve application performance over wireless networks. It finds that while transport layer protocols improve performance for FTP, they provide little benefit for other applications like CIFS, SMTP, and HTTP due to the applications' behaviors. A3 addresses this by using principles like transaction prediction, prioritized fetching, and redundant transmissions to offset applications' typical problems when used over wireless networks. The document presents the motivation and design of A3, and evaluates its effectiveness through emulations and a proof-of-concept prototype using NetFilter.
WebAccel: Accelerating Web access for low-bandwidth hostsZhenyun Zhuang
The document describes problems with how current web browsers access web pages in low-bandwidth environments. It analyzes factors that cause large response times, such as properties of typical web pages, interactions between HTTP and TCP protocols, and impact of server-side optimizations. It proposes a new solution called WebAccel that uses three browser-side mechanisms - prioritized fetching, object reordering, and connection management - to reduce user response time in an easy-to-deploy way. Simulation results and a prototype implementation show that WebAccel brings significant performance benefits over current browsers.
A Distributed Approach to Solving Overlay Mismatching ProblemZhenyun Zhuang
This document proposes an algorithm called Adaptive Connection Establishment (ACE) to address the topology mismatch problem between the logical overlay network and physical underlying network in unstructured peer-to-peer systems. ACE builds a minimum spanning tree among each source node and its neighbors within a certain diameter, optimizes connections not on the tree to reduce redundant traffic, while retaining search scope. It evaluates tradeoffs between topology optimization and information exchange overhead by changing the diameter. Simulation results show ACE can significantly reduce unnecessary P2P traffic by efficiently matching the overlay and physical network topologies.
Enhancing Intrusion Detection System with Proximity InformationZhenyun Zhuang
This document proposes PAIDS, a Proximity-Assisted Intrusion Detection System that identifies unknown worm outbreaks by leveraging proximity information of compromised hosts. PAIDS operates independently from existing signature-based and anomaly-based IDS approaches. It observes that compromised hosts tend to cluster geographically and remain active for long periods, allowing proximity to infected machines to indicate higher infection risk. The document motivates PAIDS based on limitations of other IDSes and clustered/long-term nature of worm spread. It then outlines PAIDS design, deployment model, software architecture, and key components for detecting outbreaks using proximity information.
Guarding Fast Data Delivery in Cloud: an Effective Approach to Isolating Perf...Zhenyun Zhuang
LNCS 2015
Cloud-based products heavily rely on fast data delivery between data centers and remote users; when data delivery is slow, the products' performance is crippled. When slow data delivery occurs, engineers need to investigate the issue and find the root cause. The investigation requires experience and time, as data delivery involves multiple moving parts, including the sender, the receiver, and the network.
To facilitate such investigations, we propose an algorithm to automatically identify the performance bottleneck. The algorithm aggregates information from multiple layers of the data sender and receiver. It helps to automatically isolate the problem type by identifying which of the sender, receiver, or network is the bottleneck. After isolation, subsequent efforts can be taken to root-cause the exact problem. We also built a prototype to demonstrate the effectiveness of the algorithm.
SLA-aware Dynamic CPU Scaling in Business Cloud Computing EnvironmentsZhenyun Zhuang
IEEE CLOUD 2015
Modern cloud computing platforms (e.g. Linux
on Intel CPUs) feature ACPI-based (Advanced Configuration
and Power Interface) mechanism, which dynamically scales
CPU frequencies/voltages to adjust the CPU frequencies based
on the workload intensity. With this feature, CPU frequency
is reduced when the workload is relatively light in order to
save energy; while increased when the workload intensity is
relatively high.
In business cloud computing environments, software products/
services often need to “scale out” to multiple machines to
form a cluster to achieve a pre-defined aggregated performance
goal (e.g., SLA-devised throughput). To reduce business operation
cost, minimizing the provisioned cluster size is critical.
However, as we show in this work, the working of ACPI
in today’s modern OS may result in more machines being
provisioned, hence higher business operation cost,
To deal with this problem, we propose a SLA-aware CPU
scaling algorithm based on business SLA (Service Level Agreement
aware). The proposed design rational and algorithm are
a fundamental rethinking of how ACPI mechanisms should be
implemented in business cloud computing environments. Contrary
to the current forms of ACPI which simply adapt CPU
power levels only based on workload intensity, the proposed
SLA-aware algorithm is primarily based on current application
performance relative to the pre-defined SLA. Specifically, the
algorithm targets at achieving the pre-defined SLA as the toplevel
goal, while saving energy as the second-level goal.
Optimizing JMS Performance for Cloud-based Application ServersZhenyun Zhuang
IEEE CLOUD 2012
http://dl.acm.org/citation.cfm?id=2353798
Many business-oriented services will be gradually
offered in the Cloud. Java Message Service (JMS) is a critical
messaging technology in Java-based business applications, particularly
to those that are based on the Java Enterprise Edition
(Java EE) open standard. Maintaining high performance in
the horizontally scaled, and elastic, cloud environment is
critical to the success of the business applications. In this
paper, we present practical considerations in optimizing JMS
performance for the cloud deployment, where some of the
findings may also serve to improve the design of JMS container
so it adapts well to cloud computing. Our work also includes
performance evaluation on the proposed strategies.
Capacity Planning and Headroom Analysis for Taming Database Replication LatencyZhenyun Zhuang
ACM ICPE 2015
http://dl.acm.org/citation.cfm?id=2688054
Internet companies like LinkedIn handle a large amount of
incoming web traffic. Events generated in response to user
input or actions are stored in a source database. These
database events feature the typical characteristics of Big
Data: high volume, high velocity and high variability. Data-
base events are replicated to isolate source database and
form a consistent view across data centers. Ensuring a low
replication latency of database events is critical to business
values. Given the inherent characteristics of Big Data, min-
imizing the replication latency is a challenging task.
In this work we study the problem of taming the database
replication latency by effective capacity planning. Based
on our observations into LinkedIn’s production traffic and
various playing parts, we develop a practical and effective
model to answer a set of business-critical questions related
to capacity planning. These questions include: future traffic
rate forecasting, replication latency prediction, replication
capacity determination, replication headroom determination
and SLA determination.
OS caused Large JVM pauses: Deep dive and solutionsZhenyun Zhuang
We have found many large JVM GC pauses are not caused by application itself, but by the interactions between JVM and OS. We characterize these issues into 3 scenarios: (1) application startup state; (2) application steady state with memory pressure; and (3) application steady state with heavy IO. The root causes are quite complicated, so we share our experiences about this.
//This slide deck is for Qcon Beijing 2016 talk.
Wireless memory: Eliminating communication redundancy in Wi-Fi networksZhenyun Zhuang
This document describes a proposed system called Wireless Memory (WM) to eliminate communication redundancy in Wi-Fi networks. The authors first analyze real Wi-Fi traces from multiple buildings and observe significant redundancy both between users and over time for individual users. Based on these insights, they propose WM, which equips access points and clients with memory to store transmitted data. When sending new data, the access point can retrieve stored data from the client's memory by sending a reference rather than the full data, reducing transmission size. The authors evaluate WM through simulations using the collected traces and find it can improve network throughput by up to 93% in some scenarios by eliminating redundancy.
Improving energy efficiency of location sensing on smartphonesZhenyun Zhuang
The document proposes an adaptive location-sensing framework to improve energy efficiency on smartphones running location-based applications. The framework uses four design principles: substitution replaces GPS with less power-intensive location services when possible; suppression avoids unnecessary GPS use through sensors like accelerometers; piggybacking synchronizes location requests from multiple apps; and adaptation adjusts location sensing based on battery level. An implementation on Android phones reduces GPS use by up to 98% and improves battery life by up to 75%.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on Binary Heap data structure. It is similar to the selection sort where we first find the minimum element and place the minimum element at the beginning. Repeat the same process for the remaining elements.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Optimizing Streaming Server Selection for CDN-delivered Live Streaming
Zhenyun Zhuang1 and Shun Kwok1,2
1 Ying-Da-Ji Tech., 603 Overseas High-Tech Venture Park, Shenzhen 518057, China
2 Corresponding Author
Abstract. Content Delivery Networks (CDNs) have been widely used to deliver web contents on today's Internet. Gaining tremendous popularity, live streaming is also increasingly being delivered by CDNs. Compared to conventional static or dynamic web contents, live streaming exposes unique characteristics that pose challenges to the underlying CDN infrastructure. Unlike traditional web-object fetching, which allows Edge Servers to cache contents and thus typically involves only Edge Servers in delivering contents, live streaming requires real-time, full CDN-streaming paths that span Ingest Servers, Origin Servers and Edge Servers.
DNS is the standard practice for enabling dynamic assignment of servers. GeoDNS, a specialized DNS system, provides DNS resolution by taking into account the geographical locations of end users and CDN servers. Though GeoDNS effectively redirects users to the nearest CDN Edge Servers, it may not be able to select the optimal Origin Server for relaying a live stream to Edge Servers, due to the unique characteristics of live streaming. In this work, we consider the requirements of delivering live streaming with a CDN, and propose an advanced design for selecting optimal Origin Streaming Servers in order to reduce network transit cost and improve viewers' experience. We further propose a live-streaming-specific GeoDNS design for selecting optimal Origin Servers to serve Edge Servers.
1 Introduction
Today's web contents are increasingly being delivered by Content Delivery Networks (aka Content Distribution Networks, or CDNs) [1,2]. A typical CDN infrastructure consists of two types of servers based on their functionalities: (i) Ingest Servers for accepting customers' web contents; and (ii) Delivery Servers for serving end web users. Live streaming is becoming increasingly popular, and it is expected to be even more pervasive in the near future. Major CDN providers such as Akamai [1] and Level3 [2] support delivering live streaming on their CDNs. When supporting live streaming, CDN providers provide a layered infrastructure consisting of three types of streaming servers: (i) Ingest Streaming Servers (ISS), which accept customers' live streams; (ii) Origin Streaming Servers (OSS), which transit live streams from Ingest Servers to Edge Streaming Servers; and (iii) Edge Streaming Servers (ESS), which directly serve the end viewers. Depending on the scale of a CDN, the number of Origin Streaming Servers varies from several to hundreds, and the number of Edge Streaming Servers can even reach thousands.
Unlike other traditional web-based applications, live streaming is associated with unique characteristics and in turn presents special challenges to the CDN infrastructure.
Fig. 1. Live streaming delivered with CDN: (a) Default Delivery; (b) Optimized Delivery
First, live streaming has to be delivered from the CDN Ingest Server to Edge Servers in a real-time fashion, in sharp contrast with traditional CDN-based web delivery, where typically only the last hop (i.e., from CDN Edge Servers to web users) needs to be optimized. Such uniqueness requires optimizing the entire CDN transit path, which consists of the two transit hops of Ingest-Origin and Origin-Edge. Second, live streaming often features high bandwidth requirements. A typical live video can easily consume up to 3 Mbps per stream, so there is a greater need to reduce transit cost on CDN networks. Third, live streaming is typically time-sensitive, and the CDN infrastructure design needs to honor such timeliness by choosing the shortest path from the streaming source to the end users.
After a live stream is ingested into the CDN infrastructure through an Ingest Server, and an end viewer requests to play the live stream, an appropriate Edge Server and an Origin Server need to be chosen so that the player pulls the stream from the Edge Server and the Edge Server pulls the stream from the Origin Server. DNS is the standard practice for making these server selections. GeoDNS [3], typically used by CDNs, is a specialized DNS service that takes into account the geographical locations of CDN Origin/Edge servers and end users. It translates host names into IP addresses and redirects requests to the optimal Origin/Edge servers. Though many factors such as server load contribute to this selection process, the critical factor is proximity (i.e., geographical) information. For instance, in a CDN that deploys Edge Servers on both coasts of the USA, an end user coming from California will most likely be served by an Edge Server located on the west coast rather than the east coast. Traditional GeoDNS solutions work effectively by offering end users lower response time and higher throughput. Inside the CDN infrastructure, GeoDNS is also used to determine the optimal Origin Server to serve Edge Server requests, with the goal of offering lower transit cost and response time.
When supporting live streaming, GeoDNS is used for the dynamic assignment of Origin Streaming Servers in response to Edge Streaming Server requests. In this process, traditional GeoDNS only considers the properties (e.g., proximity) of Origin Servers with respect to a particular Edge Server, which may result in sub-par performance.
Consider the example shown in Figure 1, where Source S is ingesting a live stream into the CDN's Ingest Server I. Assume Edge Server E needs to serve User U the live stream originating from Ingest Server I. E has two optional Origin Servers to retrieve the stream from: O1 and O2. Since O1 is closer to E, default GeoDNS will answer E's DNS request with O1, resulting in a CDN transit path of I−O1−E, as shown in Figure 1(a). However, as shown in Figure 1(b), there exists a better CDN transit path of I−O2−E, which has a smaller CDN transit cost in terms of stream delivery distance. In addition to the lower transit cost, U's viewing experience is also expected to be better, as the entire live stream path of S−I−O2−E−U is shorter. The CDN transit path shown in Figure 1(b) can be achieved by an optimized GeoDNS solution that assigns to E the Origin Server O2, as opposed to O1.
Though effectively addressing all challenges associated with live streaming requires advanced design (or redesign) of many aspects of CDN infrastructure, in this work we focus on optimizing Origin Streaming Server selection through GeoDNS, and propose a design called Sticky-DNS for supporting live streaming. The key idea of Sticky-DNS is that when performing GeoDNS resolution and selecting Origin Servers to serve Edge Servers, the optimal Origin Server is determined by considering both CDN hops, Ingest-to-Origin and Origin-to-Edge, rather than only the single Origin-to-Edge hop as in traditional GeoDNS.
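The two-hop selection rule can be sketched as follows. This is a minimal illustration of the idea rather than the paper's implementation, and the distance values are invented to mirror the Figure 1 scenario.

```python
def nearest_origin(origins, cost_oe):
    """Traditional GeoDNS: pick the Origin closest to the Edge (one hop)."""
    return min(origins, key=lambda o: cost_oe[o])

def sticky_origin(origins, cost_io, cost_oe):
    """Sticky-DNS idea: pick the Origin minimizing the full Ingest-Origin-Edge path."""
    return min(origins, key=lambda o: cost_io[o] + cost_oe[o])

# Figure 1-style scenario with assumed distances: O1 is nearer to Edge E,
# but O2 yields the shorter end-to-end transit path.
cost_io = {"O1": 50, "O2": 10}   # Ingest -> Origin distances
cost_oe = {"O1": 5,  "O2": 15}   # Origin -> Edge distances

print(nearest_origin(["O1", "O2"], cost_oe))          # O1 (full path cost 55)
print(sticky_origin(["O1", "O2"], cost_io, cost_oe))  # O2 (full path cost 25)
```

Under these assumed distances, the one-hop rule picks O1 while the two-hop rule picks O2, reproducing the improvement of Figure 1(b) over Figure 1(a).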
The rest of the paper is organized as follows. We first provide background information and motivate the design of Sticky-DNS in Section 2. We then present the problem definition and the detailed design in Section 3 and Section 4, respectively. We perform a prototype-based evaluation and show the results in Section 5. We present related work and conclude in Section 6 and Section 7, respectively.
2 Background
In this section, we first provide some backgrounds about CDN, GeoDNS, and live
streaming. We then present some initial results about the popularity of live streams
on Ustream.tv, a major live streaming web site. Finally, we summarize this section by
providing insights into the design of new GeoDNS for live streaming.
2.1 Background
CDN: Content Delivery Networks (CDNs) can be used to deliver both static web contents and live streams. A typical CDN infrastructure consists of Ingest Servers (for accepting customer contents), Origin Servers (for serving Edge Servers) and Edge Servers (for serving end users directly), forming a tree-like structure. The various servers are organized into POPs (Points of Presence).
GeoDNS: CDNs aim to deliver contents directly from Edge Servers that are close to end users, thus minimizing the adverse impact of dynamic network paths and reducing network transit cost. For this, a specialized DNS service is typically used to redirect requesting end users to appropriate Edge Servers.
Fig. 2. Number of viewers for the first 55 streams on Ustream.tv
Live Streaming: Live streaming is a new type of application that is increasingly delivered by CDNs. Featuring different properties from static contents, live streaming typically has high bandwidth requirements, which exert pressure on the CDN infrastructure. Unlike static web contents, live streams cannot be cached, and a full path has to be maintained from the customer source to the end viewers (i.e., end users).
2.2 Popularity of live streams
On today's Internet, a vast number of live streams are supported by CDNs. Though viewer-set sizes vary greatly, many live streams today feature small viewer sets. Specifically, we consider Ustream.tv, a major live streaming website, and study the live streams that fall into the category of "most-total-views" at a particular time. We count the number of viewers for each of the first 55 streams, and the results are shown in Figure 2. Note that the figure only shows the active live streams; the other streams are off-air at that particular time. We observe that though the most popular stream has more than 500 viewers, only 6 streams have more than 100 viewers. The average number of live viewers is only about 53.
2.3 Observations
As observed in the illustrative example shown in Figure 1, CDN transit cost (i.e., on the Ingest-Origin and Origin-Edge paths) can be saved by assigning appropriate Origin Servers to requesting Edge Servers for live streaming applications. Instead of always assigning the nearest Origin Server to the Edge Server that issues the DNS request, a specially-designed GeoDNS could determine the assignment based on the entire CDN delivery path. Such a special GeoDNS can significantly save CDN transit cost in scenarios where a live stream is not popular, as it incurs shorter CDN delivery paths.
Though the popularity of streams (i.e., viewer-set size) varies greatly, many live streams today feature small viewer sets. On the other hand, today's major CDN providers typically have hundreds or even thousands of Edge/Origin streaming servers for supporting live streaming. The mismatch between the sizes of viewer sets and server sets motivates our design of a specialized DNS for less-popular live streams, with the benefit of reduced CDN transit cost.
For popular live streams, however, the same approach needs to be adapted to reutilize delivery paths at the Ingest-Origin level. Specifically, default GeoDNS practically encourages path reutilization at the Ingest-Origin level, since each Origin Server can aggregate the streaming requests from nearby Edge Servers, thus reducing the CDN transit cost. Allowing each Edge Server to connect to far-away Origin Servers, on the other hand, discourages path reutilization, and thus could result in a larger aggregate delivery distance inside the CDN. Therefore, an appropriate GeoDNS design has to consider the popularity of a live stream and adapt its behavior correspondingly.
3 Design of Sticky-DNS
In this section, we first formalize the problem we attempt to address. We then present
the design overview of Sticky-DNS, its software structure and the entire process.
3.1 Problem Definition
We consider scenarios where a CDN provider has three types of streaming servers for supporting live streaming: Ingest Streaming Servers, Origin Streaming Servers, and Edge Streaming Servers. We use SSI, SSO and SSE to denote the three server sets, respectively. The CDN provider delivers a set of live streams {Lm}. We further assume each stream Lm has a single ingesting source, which connects to an Ingest Server SSIm.
The problem we address with the Sticky-DNS design can be defined as follows. For a live stream Lm and a viewer Vn, we assume the Edge Server that serves viewer Vn is SSEmn, which is determined by regular GeoDNS. Typically, SSEmn is the Edge Server closest to the end viewer. Sticky-DNS needs to decide the optimal Origin Server SSOmn to serve SSEmn such that the CDN transit cost is minimized and the end viewers' experience is not compromised. We use CIO and COE to denote the transit costs corresponding to the Ingest-Origin and Origin-Edge paths, respectively. With these assumptions, the input, output and objective function of Sticky-DNS are presented in Figure 3.
3.2 Design Overview
Sticky-DNS is a specialized DNS service designed for live streaming applications. It aims at saving CDN transit cost without compromising end-user experience.³ Sticky-DNS achieves the cost savings by assigning to Edge Servers the Origin Servers that yield shorter CDN delivery paths.
Sticky-DNS operates on a per-stream basis, and thus maintains appropriate state for all live streams. At a higher level, Sticky-DNS adjusts its behavior as a particular live stream changes popularity. Specifically, when a live stream is not popular, Origin Servers are chosen based on the lengths of the entire Edge-Origin-Ingest paths. For this, Sticky-DNS maintains the cost values of every Edge-Origin and Origin-Ingest pair. Initially, all costs are assigned according to certain cost-determination methods. Whenever an Origin Server is already serving the live stream, the corresponding Origin-Ingest cost is set to zero to reflect the fact that serving an additional Edge Server with this Origin Server incurs no additional cost on the Origin-Ingest path.

³ Note that for ease of presentation, we use distance as the primary cost metric; it would be easy to extend the cost metric to include other factors such as pricing.

Input:
1 {Lm}: Live stream set, with Lm denoting an individual stream;
2 Vm = {Vmn}: Viewer set for live stream Lm;
3 SSIm: The Ingest Server for live stream Lm;
4 SSOm: CDN Origin Server set for Lm;
5 SSEm = {SSEmn}: CDN Edge Server set for Lm; // Note that each Edge Server
6 // SSEmn for an individual viewer Vn is determined by regular GeoDNS;
7 CIO: Cost array over all Ingest-Origin pairs;
8 COE: Cost array over all Origin-Edge pairs;
Output:
1 Optimal Origin Server set SSOm = {SSOmn} for Edge Server set SSEm, where each SSOmn serves the Edge Server SSEmn;
Objective Function:
1 The total CDN transit cost Σ(CIO + COE) over all live streams {Lm} is minimized;
2 Viewer experience is not compromised;
Fig. 3. Problem Definition
However, as a stream becomes more popular, Sticky-DNS needs to adapt back toward the default behavior of regular GeoDNS, which reuses Ingest-Origin paths. The reason is that when serving popular streams, Origin Servers are more likely to already be serving the live stream, so the behavior of Sticky-DNS should approach that of regular GeoDNS, where the proximity-closest Origin Server is typically chosen for each Edge Server so that Ingest-Origin paths are reutilized. To encourage this, we introduce the concept of "would-be" behavior. When an Origin Server is determined by regular GeoDNS for an Origin request issued by an Edge Server, that particular Origin is the "would-be" Origin for the Edge, and the Edge is the "would-be" Edge for the Origin. Even if a different Origin is determined by Sticky-DNS, the "would-be" state is recorded and used to adjust the corresponding Origin-Ingest cost. Specifically, the more "would-be" Edge Servers retrieve from an Origin, the further the corresponding Origin-Ingest cost is reduced. The idea behind this treatment is that the Origin-Ingest cost is shared by all downstream Edge Servers, thus encouraging reuse of the particular Origin-Ingest path.
These cost values are used to choose optimal Origin Servers for incoming Origin Server DNS requests so that the smallest Σ(CIO + COE) results. To achieve this, Sticky-DNS uses the notion of "Additional Cost" when deciding on an Origin Server. Specifically, given an incoming viewer Vn and the corresponding Edge Server SSEmn, Sticky-DNS always returns the Origin Server that causes the minimal "Additional Cost".
Fig. 4. Software Architecture
The additional cost is defined as the extra CDN transit cost that needs to be paid when serving the Edge Server with a particular Origin Server. Recall that the CDN transit cost contains two parts: the Ingest-Origin cost and the Origin-Edge cost. Whenever an Origin Server first connects to an Ingest Server to retrieve a live stream, the CDN pays the Ingest-Origin cost, but subsequent retrievals of the same live stream from the same Ingest Server by the same Origin Server incur zero additional cost.
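As a sketch of this notion (the function name, state representation, and all numbers are hypothetical, not from the paper), the additional cost of a candidate Origin can be computed like this:

```python
def additional_cost(origin, stream, cost_io, cost_oe_to_edge, already_pulling):
    """Extra CDN transit cost of serving one more Edge Server from `origin`.

    `already_pulling` holds (origin, stream) pairs that currently retrieve
    the stream; such an Origin adds no new Ingest-Origin cost.
    """
    io_cost = 0 if (origin, stream) in already_pulling else cost_io[origin]
    return io_cost + cost_oe_to_edge[origin]

cost_io = {"O1": 40, "O2": 40}   # assumed Ingest-Origin costs
cost_oe = {"O1": 10, "O2": 12}   # assumed Origin-Edge costs to the requesting Edge
pulling = {("O2", "L1")}         # O2 already retrieves stream L1

print(additional_cost("O1", "L1", cost_io, cost_oe, pulling))  # 50: both hops paid
print(additional_cost("O2", "L1", cost_io, cost_oe, pulling))  # 12: Ingest-Origin is free
```

Even though O1 and O2 have similar raw path costs here, O2 wins because its Ingest-Origin hop is already paid for by the existing retrieval.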
3.3 Software Architecture
The software architecture of Sticky-DNS is shown in Figure 4. The heart is the Sticky-DNS Engine (SDE), which takes a DNS request, interacts with the other components, and then decides the optimal Origin Server. Sticky-DNS assigns Origin Servers based on the cost associated with the live streaming delivery path; the cost is CDN-specific, determined by the Cost Metric Determination (CMD) component and updated by Server Cost Probing (SCP). Meanwhile, to accommodate network dynamics, an Edge-Origin Connection Monitoring (ECM) component is used to update the cost state. Sticky-DNS maintains the server streaming state, which records which Edge Server is serving which streams and which Origin Server is retrieving which streams. This state is updated by the events of viewers joining and leaving.
The overall process of Sticky-DNS works as follows. When a new live streaming DNS request comes in, the optimal Edge Server is determined by regular GeoDNS. The CDN then attempts to determine the optimal Origin Server for the determined Edge Server. For this, the Edge Server issues a DNS request to the DNS service asking for the proper Origin Server to retrieve the live stream from. Sticky-DNS takes the Origin-Server request and returns the determined Origin Server. Meanwhile, it updates the server streaming states.
4 Sticky-DNS Components
4.1 Cost Metric Determination (CMD)
In order to perform Sticky-DNS operations, each delivery hop (i.e., Ingest-Origin and Origin-Edge) in the CDN infrastructure has to be associated with a quantifiable cost. Note that though physical distance is often used as the path cost, cost metrics may also include network delay, price, server load, reliability, loss rate, etc. Different cost metrics often belong to distinct dimensions and thus need to be prioritized. Though how to prioritize them varies with different CDN providers and deployment scenarios, in this work we provide the following guidelines: (i) Server load is prioritized over all other metrics. An Origin Server is put into the DNS pool (i.e., made available to be returned by Sticky-DNS) only if it can take more load (i.e., serve more requests); (ii) Loss rate has to be under a certain threshold. Since most live streaming protocols (e.g., RTMP, HTTP) use TCP for transmission, a large loss rate cannot sustain a reasonable level of performance; (iii) Reliability has to be above a certain threshold. Live streaming requires a real-time transmission path, so reliability is important during stream playback; (iv) Pricing is prioritized over network delay and physical distance. Though pricing is often correlated with the latter two metrics, it takes precedence whenever they are not identical; (v) Network delay and physical distance. These two metrics impact buffering time and playback delay. In practice, they can be used as substitutes for the pricing metric when the latter is not available or hard to define.
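The prioritization above could be enforced roughly as below. This is a hedged sketch: the threshold values and the dictionary-based state are assumptions for illustration, with load, loss rate, and reliability acting as hard filters and pricing as the ranking key.

```python
def pick_candidate(origins, load, loss, reliability, price,
                   max_load=0.8, max_loss=0.02, min_reliability=0.99):
    """Apply hard thresholds (i)-(iii), then rank surviving candidates by price (iv)."""
    candidates = [o for o in origins
                  if load[o] <= max_load
                  and loss[o] <= max_loss
                  and reliability[o] >= min_reliability]
    return min(candidates, key=lambda o: price[o]) if candidates else None

origins = ["O1", "O2", "O3"]
load = {"O1": 0.95, "O2": 0.50, "O3": 0.40}          # O1 is overloaded
loss = {"O1": 0.001, "O2": 0.001, "O3": 0.001}
reliability = {"O1": 0.999, "O2": 0.999, "O3": 0.999}
price = {"O1": 1.0, "O2": 3.0, "O3": 2.0}

print(pick_candidate(origins, load, loss, reliability, price))  # O3
```

O1 is the cheapest but is filtered out by the load threshold, so the cheapest healthy Origin, O3, is chosen.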
Thus, with the above guidelines in mind, we propose to enforce cost determination by keeping track of the following state information.
– An array of server loads for the Origin Servers, denoted by SL[M], where M is the number of Origin Servers. SL[i] represents the current load of Origin Server SSOi;
– A 2-D array of loss rates for the Origin-Edge paths, denoted by LR[M][N], where M is the number of Origin Servers and N is the number of Edge Servers. LR[i][j] represents the loss rate of the path between Origin Server SSOi and Edge Server SSEj. Another array of loss rates for the Ingest-Origin pairs, denoted by LRI[M], assuming a single Ingest Server;
– A 2-D array of reliability values for the Origin-Edge paths, denoted by RE[M][N], where RE[i][j] represents the reliability of the path between Origin Server SSOi and Edge Server SSEj. Another array of reliability values for the Ingest-Origin pairs, denoted by REI[M];
– Pricing, represented by PR[M][N] for the Origin-Edge pairs and PRI[M] for the Ingest-Origin pairs;
– Similarly, network delay and physical distance, represented by ND[M][N] and PD[M][N], respectively, with two additional arrays NDI[M] and PDI[M] for the Ingest-Origin pairs.
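The state above maps naturally onto plain arrays. The sketch below (field names follow the text; the class, its defaults, and the sizes are assumed for illustration) shows one possible in-memory layout:

```python
class CostState:
    """Cost-determination state for M Origin Servers and N Edge Servers."""
    def __init__(self, m, n):
        self.SL  = [0.0] * m                       # Origin Server loads
        self.LR  = [[0.0] * n for _ in range(m)]   # Origin-Edge loss rates
        self.LRI = [0.0] * m                       # Ingest-Origin loss rates
        self.RE  = [[1.0] * n for _ in range(m)]   # Origin-Edge reliability
        self.REI = [1.0] * m                       # Ingest-Origin reliability
        self.PR  = [[0.0] * n for _ in range(m)]   # Origin-Edge pricing
        self.PRI = [0.0] * m                       # Ingest-Origin pricing
        self.ND  = [[0.0] * n for _ in range(m)]   # Origin-Edge network delays
        self.NDI = [0.0] * m                       # Ingest-Origin network delays
        self.PD  = [[0.0] * n for _ in range(m)]   # Origin-Edge physical distances
        self.PDI = [0.0] * m                       # Ingest-Origin physical distances

state = CostState(m=4, n=16)   # an assumed CDN with 4 Origins and 16 Edges
```

Each 2-D array is indexed [origin][edge], matching the LR[i][j]/RE[i][j] convention in the text.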
4.2 Server Cost Probing (SCP)
SCP updates the various cost states by proactively or reactively monitoring/probing the network. Specifically,
– Origin Server load (SL[M]). The load can be obtained by installing services that monitor CPU/memory utilization;
– Loss rate (LR[M][N] and LRI[M]). Loss rates between servers can be obtained by observing existing connections' behaviors (e.g., TCP loss recovery behaviors) or by proactively sending test packets. When a network path (e.g., Ingest-Origin-Edge) consists of two network hops, the aggregate loss rate of path SSEi−SSOj−SSI can be calculated as LRij = 1 − (1 − LR[j][i])(1 − LRI[j]);
Core Algorithm:
Initialize all cost states;
For an incoming new viewer Vn requesting Lm, determine the Edge Server SSEmn;
1 If SSEmn is already serving Lm:
2 Return; // No Origin Server request is needed, since the Edge Server is already serving Lm;
3 Else
4 Determine the candidate Origin Server set based on load, loss rate, etc.:
5 for every Origin Server SSOi, if (SL[i] ≤ α and LR[i][j] ≤ β and RE[i][j] ≥ γ, where
6 α, β, γ are the respective threshold values), SSOi is a candidate Origin Server;
7 Determine the optimal Origin Server SSOq that has the smallest aggregate
8 cost of CIO and COE for SSEmn;
9 NV[q][m]++;
10 // Update AD[q][m]:
11 If (NV[q][m] ≥ 1)
12 AD[q][m] = 0;
13 Else
14 If (WB[q][m] == 0): AD[q][m] = initial cost;
15 Else: AD[q][m] = initial cost / (WB[q][m] + 1);

If Edge Server SSEi is disconnecting from SSOj for live stream Lm:
1 // Update NV[j][m] and AD[j][m]:
2 NV[j][m]−−;
3 If (NV[j][m] ≥ 1): AD[j][m] = initial cost / (NV[j][m] + 1);
4 Else: AD[j][m] = initial cost.
Fig. 5. Core Algorithm
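A condensed executable sketch of the join/leave handling in Fig. 5 follows. The candidate filtering on load, loss rate, and reliability is elided for brevity, and INITIAL_COST plus the dictionary-based state are assumptions rather than the paper's data structures.

```python
INITIAL_COST = 100.0   # assumed initial Ingest-Origin cost

def update_ad(AD, NV, WB, o, m):
    """Recompute the Additional Cost of Origin o for stream m after a join."""
    if NV[o][m] >= 1:
        AD[o][m] = 0.0                          # o already pulls stream m
    elif WB[o][m] == 0:
        AD[o][m] = INITIAL_COST
    else:                                       # cost shared by "would-be" Edges
        AD[o][m] = INITIAL_COST / (WB[o][m] + 1)

def on_join(AD, NV, WB, COE, candidates, edge, m):
    """Pick the candidate Origin with the smallest aggregate C_IO + C_OE."""
    best = min(candidates, key=lambda o: AD[o][m] + COE[o][edge])
    NV[best][m] += 1
    update_ad(AD, NV, WB, best, m)
    return best

def on_leave(AD, NV, o, m):
    """An Edge Server disconnects from Origin o for stream m."""
    NV[o][m] -= 1
    if NV[o][m] >= 1:
        AD[o][m] = INITIAL_COST / (NV[o][m] + 1)
    else:
        AD[o][m] = INITIAL_COST

# Two Origins, one stream "L1", one Edge "E1" (all values assumed).
AD  = {"O1": {"L1": INITIAL_COST}, "O2": {"L1": INITIAL_COST}}
NV  = {"O1": {"L1": 0}, "O2": {"L1": 0}}
WB  = {"O1": {"L1": 0}, "O2": {"L1": 0}}
COE = {"O1": {"E1": 5.0}, "O2": {"E1": 15.0}}

best = on_join(AD, NV, WB, COE, ["O1", "O2"], "E1", "L1")
print(best, AD["O1"]["L1"])   # O1 0.0 -- O1 now serves L1 at zero additional Ingest-Origin cost
```

After the first join, O1's Additional Cost drops to zero, so subsequent Edge Servers requesting the same stream are pulled toward the already-established Ingest-Origin path.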
– Similarly, reliability (RE[M][N] and REI[M]) can be obtained by monitoring path up/down times. The aggregate reliability of path SSEi−SSOj−SSI can be calculated as REij = RE[j][i] × REI[j];
– Pricing (PR[M][N] and PRI[M]) can be adjusted by business logic. Whenever the CDN operators change the network pricing rates, the pricing used by Sticky-DNS can be adjusted correspondingly;
– Network delays (ND[M][N] and NDI[M]) and physical distances (PD[M][N] and PDI[M]) are easy to obtain with known techniques.
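The two aggregation formulas above amount to standard serial-path composition, which can be sketched as follows (the sample rates are assumed values, not measurements):

```python
def path_loss_rate(lr_oe, lr_io):
    """Loss over two hops: a packet is delivered only if it survives both hops."""
    return 1.0 - (1.0 - lr_oe) * (1.0 - lr_io)

def path_reliability(re_oe, re_io):
    """Reliability over two hops: both hops must be up simultaneously."""
    return re_oe * re_io

print(round(path_loss_rate(0.01, 0.02), 4))    # 0.0298
print(round(path_reliability(0.99, 0.98), 4))  # 0.9702
```

Note the duality: aggregate reliability multiplies the per-hop values directly, while aggregate loss rate multiplies the per-hop survival probabilities (1 − loss) and complements the result.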
4.3 Edge-Origin Connection Monitoring (ECM)
ECM keeps the server streaming state maintained by OSS fresh, in the sense that
the additional cost associated with each Origin Server is up-to-date and the
popularity state of a stream on all Origin Servers is up-to-date. The server
streaming state is an array: each element corresponds to an Origin Server and
records the Additional Cost of using that particular Origin Server to serve a
stream. The state can be realized by a 2-D array AD[M][T],
where M is the number of Origin Servers and T is the number of streams. AD[i][j]
records the additional cost of using Origin Server SSOi to serve stream Lj. The
values of AD[M][T] depend on whether an Origin Server serves a stream or not.

Fig. 6. An illustrative example (I): (a) original, no viewers; (b) E1 is on;
(c) E2 is on. Nodes: Source S, Ingest I, Origins O1 and O2, Edges E1, E2 and E3.

Fig. 7. An illustrative example (II): (a) E3 is on; (b) E4 is on.
Two other 2-D arrays are used: NV[i][j] records the number of Edge Servers to
which Origin Server SSOi is serving stream Lj, and WB[M][T] indicates the stream
popularity on a particular Origin Server, using the number of "would-be" Edge
Servers for each Origin Server. ECM monitors the connecting/disconnecting events
between Edge and Origin Servers for all live streams; whenever such events occur,
ECM updates NV[M][T], WB[M][T] and AD[M][T] correspondingly.
4.4 Sticky DNS Engine (SDE)
The Sticky DNS Engine (SDE) performs the actual computation and comparison of
costs, then selects the most appropriate Origin Server for a given live-streaming
DNS request. The algorithm is shown in Figure 5. SDE handles the joining of an
Edge Server (i.e., when it begins to serve a stream) and its leaving (i.e., when
it drops a stream) separately. For both events, it updates the AD and NV states.
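As a concrete (and much simplified) sketch of the Figure 5 logic, the Python fragment below models the join/leave handling. The class name, the dictionary-based state, the per-origin threshold checks, INITIAL_COST, and the cost inputs are all illustrative assumptions; the "would-be" counters WB are assumed to be updated externally by ECM.

```python
INITIAL_COST = 1.0  # hypothetical initialcost value

class StickyDNSEngine:
    def __init__(self, alpha, beta, gamma):
        self.alpha, self.beta, self.gamma = alpha, beta, gamma
        self.NV = {}  # (origin, stream) -> number of Edge Servers served
        self.WB = {}  # (origin, stream) -> number of "would-be" Edge Servers
        self.AD = {}  # (origin, stream) -> additional Ingest-Origin cost

    def additional_cost(self, origin, stream):
        if self.NV.get((origin, stream), 0) >= 1:
            return 0.0  # origin already pulls the stream; Ingest-Origin cost is sunk
        return INITIAL_COST / (self.WB.get((origin, stream), 0) + 1)

    def select_origin(self, stream, edge, origins, load, loss, rel, cost_oe):
        # Keep only origins within the load/loss/reliability thresholds.
        candidates = [o for o in origins
                      if load[o] <= self.alpha and loss[o] <= self.beta
                      and rel[o] >= self.gamma]
        # Pick the origin with the smallest aggregate cost (CIO + COE).
        best = min(candidates,
                   key=lambda o: self.additional_cost(o, stream) + cost_oe[(o, edge)])
        self.NV[(best, stream)] = self.NV.get((best, stream), 0) + 1
        self.AD[(best, stream)] = self.additional_cost(best, stream)  # now 0
        return best

    def on_edge_leave(self, origin, stream):
        self.NV[(origin, stream)] -= 1
        n = self.NV[(origin, stream)]
        self.AD[(origin, stream)] = (INITIAL_COST / (n + 1)) if n >= 1 else INITIAL_COST
```

The key design point mirrored here is that once an origin serves a stream, its Ingest-Origin cost drops to zero for subsequent edges, which is what makes the selection "sticky".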
4.5 An illustrative example
We use the following example to illustrate how Sticky-DNS works. Assume there are
one Ingest Server (I) and two Origin Servers (O1 and O2), as shown in Figure 6. When
the first end-viewer joins and is bound to Edge Server E1, an Origin Server (O1
or O2) must be chosen for E1. Assuming distance is the cost metric, O2 is chosen
since the additional cost of pulling from O2 (i.e., the sum of I−O2 and O2−E1) is smaller.
When the second Edge Server E2 requests an Origin Server, though it is closer to
Origin Server O1, the additional cost of pulling from O1 (i.e., I−O1 and O1−E2)
is larger than that of pulling from O2 (i.e., only O2−E2, since I−O2 now has a cost
of zero). Thus E2 chooses O2 as the optimal Origin Server. Note that since E2 is a
"would-be" edge of O1, the cost of I−O1 is reduced by half to reflect the benefit of
path reuse. When the third Edge Server E3 joins, as shown in Figure 7, it compares the
additional costs of pulling from O1 and pulling from O2. Since the cost of I−O1 is now
cut in half, and assuming the resulting sum of I−O1 and O1−E3 is smaller than the cost
of O2−E3, E3 chooses O1 as the optimal Origin Server. Finally, when Edge Server E4
joins, it similarly chooses O1 as the Origin Server.
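The example can also be traced numerically. The distances below are hypothetical values chosen only so that the selections match the ones described above; the division by the "would-be" count models the discount on the Ingest-Origin cost.

```python
# Hypothetical Ingest-Origin and Origin-Edge distances for the Fig. 6/7 example.
d_ingest = {'O1': 4.0, 'O2': 2.0}
d_edge = {('O1', 'E1'): 3.0, ('O2', 'E1'): 2.0,
          ('O1', 'E2'): 1.0, ('O2', 'E2'): 4.0,
          ('O1', 'E3'): 1.0, ('O2', 'E3'): 3.5}

serving = set()   # origins already pulling the stream (Ingest-Origin cost sunk)
would_be = {}     # origin -> count of "would-be" edges, discounting its cost

def ingest_cost(o):
    if o in serving:
        return 0.0
    return d_ingest[o] / (would_be.get(o, 0) + 1)

def pick(edge):
    best = min(('O1', 'O2'), key=lambda o: ingest_cost(o) + d_edge[(o, edge)])
    nearest = min(('O1', 'O2'), key=lambda o: d_edge[(o, edge)])
    if nearest != best:  # edge becomes a "would-be" edge of its nearest origin
        would_be[nearest] = would_be.get(nearest, 0) + 1
    serving.add(best)
    return best

decisions = [pick(e) for e in ('E1', 'E2', 'E3')]
# decisions == ['O2', 'O2', 'O1']: E1 and E2 pick O2, then the halved
# I-O1 cost (4.0 / 2 = 2.0) makes O1 the cheaper choice for E3.
```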
5 Evaluation
To gain a concrete understanding of GeoDNS's impact on both the CDN and the
viewers, we consider the following simplified scenario to quantify the saving in CDN
transit cost and the performance experienced by end viewers. We assume a USA-
mainland CDN provider which has 1 Ingest POP (Point of Presence) and 10 Origin
POPs, where each POP has only a single streaming server. We further assume that Edge
Servers have to retrieve live streams from Origin Servers, which in turn retrieve them
from the Ingest Server. For simplicity, we consider a performance metric of physical
distance in units of thousand-miles (i.e., KMiles) for both CDN transit-cost saving and
live-stream viewers. Thus, the larger the distances, the higher the cost both the CDN
and the viewers pay.
We built a prototype with Adobe Flash Media Server (FMS) 4.0. The prototype
consists of 11 servers deployed in 11 cities, all running Windows Server 2008 R2. One
of the servers serves as the Ingest Server and receives a live stream from a stream source,
and the other 10 servers serve as Origin Servers and are relaying the live stream to end
viewers. We then invoke 10 stream viewers from different locations. Since viewers are
always bound to the closest Edge Servers, we use viewers to mimic the edge servers in
typical CDN setup as they need to decide the origin servers to pull stream from.
We first consider the default regular DNS, which directs each viewer to the closest
Origin Server. For comparison, we also consider an optimized DNS which directs each
Edge POP to the Origin POP that yields the smallest CDN transit path (i.e., from the
Ingest Server to the Edge Server).
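The two policies differ only in the quantity they minimize. A minimal sketch contrasting them, with hypothetical distances (in KMiles; the origin names are made up):

```python
# Hypothetical distances between one Ingest Server, two Origin Servers,
# and one Edge Server, used only to contrast the two selection policies.
d_io = {'O_east': 0.5, 'O_west': 2.5}  # Ingest-Origin distances
d_oe = {'O_east': 1.8, 'O_west': 0.4}  # Origin-Edge distances

# Default DNS: pick the Origin nearest to the Edge.
default_pick = min(d_oe, key=d_oe.get)
# Optimized DNS: pick the Origin minimizing the full Ingest-Origin-Edge path.
optimized_pick = min(d_io, key=lambda o: d_io[o] + d_oe[o])

# Default picks O_west (0.4 to the Edge, but 2.9 end-to-end);
# optimized picks O_east (2.3 end-to-end).
```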
CDN Transit Cost: For both scenarios, we count the lengths of Ingest-Origin paths,
Origin-Edge paths, and aggregated CDN delivery path. As shown in Figure 8, the blue
bars show the default scenario, while the red bars show the optimized scenario. The ag-
gregated CDN delivery distance is the sum of all Ingest-Origin and Origin-Edge paths.
Specifically, first, the aggregated Ingest-Origin distances are 10.48 Kmiles and 1.81
Kmiles for the two scenarios, respectively.

Fig. 8. CDN Transit Cost (Origin Transit Cost, Edge Transit Cost and Aggregate
Transit Cost, in KMiles, for Default GeoDNS vs. Optimized GeoDNS)

The reason for the reduced distance in the optimized-
DNS scenario is that some Origin POPs are not serving any Edge Servers, thus they
are free from stream delivery. Second, the aggregated Origin-Edge distances are 2.92
Kmiles and 5.40 Kmiles, respectively. Since Edge Servers in optimized-scenarios may
connect to far-away Origin POPs, the Origin-Edge paths are inflated. Third, the aggre-
gated path distances, which correspond to the CDN transit cost, are 13.41 Kmiles and
7.17 Kmiles, respectively. We see that for the particular setup, optimized-DNS results
in much smaller CDN transit cost (more than 46% reduction).
These results suggest that a CDN can gain significant transit-cost savings by opti-
mizing GeoDNS inside the CDN. Note that in this simple scenario, we assume each POP
has only a single Edge Server; with more servers, the saved transit cost can be even more
significant, as each server incurs a separate delivery path. Note also that with shorter
delivery paths, end viewers see shorter playback delays and better viewing experiences.
Viewers’ Experience: Since each live stream has to be delivered from the stream source
all the way to the end users in real time, the end users’ performance is heavily
impacted by the length of the entire delivery path. In other words, though other factors
also impact live-stream users’ performance, it is generally acceptable to claim
that the longer the path an end user relies on, the worse the performance the user experiences.
For both scenarios, since the distances from the streaming source to the Ingest Server
and from the Edge Server to the end viewer are the same, we focus only on the CDN
delivery distance from the Ingest Server to the Edge Server for each viewer, shown
in Figure 9. We observe that all paths in the optimized-DNS scenario are shorter, which
is not surprising since Edge Servers choose the Origin Servers that result in the shortest
paths. The aggregated distances are 13.41 Kmiles and 10.35 Kmiles, respectively, a
23% decrease. These results suggest that end viewers’ experience can also be improved
with optimized GeoDNS.
Money Talks: Taking a more direct business perspective, we now compare the
transit cost with respect to pricing of bandwidth usage. Depending on subscription
packages, CDN providers may pay ISPs for the bandwidth used.

Fig. 9. Stream Delivery Paths (per-viewer CDN delivery distance from the Ingest
Server to the Edge Server, in KMiles, for the Default and Optimized scenarios)

We assume each
live stream is 3 Mbps. In the above scenario, where only 4 Origin Servers are deliver-
ing the same live stream, the saved bandwidth-distance per day would be 0.20 TB*Kmiles
(or 73.74 TB*Kmiles per year). With 2000 streams, the daily saving would be 400
TB*Kmiles (or 147.47 PB*Kmiles per year). Simply assuming a transit cost of
$0.10 per GB*Kmile, optimized DNS can help save $14.74 million per year with 2000
streams, or about $7K per stream, as shown in Table 1.
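The savings arithmetic can be checked directly. The sketch below assumes decimal units (1 GB = 10^9 bytes) and reproduces the Table 1 figures up to small rounding differences:

```python
# Reproduce the Table 1 arithmetic (decimal units assumed: 1 GB = 1e9 bytes).
STREAM_MBPS = 3.0
SAVED_KMILES = 13.41 - 7.17           # per-stream delivery-distance saving

gb_per_day = STREAM_MBPS * 1e6 / 8 * 86400 / 1e9    # ~32.4 GB per stream-day
daily_tb_kmiles = gb_per_day * SAVED_KMILES / 1000  # ~0.20 TB*Kmiles
yearly_tb_kmiles = daily_tb_kmiles * 365            # ~73.8 TB*Kmiles

# With 2000 streams and $0.10 per GB*Kmile of transit:
yearly_cost = yearly_tb_kmiles * 2000 * 1000 * 0.10  # ~ $14.76M
```

The computed values land within a fraction of a percent of the paper's 73.74 TB*Kmiles and $14.74M figures, the residue being intermediate rounding.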
6 Related Work
Content Delivery Networks (CDNs) have been deployed by various providers to ex-
pedite web access [1]. The techniques used by CDNs for delivering conventional
web contents are explained in related writings [4]. Despite the popularity and perva-
siveness of CDNs, many research problems and system-building challenges persist in
optimizing CDN infrastructure and addressing the challenges associated with supporting
various types of applications. For instance, as we demonstrate in this work, how to
efficiently support live streaming with CDNs deserves more study.
Though various network architectures have been proposed and utilized for different
deployment scenarios with the rapid development of Internet technology [5, 6], CDN
providers typically apply a layered structure (e.g., ingest layer, origin layer and edge
layer), which traces back to the early days of the CDN industry and the caching property
of delivering static web contents. Realizing that CDNs and P2P can complement each other,
many research efforts also attempt to integrate these two realms into one [7].

Table 1. Bandwidth/Cost Saving of Sticky-DNS
Number of streams  Daily BW Saving  Yearly BW Saving   Yearly Cost Saving
1                  0.20 TB*Kmiles   73.74 TB*Kmiles    $7K
2000               400 TB*Kmiles    147.47 PB*Kmiles   $14.74M

Unlike
these works that combine the merits of both CDNs and P2P, our work takes an inside
perspective and attempts to refine CDN delivery infrastructure.
Live streaming has been studied from various perspectives [8–10]. The unique prop-
erties of live streaming, coupled with CDN-assisted delivery, justify a specialized de-
sign of CDN infrastructure that specifically serves live streaming and saves CDN transit
cost. One work [11] proposes to combine the benefits of CDN and P2P to enhance live
streaming. Another work [12] proposes to build a flat-layer infrastructure (as opposed
to the traditional 3-layer infrastructure) for supporting live streaming with CDNs. To
the best of our knowledge, this work is the first to consider and address the problem of
saving transit cost for delivering live streaming with traditional layered-infrastructure CDNs.
7 Conclusion
In this work, we optimize CDN streaming-server selection to better support live stream-
ing with CDNs. The proposed design dynamically selects streaming servers based on the
popularity of the live stream. By significantly improving both the CDN transit-cost saving
and end viewers’ experience, the design can be deployed with a specialized GeoDNS.
References
1. “Akamai technologies,” http://www.akamai.com/.
2. “Level 3 communications, llc.” http://www.level3.com/.
3. “Bind geodns,” http://www.caraytech.com/geodns/.
4. D. C. Verma, S. Calo, and K. Amiri, “Policy based management of content distribution net-
works,” IEEE Network Magazine, vol. 16, pp. 34–39, 2002.
5. I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan, “Chord: A scalable
peer-to-peer lookup service for internet applications,” in SIGCOMM: Proceedings of the
2001 conference on Applications, technologies, architectures, and protocols for computer
communications, San Diego, CA, USA, 2001.
6. S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Schenker, “A scalable content-
addressable network,” in SIGCOMM: Proceedings of the conference on Applications, tech-
nologies, architectures, and protocols for computer communications, San Diego, CA, USA, 2001.
7. H. Yin, X. Liu, T. Zhan, V. Sekar, F. Qiu, C. Lin, H. Zhang, and B. Li, “Livesky: Enhancing
cdn with p2p,” ACM Trans. Multimedia Comput. Commun. Appl., August 2010.
8. K. Sripanidkulchai, B. Maggs, and H. Zhang, “An analysis of live streaming workloads on the
internet,” in Proceedings of the 4th ACM SIGCOMM conference on Internet measurement,
ser. IMC ’04, 2004.
9. J. He, A. Chaintreau, and C. Diot, “A performance evaluation of scalable live video streaming
with nano data centers,” Comput. Netw., vol. 53, pp. 153–167, February 2009.
10. C. Vicari, C. Petrioli, and F. L. Presti, “Dynamic replica placement and traffic redirection in
content delivery networks,” SIGMETRICS Perform. Eval. Rev., vol. 35, December 2007.
11. Z. H. Lu, X. H. Gao, S. J. Huang, and Y. Huang, “Scalable and reliable live streaming service
through coordinating cdn and p2p,” in Proceedings of the 2011 IEEE 17th International
Conference on Parallel and Distributed Systems, ser. ICPADS, Washington, DC, USA, 2011.
12. Z. Zhuang and C. Guo, “Optimizing cdn infrastructure for live streaming with constrained
server chaining,” in Proceedings of the 2011 IEEE Ninth International Symposium on Paral-
lel and Distributed Processing with Applications, ser. ISPA, Washington, DC, USA, 2011.