This document analyzes delay in unicast video streaming over IEEE 802.11 WLAN networks. It describes an experiment on a testbed that used a Darwin Streaming Server and a WLAN probe to capture packets. The analysis found that video bitrate variations, the packetization scheme, bandwidth load, and the frame-based nature of video all affected mean delay. Bursts of packets generated by video frames caused the per-packet delay to increase in a sawtooth pattern, and increasing uplink load was also found to affect delay variation.
Optimal Streaming Protocol for VoD Using Clients' Residual Bandwidth (IDES Editor)
A true VoD system is in tremendous demand in the market, yet existing VoD systems do not meet the needs and demands of the market. The major problem in a VoD system is that serving clients with the expected QoS is difficult. In this paper, we propose a protocol and algorithm that chain the proxy servers and subscribed clients. Our objective is to send one server stream and have it serve N asynchronous clients. Server bandwidth is scarce, while client uplink bandwidth is underutilized. The protocol uses the clients' residual bandwidth so that the load on the server bandwidth is reduced. We show optimal utilization of buffer and bandwidth for the entire VoD system, as well as a lower client rejection ratio.
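The chaining idea in this abstract can be sketched in a few lines. The sketch below is an illustrative interpretation, not the authors' actual protocol: each new client is fed by an earlier client whose residual uplink can still carry one stream, and falls back to the server only when no such client exists. All rates, names, and the assignment rule are assumptions.

```python
# Hypothetical sketch of the chaining idea: one server stream is relayed
# through clients whose residual uplink bandwidth can carry it, so the
# server only serves clients that no earlier client can feed.
# All names and numbers here are illustrative assumptions, not the
# authors' actual protocol.

STREAM_RATE = 2.0  # Mbps, assumed video bitrate

def assign_sources(residual_uplinks):
    """For each arriving client, pick a chained source if possible.

    residual_uplinks: each client's spare uplink capacity (Mbps), in
    arrival order. Returns (sources, server_streams) where sources[i]
    is 'server' or the index of the feeding client.
    """
    spare = []           # remaining uplink of already-admitted clients
    sources = []
    server_streams = 0
    for up in residual_uplinks:
        # find any earlier client that can still relay one stream
        feeder = next((j for j, s in enumerate(spare) if s >= STREAM_RATE), None)
        if feeder is None:
            sources.append('server')
            server_streams += 1
        else:
            sources.append(feeder)
            spare[feeder] -= STREAM_RATE
        spare.append(up)
    return sources, server_streams

sources, server_streams = assign_sources([4.0, 0.5, 2.0, 0.0, 3.0])
print(sources, server_streams)  # ['server', 0, 0, 2, 'server'] 2
```

With five clients, the server sends only two streams; the other three are fed from earlier clients' spare uplink, which is the bandwidth saving the abstract claims.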
Multicasting of Adaptively-Encoded MPEG4 over QoS-Cognizant IP Networks (Editor IJMTER)
We propose a novel architecture for multicasting adaptively-encoded layered MPEG4 over a QoS-aware IP network. We require a QoS-aware IP network in this case to (1) support priority dropping of packets in times of congestion and (2) provide congestion notification to the multicast sender. For the first requirement, we use RED's extension for service differentiation: it recognizes the priority of packets when they need to be dropped and drops lower-priority packets first. We couple RED with our proposal for the second requirement, the adoption of Backward Explicit Congestion Notification (BECN) for use with IP multicast. BECN provides early congestion notification at the IP layer to the video sender, detecting upcoming congestion from the size of the RED queue in the routers. The MPEG4 adaptive encoder can change the sending rate and can divide the video packets into low-priority and high-priority packets. Based on BECN messages from the routers, a simple flow controller at the sender sets the rate for the adaptive MPEG4 encoder and the ratio between high-priority and low-priority packets within the video stream. We use a TES model, based on real video traces, for generating the MPEG4 traffic. Simulation results show that combining priority dropping, MPEG4 adaptive encoding, and multicast BECN (1) improves bandwidth utilization, (2) reduces the time to react to congestion and hence improves the received video quality, and (3) maintains graceful degradation in quality with congestion, providing a minimum quality even if congestion persists.
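The sender-side flow controller described above can be sketched in a few lines. This is a minimal illustrative model, not the paper's controller: the back-off and probing constants and the priority-ratio adjustment are all assumptions.

```python
# A minimal sketch (not the paper's actual controller) of a sender-side
# flow controller that reacts to BECN congestion notifications: it cuts
# the encoder target rate multiplicatively when a BECN message arrives
# and raises it additively otherwise, while shifting the high/low
# priority packet ratio toward high priority under congestion.
# All constants are illustrative assumptions.

class BecnFlowController:
    def __init__(self, rate_kbps=1000, min_rate=100, max_rate=2000):
        self.rate = rate_kbps
        self.min_rate = min_rate
        self.max_rate = max_rate
        self.hp_ratio = 0.5   # share of packets marked high priority

    def on_interval(self, becn_received):
        if becn_received:
            # back off quickly and protect the base layer
            self.rate = max(self.min_rate, self.rate * 0.75)
            self.hp_ratio = min(1.0, self.hp_ratio + 0.1)
        else:
            # probe for bandwidth slowly
            self.rate = min(self.max_rate, self.rate + 50)
            self.hp_ratio = max(0.3, self.hp_ratio - 0.05)
        return self.rate, self.hp_ratio

ctrl = BecnFlowController()
print(ctrl.on_interval(becn_received=True))   # rate drops to 750.0
print(ctrl.on_interval(becn_received=False))  # rate recovers to 800.0
```

The multiplicative-decrease/additive-increase shape mirrors how TCP-friendly controllers typically react to explicit congestion signals; the encoder would read `rate` and `hp_ratio` at the start of each interval.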
A review of multimedia and how multimedia data is shared between clients and servers.
Find me on:
AFCIT
http://www.afcit.xyz
YouTube
https://www.youtube.com/channel/UCuewOYbBXH5gwhfOrQOZOdw
Google Plus
https://plus.google.com/u/0/+AhmedGadIT
SlideShare
https://www.slideshare.net/AhmedGadFCIT
LinkedIn
https://www.linkedin.com/in/ahmedfgad/
ResearchGate
https://www.researchgate.net/profile/Ahmed_Gad13
Academia
https://www.academia.edu/
Google Scholar
https://scholar.google.com.eg/citations?user=r07tjocAAAAJ&hl=en
Mendeley
https://www.mendeley.com/profiles/ahmed-gad12/
ORCID
https://orcid.org/0000-0003-1978-8574
Stack Overflow
http://stackoverflow.com/users/5426539/ahmed-gad
Twitter
https://twitter.com/ahmedfgad
Facebook
https://www.facebook.com/ahmed.f.gadd
Pinterest
https://www.pinterest.com/ahmedfgad/
Motion Vector Recovery for Real-time H.264 Video Streams (IDES Editor)
Among the various network protocols that can be used to stream video data, RTP over UDP is the best suited to real-time streaming of H.264-based video. Videos transmitted over a communication channel are highly prone to errors, which can become critical when UDP is used; in such cases real-time error concealment becomes important. A subclass of error concealment is motion vector recovery, which conceals errors at the decoder side. Lagrange interpolation is the fastest and most popular technique for motion vector recovery. This paper proposes a new system architecture that enables RTP/UDP-based real-time video streaming as well as Lagrange-interpolation-based real-time motion vector recovery in H.264-coded video streams. A completely open-source H.264 video codec, FFmpeg, is chosen to implement the proposed system. The implementation was tested against different standard benchmark video sequences, and the quality of the recovered videos was measured at the decoder side using various quality metrics. Experimental results show that real-time motion vector recovery does not introduce any noticeable difference or latency during display of the recovered video.
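The motion vector recovery step lends itself to a short sketch. The code below implements generic Lagrange interpolation and applies it per motion-vector component; the block layout (blocks indexed by row in one column) and the sample values are invented for the example and are not taken from the paper.

```python
# Illustrative sketch of Lagrange-interpolation-based motion vector
# recovery: the motion vector of a lost block is estimated by evaluating
# the Lagrange polynomial through the motion vectors of correctly
# received neighbouring blocks (here, blocks in the same column, indexed
# by row). Layout and values are assumptions for the example.

def lagrange_eval(points, x):
    """Evaluate the Lagrange polynomial through (xi, yi) points at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def recover_mv(neighbours, lost_pos):
    """neighbours: list of (pos, (mvx, mvy)); returns estimated (mvx, mvy)."""
    xs = [(p, mv[0]) for p, mv in neighbours]
    ys = [(p, mv[1]) for p, mv in neighbours]
    return lagrange_eval(xs, lost_pos), lagrange_eval(ys, lost_pos)

# Blocks at rows 0, 1 and 3 received; the block at row 2 is lost.
neighbours = [(0, (2.0, 1.0)), (1, (4.0, 2.0)), (3, (8.0, 4.0))]
print(recover_mv(neighbours, 2))  # linear data -> (6.0, 3.0)
```

Because each component is a separate one-dimensional interpolation, the cost per lost block is only a handful of multiplications, which is why the technique suits real-time decoding.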
Webcast: Reduce latency, improve analytics and maximize asset utilization in ... (Emulex Corporation)
Join Emulex and Myricom experts to learn how to maximize performance in HFT, network security, network analytics and video content delivery environments with Emulex Network Xceleration (NX) solutions.
This webcast will discuss ways of reducing latency, increasing asset utilization and improving network analytics in high performance networks.
Share data on your network
You can now share data during your HD video calls from any LifeSize® Passport™ or LGExecutive, powered by LifeSize®, without needing to interrupt the meeting to swap DVI cables. Switch easily from one presenter to another with this free, easy-to-use download for Mac and PC. Whether you are presenting from the same location or joining remotely, send and receive shared content in high resolution to improve the productivity and collaboration of video conferences.
[Nov. 2010] Adaptive Video Streaming over Wireless LAN with ns-2 (Hayoung Yoon)
I was invited by KICS (The Korean Institute of Communications and Information Sciences) to give a lecture about ns-2 simulation of adaptive multimedia delivery.
This Technical Note describes the message formats used in PathMATE Multi-Process deployments when communicating between any two process instances.
Section 2 provides an overview of the different message protocol layers involved during transmission, defining basic terminology and concepts.
Section 3 describes in detail the PathMATE Application Messaging Protocol and all supported message formats, as defined for the CPP Transformation Maps in 8.2.0 software releases.
Appendix A lists sources for referenced information on the Ethernet and TCP/IP protocols.
The security of multimedia transmission is very important in today's life. Transferring huge multimedia data is a challenge, mostly because of big file sizes and limited bit rates, which are still at a premium; hence the data to be sent has to be compressed. H.264/AVC is the best practical solution available today to achieve this. It is even better if the data is secured by good encryption after compression, without changing the size of the compressed file, and this paper attempts to achieve exactly that. If the data is encrypted after compression, the software used for compression need not be changed; this technique applies selective encryption without altering the compression software. The data belonging to each I-frame is identified and selected for encryption. This selected data is then sliced, and each slice's data, excluding the slice header, is encrypted using the AES algorithm. After decoding the file, the frame appears distorted compared to the original. When the video is decrypted with the proper key, the video obtained is the same as the one decoded without encryption.
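The selection step of this scheme, finding the I-frame slice data to encrypt, can be sketched against an H.264 Annex-B byte stream: NAL units are delimited by start codes, the NAL unit type is the low five bits of the first header byte, and type 5 is an IDR (I-frame) coded slice. The toy bitstream below is fabricated, and the AES step itself is deliberately omitted.

```python
# Sketch of the selection step in the selective-encryption scheme: scan
# an H.264 Annex-B byte stream for NAL units and pick out IDR (I-frame)
# slices (nal_unit_type == 5), whose payloads, minus the slice header,
# would then be encrypted with AES. The bitstream below is a fabricated
# toy example; the AES step itself is omitted.

def split_nal_units(data):
    """Yield (nal_unit_type, unit_bytes) for each Annex-B NAL unit."""
    # find all 3-byte start codes (a 4-byte code ends in the same 3 bytes)
    starts = []
    i = 0
    while True:
        i = data.find(b'\x00\x00\x01', i)
        if i == -1:
            break
        starts.append(i + 3)          # first byte after the start code
        i += 3
    for k, s in enumerate(starts):
        end = starts[k + 1] - 3 if k + 1 < len(starts) else len(data)
        unit = data[s:end]
        if unit:
            yield unit[0] & 0x1F, unit   # low 5 bits = nal_unit_type

stream = (b'\x00\x00\x00\x01\x67AA'      # type 7: SPS
          b'\x00\x00\x01\x68BB'          # type 8: PPS
          b'\x00\x00\x01\x65IDRDATA'     # type 5: IDR slice -> encrypt
          b'\x00\x00\x01\x41PDATA')      # type 1: non-IDR slice
idr_payloads = [u for t, u in split_nal_units(stream) if t == 5]
print(len(idr_payloads))  # 1
```

A real implementation would additionally undo emulation-prevention bytes and parse the slice header length before handing the remainder of each selected unit to the cipher.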
LLL-CAdViSE: Live Low-Latency Cloud-based Adaptive Video Streaming Evaluation... (Alpen-Adria-Universität)
Live media streaming is a challenging task by itself, and in use cases where low latency is a must, the complexity rises several times over. In a typical media streaming session, the main goal is to provide the highest possible Quality of Experience (QoE), which has proved to be measurable using quality models and various metrics. In a low-latency media streaming session, the requirement is to provide the lowest possible delay between the moment a video frame is captured and the moment it is rendered on the client screen, known as end-to-end (E2E) latency, while maintaining the QoE. As its primary contribution, this paper proposes a sophisticated cloud-based, open-source testbed that facilitates evaluating a low-latency live streaming session. The Live Low-Latency Cloud-based Adaptive Video Streaming Evaluation (LLL-CAdViSE) framework can assess live streaming systems running on the two major HTTP Adaptive Streaming (HAS) formats, Dynamic Adaptive Streaming over HTTP (MPEG-DASH) and HTTP Live Streaming (HLS). We use Chunked Transfer Encoding (CTE) to deliver Common Media Application Format (CMAF) chunks to the media players. Our testbed generates the test content (audiovisual streams); therefore, no test sequence is required, and the encoding parameters (e.g., encoder, bitrate, resolution, latency) are defined separately for each experiment. We have integrated the ITU-T P.1203 quality model into our testbed. To demonstrate the flexibility and power of LLL-CAdViSE, we present a secondary contribution: a set of experiments with different network traces, media players, and ABR algorithms, under various requirements (e.g., typical/reduced/low/ultra-low E2E latency, diverse bitrate ladders, and catch-up logic), together with the essential findings and experimental results.
The latest video compression standard, H.264 (also known as MPEG-4 Part 10/AVC, for Advanced Video Coding), is expected to become the video standard of choice in the coming years.
H.264 is an open, licensed standard that supports the most efficient video compression techniques available today. Without compromising image quality, an H.264 encoder can reduce the size of a digital video file by more than 80% compared with the Motion JPEG format, and by as much as 50% compared with the MPEG-4 Part 2 standard. This means that much less network bandwidth and storage space are required for a video file; seen another way, much higher video quality can be achieved for a given bit rate.
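A quick back-of-the-envelope calculation shows what those percentages mean for storage. The 4 Mbit/s Motion JPEG rate below is an assumed example figure, not taken from the text.

```python
# A quick check of the savings quoted above: if an H.264 encoding is
# >80% smaller than Motion JPEG and ~50% smaller than MPEG-4 Part 2 at
# the same quality, the storage needed for a day of continuous video
# shrinks accordingly. The 4 Mbit/s Motion JPEG rate is an assumed
# example figure.

MJPEG_RATE_MBPS = 4.0                      # assumed Motion JPEG bitrate
H264_RATE_MBPS = MJPEG_RATE_MBPS * 0.20    # >80% reduction vs MJPEG
MPEG4P2_RATE_MBPS = H264_RATE_MBPS * 2.0   # H.264 ~50% of MPEG-4 Part 2

def gigabytes_per_day(rate_mbps):
    return rate_mbps * 86400 / 8 / 1000    # Mbit/s -> GB/day (decimal GB)

for name, rate in [("Motion JPEG", MJPEG_RATE_MBPS),
                   ("MPEG-4 Part 2", MPEG4P2_RATE_MBPS),
                   ("H.264", H264_RATE_MBPS)]:
    print(f"{name:13s} {gigabytes_per_day(rate):6.2f} GB/day")
```

At these assumed rates a day of Motion JPEG needs about 43 GB, MPEG-4 Part 2 about 17 GB, and H.264 under 9 GB, which is the bandwidth/storage argument the paragraph makes.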
1. Delay Analysis of Unicast Video Streaming over IEEE
802.11 WLAN Networks
21st June 2005
Nicola Cranley
Communications Network Research Institute
School of Electronic and Communications Engineering
Dublin Institute of Technology,
Dublin, Ireland
2. Outline
Multimedia Streaming
MPEG-4
Hint Tracks
Video Analysis
Experimental test-bed
Test set-up
WLAN Probe
Resource Usage
Delay Analysis
Conclusions
Future Work
3. Variables in Multimedia
Streaming
Content and Complexity of the content
Affects the efficiency of the encoder to compress the stream, for example animation clips.
Compression scheme being used
Differing levels of efficiency and target applications, e.g. MPEG-2, MPEG-4, H.264.
Encoding configuration
Frame rate,
I-frame rate,
Quantization parameter,
Target bit rate (if any) supplied and
Target stream type i.e. VBR, CBR or near CBR.
Packetisation scheme
If the file to be streamed is .MP4 or .3gp, then a hint track must be prepared that indicates to the
server how the content should be streamed.
The streaming server being used
Rate control adaptation algorithm being used, and the methods of bit rate adaptation used by the
server.
4. MPEG-4
In the MPEG-4 standard, there are a number of profiles.
Profiles determine the capabilities of the player to play out encoded
content.
Codec only needs to implement a subset of the MPEG-4 standard whilst
maintaining inter-working with other MPEG-4 devices built to the same
profiles.
The two main profiles, Simple Profile (SP) and Advanced Simple Profile
(ASP), are part of the non-scalable subset of visual profiles.
MP4 files contain a number of tracks (media tracks and hint tracks).
A track (stored in a 'trak' atom) represents a single independent data
stream, and an MP4 file may contain any number of video, audio, hint,
Binary Format for Scenes (BIFS) or Object Descriptor (OD) tracks.
Hint tracks are required to stream MP4 and .3gp files.
5. Hint Tracks
Each track in a media file is sent as a separate stream.
Each sample in a hint track tells the server how to optimally
packetise a specific amount of media data.
Reduces processing on the server.
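What a hint sample encodes can be sketched as a precomputed packetisation table: for each media sample, a list of packet descriptors no larger than the MTU, so the server packetises by table lookup instead of parsing the media at stream time. Field names and sizes below are illustrative assumptions.

```python
# Minimal sketch of what a hint sample encodes: for each media sample
# (e.g. a video frame), a precomputed list of packets, each fitting the
# MTU, so the server can packetise by table lookup instead of parsing
# the media at stream time. Field names are illustrative.

MTU = 1024          # bytes available per packet, assumed
RTP_HEADER = 12     # fixed RTP header size (RFC 3550)

def hint_for_sample(sample_offset, sample_size, mtu=MTU):
    """Return packet descriptors (offset, length) covering one sample."""
    max_payload = mtu - RTP_HEADER
    packets = []
    off = sample_offset
    remaining = sample_size
    while remaining > 0:
        length = min(max_payload, remaining)
        packets.append({"offset": off, "length": length})
        off += length
        remaining -= length
    return packets

# a 2500-byte frame becomes three packets: 1012 + 1012 + 476 bytes
print(hint_for_sample(0, 2500))
```

At stream time the server only walks this table and copies the referenced byte ranges into RTP payloads, which is the "reduced processing" benefit on this slide.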
7. Test-bed
Darwin Streaming Server (DSS)
Compliant with MPEG-4 standard profiles, ISMA
streaming standards and all IETF protocols.
RTP/UDP/IP stack with RTCP/UDP/IP and RTSP.
Playout Delay
WinDump
Promiscuous capture of all RTP/UDP/IP packets
at both client and server.
NetTime
Clock sync
Skew removal using Paxson's algorithm.
MGEN
8. Playout Delay
Need to isolate the streaming application from the adaptation algorithm.
Use a large pre-buffering delay.
Ensure no adaptation.
From the delay measurements and playout delay constraints, we can find the packet loss rates.
Statistical analysis
Quality of Delivery (QoD)
3gpp
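The loss-rate computation mentioned on this slide can be sketched directly: a packet whose one-way delay exceeds the playout delay misses its deadline and counts as lost, and sweeping the deadline gives the loss-rate versus playout-delay trade-off. The delay values below are made-up examples, not measurements from the talk.

```python
# Sketch of deriving packet loss rates from delay measurements under a
# playout delay constraint: any packet whose one-way delay exceeds the
# playout deadline arrives too late to be decoded and counts as lost.
# The delay values are made-up examples.

def late_loss_rate(delays_ms, playout_delay_ms):
    """Fraction of packets arriving after the playout deadline."""
    late = sum(1 for d in delays_ms if d > playout_delay_ms)
    return late / len(delays_ms)

delays = [5.1, 7.9, 12.4, 6.3, 21.0, 9.8, 14.2, 8.0, 30.5, 7.1]
for deadline in (10, 15, 25):
    print(deadline, late_loss_rate(delays, deadline))
```

Running the sweep shows the trade-off the slide alludes to: a 10 ms deadline loses 40% of these sample packets, 15 ms loses 20%, and 25 ms loses 10%.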
14. Summary Delay Data
Mean delay and burst details per clip (Delay Slope; Mean Delay, msec; Mean Max Burst Delay, msec; Pkts/Burst):

       ------------ MTU 1024 ------------    ------------ MTU 512 -------------
Clip   Delay   Mean     Mean Max    Pkts/    Delay   Mean     Mean Max    Pkts/
       Slope   Delay    Burst Dly   Burst    Slope   Delay    Burst Dly   Burst
JR1    1.27    7.69     13.42       10.0     0.96    11.82    19.77       17.7
JR4    1.27    8.08     13.68       10.0     0.96    12.33    20.06       17.9
JR5    1.27    7.66     13.45        9.8     0.96    11.44    19.44       17.3
JR6    1.27    7.38     13.15        9.6     0.96    11.26    19.75       18.4
JR7    1.26    6.79      9.69        6.8     0.96    11.24    19.92       18.6
15. Delay Variations with
Background Uplink Load
Preliminary results of mean delay variations
with increasing uplink load and pkts/sec.
16. Conclusions
Relationship between video bit rate, packetisation scheme, bandwidth load and mean delay.
The frame-based nature of video results in packet bursts.
These bursts cause the per-packet delay to increase in a sawtooth manner.
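The sawtooth behaviour can be reproduced with a toy model in a few lines: the packets of one video frame reach the sender back-to-back, so each queued packet waits roughly one more transmission time than the one before it, and the per-packet delay ramps up within a burst and resets at the next frame. The slope and burst size below are illustrative, not the measured values from the talk.

```python
# Toy model of the sawtooth per-packet delay described above: packets of
# one video frame queue back-to-back, so delay grows linearly within a
# burst and resets at the next frame boundary. Slope and burst size are
# illustrative assumptions, not the measured values.

def burst_delays(num_frames, pkts_per_burst, slope_ms, base_ms=1.0):
    """Per-packet delay for back-to-back bursts of frame packets."""
    delays = []
    for _ in range(num_frames):
        for k in range(pkts_per_burst):
            delays.append(base_ms + k * slope_ms)   # ramp within the burst
    return delays

d = burst_delays(num_frames=2, pkts_per_burst=4, slope_ms=1.27)
print(d)  # delay ramps within each burst, resetting at the frame boundary
```

Plotting `d` against the packet index gives exactly the sawtooth shape the measurements showed, with the ramp steepness set by the per-packet slope and the teeth width by packets per burst.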
17. Future Work (1)
Analysis of the effects of contention and load on delay.
Finish analysis of delay variations with increasing uplink load, with
varying packet rates and numbers of STAs creating the load.
Interleaving traffic on downlink.
23. 3gp Content
Developed for the creation, delivery and playback of multimedia over wireless
networks on a variety of devices.
3gp is based on the ISO base media file format, upon which MPEG-4 is also based.
Wrapper or container file supporting:
MPEG-4, H.263, H.263+
Advanced Audio Coding (AAC) and Adaptive Multi-Rate (AMR)
Timed text tracks.
Media consists of a hierarchy of atoms containing meta-data and media data.
(3gp has new user data atoms defined by DoCoMo – Copyright, Author,
Title and Description)
Tracks consist of a single independent media data stream.
Each media stream must have its own Hint Track. Hint tracks support streaming
by the server and indicate how the server should packetize the data e.g. MTU,
sample durations
24. 3gp Profiles
3GP files may conform to one or more profiles, but this is not mandatory.
Basic profile: The 3GP Basic profile is used in MMS and PSS. It guarantees
inter-working with MMS and allows the 3GPP file format to be used
internally within the MMS service.
Streaming server profile: This profile allows interoperability between content
creation tools and streaming servers, in particular for the selection of alternative
encodings of content and adaptation during streaming.
25. 3gp Structure
Groupings of alternative tracks:
Tracks that are alternatives to each
other can be grouped into an
alternate group. Tracks in an
alternate group that can be used for
switching can be further grouped into
a switch group.
Alternate group: Only one track
within an alternate group should be
streamed or played at any time and
must be distinguishable from other
tracks in the group via attributes such
as bit rate, codec, language, packet
size etc.
Switch group: Tracks that belong to
the same switch group, belong to the
same alternate group.
Hint tracks: All media tracks must
have their own associated RTP hint
track.