Header compression and multiplexing in LISP (Jose Saldana)
When small payloads are transmitted through a packet-switched network, the resulting overhead may be significant. This is especially true for LISP, where a number of headers have to be prepended to every packet.
This presentation proposes sending together, in a single packet, a number of small packets that are waiting in the buffer of an ITR and have the same ETR as destination. Because they share a single LISP header, bandwidth savings are obtained and the overall number of packets sent to the network is reduced.
The low efficiency caused by the large number of small packets present in the network can be alleviated by means of packet aggregation.
There are situations in which multiplexing a number of small packets into a bigger one is desirable. For example, small packets can be sent together between a pair of machines if they share a common network path. The traffic profile is thus shifted from small to larger packets, reducing the network overhead and the number of packets per second to be handled by intermediate routers.
This presentation describes Simplemux, a protocol able to encapsulate a number of packets belonging to different protocols into a single packet. It includes a "Protocol" field in each multiplexing header, thus allowing packets of different protocols (the multiplexed packets) to be carried inside a packet of another protocol (the tunneling protocol).
In order to reduce the overhead, the size of the multiplexing headers is kept very low (it may be a single byte when multiplexing small packets).
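As a rough illustration of the multiplex/demultiplex idea (a sketch under stated assumptions, not the actual Simplemux wire format), each small packet below is prefixed with a one-byte length and a one-byte protocol identifier; real Simplemux can reduce the per-packet header to a single byte for small packets:

```python
import struct

# Illustrative sketch only: NOT the exact Simplemux wire format.
# Assumption: each multiplexed packet is preceded by a 1-byte length
# (enough for payloads under 256 bytes) plus a 1-byte protocol id.

def mux(packets):
    """Concatenate (protocol_id, payload) pairs into one aggregate payload."""
    out = bytearray()
    for proto, payload in packets:
        assert len(payload) < 256, "this sketch supports only small packets"
        out += struct.pack("!BB", len(payload), proto)
        out += payload
    return bytes(out)

def demux(blob):
    """Recover the original (protocol_id, payload) pairs from the aggregate."""
    packets, i = [], 0
    while i < len(blob):
        length, proto = struct.unpack_from("!BB", blob, i)
        i += 2
        packets.append((proto, blob[i:i + length]))
        i += length
    return packets
```

Demultiplexing the aggregate recovers the original packets byte-for-byte, so the only cost in this toy format is two separator bytes per multiplexed packet.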
Transmission Control Protocol (TCP) is a fundamental protocol of the Internet Protocol Suite. TCP complements the Internet Protocol (IP), which is why the suite is commonly referred to as TCP/IP. TCP provides error detection, detection of packet loss and out-of-order delivery, retransmission requests, reordering of data, and congestion control.
Several congestion control algorithms have been developed over the years to improve TCP's performance over various technologies and network conditions.
The purpose of this assignment is to present TCP, network congestion, and congestion control algorithms, and to simulate different algorithms under different network conditions in order to measure their performance. For this assignment, the OPNET IT Guru Academic Edition software was used to reproduce projects that have already been published and yielded the expected results.
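As a toy illustration of one TCP duty mentioned above, in-order delivery despite out-of-order arrival, the sketch below (a teaching example, not a real TCP implementation) reassembles byte segments by sequence number and withholds data until any gap is filled:

```python
# Teaching sketch: segments are (seq, bytes) pairs, with seq counted in
# bytes from 0, as in TCP. Data past a missing segment is buffered and
# not delivered until the gap closes.

def reassemble(segments):
    buf = {seq: data for seq, data in segments}  # duplicate seqs: last copy wins
    out, next_seq = b"", 0
    while next_seq in buf:
        data = buf.pop(next_seq)
        out += data
        next_seq += len(data)                    # advance past delivered bytes
    return out
```

Feeding the segments in the wrong order still yields the original byte stream, while a stream with a missing leading segment delivers nothing yet.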
"Performance Evaluation and Comparison of Westwood+, New Reno and Vegas TCP ..." (losalamos)
Luigi A. Grieco, Saverio Mascolo.
ACM CCR, Vol.34 No.2, April 2004.
This article evaluates and compares three TCP congestion control algorithms. A really interesting read.
Computer networks have experienced an explosive growth over the past few years and with that growth have come severe congestion problems. For example, it is now common to see internet gateways drop 10% of the incoming packets because of local buffer overflows. Our investigation of some of these problems has shown that much of the cause lies in transport protocol implementations (not in the protocols themselves): The ‘obvious’ ways to implement a window-based transport protocol can result in exactly the wrong behavior in response to network congestion. We give examples of ‘wrong’ behavior and describe some simple algorithms that can be used to make right things happen. The algorithms are rooted in the idea of achieving network stability by forcing the transport connection to obey a ‘packet conservation’ principle. We show how the algorithms derive from this principle and what effect they have on traffic over congested networks.
In October of ’86, the Internet had the first of what became a series of ‘congestion collapses’. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. In particular, we wondered if the 4.3BSD (Berkeley UNIX) TCP was misbehaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was “yes”.
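The ‘packet conservation’ idea above evolved into the slow-start and additive-increase/multiplicative-decrease (AIMD) behavior that the congestion control algorithms compared in these works all build on. A minimal sketch of one round-trip update, with illustrative parameters rather than values from any RFC:

```python
# Sketch of slow start plus AIMD. Window sizes are in segments; the
# increase step and decrease factor are illustrative assumptions.

def aimd_step(cwnd, ssthresh, loss):
    """One round-trip update of the congestion window."""
    if loss:
        ssthresh = max(cwnd / 2.0, 2.0)  # multiplicative decrease on loss
        cwnd = ssthresh
    elif cwnd < ssthresh:
        cwnd *= 2.0                      # slow start: exponential growth
    else:
        cwnd += 1.0                      # congestion avoidance: +1 segment per RTT
    return cwnd, ssthresh
```

Iterating this function reproduces the familiar sawtooth: the window grows quickly up to the threshold, probes linearly beyond it, and halves on each loss.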
These slides are used in the presentation at https://vimeo.com/156386656.
In that video, Daan Pareit (iMinds / Ghent University) explains how to calculate Wi-Fi throughput ("your Wi-Fi speed") based on the theory for WLAN medium access. It is a good starting point before doing their online lab, which uses actual Wi-Fi hardware live and remotely and is explained at https://vimeo.com/152678614. The online lab itself is accessible at forge.test.iminds.be/wlan.
More information about the FORGE project, which enabled the lab session mentioned above, is available at ict-forge.eu.
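The core of the kind of throughput calculation the video walks through is that useful throughput equals payload bits divided by the total airtime of one frame exchange, including contention and acknowledgement overhead. The sketch below uses illustrative, roughly 802.11g-like timing constants; all numbers are assumptions, not figures from the slides:

```python
# Back-of-the-envelope Wi-Fi throughput estimate. Every timing constant
# below is an illustrative assumption (roughly 802.11g-like defaults).

def wifi_throughput_mbps(payload_bytes=1500, phy_rate_mbps=54.0,
                         difs_us=28.0, avg_backoff_us=67.5,
                         sifs_us=10.0, preamble_us=20.0, ack_us=24.0):
    """Payload bits divided by the airtime of one DATA + ACK exchange."""
    data_us = preamble_us + (payload_bytes * 8) / phy_rate_mbps
    total_us = (difs_us + avg_backoff_us + data_us
                + sifs_us + preamble_us + ack_us)
    return (payload_bytes * 8) / total_us  # bits per microsecond == Mbit/s
```

The key lesson the calculation makes visible: even with full-size frames the result stays well below the nominal PHY rate, and it drops sharply for small frames, because the fixed per-frame overhead dominates.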
MiPSO: Multi-Period Per-Scene Optimization For HTTP Adaptive Streaming (Alpen-Adria-Universität)
Video delivery over the Internet has become more and more established in recent years due to the widespread use of Dynamic Adaptive Streaming over HTTP (DASH). The current DASH specification defines a hierarchical data model for Media Presentation Descriptions (MPDs) in terms of periods, adaptation sets, representations, and segments. Although multi-period MPDs are widely used in live streaming scenarios, they are not fully utilized in Video-on-Demand (VoD) HTTP adaptive streaming (HAS) scenarios. In this paper, we introduce MiPSO, a framework for Multi-Period per-Scene Optimization, to examine multiple periods in VoD HAS scenarios. MiPSO provides different encoded representations of a video at either (i) maximum possible quality or (ii) minimum possible bitrate, beneficial to both service providers and subscribers. In each period, the proposed framework adjusts the video representations (resolution-bitrate pairs) by taking into account the complexities of the video content, with the aim of achieving streams at either higher qualities or lower bitrates. The experimental evaluation with a test video data set shows that MiPSO reduces the average bitrate of streams with the same visual quality by approximately 10% or increases the visual quality of streams by at least 1 dB in terms of Peak Signal-to-Noise Ratio (PSNR) at the same bitrate compared to conventional approaches.
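As a purely illustrative sketch of the "minimum possible bitrate" goal (the data and the selection rule here are assumptions, not MiPSO's actual optimization), one could pick, for each scene, the cheapest representation whose measured quality meets a target:

```python
# Illustrative per-scene selection sketch. Each scene contributes a list
# of measured (bitrate_kbps, psnr_db) rate-quality points; we take the
# cheapest point meeting the quality target, falling back to the highest
# bitrate when no point qualifies.

def min_bitrate_ladder(scene_points, target_db):
    """scene_points: one list of (bitrate_kbps, psnr_db) pairs per scene."""
    ladder = []
    for points in scene_points:
        ok = [(rate, q) for rate, q in points if q >= target_db]
        ladder.append(min(ok)[0] if ok else max(points)[0])  # best effort fallback
    return ladder
```

Because easy scenes reach the target at low bitrates while complex scenes need more, the resulting per-scene ladder spends bits only where the content demands them, which is the intuition behind per-scene (as opposed to one-size-fits-all) encoding ladders.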
(Slides) P2P video broadcast based on per-peer transcoding and its evaluatio... (Naoki Shibata)
Shibata, N., Yasumoto, K., and Mori, M.: P2P Video Broadcast based on Per-Peer Transcoding and its Evaluation on PlanetLab, Proc. of 19th IASTED Int'l. Conf. on Parallel and Distributed Computing and Systems (PDCS2007), (November 2007).
http://ito-lab.naist.jp/themes/pdffiles/071121.shibata.pdcs2007.pdf
Investigation of Quick Handover Algorithm for Wireless Video Streaming App (Rajvi Jagirdar)
Developed and tested an effective handover algorithm for uninterrupted live mobile video streaming with high quality of service and performance efficiency, based on the 802.11g standard and an ARM9 processor. Designed the system architecture using the JavaScript and C languages, JAX-WS web services, a MySQL database, and the NetBeans integrated development environment. Simulation results were observed in terms of the number of handovers versus RSS (received signal strength). Presented by Rajvi Desai, Sayali Dharmadhikari, Sunayana Goswamy.
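A common building block for limiting the number of handovers as RSS varies is a hysteresis rule; the sketch below is a generic illustration of that idea (an assumption, not the algorithm from these slides):

```python
# Generic handover-decision sketch: switch access points only when the
# candidate's signal beats the current one by a hysteresis margin, which
# suppresses rapid "ping-pong" handovers near the coverage boundary.

def should_handover(current_rss_dbm, candidate_rss_dbm, hysteresis_db=5.0):
    """Return True when the candidate AP is clearly better than the current one."""
    return candidate_rss_dbm > current_rss_dbm + hysteresis_db
```

A larger margin means fewer handovers at the cost of staying longer on a weakening link, which is exactly the handovers-versus-RSS trade-off the simulation results measure.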
Live streaming of video and subtitles with MPEG-DASH (Cyril Concolato)
This presentation was made at the MPEG meeting in Shanghai, China, in October 2012, related to the input contribution M26906. It gives the details of the demonstration made during the meeting, which showed the use of the Google Chrome browser to display synchronized video and subtitles using the Media Source Extensions draft specification and the WebVTT subtitle format. The video and DASH content were prepared using the GPAC MP4Box tool.
Wireless Controller Area Network (WCAN) protocol.
WCAN uses a token-frame method to provide channel access to the nodes in the system. This method allows nodes to share a common broadcast channel by taking turns transmitting upon receiving the token frame, which circulates around the network for a specified amount of time. The token frame allows nodes to access the network one at a time, giving a 'fair' chance to all nodes to transmit instead of competing with one another. The message with the highest priority is transmitted first. The token-frame method provides high throughput in a bounded-latency environment. WCAN was tested in a simulation environment and was found to outperform IEEE 802.11 in a ring network environment in terms of network scalability and high data rate.
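The token-frame access described above can be illustrated with a simple round-robin simulation (a sketch under stated assumptions; the hold time and queueing details are not from the WCAN specification):

```python
from collections import deque

# Illustrative token-passing sketch: a node transmits only while holding
# the token, then passes it to the next node in the ring, so access is
# collision-free and 'fair'.

def token_round(queues, max_frames_per_hold=1):
    """One full token rotation; returns frames sent, in transmission order."""
    sent = []
    for node_id, q in enumerate(queues):      # token visits nodes in ring order
        for _ in range(max_frames_per_hold):  # bounded hold => bounded latency
            if q:
                sent.append((node_id, q.popleft()))
    return sent
```

Because each node may send at most a fixed number of frames per token hold, the worst-case wait for any node is bounded by the ring size times the hold time, which is the source of the bounded-latency property mentioned above.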
[SCRIPT INCLUDED]
To be viewed in Microsoft PowerPoint.
This PPT describes the differences between TCP and UDP, two protocols of the transport layer of the OSI model.
Ideal for a team of 4 people.
Please edit the first page as per number of team members and their names.
The script indicates when to click, what to say, and which person speaks which part. Please edit the team member names as required.
You can find link for script below.
LINK:
https://docs.google.com/document/d/1m2Ef8p9VNQCh4MLKvfb4sWINMdQPi4PEnpcq-KbY600/edit?usp=sharing
04/29/10 (speaker notes): We needed to precisely quantify performance during the buffering period. Synchronizing application-level and network-level measurements is a challenge (even on one host); we came up with a way to parse RTSP packets and application-level data to do so. Points to highlight: 1) buffering (loss); 2) not TCP-friendly; 3) responsive to capacity, NOT fire-hose CBR; 4) depends on the encoding rate vs. the capacity (lots of burden on the content provider).