1) The document proposes a technique called Resilient Jumbo Frames (FRJ) that combines jumbo frames, partial packet recovery, and partial recovery aware rate adaptation to improve throughput in wireless networks.
2) Experimental results show that FRJ achieves 40-68% higher throughput than existing schemes under single flow conditions and 10-64% higher throughput with multiple flows, without compromising fairness.
3) The key contributions are identifying the interplay between jumbo frames, partial packet recovery, and rate adaptation and demonstrating the effectiveness of combining these techniques through testbed experiments.
2. Motivation
• Lossy wireless medium
• Novel techniques have been proposed (jumbo frames, partial packet recovery, rate adaptation) … but each of them alone is insufficient
• Our goal: identify the synergy between these techniques and exploit it
3. State of the Art
• Jumbo Frames
  – Proprietary solutions for frame aggregation [Atheros Super G, TI frame concatenation]
  – 802.11n frame aggregation standard
  – Require specific hardware support
  – Entire packet needs to be retransmitted
• Partial Packet Recovery
  – Require specific hardware support [MRD, SOFT, PPR]
  – Leverage PHY-layer information [SOFT, PPR]; if PHY-layer information is available, FRJ can benefit from it to provide higher gain
• Rate Adaptation
  – SampleRate, ONOE (madwifi), RRAA
  – Adapt rate according to frame loss rate
  – Over-estimates the actual loss rate
A holistic approach is missing!
4. Our Contributions
• Identify interactions between the three techniques
  – Exploit the synergy between the schemes
  – Works for both single- and multi-hop topologies
• Develop resilient jumbo frames
  – Achieve high throughput under both low and high loss conditions
• Develop partial recovery aware rate adaptation
• Develop a prototype implementation
5. Synergy Between the Design Space
[Diagram: pairwise interactions among Jumbo Frames, Partial Recovery, and Rate Adaptation]
• Jumbo Frames ↔ Partial Recovery: loss increases with frame size, but partial recovery reduces the effective data loss rate; more data for constant overhead increases the effectiveness of jumbo frames
• Jumbo Frames ↔ Rate Adaptation: higher tx rates increase the relative MAC overhead; jumbo frames amortize the constant MAC overhead and reduce the relative cost of RTS/CTS, so the benefit increases with increased tx rates
• Partial Recovery ↔ Rate Adaptation: partial recovery reduces the effective data loss rate, enabling higher tx rates; increased tx rates reduce contention losses (fewer collisions), making recovery more effective and partial recovery better
6. Resilient Jumbo Frames
[Diagram: sender S transmits a jumbo frame to receiver R, which returns a 2.5-layer ACK]
• Use jumbo frames
  – High throughput in good conditions
  – In bad conditions … retransmit only the corrupted segments
  – Saves the overhead of retransmitting complete frames
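The per-segment recovery idea above can be sketched in a few lines: the sender attaches an independent CRC to each segment of the jumbo frame, and the receiver validates segments individually so only the corrupted ones need retransmission. This is an illustrative sketch, not the prototype's code; the 100-byte segment size follows the 3000-byte / 30-segment configuration used later in the deck.

```python
import zlib

SEG_SIZE = 100  # 3000-byte jumbo frame split into 30 segments, as in FRJ

def split_with_crcs(payload):
    """Sender side: split a jumbo frame payload into segments, each followed
    by its own CRC32, so the receiver can validate segments independently."""
    segs = [payload[i:i + SEG_SIZE] for i in range(0, len(payload), SEG_SIZE)]
    return [s + zlib.crc32(s).to_bytes(4, "big") for s in segs]

def corrupted_segments(received):
    """Receiver side: return indices of segments whose CRC check fails;
    only these need to be retransmitted."""
    bad = []
    for i, seg in enumerate(received):
        data, crc = seg[:-4], int.from_bytes(seg[-4:], "big")
        if zlib.crc32(data) != crc:
            bad.append(i)
    return bad
```

Corrupting a single segment flags only that segment, while a conventional frame-level CRC would force retransmission of the entire 3000-byte frame.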
7. Resilient Jumbo Frame
• Core components
  – Resilient jumbo frames, which apply partial recovery to jumbo frames
  – Partial recovery ‘aware’ rate adaptation
• Data frames
  – Header: Frame ID (4) | Type (1) | Rate (1) | Bitmap (4) | SS (2) | Length (2) | Header CRC (4)
  – Payload: Segment 1 | CRC (4) | Segment 2 | CRC (4) | … | Segment N | CRC (4)
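The header layout sketched on this slide can be expressed with a fixed-width packing. Field widths follow the byte counts shown on the slide (4, 1, 1, 4, 2, 2, plus a 4-byte header CRC); the exact semantics of the "SS" field are not spelled out in the deck, so the field names here are assumptions for illustration.

```python
import struct
import zlib

# Assumed layout from the slide: Frame ID (4B), Type (1B), Rate (1B),
# Bitmap (4B), SS (2B), Length (2B), followed by a 4B header CRC.
HDR_FMT = "!IBBIHH"

def pack_header(frame_id, ftype, rate, seg_bitmap, ss, length):
    hdr = struct.pack(HDR_FMT, frame_id, ftype, rate, seg_bitmap, ss, length)
    return hdr + struct.pack("!I", zlib.crc32(hdr))

def unpack_header(raw):
    """Return the header fields, or None if the header CRC fails
    (a corrupted header means the whole frame is effectively lost)."""
    hdr, (crc,) = raw[:-4], struct.unpack("!I", raw[-4:])
    if zlib.crc32(hdr) != crc:
        return None
    return struct.unpack(HDR_FMT, hdr)
```

Protecting the header with its own CRC is what lets FRJ distinguish header losses (whole frame lost) from segment losses (partially recoverable), a distinction the rate adaptation later relies on.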
8. Resilient Jumbo Frame (Cont.)
• Receiver feedback
  – Combination of MAC-layer and 2.5-layer ACKs
  – MAC-layer ACKs
    • Adjust the back-off window in IEEE 802.11
    • More reliable and efficient than 2.5-layer ACKs
  – 2.5-layer ACKs
    • Support partial recovery
    • Unicast for improved reliability, and cumulative
  – 2.5-layer ACK frame: Start Frame Seq No | Type | Rate | Frame Bitmap | Frame Offset 1, Segment Bitmap 1 | … | Frame Offset N, Segment Bitmap N | Header CRC
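The per-frame segment bitmaps carried in the 2.5-layer ACK can be modeled as plain integers, which also makes the "cumulative" property a simple bitwise OR. A minimal sketch (names are illustrative, not from the paper):

```python
def segment_bitmap(received, n_segments=30):
    """Receiver side: bit i set <=> segment i arrived intact."""
    bm = 0
    for i in received:
        bm |= 1 << i
    return bm

def merge_cumulative(old_bm, new_bm):
    """ACKs are cumulative: segments confirmed earlier stay confirmed."""
    return old_bm | new_bm

def segments_to_retransmit(bm, n_segments=30):
    """Sender side: segments not yet acknowledged."""
    return [i for i in range(n_segments) if not bm & (1 << i)]
```

With 30 segments per 3000-byte frame, one 4-byte bitmap per frame is enough to describe exactly which segments survived.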
9. Approach
• Retransmission
  – Disable MAC-layer retransmissions (set MAC retry count = 0)
  – Retransmit the frames at the 2.5-layer, triggered by:
    • 2.5-layer ACKs
      – 1st retransmission: frames with higher sequence numbers, or some segments in this frame, are ACKed (first data transmissions are in-order)
      – 2nd or higher: some new segments in this frame are ACKed
    • Retransmission timeout (standard approach, as in TCP)
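The trigger rules above can be condensed into a single predicate. This is a sketch under stated assumptions: the argument names and the representation of ACK state are illustrative, not taken from the prototype.

```python
def should_retransmit(frame_seq, retx_count, acked_frame_seqs,
                      new_segments_acked, timed_out):
    """Sketch of the 2.5-layer retransmission trigger described above.
    acked_frame_seqs: frame sequence numbers seen in 2.5-layer ACKs.
    new_segments_acked: True if the latest ACK confirmed segments of this
    frame that were not confirmed before. (Names are illustrative.)"""
    if timed_out:  # standard retransmission timeout, as in TCP
        return True
    if retx_count == 0:
        # First data transmissions are in-order, so an ACK for a later
        # frame -- or for part of this frame -- implies the gaps are losses.
        return new_segments_acked or any(s > frame_seq for s in acked_frame_seqs)
    # 2nd or higher retransmission: only act on evidence that the previous
    # retransmission reached the receiver (some *new* segments were ACKed).
    return new_segments_acked
```

The asymmetry between the first and later retransmissions avoids spurious retransmits once transmissions are no longer guaranteed to be in order.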
10. Partial Recovery Aware Rate Adaptation
• Traditional schemes identify the optimal rate using the frame loss rate
  – Overestimates the loss rate
  – Lower data transmission rates are selected
• Challenges for the ‘new’ scheme
  – Accurate estimation of channel condition at various data rates
  – Selecting the rate that maximizes throughput under partial recovery
Estimate throughput based on loss statistics!
11. Partial Recovery Aware Rate Adaptation
• Estimating channel condition
  – Sender periodically broadcasts probe packets
  – Sent at three data rates:
    • CurrRate (the current data rate)
    • CurrRate− (one rate below the current data rate)
    • CurrRate+ (one rate above the current data rate)
  – Sent at a frequency of 5 probes/second to limit the overhead
  – Probe frame (per rate): Probe ID | Type | Rate | Header CRC | Payload
12. Partial Recovery Aware Rate Adaptation
• Probe response
  – Sent by the receiver
  – Estimates the channel condition using:
    • Header Loss Rate (HL) – header corruption
    • Segment Loss Rate (SL) – segment corruption
  – Communicates this info in the probe response, transmitted via MAC-layer unicast for high reliability
  – Default probe response [HL = 1, SL = 1] to account for lost probes
  – Probe response frame: Probe Response ID | Type | Rate 1, BER 1, HL 1 | … | Rate N, BER N, HL N | CRC
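One plausible way for the receiver to turn raw probe statistics into HL and SL values is counting, per rate, how many probe headers decoded and how many segments survived among those probes. The slides only name the two metrics, so the counting scheme below is an assumption for illustration.

```python
def estimate_losses(probes_sent, headers_ok, segments_ok, segs_per_probe=30):
    """Estimate Header Loss Rate (HL) and Segment Loss Rate (SL) from probe
    statistics at one data rate. A probe whose header never decodes counts
    as a header loss; SL is measured only over probes with an intact header.
    If no headers decoded, fall back to the default response HL = 1, SL = 1."""
    hl = 1.0 - headers_ok / probes_sent
    decodable = headers_ok * segs_per_probe
    sl = 1.0 - segments_ok / decodable if decodable else 1.0
    return hl, sl
```

For example, 10 probes with 8 intact headers and 216 of the 240 decodable segments intact yields HL = 0.2 and SL = 0.1.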
13. Partial Recovery Aware Rate Adaptation
• Sender selects the rate that gives the best throughput estimate:

  T = Σ_{i=1..MaxRetries+1} P_i × (Backoff + DIFS + useRTS × RTSOverhead + DATA_i + SIFS + ACK)

  where RTSOverhead = RTS + SIFS + CTS + SIFS, and the time for the i-th data transmission is

  DATA_i = preambleTime + (HS + NS_i × segmentSize) / rate

  NS_i, the number of segments in the i-th transmission:
    NS_1 = 30
    NS_i = NS_{i−1} × (HL + (1 − HL) × SL)   for i > 1

  P_i, the probability of sending the i-th transmission:
    P_1 = 1
    P_i = P_{i−1} × (HL + (1 − HL) × (1 − (1 − SL)^NS_{i−1}))   for i > 1

  Throughput = (NS_1 − NS_{MaxRetries+2}) × segmentSize / T
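The recursions for NS_i and P_i are easy to evaluate numerically, which is all the sender needs to rank candidate rates. In this sketch, per_tx_overhead_s lumps Backoff + DIFS + SIFS + ACK (plus RTS/CTS and preamble where applicable) into one constant per transmission; that simplification and the parameter values are assumptions for illustration, not the paper's exact timing accounting.

```python
def expected_throughput(hl, sl, rate_bps, max_retries=4, n_segments=30,
                        segment_size=100, per_tx_overhead_s=300e-6):
    """Evaluate the throughput estimator for one rate, given the probed
    header loss rate (hl) and segment loss rate (sl)."""
    ns = [float(n_segments)]  # NS_1 = 30
    p = [1.0]                 # P_1 = 1
    for _ in range(max_retries + 1):
        # P_i = P_{i-1} * (HL + (1-HL) * (1 - (1-SL)^NS_{i-1}))
        p.append(p[-1] * (hl + (1 - hl) * (1 - (1 - sl) ** ns[-1])))
        # NS_i = NS_{i-1} * (HL + (1-HL) * SL)
        ns.append(ns[-1] * (hl + (1 - hl) * sl))
    t = sum(p[i] * (per_tx_overhead_s + ns[i] * segment_size * 8 / rate_bps)
            for i in range(max_retries + 1))
    delivered_bits = (ns[0] - ns[max_retries + 1]) * segment_size * 8
    return delivered_bits / t  # bits per second

def pick_rate(loss_by_rate):
    """Select the probed rate with the best estimated throughput.
    loss_by_rate maps rate (bps) -> (HL, SL)."""
    return max(loss_by_rate,
               key=lambda r: expected_throughput(*loss_by_rate[r], rate_bps=r))
```

Because the estimator accounts for partially recovered segments rather than whole-frame losses, a lossy but fast rate can still rank above a slower, cleaner one.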
14. Testbed Topology
• 24 machines
• Madwifi driver and CLICK toolkit
• Initial rate = 24 Mbps
• Tx power = 18 dBm
• Metrics: total throughput, per-flow throughput, Jain’s fairness index
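The fairness metric listed above is the standard Jain's index, which can be computed as a small helper:

```python
def jain_index(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    1.0 means all flows get equal throughput; 1/n means one flow gets all."""
    n = len(throughputs)
    total = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return total * total / (n * sq) if sq else 1.0
```

For four flows, equal throughputs give an index of 1.0, while a single flow capturing everything gives 0.25.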
15. Schemes Compared
• SampleRate using 1500-byte frames [SR/1500-bytes]
• SampleRate using 3000-byte frames [SR/3000-bytes]
  – Same as SR/1500, but uses jumbo frames
  – Similar to the Atheros Super G Fast Frame feature
• FRJ using 3000-byte frames, 30 segments
• Each scheme evaluated with and without RTS/CTS
16. Experimental Results: Single Flow
[Plot: CDF of per-link throughput (Mbps)]
• Low end of the CDF — SR/1500: 0.68 Mbps, SR/3000: 0.68 Mbps, FRJ: 1.1 Mbps
• High end of the CDF — SR/1500: 14.17 Mbps, SR/3000: 16.93 Mbps, FRJ: 23.81 Mbps
• Under moderate link conditions, partial recovery is more effective
• FRJ benefit is 40.6%–68.0% under a single flow
17. Experimental Results: Multiple Flows
[Plot: average total throughput (Mbps) vs. number of flows (1, 2, 4, 6, 8) for FRJ, SR/1500-bytes, and SR/3000-bytes, each with and without RTS/CTS; flows chosen randomly]
• More collisions ⇒ increase in header losses
• Schemes w/o RTS/CTS perform well
• FRJ consistently outperforms the other schemes
• FRJ benefit ranges from 10% (1 flow) to 64% (6 flows)
19. Experimental Results: Multiple Flows
[Plot: fairness index vs. number of flows]
• Fairness
  – Difference is within 10%
  – In most cases it is close to 0
• FRJ’s performance gain does not come at the cost of compromising fairness!
20. Conclusion
• Main contributions
  – Identify the interplay between jumbo frames, PPR, and rate adaptation
    • Jumbo frames with partial recovery
    • Partial recovery aware rate adaptation
  – Demonstrate the effectiveness of this solution through testbed experiments
• Future work
  – More effective partial recovery schemes and coding techniques
  – Dynamically configurable RTS/CTS
  – FRJ-aware route selection