TCP Fairness for Uplink and Downlink Flows in WLANs


  1. Table 1: Simulation Parameters

        Parameter                  Value
        Simulator                  ns-2.29
        Simulation Time            15 min
        Packet Interval            0.01 sec
        Background Data Traffic    CBR / TCP
        Packet Size                512 bytes
        Transmission Range         100, 200, 300, 400 Kbytes
        Routing Protocol           DSDV
        MAC Protocol               IEEE 802.11

     Notes: 1. The scenario will be set up exactly like this. 2. An example screen is shown; it uses single-queue management. Depending on the congestion at the AP, the dual queue reduces the traffic (see the dual-queue sketch after this transcript). 3. In a wired network the queue management can be observed, but in a wireless network it cannot.
  2. NETWORK MODULE
     Client-server computing, or client-server networking, is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters (clients). Clients and servers often communicate over a computer network and run on separate hardware. A server machine is a high-performance host that runs one or more server programs and shares its resources with clients. A client does not share any of its resources; clients therefore initiate communication sessions with servers, which await (listen for) incoming requests (see the client-server sketch after this transcript).

     PACKET SCHEDULING
     This packet scheduling policy is simple to implement and yields good performance in the common case where node schedules are known and the information about node availability is accurate. A potential drawback is that a node crash (or other failure event) can lead to a number of wasted RTSs to the failed node. When added across channels, this number may exceed the limit of 7 retransmission attempts allowed for a single channel in IEEE 802.11.
  3. BANDWIDTH SHARING
     An approach where each node requests and grants as much bandwidth as possible at each turn. Additionally, we compare the RENO algorithm for packet scheduling with a First-In-First-Out (FIFO) scheduler in which all the SDUs with the same next hop are enqueued into the same buffer (see the FIFO sketch after this transcript). For this purpose we simulate a network with an increasing number of nodes, from 2 to 10, arranged in a chain topology. Each node has one traffic flow directed to the chain end-point node, carried as a constant-bit-rate stream of 1000-byte packets emulating infinite bandwidth demands. Congestion control has been extensively studied for networks running a single protocol. However, when sources sharing the same network react to different congestion signals, the existing duality model no longer explains the behavior of bandwidth allocation. The existence and uniqueness properties of the equilibrium in the heterogeneous-protocol case are examined.

     BURSTY TRAFFIC
     The end-to-end throughput (or throughput, for short) is defined as the number of bits received by the destination node per second for a given traffic flow, excluding any MAC overhead. As can be seen, the throughput decreases steeply as the number of nodes increases, regardless of the scheme adopted. This is because an increasing fraction of the channel capacity is used to relay packets at intermediate nodes.
  4. For instance, with three nodes the end-to-end throughput is about 2/3 of the available raw bandwidth: 1/3 of the channel is consumed by the traffic flow that is one hop from the destination, and 2/3 is consumed by the other flow, which is two hops long (a short worked calculation of this split follows this transcript).
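
The speaker notes for slide 1 mention that, depending on congestion at the AP, a dual queue reduces the traffic. The transcript does not spell out that mechanism, so the sketch below is only an illustration of the general idea in Python: it assumes the AP keeps separate uplink and downlink FIFO queues and serves them in round-robin fashion. The names (DualQueueAP, enqueue, dequeue) and the per-queue capacity are hypothetical, not taken from the slides.

```python
from collections import deque

class DualQueueAP:
    """Hypothetical access-point buffer that keeps uplink and downlink
    packets in separate FIFO queues and serves them alternately, instead
    of letting every packet share a single queue."""

    def __init__(self, capacity_per_queue=50):
        self.queues = {"uplink": deque(), "downlink": deque()}
        self.capacity = capacity_per_queue
        self._next = "uplink"  # direction to try first on the next dequeue

    def enqueue(self, packet, direction):
        q = self.queues[direction]
        if len(q) >= self.capacity:
            return False       # drop-tail when this direction's queue is full
        q.append(packet)
        return True

    def dequeue(self):
        # Round-robin between the two directions; skip an empty queue.
        for _ in range(2):
            direction = self._next
            self._next = "downlink" if direction == "uplink" else "uplink"
            if self.queues[direction]:
                return self.queues[direction].popleft()
        return None            # both queues are empty

# Usage: uplink and downlink packets are interleaved at the AP.
ap = DualQueueAP()
ap.enqueue("uplink-seg-1", "uplink")
ap.enqueue("uplink-seg-2", "uplink")
ap.enqueue("downlink-seg-1", "downlink")
print(ap.dequeue(), ap.dequeue(), ap.dequeue())
# -> uplink-seg-1 downlink-seg-1 uplink-seg-2
```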
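
Slide 2 summarises the client-server model: the server shares its resources and awaits incoming requests, while the client initiates the session. The following minimal Python socket sketch shows that interaction end to end; the loopback address, port 5000, and the message contents are arbitrary choices for the demo, not taken from the slides.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5000   # arbitrary loopback endpoint for the demo
ready = threading.Event()

def server():
    # The server awaits (listens for) incoming requests and shares a resource.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                      # tell the client it may connect now
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"response to: " + request)

def client():
    # The client initiates the communication session.
    ready.wait()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello server")
        print(cli.recv(1024).decode())   # "response to: hello server"

t = threading.Thread(target=server)
t.start()
client()
t.join()
```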
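
Slide 3 contrasts a packet scheduler with a FIFO scheduler in which all SDUs with the same next hop share one buffer. Since no code is given in the slides, this Python sketch only mirrors that description; the class name FifoPerNextHop and the node labels are hypothetical.

```python
from collections import deque, defaultdict

class FifoPerNextHop:
    """FIFO scheduler as described on slide 3: every SDU destined to the
    same next-hop node is enqueued into the same buffer, regardless of
    which end-to-end flow it belongs to."""

    def __init__(self):
        self.buffers = defaultdict(deque)   # next_hop -> FIFO of SDUs

    def enqueue(self, sdu, next_hop):
        self.buffers[next_hop].append(sdu)

    def dequeue(self, next_hop):
        buf = self.buffers.get(next_hop)
        return buf.popleft() if buf else None

# In the chain topology of the slides, a relay node enqueues both its own
# SDUs and the forwarded ones toward the same downstream neighbour:
sched = FifoPerNextHop()
sched.enqueue("own-sdu", next_hop="node3")
sched.enqueue("relayed-sdu", next_hop="node3")
print(sched.dequeue("node3"), sched.dequeue("node3"))  # served in arrival order
```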
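
The 2/3 figure on slide 4 follows from the channel-time shares in a three-node chain: each bit of the two-hop flow occupies the channel twice, so if both flows obtain the same end-to-end rate x, then x + 2x equals the raw channel capacity C. A small Python sanity check of that arithmetic:

```python
# Three-node chain: flow A is one hop from the destination, flow B is two hops.
# Each delivered bit of flow B occupies the channel twice (two transmissions),
# so with an equal per-flow rate x:  x (flow A) + 2x (flow B) = C  =>  x = C / 3.
C = 1.0                      # raw channel capacity, normalised
x = C / 3                    # per-flow end-to-end rate
total_end_to_end = 2 * x     # flow A + flow B

print(f"flow A channel share: {x / C:.2f}")         # 0.33 -> the slide's 1/3
print(f"flow B channel share: {2 * x / C:.2f}")     # 0.67 -> the slide's 2/3
print(f"aggregate end-to-end throughput: {total_end_to_end / C:.2f}")  # 0.67 of C
```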
