  • Slide note: Japan’s 100 Mbps fiber-to-the-home network is the fastest in the world, but it is being brought to its knees by congestion. The red, purple, and brown regions of the chart are all due to P2P traffic.
  • Transcript

    • 1. “Network Management has always been and always will be essential to the Internet.” Testimony of George Ou, former network engineer. FCC Broadband Industry Practices Hearing, WC Docket No. 07-52, Stanford University, April 17, 2008.
    • 2. Internet meltdown in the 1980s
      • Lack of adequate congestion control in TCP allowed too many FTP users to overload the Internet around 1986
      • Van Jacobson created a congestion control algorithm for TCP in 1987
        • Congested routers randomly dropped packets to force every TCP end-point (client) to cut its flow rate in half
        • TCP clients then slowly increased their flow rate with every successful transmission until the next packet drop
        • This caused all TCP streams to converge toward an equal flow rate
        • Fair bandwidth sharing, but only for the applications of its time
      • Jacobson’s algorithm saved the Internet in 1987 and remains the dominant standard after 20 years
      • An early example of managing network congestion
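The behavior described above (multiplicative decrease on a packet drop, additive increase otherwise) can be sketched in a few lines. The link capacity, step count, and starting rates below are illustrative assumptions, not values from the testimony:

```python
# Sketch of Jacobson-style AIMD (additive increase, multiplicative decrease).
# Two flows share a 100-unit link; all constants are illustrative.

CAPACITY = 100.0

def aimd_step(rates, increase=1.0):
    """Advance every flow by one round trip."""
    if sum(rates) > CAPACITY:                 # congestion: a drop is signaled
        return [r / 2.0 for r in rates]       # every sender halves its rate
    return [r + increase for r in rates]      # otherwise, additive increase

rates = [5.0, 80.0]                           # wildly unequal starting rates
for _ in range(500):
    rates = aimd_step(rates)

# After many halve-and-grow cycles the two flows oscillate around an
# equal share of the link, regardless of where they started.
print(rates)
```

Each halving cuts the gap between the two flows in half while additive increase leaves it unchanged, which is why the rates converge.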
    • 3. World Wide Wait in the 1990s
      • The first generation of web browsers was not optimized for the Internet
      • The World Wide Web turned into the “World Wide Wait”
      • HTTP/1.1 revamped the protocol to use network resources far more efficiently than HTTP/1.0, chiefly by reusing one TCP connection for many requests
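A back-of-the-envelope model shows why HTTP/1.1 persistent connections beat the one-connection-per-object pattern of HTTP/1.0. The round-trip time and object count are illustrative assumptions:

```python
# Rough latency model: HTTP/1.0 vs. HTTP/1.1 persistent connections.
# Numbers are illustrative assumptions, not measurements.

RTT = 0.1          # seconds per round trip
OBJECTS = 10       # images, stylesheets, etc. on one page

# HTTP/1.0: a new TCP connection per object -> handshake + fetch each time.
http10 = OBJECTS * (RTT + RTT)   # 1 RTT handshake + 1 RTT request/response

# HTTP/1.1: one persistent connection -> one handshake, then all fetches.
http11 = RTT + OBJECTS * RTT

print(http10, http11)   # 2.0 vs 1.1 seconds under these assumptions
```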
    • 4. Today’s crisis on the Internet
      • Video-induced congestion collapse
        • The efficient existing broadcast model is migrating to a bandwidth-intensive Video-on-Demand model over IP
        • Full migration of video could require a 100- to 1000-fold increase in Internet capacity
        • Far more bandwidth is required as video bit-rate and resolution increase to improve quality
      • P2P is the dominant distribution model because most of its content is “free” (read: pirated)
      • Video can fill any amount of bandwidth
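Some illustrative arithmetic shows how quickly video consumes capacity as quality rises. The per-tier bit-rates below are assumed round numbers, not figures from the testimony:

```python
# Illustrative video bit-rates in Mbps -- assumed values for the sketch.
rates = {"SD": 2, "HD": 8}

# Gigabytes transferred per hour of viewing at each tier:
# Mbps * 3600 seconds / 8 bits-per-byte / 1000 MB-per-GB
hours_gb = {tier: mbps * 3600 / 8 / 1000 for tier, mbps in rates.items()}
print(hours_gb)   # {'SD': 0.9, 'HD': 3.6}
```

Quadrupling the bit-rate quadruples the hourly transfer, so a shift of whole audiences from broadcast to on-demand IP video multiplies total traffic rather than merely adding to it.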
    • 5. More bandwidth doesn’t help
      • According to the Japanese government:
        • 1% of users account for ~47% of traffic
        • 10% of users account for ~75% of traffic
        • The remaining 90% of users get the leftover 25%
      • The few throttling the many: bandwidth hogs
    • 6. Exploiting Jacobson’s algorithm
      • One stream per user: 50/50 split (fair)
      • Multiple streams per user: 80/20 or even 92/8 splits (unfair)
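The 50/50, 80/20, and 92/8 splits on this slide follow directly from per-flow fairness arithmetic: each TCP stream converges to an equal share, so a user's share is just his streams divided by all streams. The stream counts below are illustrative:

```python
# Per-flow fairness rewards whoever opens the most flows.
# A user's share of the link is flows_i / total_flows.

def user_shares(flows_per_user):
    total = sum(flows_per_user)
    return [round(100 * f / total) for f in flows_per_user]

print(user_shares([1, 1]))    # one stream each    -> [50, 50]
print(user_shares([4, 1]))    # 4 streams vs. 1    -> [80, 20]
print(user_shares([11, 1]))   # 11 streams vs. 1   -> [92, 8]
```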
    • 7. Persistence advantage in P2P apps
      • * Corporate VPN telecommuter using the G.722 codec @ 64 kbps payload plus 33.8 kbps packetization overhead
      • ** Vonage or Lingo SIP-based VoIP service with the G.726 codec @ 32 kbps payload plus 18.8 kbps packetization overhead
      • *** I calculated that I sent 29,976 kilobytes of mail over the last 56 days, averaging 0.04956 kbps
    • 8. Weighted TCP: per-user fairness
      • Turns the 92/8 unfair split back into a 50/50 fair split
      • BT chief researcher Bob Briscoe has proposed a TCP fix before the IETF to neutralize the multi-stream loophole
      • Changing TCP takes many years, and it is even harder to get over a billion devices to switch to a new TCP client
      • Newer network-based solutions being implemented
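A per-user fairness scheme of the kind described can be sketched as follows: capacity is divided equally among users first, and only then among each user's own streams, so opening extra streams buys nothing. The capacity and stream counts are illustrative:

```python
# Per-user fairness sketch: split capacity per user, then per stream.
# Capacity and stream counts are illustrative.

def per_stream_rates(capacity, flows_per_user):
    per_user = capacity / len(flows_per_user)         # equal share per user
    return [per_user / f for f in flows_per_user]     # split among own streams

rates = per_stream_rates(100.0, [11, 1])   # the 92/8 scenario from slide 6
user_totals = [rates[0] * 11, rates[1] * 1]
print(user_totals)   # back to a 50/50 split despite 11 streams vs. 1
```

Under per-flow fairness the same 11-vs-1 scenario yields roughly 92/8; weighting by user collapses it back to 50/50.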
    • 9. Present and future solutions
      • Present solutions use protocol throttling
        • P2P applications use disproportionately large amounts of bandwidth, so they are throttled to balance them out
        • Conventional router de-prioritization techniques are applied to P2P
        • TCP resets are used to occasionally stop P2P seeders
        • May affect the rare low-bandwidth P2P user
        • Can be fooled by protocol obfuscation techniques
      • Future solutions are protocol-agnostic
        • Weighted packet dropping at the router and/or fair upstream scheduling on the CMTS accomplishes per-user fairness
        • Only targets bandwidth hogs and forces them to back off
        • Cannot be fooled by protocol obfuscation
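A minimal sketch of the protocol-agnostic idea, assuming a drop probability proportional to each user's share of recent traffic (the scale factor and byte counts below are illustrative assumptions, not a specific vendor's algorithm):

```python
# Protocol-agnostic congestion management sketch: when the link is congested,
# drop a user's packets with probability proportional to that user's share of
# recent traffic -- without inspecting what protocol the packets carry.
# The scale factor and byte counts are illustrative.

def drop_probability(user_bytes, total_bytes, congested, scale=0.5):
    if not congested or total_bytes == 0:
        return 0.0
    return scale * (user_bytes / total_bytes)

recent = {"heavy_user": 9_000_000, "light_user": 1_000_000}
total = sum(recent.values())

for user, used in recent.items():
    p = drop_probability(used, total, congested=True)
    print(user, p)
```

Because the signal is volume rather than protocol, obfuscating or encrypting P2P traffic does not change the outcome: the heavy user still sees more drops and backs off.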
    • 10. What is reasonable network management? (Fair vs. unfair, intelligent vs. dumb, reasonable vs. unreasonable, future vs. past)
      • Unmanaged network
        • Pros: cost-free deployment
        • Cons: 10% of users throttle the other 90% down to 25% of resources
      • Metered billing
        • Pros: solves congestion because users self-throttle for fear of a massive ISP bill
        • Cons: danger of massive ISP bills for consumers; makes P2P usage cost-prohibitive; by definition a “toll road”
      • Protocol throttling (present)
        • Pros: out-of-band deployment with TCP resets or common de-prioritization techniques; results in fairer distribution of bandwidth
        • Cons: can be fooled by protocol obfuscation; possibility of affecting non-bandwidth-hogs
      • Protocol-agnostic per-user fairness (future)
        • Pros: equal sharing of resources between users of the same tier; can’t be fooled by protocol obfuscation
        • Cons: requires more drastic in-line changes to the network; requires real-time tracking of per-user bandwidth consumption and enforcement of per-user fairness
    • 11. Network management ensures harmonious coexistence
      • P2P applications need volume, not priority
      • Interactive applications (Web) and real-time applications (VoIP) want priority, not volume
      • P2P, interactive, and real-time applications each get what they want under a managed network
        • Interactive and real-time apps have a small, fixed volume, so no matter how much they are prioritized, they cannot slow down a P2P download
      • Unmanaged networks, regardless of capacity, will always be unfair and hostile to interactive and real-time applications
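A quick calculation shows why prioritizing a small real-time flow barely affects bulk throughput. The link speed is an illustrative assumption; the 51 kbps VoIP figure loosely matches the G.726 payload-plus-overhead example from slide 7:

```python
# Strict priority for a small real-time flow barely dents bulk throughput.
# Link speed is illustrative; 51 kbps ~ G.726 payload + packetization overhead.

LINK = 10_000      # kbps link capacity (assumed)
voip = 51          # kbps VoIP flow: small and fixed-volume

p2p_without_qos = LINK            # P2P alone gets the whole link
p2p_with_qos = LINK - voip        # VoIP is served first, P2P gets the rest

loss = 100 * (p2p_without_qos - p2p_with_qos) / p2p_without_qos
print(f"P2P keeps {100 - loss:.1f}% of the link")
```

The VoIP flow cannot grow beyond its codec's fixed rate, so even absolute priority costs the P2P download about half a percent, which is the coexistence argument the slide makes.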