Speaker notes
  • Joke about copper, fiber, and wireless. QoS research is finally having impact. Convergence.
  • Let us see where data networks are today. Current IP networks are best-effort; they are not well-managed, QoS-enabled, or secure. These networks will not provide acceptable performance for video and audio. In addition, reliability will not be carrier-grade. Convergence requires that this change. DO NOT SAY FURTHER 4-5 TIMES.
  • This is the converged network of the future. Access: all access processing must terminate in a single box. That is not true for cellular today; wireless has extensive RAN networks. Core: QoS-enabled, well-managed, and secure. SEL: blended applications (voice, data, and video), with the same service from any terminal. Explain global roaming, personalization, and always-on. Going forward, each service provider will have a converged network based on IP technology.
  • There are N link-state databases to synchronize so that each router computes the same shortest paths; OSPF has a flooding mechanism for this. The overhead is lower by a factor of 10. A 50 ms failure recovery time can be achieved, and we can avoid the network disconnects that can happen in current networks. Placing control elements is a graph equipartition problem: K control elements, balanced across all FEs, with load balancing and minimized mean distance. We formulate it as a k-way equipartition problem and derive a network design. At bootstrap, a forwarding element sends a control packet asking the control element for a routing table, using a repeated search starting from its neighbors; we have a new discovery protocol for this. There are many related activities in the industry.
  • As we are all well aware, the rapid growth in traffic on the Internet continues. By some estimates, traffic doubles each year. Routers are the heart of the Internet. Their capacity has been doubling every 18 months since the mid-80s. The difference between Internet and router growth can be attributed to the increase in the number of routers in the Internet. Of course, fewer routers would lead to easier management, but even on this growth path we might expect to see routers with 100 Tb/s capacity in five years' time. Router design is currently in its third generation and can achieve 1 Tb/s in a single rack. [CLICK] The transition to multi-rack routers is just beginning, with offerings in this space from Juniper and Cisco. It would be great if Lucent had its own router; such a transition, with its need for new technologies, provides an opportunity.
  • Now I will describe some of the challenges we have identified in scaling packet routers to larger sizes. The first is power and heat dissipation limits, which are pushing routers to multi-rack solutions. [CLICK] Expanding this picture, we can see that these multi-rack systems result in increased distance between the line cards and the switch fabric. [CLICK] The use of a centralized switch fabric poses a challenge since it too is limited by power density and heat dissipation limits. There is also a massive wiring density required between the racks of line cards and the switch fabric. [CLICK] Finally, the central scheduler does not scale linearly and may result in greatly increased complexity or reduced throughput for the router. [CLICK] Simply put, these routers do not scale because of the centralized functions of switching and scheduling. Solving these thermal, interconnect, and scheduling problems is the sort of hard technical challenge at which Bell Labs excels, and I will describe some of our solutions in the subsequent slides.
  • A first step toward eliminating the massive wiring between racks is to introduce optical transceivers and optical fibers. This must be done with careful consideration to avoid replacing a reliable passive electrical backplane with many active components. [CLICK] We have constructed distributed optical switch fabrics using a passive optical device that directs signals according to their individual wavelengths. This device is called an arrayed waveguide grating router, or AWG, and is similar in function to how a prism splits colors. [CLICK] Switching is now achieved by using fast wavelength-tunable transmitters, T-TXs, on each line card. Because these transmitters are on the line cards, they are deployed only as the capacity scales, and the optical backplane is passive, ensuring higher reliability. [CLICK] This approach addresses the challenge of scaling the switch fabric hardware of the router. Building such switches requires the optics expertise that Bell Labs can deliver; the technology needs of routers are coming our way. Make the transition to the next chart crisply.
  • Now to address the issue of the scheduler. If we are to eliminate the central scheduler without degrading router performance, we need to ensure that traffic is uniformly distributed across line cards. We have demonstrated that this can be achieved using a technique called load balancing, which randomly distributes the traffic. [CLICK] Now each line card can operate like an individual router, with its own local scheduler. This scales gracefully since the scheduling problem is independent of the size of the router. Additionally, as with distributed switching, the deployed hardware scales with the deployed capacity of the router. [CLICK] Combining distributed scheduling with distributed optical switching allows scaling to very large routers. [CLICK] We believe this would allow the construction of routers with greater than 100 Tb/s capacity.
  • (i) Overprovisioning (ii)

    1. 1. QoS Enablers for Next-Gen Networks Krishan Sabnani Bell Labs
     2. 2. Today’s Data Networks: not QoS-enabled, secure, manageable, or reliable Access Router 3G Cellular Networks Radio Controller Access Router Enterprise Networks Home Networks Metro Networks Packet-Based Network Edge Router Edge Router Edge Router Access Router Ad hoc Networks
     3. 3. The Network Evolution <ul><li>Networks were designed to carry voice traffic </li></ul><ul><li>Data traffic mostly overlaid on voice networks (using modems) </li></ul><ul><li>Networks are designed to carry primarily data traffic </li></ul><ul><li>Voice traffic overlaid on data networks (e.g. VoIP) </li></ul><ul><li>Future networks should be designed primarily for efficient content distribution and content search/location </li></ul><ul><ul><li>Content distribution should not only be overlaid, but built in from the ground up </li></ul></ul><ul><li>Future networks should also be able to effectively carry best-effort data traffic and QoS-sensitive voice. </li></ul>Yesterday… … Today... … Tomorrow ... Volume of data traffic exceeds voice traffic Content traffic becomes dominant
     4. 4. Impact of Video on Network Traffic <ul><li>VoD/IPTV will grow by an order of magnitude over the next 5 years </li></ul><ul><ul><li>2005: 90% best-effort data traffic </li></ul></ul><ul><ul><li>2010: 40% high-priority VoD traffic; 50% best-effort traffic </li></ul></ul><ul><ul><li>Carrier-class links with capacities of 40G and 100G required in MAN </li></ul></ul><ul><li>Broadcast IPTV and streaming VoD are high-priority traffic, close to TDM </li></ul><ul><ul><li>Very different from today’s best-effort bursty data traffic </li></ul></ul>Source: Lucent Bell Labs IPTV/VoD study, May 2006
    5. 5. Tomorrow’s Converged Network Edge Router Access Router Access Router Radio Controller Access Router Next-Gen Metro Networks 4G/Mesh User Mobility Traffic Type (Multimedia) Quality of Service (e.g. for voice) Network Intelligence QoS-Enabled Packet Core Network Edge Router Edge Router Services Enablement Layer Always-On Global Roaming Personalization 3G Cellular Networks Enterprise Networks Home Networks
    6. 6. Enabling technologies <ul><li>Future converged network will put significant demands on next generation network elements </li></ul><ul><ul><li>Carrier-grade and increased complexity demands </li></ul></ul><ul><ul><ul><li>QoS support, Scalability, Reliability, Security, Manageability </li></ul></ul></ul><ul><ul><ul><ul><li>SoftRouter </li></ul></ul></ul></ul><ul><ul><li>Capacity demands </li></ul></ul><ul><ul><ul><li>Switching capacity > 100Tb/s will be required in a single switching element </li></ul></ul></ul><ul><ul><ul><ul><li>Optical Data Router </li></ul></ul></ul></ul><ul><ul><li>High-speed wireless data access </li></ul></ul><ul><ul><ul><li>Simplify RAN to improve QoS </li></ul></ul></ul><ul><ul><ul><ul><li>Base Station Router </li></ul></ul></ul></ul>
    7. 7. Motivation <ul><li>Future converged network will put significant demands on next generation network elements </li></ul><ul><ul><li>Carrier-grade demands </li></ul></ul><ul><ul><ul><li>QoS support, Scalability, Reliability, Security, Manageability </li></ul></ul></ul><ul><ul><ul><ul><li>SoftRouter </li></ul></ul></ul></ul><ul><ul><li>Capacity demands </li></ul></ul><ul><ul><ul><li>Switching capacity > 100Tb/s may be required in a single switching element </li></ul></ul></ul><ul><ul><ul><ul><li>Optical Data Router </li></ul></ul></ul></ul><ul><ul><li>High-speed wireless data access </li></ul></ul><ul><ul><ul><li>All wireless services will converge to IP </li></ul></ul></ul><ul><ul><ul><ul><li>Base Station Router </li></ul></ul></ul></ul>
     8. 8. Routers are becoming increasingly complex <ul><li>Complexity is an IP “Middle-Age” problem! </li></ul><ul><ul><li>IP provides end-to-end datagram delivery service to protocols/applications </li></ul></ul><ul><ul><li>IP can use any link-layer technology that delivers packets </li></ul></ul><ul><li>Emerging applications are driving more functions into IP, expanding the “waist” of the IP hourglass </li></ul><ul><li>Router vendors incorporate all new IP functions into routers </li></ul><ul><li>Complexity is spread throughout the network </li></ul><ul><ul><li>Achieving network-wide objectives such as traffic engineering requires complex translation of global objectives to configuration information in numerous individual routers </li></ul></ul><ul><ul><li>Misconfiguration or uncoordinated configuration can result in poor performance or even network instability </li></ul></ul>email WWW phone... SMTP HTTP RTP... TCP UDP… IP ethernet PPP… CSMA async sonet... copper fiber radio... email WWW phone... SMTP HTTP RTP... TCP UDP… IP ethernet PPP… CSMA async sonet... copper fiber radio... IP mobile mcast IPsec diff-serv NAT
    9. 9. Solution: SoftRouter <ul><li>Disaggregation of router hardware from software addresses this problem and has the potential for major additional advantages </li></ul><ul><li>Bell Labs has a research program that disaggregates router control and transport planes (called SoftRouter-based approach) </li></ul><ul><ul><li>Transport plane: packet forwarding element </li></ul></ul><ul><ul><li>Control plane: control element server and feature server </li></ul></ul><ul><ul><li>Control plane servers and transport plane communicate using standard protocols </li></ul></ul><ul><ul><li>Approach similar to SoftSwitch-based disaggregation of class 5 switches </li></ul></ul>
    10. 10. New Router Architecture: SoftRouter <ul><li>3 key components of SoftRouter approach </li></ul><ul><ul><li>Decoupling: Separate complex control plane processing from the transport plane </li></ul></ul><ul><ul><li>Servers: Implement control plane processing functions on dedicated external control plane servers </li></ul></ul><ul><ul><li>Standard Interface: Define standard protocol for control plane servers to interface to the forwarding elements </li></ul></ul>Control plane processing Forwarding plane processing Proprietary API Standard protocol Current Router Model SoftRouter Model Control Plane Transport Plane Feature Server Packet Forwarding Element Control Element Server
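The decoupling described above can be made concrete with a toy sketch (all class and field names here are illustrative, not the actual SoftRouter interface; IETF ForCES is the real standardization effort in this space): the control element runs the routing logic and pushes a finished FIB down to forwarding elements that only match and forward.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class FibEntry:
    prefix: str    # e.g. "10.1.0.0/16"
    next_hop: str

class ForwardingElement:
    """Transport plane: holds only the FIB its CE installs; no routing logic."""
    def __init__(self):
        self.fib = []

    def install(self, entries):
        # Invoked by the control element over the standard protocol.
        # Longest prefixes first, so lookup() is longest-prefix match.
        self.fib = sorted(entries,
                          key=lambda e: ipaddress.ip_network(e.prefix).prefixlen,
                          reverse=True)

    def lookup(self, dst):
        addr = ipaddress.ip_address(dst)
        for e in self.fib:
            if addr in ipaddress.ip_network(e.prefix):
                return e.next_hop
        return None

class ControlElement:
    """Control plane server: computes routes, then pushes them to its FEs."""
    def __init__(self, fes):
        self.fes = fes

    def push_routes(self, entries):
        for fe in self.fes:
            fe.install(entries)
```

One CE instance can drive many FEs through `push_routes`, which is the essence of the disaggregation: the forwarding element never computes anything, it only applies state it was given.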
    11. 11. SoftRouter Network Architecture Router Packet Forwarding Element Control Element Server Traditional Router-based Network SoftRouter-based Network The SoftRouter approach separates and centralizes the software-intensive control element and feature servers from hardware-centric transport and packet forwarding Feature Server
     12. 12. SoftRouter Benefits <ul><li>Lower Costs </li></ul><ul><ul><li>Commoditized, standards-based hardware (lower capex) </li></ul></ul><ul><ul><li>Dedicated control element servers imply fewer management points (lower opex) </li></ul></ul><ul><li>New Features </li></ul><ul><ul><li>Network-based features to support new services more easily added using open APIs </li></ul></ul><ul><ul><li>Incremental deployment made simpler through centralized management </li></ul></ul><ul><li>Better Scalability </li></ul><ul><ul><li>Centralized control element servers easier to scale using well-established server scaling techniques </li></ul></ul><ul><li>Enhanced Stability, Controllability, and Reliability </li></ul><ul><ul><li>Network instability problems due to BGP Route Reflectors are eliminated </li></ul></ul><ul><ul><li>SoftRouter-based network can be designed to be more reliable than a traditional network </li></ul></ul><ul><li>Increased Security </li></ul><ul><ul><li>Fewer control element servers easier to secure using perimeter defense systems, e.g., firewalls </li></ul></ul>
     13. 13. Technical Challenges: Summary <ul><li>Protocol aggregation: how would protocols like OSPF/BGP operate when a single protocol instantiation at the control element server manages multiple forwarding elements? </li></ul><ul><ul><li>OSPF: Preliminary results indicate 50ms failure recovery time is feasible when the SoftRouter network is managed by one or two primary CE/OSPF processes and the propagation delay from CE to its FEs is small </li></ul></ul><ul><ul><li>BGP: Full I-BGP mesh can be maintained among the few control servers, eliminating the network instability possible under the Route Reflector architecture </li></ul></ul><ul><li>Network design: where do we place the control element servers and how do we determine which control element servers manage which forwarding elements? </li></ul><ul><ul><li>An algorithm based on recursive graph bisection appears to work reasonably well in identifying where to place the CEs and which set of FEs each CE manages </li></ul></ul><ul><li>Bootstrapping paradox: the forwarding element needs updated forwarding tables in order to route packets to its control element server, but only the control element server can update the forwarding tables </li></ul><ul><ul><li>We developed a new discovery protocol to break the above circularity </li></ul></ul><ul><ul><li>This protocol allows each FE to bind to its best CE and provides simple routing capability between them </li></ul></ul>
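The recursive-bisection idea in the network-design bullet can be sketched as follows. This is a simplified greedy BFS bisection over an unweighted, connected topology, not the actual Bell Labs algorithm: split the graph into k balanced parts, then place each CE at the node of its part with minimum total hop distance to the FEs it manages.

```python
from collections import deque

def bfs_dist(adj, src, allowed):
    """Hop distances from src, restricted to the `allowed` node set."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in allowed and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def bisect(adj, nodes):
    """Split `nodes` into two roughly equal halves by growing a BFS region
    from a peripheral node; the grown half stays connected."""
    nodes = set(nodes)
    d0 = bfs_dist(adj, next(iter(nodes)), nodes)
    far = max(d0, key=d0.get)                 # peripheral seed node
    df = bfs_dist(adj, far, nodes)
    order = sorted(df, key=df.get)            # nodes by distance from seed
    half = set(order[: len(nodes) // 2])
    return half, nodes - half

def place_controllers(adj, k):
    """Recursive bisection into k parts; one CE per part at its 1-median."""
    parts = [set(adj)]
    while len(parts) < k:
        parts.sort(key=len)
        big = parts.pop()                     # always split the largest part
        parts.extend(bisect(adj, big))
    placement = {}
    for part in parts:
        # CE goes at the node minimizing total distance to the part's FEs.
        ce = min(part, key=lambda n: sum(bfs_dist(adj, n, part).values()))
        placement[ce] = part
    return placement
```

On a six-router line topology with k = 2, this places the two CEs one hop in from each end, each managing the three FEs nearest to it.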
    14. 14. Motivation <ul><li>Future converged network will put significant demands on next generation network elements </li></ul><ul><ul><li>Carrier-grade demands </li></ul></ul><ul><ul><ul><li>QoS support, Scalability, Reliability, Security, Manageability </li></ul></ul></ul><ul><ul><ul><ul><li>SoftRouter </li></ul></ul></ul></ul><ul><ul><li>Capacity demands </li></ul></ul><ul><ul><ul><li>Switching capacity > 100Tb/s may be required in a single switching element </li></ul></ul></ul><ul><ul><ul><ul><li>Optical Data Router </li></ul></ul></ul></ul><ul><ul><li>High-speed wireless data access </li></ul></ul><ul><ul><ul><li>All wireless services will converge to IP </li></ul></ul></ul><ul><ul><ul><ul><li>Base Station Router </li></ul></ul></ul></ul>
    15. 15. Growth in Router Capacity <ul><li>Internet traffic doubles every 12 months </li></ul><ul><li>Router Capacity doubles every 18 months </li></ul><ul><li>Routers will reach 100Tb/s in 2010 </li></ul>Single Rack <1Tb/s Multi-rack solutions >1Tb/s e.g. Juniper TX640 or Cisco XR12416 e.g. Juniper TX Matrix or Cisco CRS-1 Source: Nick McKeown, Stanford Currently Third Generation of Packet Routers : 1Tb/s capacity in a Single Rack Router Capacity over Time Routers 2x every 18 months Internet 2x every 12 months
    16. 16. Why do current packet routers not scale ? <ul><li>Power and heat dissipation limit density. </li></ul><ul><ul><li>Require multiple shelves and racks of Line Cards </li></ul></ul><ul><ul><li>Increases distance between Line Cards and Switch Fabric </li></ul></ul><ul><li>Centralized Switch Fabric </li></ul><ul><ul><li>Switch Fabric alone pushes power density and heat dissipation limits </li></ul></ul><ul><ul><li>Massive wiring density between Line Cards and Switch Fabric </li></ul></ul><ul><li>Centralized Control and Scheduling complexity grows nonlinearly </li></ul>Centralized functions scale poorly Massive Wiring Switch Fabric Central Scheduler To Network Line Cards Packet Processing Buffer Optics I/O Packet Processing Buffer Optics I/O Packet Processing Buffer Optics I/O Packet Processing Buffer Optics I/O
    17. 17. Bell Labs Solution: Distributed Optical Switch Fabric <ul><li>Optical fibers eliminate massive wiring density and allow higher bandwidths </li></ul>Fiber <ul><li>Central Switch fabric replaced by passive optical device that directs signals according to their wavelength </li></ul><ul><ul><li>Called Arrayed Waveguide Grating (AWG) </li></ul></ul><ul><li>Switching using Fast Wavelength Tunable Transmitters (T-TX) distributed on each line card </li></ul><ul><ul><li>Deployed hardware scales with deployed capacity </li></ul></ul><ul><ul><li>Passive Optical Backplane </li></ul></ul>Fixes Hardware Scaling Switch Fabric Central Scheduler To Network Line Cards Packet Processing Buffer Optics I/O Packet Processing Buffer Optics I/O Packet Processing Buffer Optics I/O Packet Processing Buffer Optics I/O Central Scheduler To Network Line Cards Packet Processing Buffer Optics Optics Packet Processing Buffer Optics Optics Packet Processing Buffer Optics Optics Packet Processing Buffer Optics T-TX Optics AWG Passive Optical Backplane
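The AWG's role in the slide above can be captured in two lines. The cyclic mapping below, output = (input + wavelength index) mod N, is the commonly cited routing property of an N x N cyclic AWG; the exact port mapping depends on the device design, and the function names are illustrative.

```python
def awg_output_port(input_port, wavelength_index, n_ports):
    """Cyclic routing property of an N x N arrayed waveguide grating (AWG):
    light entering input port i on wavelength index j exits output port
    (i + j) mod N. The device itself is entirely passive."""
    return (input_port + wavelength_index) % n_ports

def tune_for(input_port, dest_port, n_ports):
    """Wavelength index the fast tunable transmitter (T-TX) on a line card
    must select so that its signal lands on dest_port."""
    return (dest_port - input_port) % n_ports
```

All the switching intelligence thus sits in the transmitter's tuning decision on the line card, which is why the backplane itself can stay passive and reliable.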
     18. 18. Bell Labs Solution: Distributed Scheduler <ul><li>Eliminate central scheduler without sacrificing throughput </li></ul><ul><ul><li>Distribute traffic uniformly to Line Cards </li></ul></ul><ul><ul><li>Technique called “load balancing” </li></ul></ul><ul><li>Each line card works like a small router with its own scheduler </li></ul><ul><ul><li>Scales gracefully since complexity of scheduling is constant </li></ul></ul><ul><ul><li>Deployed hardware scales with deployed capacity </li></ul></ul><ul><li>Combined with Distributed Optical Switch Fabric allows scaling to very large routers </li></ul>Router Scales to >100Tb/s To Network Line Cards AWG Packet Processing Buffer Optics Optics Local Scheduler Packet Processing Buffer Optics Optics Local Scheduler Packet Processing Buffer Optics Optics Local Scheduler Packet Processing Buffer Optics Optics Local Scheduler
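The first-stage load balancing above can be sketched as a spreader that assigns each arriving packet to an intermediate line card independent of its destination. The slide says "load balancing" distributes traffic randomly; the sketch below uses round-robin instead, which achieves the same uniformity deterministically (as in the load-balanced switch literature), and the class name is illustrative.

```python
import itertools
from collections import Counter

class LoadBalancedIngress:
    """Stage 1 of a load-balanced router: spread packets evenly across the
    N line cards regardless of destination, so that every card's local
    scheduler sees near-uniform traffic and can run independently."""
    def __init__(self, n_linecards):
        self._next = itertools.cycle(range(n_linecards))

    def intermediate_card(self, packet):
        # The destination is deliberately ignored here; stage 2 (the chosen
        # card's local scheduler) delivers the packet to its true output.
        return next(self._next)

# Even a worst-case flow (every packet to one destination) is spread evenly:
ingress = LoadBalancedIngress(4)
load = Counter(ingress.intermediate_card({"dst": 7}) for _ in range(400))
```

Because each card ends up with the same share of traffic, its local scheduling problem has constant complexity regardless of how many cards the router has, which is exactly the scaling property the slide claims.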
     19. 19. Optical Data Router <ul><li>DARPA-funded program </li></ul><ul><li>Linecards </li></ul><ul><ul><li>Highly integrated optical components </li></ul></ul><ul><ul><ul><li>Denser integration of optical interfaces on line card </li></ul></ul></ul><ul><ul><ul><li>Data never converted to electronics, for lower power </li></ul></ul></ul><ul><ul><li>Shallow Buffers </li></ul></ul><ul><ul><ul><li>Demonstration of high throughput with buffers 20 packets deep </li></ul></ul></ul><ul><li>Scalable switch fabrics </li></ul><ul><ul><li>Highly scalable nonblocking switch fabrics </li></ul></ul><ul><ul><ul><li>5Tb/s using AWG plus tunable lasers and 100Gb/s transmitters </li></ul></ul></ul><ul><ul><li>Use of load balancing to allow blocking switch fabrics to be used and simplify scheduling </li></ul></ul><ul><ul><ul><li>Scalability to 256 Tb/s throughput </li></ul></ul></ul><ul><ul><ul><li>Transparent optical packet router demonstrated </li></ul></ul></ul>100Gb/s Packet switching Optical Packet Router Highly integrated photonic chips Effects of shallow buffers
    20. 20. The Optical Data (IRIS) Router in a Carrier Network
    21. 21. Motivation <ul><li>Future converged network will put significant demands on next generation network elements </li></ul><ul><ul><li>Carrier-grade demands </li></ul></ul><ul><ul><ul><li>QoS support, Scalability, Reliability, Security, Manageability </li></ul></ul></ul><ul><ul><ul><ul><li>SoftRouter </li></ul></ul></ul></ul><ul><ul><li>Capacity demands </li></ul></ul><ul><ul><ul><li>Switching capacity > 100Tb/s may be required in a single switching element </li></ul></ul></ul><ul><ul><ul><ul><li>Optical Data Router </li></ul></ul></ul></ul><ul><ul><li>High-speed wireless data access </li></ul></ul><ul><ul><ul><li>All wireless services will converge to IP </li></ul></ul></ul><ul><ul><ul><ul><li>Base Station Router </li></ul></ul></ul></ul>
    22. 22. Base Station Router: push intelligence to the edge <ul><li>Current wireless networks are complex, involving many network elements, and result in high cost and high latency </li></ul><ul><li>Base Station Router terminates all air interface-specific functionality in the base station </li></ul><ul><li>Collapsing Radio Access Network elements into the base station simplifies network and reduces latency </li></ul><ul><li>Pushing IP intelligence to the base station results in better Quality of Service support </li></ul>Base Station Base Station Mobile Router O Radio Controller Packet Backhaul Packet backhaul packet data circuit voice Base Station Router Mobile Router O Mobile Switching Center O Telephone Network Internet O
    23. 23. BSR Motivation <ul><li>BSR provides advantages in the following areas </li></ul><ul><li>reduces system OpEx by 75% </li></ul><ul><li>increases VoIP Capacity by 25% </li></ul><ul><li>reduces delay by 40% </li></ul>BSR Collapse access, control, and gateway elements to a single IP Access Point Access Control Gateway
     24. 24. Lucent Technologies' Base Station Router Receives CTIA Emerging Technology Award: Revolutionary Product Takes Top Honors for Most Innovative In-Building Solution. LAS VEGAS – Lucent Technologies (NYSE:LU) today announced that its Base Station Router (BSR) product was selected as the first place winner of a CTIA WIRELESS 2006 Wireless Emerging Technologies (E-tech) Award in the category of “Most Innovative In-Building Solution.” Award recipients were announced yesterday in a ceremony at the Las Vegas Convention Center during the CTIA WIRELESS trade show. The Wireless E-tech Awards program is designed to give industry recognition and exposure to the best wireless products and services in the areas of Consumer, Enterprise and Network technology. Nearly 200 applications were submitted and reviewed by a panel of recognized members of the media, industry analysts and executives, as well as select show attendees. Products were judged on innovation, functionality, technological importance, implementation and overall “wow” factor.
    25. 25. <ul><li>State of QoS Deployment and Research </li></ul>
     26. 26. Typical Objections to QoS Deployment <ul><li>QoS can be avoided by over-provisioning </li></ul><ul><ul><li>May work for the core network, but not for access networks (difficult to add capacity) or wireless networks (limited air spectrum) </li></ul></ul><ul><ul><li>Instantaneous congestion can still occur, disrupting flows that have strict QoS requirements </li></ul></ul><ul><li>QoS is too expensive to implement </li></ul><ul><ul><li>Range of QoS mechanisms available, some scale better than others </li></ul></ul><ul><ul><li>Fine-grained QoS at the edge and aggregate-based QoS in the core </li></ul></ul><ul><li>QoS is too complicated to manage </li></ul><ul><ul><li>Results show that many benefits of traffic management can be achieved by placing QoS-awareness in a small fraction (5-10%) of network elements </li></ul></ul>
    27. 27. State of QoS Deployment <ul><li>Core: MPLS, some with traffic engineering, is widely implemented in core networks </li></ul><ul><li>Access: Most access network elements have built-in QoS features such as policing. New services such as VoIP/IPTV will need better QoS support such as diff-serv. </li></ul><ul><li>Enterprise: Enterprises are increasingly deploying QoS-aware boxes at their edges. Main functions are application prioritization and acceleration. </li></ul><ul><li>Cellular networks: Limited bandwidth has made QoS Support an absolute must in air interfaces. </li></ul><ul><ul><li>Air interfaces use mechanisms to ensure fairness among mobiles. </li></ul></ul><ul><ul><li>Radio access networks (RAN) differentiate among mobile flows. </li></ul></ul>
    28. 28. Core Networks are implementing MPLS <ul><li>MPLS label-switched paths </li></ul><ul><ul><li>Bandwidth guaranteed </li></ul></ul><ul><ul><li>Strong traffic isolation and flexible routing </li></ul></ul><ul><ul><li>Allows traffic to be routed away from congestion points </li></ul></ul><ul><ul><li>Key tools for traffic engineering and policy-based routing </li></ul></ul>Ethernet Access DSL/Cable Access Telephone Access 802.11 Access Circuit Mobile Network Packet Mobile Network IAD IP/MPLS Backbone Focus on Core network
     29. 29. Access Networks are implementing Differentiated Services <ul><li>Differentiated services: connectionless IP QoS mechanism </li></ul><ul><ul><li>Many traffic classes </li></ul></ul><ul><ul><ul><li>EF: Expedited Forwarding </li></ul></ul></ul><ul><ul><ul><li>AF: Assured Forwarding (several classes) </li></ul></ul></ul><ul><ul><ul><li>Best-Effort </li></ul></ul></ul><ul><ul><li>Priority scheduling among different classes </li></ul></ul><ul><ul><ul><li>E.g., EF traffic given highest priority, current best-effort traffic carried as lowest priority </li></ul></ul></ul><ul><ul><li>Admission control for each class </li></ul></ul><ul><ul><ul><li>E.g., Overall levels of EF traffic in the network kept low to avoid starvation </li></ul></ul></ul><ul><ul><li>Network traffic shaped at edge using leaky-bucket controllers </li></ul></ul><ul><ul><li>Network routing can be determined by IP routing protocols </li></ul></ul><ul><li>Pros: </li></ul><ul><ul><li>No per-flow or per-LSP state needed in network </li></ul></ul><ul><li>Cons: </li></ul><ul><ul><li>Difficult to provide bandwidth guarantees finer than the aggregate level </li></ul></ul>
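The "leaky-bucket controllers" mentioned for edge shaping can be sketched as a standard token bucket with rate r and burst depth b; the class and parameter names are illustrative, not from any particular product.

```python
class TokenBucket:
    """Leaky-bucket-style edge policer: tokens accumulate at `rate` per
    second up to a depth of `burst`. A packet conforms (is forwarded in
    its class) only if enough tokens are available; otherwise it can be
    dropped or remarked to best-effort."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst       # start with a full bucket
        self.last = 0.0           # timestamp of the previous packet

    def conforms(self, now, size):
        # Refill for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

With rate 100 tokens/s and burst 200, a 150-token packet at t=0 conforms, a second 100-token packet at the same instant does not, and after one second of refill a 100-token packet conforms again: bursts up to the bucket depth pass, while sustained traffic is held to the contracted rate.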
     30. 30. Enterprises are implementing Application Acceleration <ul><li>Deploy deep packet inspection at enterprise edge routers to </li></ul><ul><ul><li>Classify packets by applications </li></ul></ul><ul><ul><li>Provide prioritization among different application packets </li></ul></ul><ul><ul><li>E.g., VoIP flows given priority over email traffic </li></ul></ul><ul><li>Some large corporations are implementing large-scale VPNs or even private networks to gain control over their internal routing </li></ul><ul><ul><li>E.g., Google </li></ul></ul>
    31. 31. Wireless Networks Need More Than Diff-Serv and MPLS <ul><li>Air Interface: Time-varying bandwidth constrained channel </li></ul><ul><ul><li>Diff-serv and MPLS only differentiate between traffic classes </li></ul></ul><ul><ul><li>Wireless scheduling needs to take into account the channel conditions for maximizing cell-capacity: fundamentally different from scheduling diff-serv and MPLS traffic on wireline links </li></ul></ul><ul><ul><li>New scheduling mechanisms use information on channel condition and current queue-lengths to maximize throughput while maintaining QoS </li></ul></ul><ul><li>Radio access network: Low bandwidth network transports control, real-time, streaming, and best-effort traffic </li></ul><ul><ul><li>Requires QoS mechanisms but all traffic is encapsulated in tunnels </li></ul></ul><ul><ul><li>Diff-serv and MPLS mechanisms can be used if different traffic classes are carried over different tunnels and not on the same tunnel </li></ul></ul>
     32. 32. <ul><li>Provides minimum and maximum user throughput bounds </li></ul><ul><li>Provides “hogging prevention” </li></ul><ul><li>Used for inter-user throughput allocation (can be used for intra-user as well) </li></ul>Proportional Fair with Minimum Rate (PFMR) Algorithm Research work of: Alexander Stolyar; Matthew Andrews; Lijun Qian (Bell Labs) T_i is a virtual “token queue” corresponding to each flow i. Tokens arrive in the token queue at rate X_i, which is R_i^min or R_i^max per slot. If user i is served in slot n, then DRC_i(n) tokens are removed from the queue. If, over a time interval, the average service rate of flow i is less than R_i^min, the token queue size has a positive drift, so the chances of flow i being served in each time slot gradually increase.
    33. 33. <ul><li>Provides minimum and maximum user throughput bounds </li></ul><ul><li>Provides “hogging prevention” </li></ul><ul><li>Provides individual stream minimum throughput and latency control </li></ul><ul><li>Provides unified mechanism for inter-user and inter-flow (intra-user) resource allocation. This has its pluses (universality, higher overall throughput) and minuses (complexity) </li></ul><ul><li>Key to supporting VoIP in Ev-DO networks </li></ul>PFMR with Latency Control (PFMR-LC) algorithm:
     34. 34. State of QoS Research <ul><li>QoS research is as old as networking itself </li></ul><ul><ul><li>Lots of papers have appeared, most with an analytical focus </li></ul></ul><ul><li>Most skeptics of QoS are also in academia </li></ul><ul><ul><li>QoS mechanisms are being implemented and used; adoption is mostly driven by real-world requirements, not by the elegance of the theory </li></ul></ul><ul><li>Is more QoS research needed? </li></ul><ul><ul><li>How can QoS mechanisms be implemented in the networks of the future? </li></ul></ul><ul><ul><li>Inter-domain QoS peering </li></ul></ul>
     35. 35. Net Neutrality <ul><li>Should transport providers be allowed to differentiate traffic transported over their networks? </li></ul><ul><ul><li>Verizon, AT&T, Comcast, … : the ability to offer service differentiation provides the incentive for large capital investment in building out new access networks </li></ul></ul><ul><ul><li>Google, Yahoo, Microsoft … : service differentiation can be used by transport providers to favor their own traffic or traffic from their favored content aggregators, giving them an unfair advantage over 3rd-party content providers, and thus slowing competition and innovation </li></ul></ul><ul><li>My position: </li></ul><ul><ul><li>Transport providers should offer services with QoS </li></ul></ul><ul><ul><li>Such services should however be universally available to all content aggregators </li></ul></ul>
    36. 36. Conclusions <ul><li>Network convergence is happening </li></ul><ul><li>QoS is a necessity for next generation converged networks </li></ul><ul><li>Bell Labs has several programs to produce key assets for creating next generation converged networks </li></ul><ul><ul><li>SoftRouter </li></ul></ul><ul><ul><li>Optical Data Router </li></ul></ul><ul><ul><li>Base Station Router </li></ul></ul><ul><li>QoS is being deployed in service provider networks and enterprise networks </li></ul><ul><li>QoS research is still needed </li></ul><ul><ul><li>Inter-domain QoS </li></ul></ul><ul><ul><li>Simpler implementation </li></ul></ul>