VROOM: Virtual ROuters On the Move

Speaker notes
  • This is the talk in one slide. If you had to leave now, here is all the information I’ll be talking about. The key idea of VROOM is that routers should be free to roam around, instead of being permanently attached to a specific piece of hardware. I hope that by the end of this talk I will have convinced you that VROOM is useful for many network-management tasks. It can not only save network operators a lot of headaches in their daily jobs, it can even save power. And we show through a prototype implementation that VROOM is feasible in practice.
  • Here is the basic idea of VROOM: virtual routers running on top of physical routers form the logical topology of a network. The physical routers only provide shared hardware resources and the necessary virtualization support. It is the virtual routers that run the routing protocols and forward the actual traffic.
  • In VROOM, a virtual router can migrate from one physical router to another, along with all the links attached to it. Therefore, no configuration within the virtual router needs to change, and the topology stays intact. For example, the logical topology among the five virtual routers remains the same before and after the migration.
  • So why is VROOM useful? What’s the rationale behind the idea of migrating routers? Today, the physical and logical configurations of a router are tightly coupled. For example, a hardware upgrade to a router requires reconfiguring its routing protocols, and moving a customer from one physical router to another also requires reconfiguration. But we are better off with fewer reconfigurations: less reconfiguration means less protocol reconvergence, less traffic disruption, and fewer configuration errors and less overhead. Remember that operator errors account for more network outages than equipment failures.
  • Given these observations, VROOM breaks the tight coupling between the logical and the physical by supporting live virtual-router migration. The protocol configuration, the network topology, and the traffic all stay intact during and after the migration. VROOM is useful for many different applications. Here I’ll talk about two examples: in one, VROOM greatly simplifies a conventional network-management task; in the other, it enables a new application that is hard to do today.
  • The VROOM architecture has several key components that enable virtual-router migration. First, since virtual routers are not permanently attached to a specific piece of hardware, the substrate layer of the physical router needs to provide dynamic bindings between the logical interfaces of a virtual router and the physical interfaces of the underlying physical router.
  • Second, to maintain the logical topology, the links attached to a virtual router need to be migrated together with the virtual router to the new physical router. There are two ways to do this. The first approach is to leverage programmable transport networks, which are commonly available in large backbone ISPs. A programmable transport network can switch an optical path from one port to another in a very short time; the reported theoretical bound is sub-nanosecond. ISPs that don’t have programmable transport networks can use tunnels to connect virtual routers. In that case, migrating a link only requires re-configuring its tunnel endpoint to point at the new physical router (a rough sketch of this follows below).
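  As a rough illustration of the tunnel approach, here is a minimal sketch of re-pointing a GRE tunnel endpoint with Linux iproute2. It is not the paper's implementation; the tunnel name and addresses are invented.

      # Minimal sketch of tunnel-based link migration (invented names and
      # addresses; needs root and an existing tunnel). The neighbor
      # re-points its GRE tunnel from the old physical router to the new
      # one; VROOM would do this once per link of the migrating router.
      import subprocess

      def migrate_tunnel_endpoint(tunnel: str, new_remote: str) -> None:
          """Re-point an existing GRE tunnel at the new physical router."""
          subprocess.run(
              ["ip", "tunnel", "change", tunnel, "mode", "gre",
               "remote", new_remote],
              check=True,
          )

      # Example: re-point "vr1-link0" to the new physical router 10.0.0.2.
      migrate_tunnel_endpoint("vr1-link0", "10.0.0.2")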

Transcript

  • 1. VROOM: Virtual ROuters On the Move. Aditya Akella, based on slides from Yi Wang
  • 2. Virtual ROuters On the Move (VROOM)
    • Key idea
      • Routers should be free to roam around
    • Useful for many different applications
      • Simplify network maintenance
      • Simplify service deployment and evolution
      • Reduce power consumption
    • Feasible in practice
      • No performance impact on data traffic
      • No visible impact on routing protocols
  • 3. VROOM: The Basic Idea
    • Virtual routers (VRs) form logical topology
    [Figure: virtual routers 1-5 form the logical topology; legend: physical router, virtual router, logical link]
  • 4. VROOM: The Basic Idea
    • VR migration does not affect the logical topology
    [Figure: the same five virtual routers after a migration; legend: physical router, virtual router, logical link]
  • 5. Outline
    • Why is VROOM a good idea?
    • What are the challenges?
      • Or is it just technically trivial?
    • How does VROOM work?
      • The migration process
    • Is VROOM practical?
      • Prototype system
      • Performance evaluation
    • Where to migrate?
      • The scheduling problem
    • Still have questions? Feel free to ask!
  • 6. The Coupling of Logical and Physical
    • Today, the physical and logical configurations of a router are tightly coupled
    • Physical changes break protocol adjacencies, disrupt traffic
    • Logical configuration is used as a tool to reduce the disruption
      • E.g., the “cost-out/cost-in” of IGP link weights
      • Cannot eliminate the disruption
      • Accounts for over 73% of network maintenance events
  • 7. VROOM Separates the Logical and Physical
    • Make a logical router instance migratable among physical nodes
    • All logical configurations/states remain the same before/after the migration
      • IP addresses remain the same
      • Routing protocol configurations remain the same
      • Routing-protocol adjacencies stay up
        • No protocol (BGP/IGP) reconvergence
      • Network topology stays intact
    • No disruption to data traffic
  • 8. Case 1: Planned Maintenance
    • Today’s best practice: “cost-out/cost-in”
      • Router reconfiguration & protocol reconvergence
    • VROOM
      • NO reconfiguration of VRs, NO reconvergence
    [Figure: virtual router VR-1 runs on physical router PR-A; PR-B is the migration target]
  • 9. Case 1: Planned Maintenance
    • Today’s best practice: “cost-out/cost-in”
      • Router reconfiguration & protocol reconvergence
    • VROOM
      • NO reconfiguration of VRs, NO reconvergence
    [Figure: VR-1 migrates from PR-A to PR-B]
  • 10. Case 1: Planned Maintenance
    • Today’s best practice: “cost-out/cost-in”
      • Router reconfiguration & protocol reconvergence
    • VROOM
      • NO reconfiguration of VRs, NO reconvergence
    [Figure: VR-1 now runs on PR-B; PR-A is free for maintenance]
  • 11. Case 2: Service Deployment & Evolution
    • Deploy a new service in a controlled “test network” first
    [Figure: production network alongside isolated test networks, with CE routers attached]
  • 12. Case 2: Service Deployment & Evolution
    • Roll out the service to the production network after it matures
    • VROOM guarantees seamless service to existing customers during the roll-out and later evolution
    [Figure: the test-network router migrates into the production network]
  • 13. Case 3: Power Savings
    • Big power consumption of routers
      • Millions of Routers in the U.S.
      • Electricity bill: hundreds of millions of dollars per year
    (Source: National Technical Information Service, Department of Commerce, 2000. Figures for 2005 & 2010 are projections.)
  • 14. Case 3: Power Savings
    • Observation: the diurnal traffic pattern
    • Idea: contract and expand the physical network according to the traffic demand
  • 15. Case 3: Power Savings. Dynamically contract & expand the physical network over a day: 3PM
  • 16. Case 3: Power Savings. Dynamically contract & expand the physical network over a day: 9PM
  • 17. Case 3: Power Savings. Dynamically contract & expand the physical network over a day: 4AM
  • 18. Virtual Router Migration: the Challenges
    • Migrate an entire virtual router instance
      • All control plane & data plane processes / states
    • Minimize disruption
      • Data plane: up to millions of packets per second
      • Control plane: less stringent (routing-message retransmission masks brief gaps)
    • Migrate links
  • 19. Outline
    • Why is VROOM a good idea?
    • What are the challenges?
    • How does VROOM work?
      • The migration enablers
      • The migration process
        • What is to be migrated?
        • How? (in order to minimize disruption)
    • Is VROOM practical?
    • Where to migrate?
  • 20. VROOM Architecture
    • Three enablers that make VR migration possible
      • Router virtualization
      • Control and data plane separation
      • Dynamic interface binding
  • 21. A Naive Migration Process
    • Freeze the virtual router
    • Copy states
    • Restart
    • Migrate links
    • Practically unacceptable
      • Packet forwarding should not stop during migration
  • 22. VROOM’s Migration Process
    • Key idea: separate the migration of control and data plane
      • No data-plane interruption
      • Low control-plane interruption
    • Control-plane migration
    • Data-plane cloning
    • Link migration
  • 23. Control-Plane Migration
    • Two things to be copied
      • Router image
        • Binaries, configuration files, etc.
      • Memory
        • 1st stage: pre-copy
        • 2nd stage: stall-and-copy (when the control plane is “frozen”)
    [Timeline t1-t4: (1) router-image copy, (2) memory copy, with pre-copy followed by stall-and-copy]
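  The two-stage memory copy can be pictured with a small simulation. This is a minimal sketch with invented parameters, not the prototype's code (the prototype relies on OpenVZ live migration): pre-copy rounds run while the router keeps working, and the final stall-and-copy moves whatever is still dirty.

      # Toy simulation of pre-copy followed by stall-and-copy.
      def migrate_memory(total_pages, dirtied_in_round, max_rounds=5,
                         stall_threshold=64):
          dirty = total_pages                     # round 0 copies everything
          for round_no in range(max_rounds):
              print(f"pre-copy round {round_no}: copied {dirty} pages")
              dirty = dirtied_in_round(round_no)  # re-dirtied while running
              if dirty <= stall_threshold:        # small enough: freeze
                  break
          print(f"stall-and-copy: control plane frozen for {dirty} pages")

      # Example: the dirty set shrinks geometrically between rounds.
      migrate_memory(10_000, lambda r: 10_000 // 4 ** (r + 1))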
  • 24. Data-Plane Cloning
    • Clone the data plane by repopulation
      • Copying the data plane states is wasteful, and could be hard
      • Instead, repopulate the new data plane using the migrated control plane
      • The old data plane continues working during migration
    [Timeline t1-t5: (1) router-image copy, (2) memory copy, (3) data-plane cloning]
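  A toy version of the cloning step, with dictionaries standing in for FIBs (all names invented): instead of copying data-plane state across nodes, the migrated control plane re-installs each route, while the old data plane keeps forwarding untouched.

      # Toy sketch of data-plane cloning by repopulation.
      old_fib = {"10.1.0.0/16": "eth0", "10.2.0.0/16": "eth1"}  # forwarding
      rib = dict(old_fib)        # routes held by the migrated control plane

      new_fib = {}
      for prefix, next_hop in rib.items():
          new_fib[prefix] = next_hop   # one install per entry (FPGA/kernel)

      assert new_fib == old_fib        # both data planes can now forward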
  • 25. Remote Control Plane
    • The migrated control plane plays two roles
      • Act as a “remote control plane” for the old data plane
      • Populate the new data plane
    [Timeline t1-t5: after the memory copy, the control plane on the new node acts as a remote control plane for the old node's data plane while populating the new one]
  • 26. Keep the Control Plane “Online”
    • Data-plane cloning takes time
      • Around 110 µs per FIB entry update (for a high-end router)*
      • Installing 250k routes could take over 20 seconds
    • The control plane needs connectivity during this period
      • Redirect the routing messages through tunnels
    *: P. Francois et al., “Achieving sub-second IGP convergence in large IP networks,” ACM SIGCOMM CCR, vol. 35, no. 3, 2005.
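  The 20-second figure is straightforward arithmetic:

      # Back-of-the-envelope for the FIB repopulation window.
      per_entry_s = 110e-6         # ~110 µs per FIB entry update
      routes = 250_000             # full routing table
      print(routes * per_entry_s)  # 27.5 seconds -> "over 20 seconds"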
  • 27. Double Data Planes
    • At the end of data-plane cloning, two data planes are ready to forward traffic (i.e., “double data planes”)
    [Timeline t0-t6: (0) tunnel setup, (1) router-image copy, (2) memory copy, (3) data-plane cloning, (4) asynchronous link migration; old and new nodes both run data planes during the double-data-plane window]
  • 28. Asynchronous Link Migration
    • With the double data planes, each link can be migrated independently
      • Eliminate the need for a synchronization system
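  A minimal sketch of why no synchronization is needed (names invented): with both data planes able to forward, each link is re-pointed on its own schedule.

      # Sketch of asynchronous link migration.
      links = ["vr1-link0", "vr1-link1", "vr1-link2"]
      serving_node = {link: "old-node" for link in links}

      for link in links:                   # each link moves independently;
          serving_node[link] = "new-node"  # other links are unaffected

      # Only when every link points at the new node is the old plane retired.
      assert all(n == "new-node" for n in serving_node.values())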
  • 29. Outline
    • Why is VROOM a good idea?
    • What are the challenges?
    • How does VROOM work?
    • Is VROOM practical?
      • Prototype system
      • Performance evaluation
    • Where to migrate?
  • 30. Prototype Implementation
    • PC + OpenVZ
    • OpenVZ: OS-level virtualization
      • Lighter-weight
      • Supports live migration
    • Two prototypes
      • Software-based data plane (SD): Linux kernel
      • Hardware-based data plane (HD): NetFPGA
        • NetFPGA: a 4-port Gigabit Ethernet PCI card with an FPGA
    • Why two prototypes?
      • To validate the data-plane hypervisor design (e.g., migration between SD and HD)
  • 31. The Out-of-the-box OpenVZ Approach
    • Packets are forwarded inside each VE
    • When a VE is being migrated, packets are dropped
  • 32. Control and Data Plane Separation
    • Move the FIBs out of the VEs
    • shadowd in each VE, “pushing down” route updates
    • virtd in VE0, as the “data-plane hypervisor”
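  The split can be pictured with two small classes. This sketch only mimics the message flow; the real shadowd and virtd are separate daemons speaking over a control channel.

      # Sketch of the shadowd/virtd route push-down path.
      class Virtd:
          """Data-plane hypervisor in VE0: one FIB per virtual router."""
          def __init__(self):
              self.fibs = {}
          def install(self, vr_id, prefix, next_hop):
              self.fibs.setdefault(vr_id, {})[prefix] = next_hop

      class Shadowd:
          """Runs inside a VE: pushes zebra's route updates down to virtd."""
          def __init__(self, vr_id, virtd):
              self.vr_id, self.virtd = vr_id, virtd
          def on_route_update(self, prefix, next_hop):
              self.virtd.install(self.vr_id, prefix, next_hop)

      virtd = Virtd()
      Shadowd("vr1", virtd).on_route_update("10.1.0.0/16", "eth0")
      print(virtd.fibs)   # {'vr1': {'10.1.0.0/16': 'eth0'}}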
  • 33. Dynamic Interface Binding
    • bindd provides two types of bindings:
      • Map substrate interfaces to the right FIB
      • Map substrate interfaces to the right virtual interfaces
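  bindd's two mappings can be pictured as a pair of tables (interface and FIB names invented for illustration):

      # Sketch of bindd's two binding tables.
      fib_binding = {"eth0": "fib-vr1"}           # substrate iface -> FIB
      iface_binding = {"eth0": ("vr1", "if0")}    # substrate -> virtual iface

      def rebind(substrate_if, vr_id, virtual_if):
          """Attach a substrate interface to a VR's FIB and virtual
          interface, e.g. after the VR arrives on a new physical router."""
          fib_binding[substrate_if] = f"fib-{vr_id}"
          iface_binding[substrate_if] = (vr_id, virtual_if)

      rebind("eth2", "vr1", "if1")  # vr1's migrated link now enters on eth2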
  • 34. Putting It All Together: Realizing Migration. 1. The migration program notifies shadowd about the completion of the control-plane migration
  • 35. Putting It All Together: Realizing Migration. 2. shadowd requests zebra to resend all the routes, and pushes them down to virtd
  • 36. Putting It All Together: Realizing Migration. 3. virtd installs routes into the new FIB, while continuing to update the old FIB
  • 37. Putting It All Together: Realizing Migration. 4. virtd notifies the migration program to start link migration after it finishes populating the new FIB. 5. After link migration completes, the migration program notifies virtd to stop updating the old FIB (a sketch of this double-FIB window follows below)
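  A compact sketch of steps 3-5 (names invented): during the double-data-plane window, virtd mirrors every install into both FIBs; when link migration completes, updates to the old FIB stop.

      # Sketch of virtd's double-FIB update window.
      old_fib = {"10.1.0.0/16": "eth0"}   # populated before the migration
      new_fib = {}
      double_plane = True                 # old data plane still forwarding

      def install(prefix, next_hop):
          new_fib[prefix] = next_hop
          if double_plane:                # keep the old data plane current
              old_fib[prefix] = next_hop

      install("10.1.0.0/16", "eth0")      # steps 2-3: zebra resends routes
      install("10.2.0.0/16", "eth1")
      # Step 4: new FIB populated -> migrate links; then step 5:
      double_plane = False                # stop updating the old FIB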
  • 38. Evaluation
    • Answer three questions
      • Performance of individual migration steps?
      • Impact on data traffic?
      • Impact on routing protocol?
    • Experiments on Emulab
  • 39. Performance of Migration Steps
    • Memory copy time
      • With different numbers of routes (dump file sizes)
  • 40. Performance of Migration Steps
    • FIB population time
      • Grows linearly w.r.t. the number of route entries
      • Installing a FIB entry into NetFPGA: 7.4 microseconds
      • Installing a FIB entry into Linux kernel: 1.94 milliseconds
    • FIB update time: time for virtd to install entries to FIB
    • Total time: FIB update time + time for shadowd to send routes to virtd
  • 41. Data Plane Impact
    • The diamond testbed
    • 64-byte UDP packets, round-trip traffic
  • 42. Data Plane Impact
    • HD router with separate migration bandwidth
      • No delay increase or packet loss
    • SD router with separate migration bandwidth
      • Up to 3.7% delay increase at 5k packets/s
      • Less than 0.4% delay increase at 25k packets/s
    [Figure: data-traffic delay through the SD router at 5k packets/s during migration]
  • 43. The Importance of Separate Migration Bandwidth
    • The dumbbell testbed
    • 250k routes in the RIB
  • 44. Separate Migration Bandwidth is Important
    • Throughput of the migration traffic
  • 45. Separate Migration Bandwidth is Important
    • Delay increase of the data traffic
  • 46. Separate Migration Bandwidth is Important
    • Loss rate of the data traffic
  • 47. Control Plane Impact
    • The Abilene testbed
    • Assume a backbone running MPLS
    • VR5 configured as
      • Core router (running OSPF only)
      • Edge router (running OSPF + BGP)
  • 48. Core Router Migration
    • No events during migration
      • Average control-plane downtime: 0.972 seconds (0.924 - 1.008 seconds in 10 runs)
      • Supports a 1-second OSPF hello-interval (with a 4-second dead-interval)
      • At most one hello message is missed, since the downtime is under one hello-interval and well within the dead-interval
  • 49. Core Router Migration
    • Events happen during migration
      • Events (LSAs) are introduced by flapping link VR2-VR3
      • At most one LSA is missed
      • The retransmission arrives 5 seconds later (the default LSA retransmission-interval)
      • A smaller LSA retransmission-interval (e.g., 1 second) can be used
  • 50. Edge Router Migration
    • 255k BGP routes + OSPF
    • Dump file size grows from 3.2MB to 76.0MB
    • Average control-plane downtime: 3.560 seconds (3.484 - 3.594 seconds in 10 runs)
    • Supports a 2-second OSPF hello-interval (with an 8-second dead-interval)
    • BGP sessions stay up
    • In practice, ISPs often use the default values
      • 10-second hello-interval
      • 40-second dead-interval
  • 51. Outline
    • Why is VROOM a good idea?
    • What are the challenges?
    • How does VROOM work?
    • Is VROOM practical?
    • Where to migrate?
  • 52. Deciding Where To Migrate
    • Physical constraints
      • Latency
        • E.g., NYC to Washington, D.C.: 2 ms
      • Link capacity
        • Enough remaining capacity for extra traffic
      • Platform compatibility
        • Routers from different vendors
      • Router capability
        • E.g., number of access control lists (ACLs) supported
    • Good news: these constraints limit the search space
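  As a sketch, the constraints act as a simple feasibility filter over candidate physical routers (all numbers invented):

      # Prune migration targets by physical constraints.
      candidates = [
          {"name": "PR-A", "latency_ms": 1.5, "spare_gbps": 40, "acls": 5000},
          {"name": "PR-B", "latency_ms": 9.0, "spare_gbps": 80, "acls": 5000},
          {"name": "PR-C", "latency_ms": 1.8, "spare_gbps": 5,  "acls": 500},
      ]
      need = {"latency_ms": 2.0, "spare_gbps": 10, "acls": 1000}

      feasible = [pr for pr in candidates
                  if pr["latency_ms"] <= need["latency_ms"]   # latency bound
                  and pr["spare_gbps"] >= need["spare_gbps"]  # link capacity
                  and pr["acls"] >= need["acls"]]             # capability
      print([pr["name"] for pr in feasible])                  # ['PR-A']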
  • 53. Two Optimization Problems
    • For planned maintenance/service deployment
      • Minimize path stretch
      • With constraints on link capacity, platform compatibility, router capability, etc.
    • For power savings
      • Maximize power savings
        • With different regional electricity prices
      • With constraints on path stretch, link capacity, etc.
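  For the power-savings variant, a brute-force sketch over the (already pruned) candidates conveys the shape of the problem (all numbers invented):

      # Pick the target with the largest electricity savings, subject to a
      # path-stretch bound.
      targets = [
          {"name": "PR-X", "savings_usd": 120.0, "stretch": 1.1},
          {"name": "PR-Y", "savings_usd": 300.0, "stretch": 2.5},
          {"name": "PR-Z", "savings_usd": 200.0, "stretch": 1.4},
      ]
      max_stretch = 1.5

      best = max((t for t in targets if t["stretch"] <= max_stretch),
                 key=lambda t: t["savings_usd"])
      print(best["name"])   # 'PR-Z': most savings within the stretch bound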
  • 54. Conclusions
    • VROOM offers a useful network-management primitive
      • Breaks the tight coupling between the physical and the logical
      • Simplifies network management, enables new applications
    • Live router migration with minimal disruption
      • Data-plane hypervisor enables
        • Data-plane cloning
        • Remote control plane
        • Double data planes and asynchronous link migration
      • No data-plane disruption
      • No visible control-plane disruption