Advanced network experiments in FED4FIRE

  1. ADVANCED NETWORK EXPERIMENTS ON FED4FIRE: PAST, PRESENT, AND A LOOK INTO THE FUTURE – DIMITRI STAESSENS
  2. SPARC – SPLIT ARCHITECTURE
  3. SPARC OBJECTIVES: CARRIER-GRADE SDN
  4. SCOPE
     Requirements (study topics)                        | Problem and Solution Description | OF Extensions        | Prototype Integration / Implementation | Validation / Performance Evaluation
     Controller Architecture                            | Yes                        | Yes (namespace mgmt) | Yes | Yes
     Network Management                                 | Yes                        | No  | No  | No
     Scalability                                        | Yes (numerical validation) | N/A | N/A | Yes
     Openness & Extensibility                           | Yes                        | Yes | Yes | Yes
     Service Creation                                   | Yes                        | Yes | Yes | Yes
     Virtualization & Isolation                         | Yes                        | Yes | Yes | Yes
     Control Channel Bootstrapping & Topology Discovery | Yes                        | N/A | Yes | Yes
     OAM                                                | Yes                        | Yes | Yes | Yes
     Network Resiliency                                 | Yes                        | N/A | Yes | Yes
     Energy-Efficient Networking                        | Yes                        | Yes | No  | No
     Quality of Service                                 | Yes                        | No  | No  | No
     Multilayer Aspects                                 | Yes                        | No  | No  | No
  5. RESTORATION (figure, step 1): modify flow entry; add new flow entry
  6. RESTORATION (figure, step 2): modify flow entry; add new flow entry
  7. RESTORATION (figure, step 3): modify flow entry; add new flow entry; delete old flow entry
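A minimal sketch of that restoration sequence, assuming a hypothetical controller handle: flow_mod, flow_add, and flow_del below are illustrative stand-ins, not any specific controller's API.

```python
# Hypothetical restoration handler. The Controller methods used here
# (flow_mod/flow_add/flow_del) are illustrative stand-ins.

def restore_path(ctrl, old_path, new_path, match):
    """Reroute a flow from old_path to new_path (lists of (switch, out_port))."""
    head_sw, head_port = new_path[0]
    # Step 1: modify the entry at the head-end switch to steer traffic
    # onto the backup path.
    ctrl.flow_mod(head_sw, match, out_port=head_port)
    # Step 2: add entries along the remainder of the new path.
    for sw, out_port in new_path[1:]:
        ctrl.flow_add(sw, match, out_port=out_port)
    # Step 3: delete stale entries on switches used only by the old path.
    new_switches = {sw for sw, _ in new_path}
    for sw, _ in old_path:
        if sw not in new_switches:
            ctrl.flow_del(sw, match)
```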
  8. RESILIENCE EXPERIMENT
     • 14 OF nodes (OVS)
     • 14 hosts (not shown), not OpenFlow-"aware"!
     • 1 controller on a separate control LAN
     • restoration application: shortest path, failure notification by the switch
     • 21 links (1 Gbps)
     • 176 "flows": pktgen UDP traffic, ~300 packets/s
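The deck's traffic source is pktgen; as a rough user-space stand-in, the sketch below sends a sequence-numbered UDP stream at ~300 packets/s. The destination address/port and payload size are assumptions.

```python
import socket
import time

# Minimal UDP traffic source, a user-space stand-in for pktgen.
DEST = ("10.0.0.2", 5001)   # assumption: receiving host and port
RATE_PPS = 300              # ~300 packets/s, as in the experiment
PAYLOAD = b"x" * 64         # assumption: small probe packets

def send_stream(duration_s: float) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / RATE_PPS
    end = time.monotonic() + duration_s
    seq = 0
    while time.monotonic() < end:
        # A sequence number lets the receiver detect the outage gap.
        sock.sendto(seq.to_bytes(8, "big") + PAYLOAD, DEST)
        seq += 1
        time.sleep(interval)

if __name__ == "__main__":
    send_stream(60.0)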
  9. FACILITY: VIRTUAL WALL
  11. EXPERIMENT TIMING: connecting switches to the NOX controller ("DP join") → establishing flows ("packet-in") → normal operation ("echo req/rep") → failure ("port status") → restored operation ("echo req/rep")
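One way to extract the failure-to-restoration timing on the receive side, a sketch assuming the sequence-numbered UDP stream from the sender above: the longest inter-arrival gap during the run approximates the traffic outage.

```python
import socket
import time

# Receive-side restoration-time estimate. The port is an assumption
# matching the sender sketch above.
def measure_outage(port: int = 5001, duration_s: float = 60.0) -> float:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(1.0)
    last_arrival = None
    worst_gap = 0.0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        try:
            sock.recvfrom(2048)
        except socket.timeout:
            continue
        now = time.monotonic()
        if last_arrival is not None:
            worst_gap = max(worst_gap, now - last_arrival)
        last_arrival = now
    return worst_gap  # seconds of lost traffic, e.g. ~0.15 for restoration

if __name__ == "__main__":
    print(f"outage: {measure_outage() * 1000:.0f} ms")
```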
  12. RESULTS: RESTORATION AND PROTECTION (figure: restoration vs. protection)
  13. RESULTS: RESTORATION AND PROTECTION (figure, four panels: (A) restoration, (B) protection-restoration, (C) restoration-protection, (D) protection; each plots traffic in packets/10 ms against experiment time in seconds, showing total traffic and traffic from Berlin; annotated switchover times: ~150 ms, ~120 ms, ~65 ms, and < 50 ms)
  14. CITYFLOW – QOS OVER SDN
  15. CITYFLOW OBJECTIVE: QOS DIFFERENTIATION (architecture diagram: three ASes, 65001–65003, each running an OpenFlow VPS Controller with scheduler control; the VPS control-plane invocation API includes a Network Service Portfolio Invocation Controller, an NSIS signalling driver for end-to-end control, an IPsphere driver for inter-AS configuration, RACF CAC, and a network-element configuration interface; endpoint applications invoke services over a business-logic/invocation bus (VPSS); the goal is a right of way for high-priority traffic across the multi-AS network, spanning the public Internet and the Future Internet)
  16. LOW-LEVEL INSTALLATION OF QUEUES IN FORWARDING ENGINES
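For Open vSwitch forwarding engines, low-level queue installation typically means creating a QoS record with linux-htb queues on a port and then steering flows into a queue with an OpenFlow set_queue action. A minimal sketch follows; the bridge and port names, rates, and match fields are assumptions for illustration.

```python
import subprocess

# Sketch: install two HTB queues on an OVS port and steer traffic into
# queue 1. Names and rates are assumptions, not the project's config.
def sh(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

# QoS record with a best-effort queue 0 and a guaranteed-rate queue 1.
sh("ovs-vsctl set port eth1 qos=@q -- "
   "--id=@q create qos type=linux-htb other-config:max-rate=1000000000 "
   "queues:0=@q0,1=@q1 -- "
   "--id=@q0 create queue other-config:max-rate=1000000000 -- "
   "--id=@q1 create queue other-config:min-rate=100000000")

# Flow entry that enqueues high-priority traffic (here: UDP dport 5001)
# on queue 1 of output port 2.
sh("ovs-ofctl add-flow br0 "
   "'udp,tp_dst=5001,actions=set_queue:1,output:2'")
```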
  17. OFELIA & CITYFLOW (testbed diagram: interconnected islands at i2CAT, ETHZ, CreateNet, TUB, and iMinds, each an AS running OVS, a Floodlight controller, and a VPS; RedZinc attached over an ADSL link)
  18. CITY-SCALE NETWORK EMULATION
  19. DIFFERENTIATED RECOVERY (figure: traffic in Mb/s vs. experiment time in seconds for best-effort and high-priority traffic, with the failure and "failure repaired" events marked)
  20. IRATI – CLEAN SLATE NETWORKING
  21. OBJECTIVE: IMPLEMENT A RINA PROOF OF CONCEPT
  22. RECURSIVE INTERNETWORK ARCHITECTURE (RINA)
  23. RINA: IRATI OS/LINUX IMPLEMENTATION. Source: S. Vrijders, F. Salvestrini, E. Grasa, M. Tarzan, L. Bergesio, D. Staessens, D. Colle, "Prototyping the recursive internet architecture: the IRATI project approach", IEEE Network, March 2014.
  24. TESTBEDS: OFELIA
  25. VALIDATION OF ROUTING
  26. VIRTUAL MACHINE NETWORKING
  27. SHIM IPCP OVER HYPERVISOR: implemented directly in the hypervisor (QEMU / Xen)
  28. VALIDATION OF THE SHIM-HV
  29. PERFORMANCE TEST
  30. PRISTINE – CLEAN SLATE NETWORKING
  31. OBJECTIVES: PROGRAMMABILITY OF RINA (DATNET and DISTCLOUD use cases)
  33. PERFORMANCE ISOLATION IN DATACENTERS
     • Custom congestion control in fat-tree topologies.
     • Measuring the performance of flows that belong to different tenants and compete for link bandwidth.
     • Measuring queue occupancy during congestion events.
     • How flows react when their rate is reduced to the tenant's paid bandwidth, while still sharing any capacity left over on the link (see the allocation sketch below).
     • How performance changes under different multipath strategies.
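A hedged sketch of the allocation rule described above: each tenant is first capped at its paid rate, then leftover link capacity is shared among tenants that still have demand. The equal-share redistribution is an assumption for illustration, not necessarily the policy used in the PRISTINE experiments.

```python
# Sketch of "reduce to paid bandwidth, then share leftover capacity".

def allocate(link_capacity: float, paid: dict, demand: dict) -> dict:
    # Phase 1: cap every tenant at min(demand, paid rate).
    alloc = {t: min(demand[t], paid[t]) for t in demand}
    leftover = link_capacity - sum(alloc.values())
    # Phase 2: spread leftover capacity over tenants with unmet demand.
    while leftover > 1e-9:
        hungry = [t for t in demand if alloc[t] < demand[t]]
        if not hungry:
            break
        share = leftover / len(hungry)
        for t in hungry:
            extra = min(share, demand[t] - alloc[t])
            alloc[t] += extra
            leftover -= extra
    return alloc

# Example: a 10 Gb/s link, two tenants paying 4 and 2 Gb/s.
print(allocate(10.0, {"A": 4.0, "B": 2.0}, {"A": 9.0, "B": 3.0}))
# -> {'A': 7.0, 'B': 3.0}
```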
  34. PRISTINE: VALIDATION EXPERIMENTS
     • Authentication: password-based, asymmetric keys
     • Encryption
     • Explicit congestion avoidance
     • Scalable routing
     • Location-independent application names
     • Mapping application names to node addresses at multiple layers (see the resolver sketch below)
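To make the last point concrete, here is a toy resolver, a sketch assuming a simple per-layer directory; the layer names and mappings are invented for illustration, and RINA's actual directory and flow-allocation machinery is richer.

```python
# Toy multi-layer name resolution: an application name is mapped to a
# node address in each layer it traverses. All entries are illustrative.

DIRECTORIES = {
    "tenant-dif":   {"video.server": "node-7"},
    "backbone-dif": {"node-7": "addr-0x2a"},
}

def resolve(app_name: str, layers: list[str]) -> str:
    """Resolve an application name down through a stack of layers."""
    name = app_name
    for layer in layers:
        name = DIRECTORIES[layer][name]  # lookup in this layer's directory
    return name

# The application name stays location-independent; only the per-layer
# mappings change when a node moves or is renumbered.
print(resolve("video.server", ["tenant-dif", "backbone-dif"]))  # addr-0x2a
```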
  35. ARCFIRE – LARGE-SCALE RINA EXPERIMENTATION ON FED4FIRE+
  36. SEAMLESS NODE RENUMBERING: setting up the experiment took 3–4 days of tedious and error-prone work; in the experiment, each node changes its address at random every 30–60 seconds (see the sketch below)
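A minimal sketch of the renumbering driver implied by the slide; new_address() and apply_address() are hypothetical hooks, and the real experiment renumbers RINA nodes rather than IP interfaces. Only the 30–60 s interval comes from the slide.

```python
import random
import time

# Sketch: a node re-draws its address at a random interval of 30-60 s.

def new_address() -> str:
    return f"addr-{random.randrange(1 << 16):04x}"

def apply_address(node: str, addr: str) -> None:
    print(f"{node}: renumbered to {addr}")  # stand-in for the real hook

def renumber_forever(node: str) -> None:
    while True:
        time.sleep(random.uniform(30, 60))
        apply_address(node, new_address())

if __name__ == "__main__":
    renumber_forever("node-1")
```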
  37. RUMBA FRAMEWORK: a Python library for managing RINA experiments on Fed4FIRE, built from testbed plugins and prototype plugins; it will become available to all Fed4FIRE users
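In the spirit of that plugin split, a sketch of how such an experiment script composes a testbed plugin, a prototype plugin, and a network description. The class names and signatures below are illustrative stand-ins, not Rumba's actual API; consult the Rumba documentation for the real interface.

```python
# Illustrative Rumba-style experiment script (not Rumba's real API).

class Testbed:            # stand-in for a Fed4FIRE testbed plugin
    def __init__(self, project: str):
        self.project = project

class Prototype:          # stand-in for a RINA prototype plugin (e.g. IRATI)
    name = "irati"

class Node:
    def __init__(self, name: str):
        self.name = name

class Experiment:
    def __init__(self, testbed, prototype, nodes, links):
        self.testbed, self.prototype = testbed, prototype
        self.nodes, self.links = nodes, links

    def run(self):
        # A real framework would reserve testbed resources, install the
        # prototype on every node, wire up the links, and run the logic.
        print(f"running {self.prototype.name} on "
              f"{len(self.nodes)} nodes / {len(self.links)} links")

a, b = Node("a"), Node("b")
Experiment(Testbed("myproject"), Prototype(), [a, b], [(a, b)]).run()
```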
  38. CONCLUSIONS
  39. CONCLUSIONS: FIRE testbeds fill a gap for Future Internet experiments that have one or more of the following requirements:
     • real-time operation
     • performance measurements at small timescales
     • implementations near the hardware
     • advanced OS modifications near the device-driver level
     • advanced architectural concepts
     • advanced virtualization concepts
     • a scriptable interface
  40. PUBLIC