
P. Mueller



Test of the PP presentation
G-Lab / PII Federation
FIRE Conference, 15th Dec. 2010, Ghent, Belgium
How can the slides be improved?
Paul Mueller
Content
G-Lab: Vision of the Future Internet
- Closing the loop between research and real-world experiments
- Provide an experimental facility for studies on architectures, mechanisms, protocols and applications towards the Future Internet
- Investigate the interdependency of theoretical studies and prototype development
- G-Lab as a virtual laboratory
- Design and setup of the experimental facility
G-Lab Organization
G-Lab Phase I (Oct. 2008)
- Coordination:
  - WP0: Coordination and Consolidation
- (Prototype) studies:
  - WP1: Architecture of the Future Internet
  - WP2: Routing and Address Schemes
  - WP3: Wireless Networks and Mobility
  - WP4: Monitoring and Management Concepts
  - WP5: QoS, QoE and Security
  - WP6: Architecture and Composition of Services
- Experimental facility:
  - WP7: Platform for Experiments
G-Lab Phase II (since Sept. 2009)
- COMCON (Control and Management of Coexisting Networks)
- VirtuRAMA (Network Virtualization)
- FoG (Forwarding on Gates)
- NETCOMP (Network Computing for the Service Internet of the Future)
- CICS (Convergence of Internet and Cellular Systems)
- HAMcast (Hybrid Adaptive Mobile Multicast)
- Deep (Deepening G-Lab for Cross-Layer Composition)
- Real-World G-Lab
- Ener-G (Energy Efficiency in G-Lab)

G-Lab Phase 1: Initial project partners
G-Lab Environment
- Full control over the resources
  - Reservation of a single resource should be possible
- Elimination of side effects
  - Testing scalability
  - Exclusive resource reservation
  - Testing QoS / QoE
- Decentralized: resources can be used independently
  - Tests on the lower layers of the network without affecting the "live" network
- Extended functionality
  - New technologies (wireless, sensor, …)
  - Interfaces to other testbeds (GENI, PlanetLab Japan, WinLab, …)
Hardware Equipment (1)
- Nodes per site; 174 nodes in total (1392 cores total: 174 nodes x 8 cores each)
- Normal node
  - 2x Intel L5420 quad-core 2.5 GHz
  - 16 GB RAM
  - 4x Gbit LAN
  - 4x 146 GB disk
  - ILOM management interface (separate LAN)
- Network node
  - 4 extra Gbit LAN ports
- Head node
  - 2x Intel E5450 quad-core 3.0 GHz
  - 12x 146 GB disk
- Site requirements
  - Public IP addresses (IPv4/IPv6)
Network Topology
- DFN IP topology
Flexibility
- The experimental facility is part of the research experiments
  - The facility can be modified to fit an experiment's needs
  - Researchers can run experiments that might break the facility
  - An experimental facility instead of a testbed
- Research is not limited by
  - the current software / hardware setup
  - restrictive policies
- The experimental facility is evolving
  - Cooperative approach: "When you need it, build it"
  - The core team helps
  - Cooperation / federation with other facilities (e.g. OneLab, PlanetLab, GENI)
- Private boot images
  - Three types of boot images: PlanetLab; hypervisor virtualization image; custom boot images
Monitoring
- Nagios
  - Central monitoring in Kaiserslautern
  - Obtains information from other sites via an NRPE proxy on the head node
  - Checks: availability of nodes, status of special services, hardware status (via ILOM)
- CoMon
  - PlanetLab-specific monitoring, in cooperation with PlanetLab, Princeton
  - Monitors nodes from within: CPU, memory, IO
  - Slice-centric view: monitors experiments
- MyOps
  - PlanetLab-specific tool, in cooperation with PlanetLab, Princeton
  - Detects common PlanetLab problems and reacts to them

Control Frameworks
Existing control frameworks within G-Lab:
- PlanetLab
  - High node numbers -> allows large experiments
  - Uses container virtualization -> limited hardware and kernel access
  - Shared network interfaces -> no network topologies
- Emulab
  - No virtualization -> full hardware and kernel access, but inefficient
  - Realized as a cluster -> full control over the network topology
- Seattle (GENI)
  - Very high node numbers -> allows large experiments
  - Uses a virtual Python interpreter -> no operating-system access
  - Only a custom Python dialect -> not suitable for all experiments
- ToMaTo (Topology Management Tool)
  - A topology contains devices and connectors
  - Devices: KVM, OpenVZ, templates
  - Connectors: Tinc VPN, special connectors, link emulation
ToMaTo: a Topology Management Tool
- Topology Creator
  - Automatically creates topologies: ring topology, star topology, full mesh
  - Connects topologies
  - Configures network interfaces: IP addresses, netmasks
- ToMaTo
  - A topology contains devices and connectors
  - Devices: KVM, OpenVZ, templates
  - Connectors: Tinc VPN, special connectors, link emulation
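The three patterns the Topology Creator generates can be sketched as simple edge-list generators with per-node interface configuration. This is an illustrative sketch only, not ToMaTo's actual API; the function names and the example subnet are assumptions.

```python
from ipaddress import IPv4Network

# Hypothetical helpers illustrating the Topology Creator's patterns;
# ToMaTo's real interface is not shown here.

def ring(n):
    # Each node links to its successor, wrapping around: n links for n nodes.
    return [(i, (i + 1) % n) for i in range(n)]

def star(n):
    # Node 0 acts as the hub; every other node connects to it.
    return [(0, i) for i in range(1, n)]

def full_mesh(n):
    # Every unordered pair of nodes is connected: n*(n-1)/2 links.
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

def assign_interfaces(n, subnet="10.0.0.0/24"):
    # Give each node one IP address plus the subnet's netmask.
    net = IPv4Network(subnet)
    hosts = list(net.hosts())
    return {i: {"ip": str(hosts[i]), "netmask": str(net.netmask)}
            for i in range(n)}

# Example: a 4-node ring with configured interfaces.
links = ring(4)              # [(0, 1), (1, 2), (2, 3), (3, 0)]
ifaces = assign_interfaces(4)
```

In a real ToMaTo topology, each generated link would become a connector (e.g. Tinc VPN with link emulation) and each node a KVM or OpenVZ device.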
G-Lab Environment
- Testbed:
  - Real, not simulated
  - Specific purpose, focused goal
  - Known success criteria
  - Limited scale
  - Not sufficient for clean-slate design
- Experimental facility:
  - Purpose: explore yet-unknown architectures, expose researchers to the real thing, breakable infrastructure
  - Larger scale (global?)
  - Success criteria: unknown
Why Federation?
- Wilderness
  - Scalability
  - How do new algorithms behave in the wilderness?
- Zoo
  - Controlled environment: host systems, network, users
  - A controlled environment for development, deployment and testing of new algorithms
  - Breakable infrastructure
  - Repeatable experiments
- When dealing with new algorithms for routing, security, mobility, …
  - Improves scientific quality
Federation
- Boom of testbeds & experimental platforms
  - Virtualization efforts in most areas of networking and applications
  - Growing number of testbeds and experimental facilities
- Current problems and activities
  - Danger of experimental platforms being underutilized
  - Interworking between testbeds is underdeveloped: federation
  - The role of simulation and analysis is forgotten or underestimated
- Possible solution: future-network studies and testbeds in parallel (partly disjoint)
© G-Lab, Phuoc Tran-Gia
The FIRE Project PII
- The Pan-European Laboratory Project (PII) develops the SOA testbed-federation tool Teagle
- Supports the life cycle of testbed-federation management via a common management plane
  - Provides repositories for testbed and component descriptions (registration)
  - Searches available resources, i.e. infrastructure and services (discovery)
  - Orchestration of services and testing infrastructures
  - Initiates automated deployment / provisioning
  - Monitoring of federated resources (fault / performance management)
[Diagram: PII Testbed Manager <-> G-Lab Testbed Manager; each manager drives a Resource Adaptor via an interface, and each Resource Adaptor interfaces to a Resource]
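The registration and discovery steps above can be sketched as a minimal resource registry. This is a toy illustration of the concept only, not Teagle's actual interface; the class name, method names, and resource descriptions are all invented for the example.

```python
class ResourceRegistry:
    """Toy federation registry: testbeds register resource
    descriptions, experimenters discover matching resources."""

    def __init__(self):
        self._resources = {}

    def register(self, rid, description):
        # Registration: store a resource description under its id.
        self._resources[rid] = description

    def discover(self, **criteria):
        # Discovery: return ids whose description matches all criteria.
        return sorted(rid for rid, desc in self._resources.items()
                      if all(desc.get(k) == v for k, v in criteria.items()))

# Example: two testbeds publish nodes; an experimenter looks for KVM hosts.
reg = ResourceRegistry()
reg.register("glab-kl-1", {"testbed": "G-Lab", "virt": "kvm"})
reg.register("pii-berlin-7", {"testbed": "PII", "virt": "openvz"})
print(reg.discover(virt="kvm"))  # -> ['glab-kl-1']
```

In the Teagle setting, the registry would hold testbed and component descriptions in its repositories, and the discovery result would feed orchestration and automated provisioning.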
Prof. Dr. Paul Mueller
Phone: +49 (0)631 205-2263
Fax: +49 (0)631 205-3056
Email:
Internet: