Testbed by Yeongjae Yu
Slide notes
  • * http://www3.informatik.uni-wuerzburg.de/ITG/2006/presentations/talk_Rexford.pdf
  • For GENI to function, the GMC must be reduced to a specific implementation. We refer to this implementation as the “GENI Management Core Implementation”, or “GMCI”. The GMCI provides two key elements of the overall GENI system. These are 1) the “small set of core services” that are necessary for the system to operate, and 2) an “underlying messaging and remote operation invocation framework” needed for elements of the GENI system to communicate with each other.
  • * The actual implementation of the GMC appears to be the aggregate. The GMC is a framework that defines what must be done for GENI to operate properly.
  • Text on GGIDs (for naming)... The GMC defines unambiguous identifiers, called “GENI Global Identifiers (GGID)”, for the set of objects that make up GENI. These objects, as defined fully in the next section, include “users”, “components”, “aggregates”, and “slices”. Specifically, a GGID is represented as an X.509 certificate [X509, RFC3280] that binds a Universally Unique Identifier (UUID) to a public key. There may be one or many authorities that each implement the GMC. The “GMCI” provides a default repository that defines a hierarchical name space for objects, corresponding to the hierarchy of authorities that have been delegated the right to create and name objects. A repository is implemented as a service in GENI, and hence, it is identified with its own GGID. (=> the repository is implemented as a service) The protocols used to register with or query a repository are described in WSDL and are extensible and exportable. New repositories will be able to describe their capabilities in WSDL and common tools can be used to contact repositories.
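  • * A minimal, illustrative sketch of the GGID idea in Ruby (standard openssl and securerandom libraries): a UUID bound to a freshly generated public key inside an X.509 certificate. Carrying the UUID in the subject CN, the authority name, and the self-signature are assumptions for illustration, not the GMC's actual encoding:
    require 'openssl'
    require 'securerandom'

    uuid = SecureRandom.uuid                    # the object's unique identifier
    key  = OpenSSL::PKey::RSA.new(2048)         # key pair the GGID will vouch for

    ggid = OpenSSL::X509::Certificate.new
    ggid.version    = 2                         # X.509v3
    ggid.serial     = 1
    ggid.subject    = OpenSSL::X509::Name.parse("/CN=#{uuid}")                    # UUID in the subject (assumption)
    ggid.issuer     = OpenSSL::X509::Name.parse("/CN=gmc-authority.example.net")  # hypothetical naming authority
    ggid.public_key = key.public_key
    ggid.not_before = Time.now
    ggid.not_after  = Time.now + 365 * 24 * 60 * 60
    ggid.sign(key, OpenSSL::Digest.new('SHA256'))   # self-signed here; a real GMC authority would sign it

    puts ggid.to_pem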
  • * Text on components: The GMC defines “components” as the primary building block of GENI. For example, a component might correspond to an edge computer, a customizable router, or a programmable access point. A component encapsulates a collection of resources, including both physical resources (e.g., CPU, memory, disk, bandwidth) and logical resources (e.g., file descriptors, port numbers). Each component is controlled via a “component manager (CM)”, which exports a well-defined, remotely accessible interface (described in Section 6.2). The “component manager abstraction” defines the operations available to the GMC and higher level services to manage the allocation of component resources to different users and their experiments. One concrete representation of the “CM interface” is that it exports a set of procedures that can be invoked using XML-RPC.
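  • * Since the note above says the CM interface exports procedures invocable via XML-RPC, here is a hedged Ruby sketch of such a call; the endpoint URL, the CreateSliver method name, and the arguments are hypothetical, not the documented CM API (the xmlrpc library is a separate gem on Ruby 3.x):
    require 'xmlrpc/client'

    # Hypothetical component manager endpoint and method; for illustration only
    cm = XMLRPC::Client.new2('https://cm.example-component.geni.net/xmlrpc')

    ticket = File.read('ticket.xml')                          # an opaque ticket obtained elsewhere
    result = cm.call('CreateSliver', 'example_slice', ticket) # ask the CM to instantiate a sliver
    puts result.inspect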
  • * The actual implementation of the GMC appears to be the aggregate. The GMC is a framework that defines what must be done for GENI to operate properly. * Should this be called the GMC or the aggregate? Key functions and aggregates... This section sketches how we expect these key functions will be implemented, primarily through the use of “aggregates”. ★ It is important to keep in mind that this section outlines several possible scenarios for how the GENI management plane organizes itself into a collection of “aggregates” to accomplish key functions. Other structures are possible, the only requirement being that they adhere to the set of “abstractions” and “interfaces” defined in the previous sections.
  • Resource allocation by components... Components might allocate resources directly 1) to individual “slices”, 2) to a network-specific “aggregate” that manages resource allocation across a set of related components, 3) to a “researcher portal” that implements a more sophisticated resource allocation policy, or 4) to some combination of all three. * On portals... Second, a pair of “portals” provides interfaces to all of “GENI proper”. Researchers request slices and control their experiments using the “researcher portal”; all rights for resources are concentrated in this portal, and the Science Council policy is implemented by this portal. The portal then contacts the “slice control interface” in an appropriate set of “management aggregates” to create the corresponding slivers. Similarly, an “operator portal” defines a GENI-wide interface for operations across all GENI components. This portal, in turn, contacts the “O&M control interface” for the appropriate set of subnet-specific “management aggregates”.
  • * PlanetLab is the prototype of GENI (Larry Peterson @ Metro line M2 of Lausanne, 2005.10) * Reference: [Chen 06] huanghai.org/meetings/data/2ndC-K/PlanetLab20060828.ppt
  • * Our goal is for VINI to become shared infrastructure that enables researchers to simultaneously evaluate new protocols and services using real traffic from distributed services that are also sharing VINI resources. * The nodes at each site will initially be high-end servers, but may eventually be programmable hardware devices that can better handle a large number of simultaneous experiments carrying a volume of real traffic and many simultaneously running protocols.
  • * Perform these steps on your local PC to deploy and initialize a virtual network. 1) In the Makefile, change CONFIG to point to the Ruby configuration file that describes your virtual network topology and routing protocols (e.g., config/vini.rb). 2) Generate the configuration files for all nodes. 3) Sync up files to the PlanetLab nodes. - This copies everything (necessary scripts, configuration files, RPMs, etc.) into ~/vini on each PlanetLab node. Since it uses 'rsync', it may take a while the first time you run it, but after that it should be faster. You need to run this step every time you rebuild the configuration files, and also when you upgrade the software (e.g., to a new Click release). 4) Install any required RPMs on the boxes and install the file system image that UML runs from. - On each node, this command executes two other Makefile targets: install_rpms and get_uml_fs. You only need to run this the first time you deploy the PL-VINI software in your slice, or if you are upgrading the software.
  • * Resources include links, switches, routers, ...
  • * The original UCLPv2 has only the XC-WS, so it can create only link-level APNs and cannot create network-level APNs.
  • * Explain each area.
  • * http://www.geni.net/docs/dev_VINI.txt
  • * It (the Node Manager) periodically retrieves slices.xml from PLC, creates the necessary local slivers, interprets the resource allocations for those slivers specified by slices.xml, and manipulates the various kernel schedulers and allocators to effect those allocations. [Fiuczynski 04]
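  • * A rough Ruby sketch of the node manager loop described above; the PLC URL, the slices.xml layout, and the helper methods are assumptions for illustration, not PlanetLab code:
    require 'net/http'
    require 'rexml/document'

    PLC_SLICES_URL = URI('https://plc.example.org/slices.xml')   # hypothetical PLC endpoint

    def sliver_exists?(name)
      Dir.exist?("/vservers/#{name}")                            # placeholder existence check
    end

    def create_sliver(name)
      puts "would create a local sliver (VM) for slice #{name}"
    end

    def apply_allocation(name, attrs)
      puts "would push #{attrs.inspect} into the kernel schedulers/allocators for #{name}"
    end

    loop do
      doc = REXML::Document.new(Net::HTTP.get(PLC_SLICES_URL))   # fetch slices.xml from PLC

      doc.elements.each('slices/slice') do |slice|               # assumed XML layout
        name  = slice.attributes['name']
        attrs = { 'cpu_share' => slice.attributes['cpu_share'] } # assumed attribute name
        create_sliver(name) unless sliver_exists?(name)
        apply_allocation(name, attrs)
      end

      sleep 600                                                  # polling interval is an assumption
    end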
  • * Two features added in PlanetLab 3.0: - a slice creation mechanism that uses tickets (the existing PLC-based slice creation mechanism is still provided by default) - users and site PIs can now specify additional slice attributes (e.g., CPU shares, link bandwidth, disk quotas, and so on)
  • * PL-VINI enables arbitrary virtual networks, consisting of software routers connected by tunnels, to be configured within a PlanetLab slice. * An example of a network experiment enabled by PL-VINI is the Internet In A Slice (IIAS), which consists of an IP data plane configured by a control plane running standard network routing protocols. * This User Guide describes how to configure a PL-VINI experiment running IIAS, but IIAS is only one experiment supported by the PL-VINI infrastructure. * Virtual network topologies, consisting of UML instances connected by virtual point-to-point Ethernet links, run within a PlanetLab slice. PL-VINI matches packets to virtual links (implemented by UDP tunnels) based on the Ethernet MAC header, and so has no inherent dependencies on Layer 3 protocols.
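  • * A small illustrative sketch (not PL-VINI code) of the demultiplexing idea described above: pick the UDP tunnel for a raw Ethernet frame by looking at its destination MAC address; the MAC addresses and tunnel endpoints below are made up:
    require 'socket'

    # destination MAC -> [tunnel host, UDP port]; hypothetical virtual-link table
    LINKS = {
      '00:ff:0a:00:14:01' => ['vini1.siteA.example.net', 4801],
      '00:ff:0a:00:12:01' => ['vini1.siteB.example.net', 4801],
    }

    def dst_mac(frame)
      frame.byteslice(0, 6).unpack('C6').map { |b| format('%02x', b) }.join(':')  # first 6 bytes of the Ethernet header
    end

    def forward(sock, frame)
      host, port = LINKS[dst_mac(frame)]
      sock.send(frame, 0, host, port) if host    # frames with no matching virtual link are dropped
    end

    sock = UDPSocket.new
    # forward(sock, frame) would be called for each frame read from the slice's tap device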
  • * $slice = Slice.new(<control port>, <UDP port>, '<slice name>', <click forwarding>, '<router>') 1. The TCP port that Click runs its control (telnet) interface on. Pick an unused port. 2. The UDP port on which overlay traffic is tunneled. Pick an unused port. 3. The name of your PlanetLab/VINI slice. 4. Whether to enable IP forwarding in the Click FEA (true | false). This option is only meaningful with the XORP router. 5. What router software to run ("XORP" | "Quagga" | "none"). * $node = Node.new('<node DNS name>', $slice, 'label') * $node.hostinfo('<real IP>', '<tap0 IP>', '<tap0 MAC>') * $link = Link.new($node0, $node1, <OSPF cost>, <loss rate>) * $node.add_nat_dests(['<prefix1>', '<prefix2>', ...]) - Adds the specified prefixes to those for which the node will serve as a NAT egress
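  • * To make the parameters above concrete, here is a minimal sketch of a configuration that uses only the constructors listed above; the slice name, ports, hostnames, and prefix are placeholders (a fuller, real example appears on slide 32):
    # Minimal two-node PL-VINI topology (placeholder names and ports)
    $slice = Slice.new(13654, 4801, 'example_slice', true, 'XORP')

    $a = Node.new('vini1.siteA.vini-veritas.net', $slice, 'a')
    $b = Node.new('vini1.siteB.vini-veritas.net', $slice, 'b')

    $l1 = Link.new($a, $b, 10)                  # one virtual link, OSPF cost 10
    $b.add_nat_dests(['192.0.2.0/24'])          # node b serves as NAT egress for this example prefix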
  • * <Backbone Node: Requirements and Architecture, GDD-06-26>
  • Transcript

    • 1. CS744 Sub Area Introduction: GENI Facility (Testbed) 2007.10.22 Yu Yeongjae [email_address]
    • 2. Table of Contents
      • Need for Experimental Facility
      • GENI Architecture & Facility Design
      • Related Work
      • − PlanetLab & Virtual Network Infrastructure (VINI)
      • − User Controlled LightPath (UCLP) & Articulated Private Network (APN)
      • Reference
      • Appendix
    • 3.
      • 1. Need for Experimental Facility
    • 4. 1.1 Need for Experimental Facility [Peterson 06] Goal: Seamless conception-to-deployment process
    • 5.
      • Simulators
        • ns
      • Emulators
        • Emulab
        • WAIL
      • Wireless Testbeds
        • ORBIT
        • Emulab
      • Wide-Area Testbeds
        • PlanetLab
        • RON
        • X-bone
        • DETER
      1.2 Existing Tools [Peterson 06]
    • 6.
      • Simulation based on simple models
        • topologies, admin policies, workloads, failures…
      • Emulation (and “in lab” tests) are similarly limited
        • only as good as the models
      • Traditional testbeds are targeted
        • often of limited reach
        • often with limited programmability
      • Testbed dilemma
        • production: real users but incremental change
        • research: radical change but no real users
      1.3 Today’s Tools Have Limitations [Peterson 06]
    • 7.
      • We need:
      • − Real implementation
      • − Real experience
      • − Real network conditions
      • − Real users
      • Global Environment for Network Innovations
      • − Prototyping new architectures
      • − Realistic evaluation
      • − Controlled evaluation
      • − Shared facility
      • − Connecting to real users
      • − Enabling new services
      1.4 Things We Need [Rexford 06]
    • 8.
      • 2. GENI Architecture & Facility Design
    • 9.
      • 1.1 What is GENI?
      • GENI is an open, large-scale, realistic experimental facility that will revolutionize research in global communication networks [Peterson 06]
      • 1.2 The Role of GENI
      • GENI will allow researchers to experiment with alternative network architectures , services , and applications at scale and under real-world conditions [Clark 07]
      1. GENI Architecture & Facility Design * Reference: http://netseminar.stanford.edu/sessions/2006-11-16.ppt GENI Network Virtualization
    • 10.
      • 1.3 Three Levels of GENI Architecture [Peterson 07]
      • 1) Physical Substrate
      • − At the bottom level, GENI provides a set of physical facilities (e.g., routers, processors, links, wireless devices)
      • 2) User Services
      • − At the top level, GENI's “user services” provide a rich array of user-visible support services intended to make the facility accessible and effective in meeting its research goals
      • 3) GENI Management Core (GMC)
      • − Sitting between the “physical substrate” and the “user services” is the “GENI Management Core”, or “GMC”
      • − The purpose of the GMC is to define a stable, predictable, long-lived framework (a set of abstractions, interfaces, name spaces, and core services) to bind together the GENI architecture
      1. GENI Architecture & Facility Design (Cont’d)
    • 11.
      • * Note that the GMC is not a management service or operations center. GMC only defines the framework within which such facilities can be constructed [Peterson 07]
      1. GENI Architecture & Facility Design (Cont’d) GENI Architecture * GMC: GENI Management Core * Reference: http://www.geni.net/docs/GENI.ppt abstraction
    • 12.
      • GENI Names [Peterson 07]
      • − The GMC defines unambiguous identifiers, called GENI Global Identifiers (GGIDs), for the set of objects that make up GENI
      • − These objects include users, components, aggregates, and slices
      • − A GGID is represented as an X.509 certificate [X509, RFC3280] that binds a Universally Unique Identifier (UUID) to a public key
      1. GENI Architecture & Facility Design (Cont’d)
    • 13.
      • GENI Abstractions [Peterson 07]
      • Three major abstractions that the GMC defines:
      • (1) Components
      • − The primary building block of GENI
      • − A component encapsulates a collection of resources
      • − Each component is controlled via a component manager (CM)
      • (2) Slices
      • − A slice is the substrate resources bound to a particular experiment [Clark 07]
      • − Users run their experiments in a slice of the GENI substrate
      • (3) Aggregates
      • − An “aggregate” is a GENI object that represents an unordered collection of components
      • − There also might be a “root” aggregate (e.g. researcher portal) that corresponds to all GENI components
      • − Aggregates coordinate resource allocation and manage sets of components
      1. GENI Architecture & Facility Design (Cont’d)
    • 14. 1. GENI Architecture & Facility Design (Cont’d) <Components & Aggregate: each component stacks Substrate HW, Virtualization SW, and a CM; an aggregate layers a Slice Manager, Resource Controller, and Auditing Archive over the components for management (boot/monitor) and coordination (slice control)> * CM: Component Manager * GMC: GENI Management Core * Reference: http://www.geni.net/docs/GENI.ppt
    • 15.
      • • Management Aggregate (Backbone/Wireless WG)
      • − Operations & Maintenance Control Plane ➤ securely boot and update ➤ diagnose & debug failures
      • • Coordination Aggregate (Backbone/Wireless WG)
      • − Slice Control Plane ➤ coordinate slice embedding across a subnet
      • • Portal Aggregate (Services WG)
      • − slice embedding service ➤ resource discovery ➤ resource allocation ➤ end-to-end topology “stitching”
      • − experiment management service
      • ➤ configuration management ➤ development tools ➤ diagnostics & monitoring ➤ data logging
      1. GENI Architecture & Facility Design (Cont’d) * A “portal” is an interface that defines an “entry point” through which users access GENI components
    • 16. 1. GENI Architecture & Facility Design (Cont’d) [Peterson 07] (1) Researcher Portal (2) Operator Portal * Both portals serve as “front-ends” for a set of “infrastructure services” that researchers engage to help them manage their slices, and operators engage to help them monitor and diagnose the components User Portal
    • 17.
      • 3. Related Work
    • 18.
      • 3.1.1 PlanetLab
      • − PlanetLab is a global overlay network for developing and accessing broad-coverage network services [Chun 03]
      • − PlanetLab allows multiple services to run concurrently and continuously, each in its own slice of PlanetLab [Chun 03]
      • − PlanetLab serves as a prototype of GENI [Peterson 05]
      • ➤ helps make the case that such a facility is feasible
          • − PlanetLab is limited to a set of commodity PCs running as an overlay [Chen 06]
      3.1 PlanetLab & VINI
    • 19.
      • 3.1.1 PlanetLab
      • − PlanetLab Central (PLC), a centralized front-end, acts as the trusted intermediary between PL users and node owners [Peterson 06]
      3.1 PlanetLab & VINI (Cont’d) * Reference: [Peterson 05] http://lsirwww.epfl.ch/PlanetLabEverywhere/slides/epfl.ppt <PlanetLab Principals>
    • 20.
      • 3.1.2 Virtual Network Infrastructure (VINI) [Bavier 06]
      • − VINI is a virtual network infrastructure that allows network researchers to evaluate their protocols and services in a realistic environment that also provides a high degree of control over network conditions.
      • − PL-VINI is a prototype of a VINI that runs on the public PlanetLab. PL-VINI enables arbitrary virtual networks, consisting of software routers connected by tunnels, to be configured within a PlanetLab slice.
      • − VINI is an early prototype of the GENI [Rexford and Peterson 07]
      3.1 PlanetLab & VINI (Cont’d)
    • 21. <Deploying and Initializing the Virtual Network>
      1. Make a slice on PlanetLab nodes
      2. Change configuration file
      3. Generate the configuration files for all nodes
      4. Copy everything (necessary scripts, configuration files, RPMs, etc.) into each PlanetLab node
      5. Install any required RPMs on the boxes, install the file system image that UML runs from
      6. Start up the overlay
    • 22.
      • User Controlled LightPath (UCLPv2) [Lemay 06]
      • − UCLP is a network virtualization management tool built using web services
      • ex) XC-WS (Cross Connect Web Service) for SONET, SDH, and Lambda cross connects
      • − Users can create several parallel application-specific networks from a single physical network through UCLP
      • Articulated Private Network (APN)
      • − An aggregate mix of resources [St.Arnaud 07]
      3.2 UCLP & APN UCLP Network Virtualization
    • 23. * Reference: [Grasa 07], http://tnc2007.terena.org/core/getfile.php?file_id=474 Resource Management Layer Resource Virtualization Layer User Access Layer GUI Client <UCLP High Level Architecture> LP-WS ITF-WS Ethernet WS XC-WS Router-WS
    • 24.
      • Original User Controlled LightPath (UCLPv2) Limitation
      • − It supports virtualization only for SONET-based network elements such as the ONS15454
      • − As a result, it is limited to making link-level APNs and cannot support experiments that need routers
      3.2 UCLP & APN (Cont’d)
    • 25. 3.3 Relationship between GENI and Related Work <Relationship among PlanetLab, PL-VINI, UCLP and GENI>
    • 26.
      • GENI is an experimental facility intended to enable fundamental innovations in networking and distributed systems [Peterson 07]
      • GENI will allow researchers to experiment with alternative network architectures, services, and applications at scale and under real-world conditions [Clark 07]
      • Prototyping of GENI is needed to aggressively drive down GENI construction risk [Elliott 07]
      • Related work includes PlanetLab, VINI, and UCLP
      • PlanetLab serves as a prototype of GENI [Peterson 05]
      • VINI is an early prototype of the GENI [Rexford and Peterson 07]
      • We’ll try to make a prototype of GENI based on UCLP deployed on a lambda network
      4. Summary
    • 27.
      • [Peterson 07] Larry Peterson, John Wroclawski, "Overview of the GENI Architecture", GDD-06-11, January 2007
      • [Peterson 07] Larry Peterson, Tom Anderson, Dan Blumenthal, "GENI Facility Design", GDD-07-44, March 2007
      • [Turner 06] Jonathan Turner, "A Proposed Architecture for the GENI Backbone Platform", GDD-06-09, March 2006
      • [Blumenthal 06] Dan Blumenthal, Nick McKeown, "Backbone Node: Requirements and Architecture", GDD-06-26, November 2006
      • [Peterson 06] Larry Peterson, Steve Muir, Timothy Roscoe, Aaron Klingaman, "PlanetLab Architecture: An Overview", PDN-06-031, May 2006
      • [Bavier 06] Andy Bavier, Nick Feamster, Mark Huang, Larry Peterson, and Jennifer Rexford, "In VINI Veritas: Realistic and Controlled Network Experimentation", SIGCOMM, October 2006
      • [St.Arnaud 06] Bill St.Arnaud, "UCLP Roadmap for creating User Controlled and Architected Networks using Service Oriented Architecture", January 2006
      Reference
    • 28. Appendix A. PlanetLab, A.1 Chain of Responsibility [Peterson 05]
      Join Request: PI submits Consortium paperwork and requests to join
      PI Activated: PLC verifies PI, activates account, enables site (logged)
      User Activated: Users create accounts with keys, PI activates accounts (logged)
      Slice Created: PI creates slice and assigns users to it (logged)
      Nodes Added to Slices: Users add nodes to their slice (logged)
      Slice Traffic Logged: Experiments run on nodes and generate traffic (logged by Netflow)
      Traffic Logs Centrally Stored: PLC periodically pulls traffic logs from nodes
      * PI: Principal Investigator * PLC: PlanetLab Central
    • 29. Appendix A. PlanetLab, A.2 Slice Creation Mechanism (1) <diagram: the PI calls SliceCreate() and SliceUsersAdd() at PLC (SA); the user calls SliceNodesAdd(), SliceAttributeSet(), and SliceInstantiate(); each node's NM calls SliceGetAll() to fetch slices.xml and creates the VMs on the VMM> * NM: Node Manager * VM: Virtual Machine * VMM: Virtual Machine Monitor
    • 30. Appendix A. PlanetLab, A.2 Slice Creation Mechanism (2) <diagram: the PI calls SliceCreate() and SliceUsersAdd(); the user calls SliceAttributeSet() and SliceGetTicket(), then distributes the ticket to a slice creation service, which calls SliverCreate(ticket) on each node's NM to create the VMs on the VMM> * NM: Node Manager * VM: Virtual Machine * VMM: Virtual Machine Monitor
    • 31. Appendix B. Virtual Network Infrastructure
      • Virtual Network Infrastructure (VINI)
      • Configuring a Virtual Network
      • A virtual network's topology and routing protocols are specified by means of a configuration file. The basic idea is to define a Node object for each router in your experiment, and a Link object between two Node objects for each virtual link.
      • Next slide shows an example of configuration file.
    • 32. <Example configuration file>
      ### Specify global defaults
      ### Slices
      $iias = Slice.new(13654, 4801, 'princeton_iias', true, 'XORP')
      ### Nodes
      $pr  = Node.new('vini1.princeton.vini-veritas.net', $iias, 'pr')
      $ny1 = Node.new('vini1.newy.internet2.vini-veritas.net', $iias, 'ny1')
      $ch1 = Node.new('vini1.chic.internet2.vini-veritas.net', $iias, 'ch1')
      $ny2 = Node.new('vini2.newy.internet2.vini-veritas.net', $iias, 'ny2')
      $ch2 = Node.new('vini2.chic.internet2.vini-veritas.net', $iias, 'ch2')
      ### Links
      $l1 = Link.new($pr, $ny1, 50)
      $l2 = Link.new($pr, $ny2, 50)
      $l3 = Link.new($ny1, $ch1, 700)
      $l4 = Link.new($ny2, $ch2, 700)
      $l5 = Link.new($ny1, $ny2, 1)
      $l6 = Link.new($ch1, $ch2, 1)
      ### External destinations
      $ch1.add_nat_dests(['66.158.71.0/24'])
      $ch2.add_nat_dests(['66.158.71.0/24'])
      ### OpenVPN servers
      $pr.openvpn(true)
      ### Additional node configuration
      ### Specifying host info means we don't need to ssh to node
      $pr.hostinfo("128.112.139.43", "10.0.1.1", "00:FF:0A:00:01:01")
      $ny1.hostinfo("64.57.18.82", "10.0.20.1", "00:FF:0A:00:14:01")
      $ch1.hostinfo("64.57.18.18", "10.0.18.1", "00:FF:0A:00:12:01")
      $ny2.hostinfo("64.57.18.83", "10.0.21.1", "00:FF:0A:00:15:01")
      $ch2.hostinfo("64.57.18.19", "10.0.19.1", "00:FF:0A:00:13:01")
    • 33.
      • This is what is frequently referred to as the Programmable Router [GDD-06-09]
      • It can process packets in any way it chooses, at layers higher than what we traditionally think of as routing
      • − Transcode packets from one format to another
      • − Do deep application-level packet inspection
      • − Terminate flows as an end-point
      Appendix C. GENI Programmable Router C.1 Packet Processor [Blumenthal 06]
    • 34.
      • Here, we categorize different experimenters and how they might use GENI.
      • Type 1
      • Network researchers who want a stable network of standard routers (e.g. IPv4, IPv6 routers with standard features) operating over a topology of their choosing, but using links that are statistically shared with other users and experiments.
      • Type 2
      • Network researchers who want a network of stable, standard routers (e.g. IPv4, IPv6 routers with standard features) operating over a topology with dedicated and private bandwidth (e.g. a topology of circuits), with stable (possibly standard/default) framing.
      • Type 3
      • Network researchers who want to deploy their own packet processing elements and protocols in a private or shared slice, running over shared or dedicated bandwidth links within a topology. The experimenter has complete control of how data passes over the network (including framing and packet format).
      • => Type 1-3 researchers are the networking research community who will need a stable substrate over which to perform their experiments
      Appendix C. GENI Programmable Router C.2 Summary of Requirements to Support Multiple Layers of Research [Blumenthal 06]
    • 35.
      • Type 4
      • Network researchers who want specific bandwidths on demand within a topology, e.g. a topology with precise bandwidths between nodes, and where bandwidth can be set up and removed dynamically.
      • Type 5
      • Researchers who want access to raw optical wavelengths with no framing, protocol or transport constraints.
      • Type 6
      • Researchers who want access to raw fiber bandwidth . E.g. new transmission, modulation, coding and formats.
      • => Type 4-6 researchers are the networking physical layer research community who will invent and explore new ways to provide future stable substrates for experiments of Types 1-3
      • * It seems that the “GENI Programmable Router” should support type 1-3 researchers.
      • * Type 4-6 researchers may be supported at lower layers.
      Appendix C. GENI Programmable Router C.2 Summary of Requirements to Support Multiple Layers of Research [Blumenthal 06]
    • 36.
      • Criteria
      • 1) Multiple independent routing and forwarding tables for concurrent experiments, for all types
      • 2) Dedicated bandwidth allocation for type 2
      • 3) Programmability at hardware and/or software level for type 3
      Appendix C. GENI Programmable Router C.3 Classification of Virtual Routers GENI Programmable Router 1) 1) & 2) 3) Linux Virtual Router VINI Overlay Router Juniper Logical Router NetFPGA Router Open Network Laboratory Extensible Router 2) 1) & 2) & 3)