For GENI to function, the GMC must be reduced to a specific implementation. We refer to this implementation as the “GENI Management Core Implementation”, or “GMCI”. The GMCI provides two key elements of the overall GENI system. These are 1) the “small set of core services” that are necessary for the system to operate, and 2) an “underlying messaging and remote operation invocation framework” needed for elements of the GENI system to communicate with each other.
* The actual implementation of the GMC appears to be the Aggregate. The GMC is a framework that defines what must be done for GENI to operate properly.
On GGIDs (naming)... The GMC defines unambiguous identifiers, called “GENI Global Identifiers (GGIDs)”, for the set of objects that make up GENI. These objects, as defined fully in the next section, include “users”, “components”, “aggregates”, and “slices”. Specifically, a GGID is represented as an X.509 certificate [X509, RFC3280] that binds a Universally Unique Identifier (UUID) to a public key. There may be one or many authorities that each implement the GMC. The “GMCI” provides a default repository that defines a hierarchical name space for objects, corresponding to the hierarchy of authorities that have been delegated the right to create and name objects. A repository is implemented as a service in GENI, and hence, it is identified with its own GGID. (=> A repository is implemented as a service.) The protocols used to register with or query a repository are described in WSDL and are extensible and exportable. New repositories will be able to describe their capabilities in WSDL, and common tools can be used to contact repositories.
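The UUID-to-key binding at the heart of a GGID can be illustrated with a minimal sketch. This is not GENI code: a plain Python dataclass stands in for the X.509 certificate, and all names here are hypothetical.

```python
# Conceptual sketch (not GENI code): a GGID binds a UUID to a public key.
# In the real design the binding is an X.509 certificate; here a frozen
# dataclass stands in for the certificate to illustrate the idea.
import uuid
from dataclasses import dataclass

@dataclass(frozen=True)
class GGID:
    object_uuid: uuid.UUID   # universally unique identifier for the object
    public_key: bytes        # public key of the named object (or its authority)

def new_ggid(public_key: bytes) -> GGID:
    """Mint a GGID for a new GENI object (user, component, aggregate, slice)."""
    return GGID(object_uuid=uuid.uuid4(), public_key=public_key)

slice_ggid = new_ggid(b"-----BEGIN PUBLIC KEY----- ...")
assert slice_ggid.object_uuid.version == 4   # random (version-4) UUID
```

Because the identifier is a certificate rather than a bare name, any party can verify who named the object without contacting the repository.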
* On components... The GMC defines “components” as the primary building block of GENI. For example, a component might correspond to an edge computer, a customizable router, or a programmable access point. A component encapsulates a collection of resources, including both physical resources (e.g., CPU, memory, disk, bandwidth) and logical resources (e.g., file descriptors, port numbers). Each component is controlled via a “component manager (CM)”, which exports a well-defined, remotely accessible interface (described in Section 6.2). The “component manager abstraction” defines the operations available to the GMC and higher level services to manage the allocation of component resources to different users and their experiments. One concrete representation of the “CM interface” is that it exports a set of procedures that can be invoked using XML-RPC.
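The idea that the CM interface "exports a set of procedures that can be invoked using XML-RPC" can be sketched with Python's standard xmlrpc marshalling. The procedure name and arguments below are made up for illustration; they are not the actual GENI CM interface.

```python
# Sketch: how an XML-RPC call to a hypothetical CM procedure is marshalled.
import xmlrpc.client

# A researcher-side call such as GetResources(slice_name) would be
# serialized into an XML-RPC request body like this:
request = xmlrpc.client.dumps(("my_slice",), methodname="GetResources")
assert "<methodName>GetResources</methodName>" in request

# The component manager unmarshals the request, dispatches to the named
# procedure, and returns an XML-RPC response:
params, method = xmlrpc.client.loads(request)
assert method == "GetResources" and params == ("my_slice",)
```

In a deployment the request would travel over HTTP to the CM's endpoint; the marshalling shown is exactly what `xmlrpc.client.ServerProxy` does under the hood.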
* Should it be called the GMC or an Aggregate? On key functions and aggregates... This section sketches how we expect these key functions will be implemented, primarily through the use of “aggregates”. ★ It is important to keep in mind that this section outlines several possible scenarios for how the GENI management plane organizes itself into a collection of “aggregates” to accomplish key functions. Other structures are possible, the only requirement being that they adhere to the set of “abstractions” and “interfaces” defined in the previous sections.
* Resource allocation by components... Components might allocate resources directly 1) to individual “slices”, 2) to a network-specific “aggregate” that manages resource allocation across a set of related components, 3) to a “researcher portal” that implements a more sophisticated resource allocation policy, or 4) to some combination of all three. * On portals... A pair of “portals” provides interfaces to all of “GENI proper”. Researchers request slices and control their experiments using the “researcher portal”; all rights for resources are concentrated in this portal, and the Science Council policy is implemented by it. The portal then contacts the “slice control interface” in an appropriate set of “management aggregates” to create the corresponding slivers. Similarly, an “operator portal” defines a GENI-wide interface for operations across all GENI components. This portal, in turn, contacts the “O&M control interface” of the appropriate set of subnet-specific “management aggregates”.
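The portal pattern above can be sketched as a toy Python example: a researcher portal holds the rights to resources and fans a slice request out to the slice-control interface of each relevant management aggregate. All class and method names here are invented for illustration.

```python
# Toy sketch of the researcher-portal pattern; not actual GENI interfaces.
class ManagementAggregate:
    def __init__(self, name):
        self.name = name
        self.slivers = []

    def create_sliver(self, slice_name):
        # stands in for the "slice control interface"
        self.slivers.append(slice_name)
        return f"{self.name}:{slice_name}"

class ResearcherPortal:
    def __init__(self, aggregates):
        self.aggregates = aggregates   # rights for resources concentrated here

    def create_slice(self, slice_name, wanted):
        # Science Council policy would be enforced at this point;
        # here we simply select the requested aggregates.
        return [a.create_sliver(slice_name)
                for a in self.aggregates if a.name in wanted]

backbone = ManagementAggregate("backbone")
wireless = ManagementAggregate("wireless")
portal = ResearcherPortal([backbone, wireless])
slivers = portal.create_slice("exp1", {"backbone"})
```

The operator portal would follow the same shape, contacting an O&M control interface instead of a slice control interface.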
* PlanetLab is the prototype of GENI (Larry Peterson @ Metro line M2 of Lausanne, 2005.10) * Reference: [Chen 06] huanghai.org/meetings/data/2ndC-K/PlanetLab20060828.ppt
* Our goal is for VINI to become shared infrastructure that enables researchers to simultaneously evaluate new protocols and services using real traffic from distributed services that are also sharing VINI resources. * The nodes at each site will initially be high-end servers, but may eventually be programmable hardware devices that can better handle a large number of simultaneous experiments carrying a volume of real traffic and many simultaneously running protocols.
* Perform these steps on your local PC to deploy and initialize a virtual network. 1) In the Makefile, change CONFIG to point to the Ruby configuration file that describes your virtual network topology and routing protocols (e.g., config/vini.rb). 2) Generate the configuration files for all nodes. 3) Sync up files to the PlanetLab nodes. - This will copy everything (necessary scripts, configuration files, RPMs, etc.) into ~/vini on each PlanetLab node. Since it uses 'rsync', it may take a while the first time you run it, but after that it should be faster. You need to run this step every time you rebuild the configuration files, and also when you upgrade the software (e.g., to a new Click release). 4) Install any required RPMs on the boxes, and install the file system image that UML runs from. - On each node, this command executes two other Makefile targets: install_rpms and get_uml_fs. You only need to run this the first time you deploy the PL-VINI software in your slice, or if you are upgrading the software.
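Steps 2-4 above can be visualized as the shell commands a dry-run driver would emit. The node names, slice name, and the `make configs` target are hypothetical; only the `install_rpms` and `get_uml_fs` targets are named in the guide itself.

```python
# Dry-run sketch: build (but do not execute) the per-node deployment commands.
SLICE = "princeton_vini"                      # hypothetical slice name
NODES = ["node1.example.org", "node2.example.org"]

def deploy_commands(nodes, slice_name):
    cmds = ["make configs"]                   # 2) generate per-node configs
    for n in nodes:
        # 3) rsync everything into ~/vini on each PlanetLab node
        cmds.append(f"rsync -az ./ {slice_name}@{n}:~/vini/")
        # 4) first-time-only: install RPMs and fetch the UML filesystem image
        cmds.append(f"ssh {slice_name}@{n} make install_rpms get_uml_fs")
    return cmds

for c in deploy_commands(NODES, SLICE):
    print(c)
```

Printing the commands instead of running them makes it easy to review exactly what will touch each node before committing to a deployment.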
* Resources include links, switches, routers, ...
* The original UCLPv2 has only the XC-WS, so it can create only link-level APNs, not network-level APNs.
* Describe each area.
* it (the Node Manager) periodically retrieves slices.xml from PLC, creates the necessary local slivers, interprets the resource allocations for those slivers specified by slices.xml, and manipulates the various kernel schedulers and allocators to effect those allocations. [Fiuczynski 04]
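The Node Manager's reconcile loop can be sketched as below: fetch slices.xml, create any missing local slivers, and collect the per-sliver resource allocations to hand to the kernel schedulers. The XML schema and function names are invented for illustration.

```python
# Sketch of the NM reconcile step (schema and names are hypothetical).
import xml.etree.ElementTree as ET

SLICES_XML = """
<slices>
  <slice name="princeton_vini">
    <attribute key="cpu_share" value="10"/>
  </slice>
</slices>
"""

def reconcile(slices_xml, local_slivers):
    created, allocations = [], {}
    for s in ET.fromstring(slices_xml).findall("slice"):
        name = s.get("name")
        if name not in local_slivers:
            created.append(name)          # sliver missing locally: create it
        allocations[name] = {a.get("key"): a.get("value")
                             for a in s.findall("attribute")}
    # 'allocations' would then drive the kernel schedulers and allocators
    return created, allocations

created, alloc = reconcile(SLICES_XML, local_slivers=set())
```

Running this periodically makes the node converge on whatever PLC publishes, so slices can be added or reconfigured centrally without touching each node.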
* Two features added in PlanetLab 3.0 - a slice creation mechanism using tickets (the existing slice creation mechanism via PLC remains the default) - users and site PIs can specify additional slice attributes (e.g., CPU shares, link bandwidth, disk quotas, and so on)
* PL-VINI enables arbitrary virtual networks, consisting of software routers connected by tunnels, to be configured within a PlanetLab slice. * An example of a network experiment enabled by PL-VINI is the Internet In A Slice (IIAS), which consists of an IP data plane configured by a control plane running standard network routing protocols. * This User Guide describes how to configure a PL-VINI experiment running IIAS, but IIAS is only one experiment supported by the PL-VINI infrastructure. * Virtual network topologies, consisting of UML instances connected by virtual point-to-point Ethernet links, running within a PlanetLab slice. PL-VINI matches packets to virtual links (implemented by UDP tunnels) based on the Ethernet MAC header, and so has no inherent dependencies on Layer 3 protocols.
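The MAC-based demultiplexing can be sketched in a few lines: the destination MAC of each Ethernet frame selects the UDP tunnel (virtual link) it is sent on, without ever inspecting the Layer-3 payload. The addresses and ports below are made up.

```python
# Sketch of PL-VINI-style link demultiplexing (addresses are hypothetical).
def dst_mac(frame: bytes) -> bytes:
    return frame[0:6]      # Ethernet: first 6 bytes are the destination MAC

# destination MAC -> (remote node IP, UDP tunnel port)
TUNNELS = {
    bytes.fromhex("0200000000aa"): ("10.0.0.2", 33000),
    bytes.fromhex("0200000000bb"): ("10.0.0.3", 33000),
}

def pick_tunnel(frame: bytes):
    # None means no virtual link exists for this destination: drop the frame
    return TUNNELS.get(dst_mac(frame))

# dst MAC + src MAC + EtherType (0x0800, IPv4) + payload
frame = (bytes.fromhex("0200000000aa") + bytes.fromhex("0200000000cc")
         + b"\x08\x00" + b"payload")
```

Because only the Ethernet header is consulted, the same mechanism carries IPv4, IPv6, or any experimental Layer-3 protocol unchanged.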
* $slice = Slice.new(<control port>, <UDP port>, '<slice name>', <click forwarding>, '<router>') 1. The TCP port that Click runs its control (telnet) interface on. Pick an unused port. 2. The UDP port on which overlay traffic is tunneled. Pick an unused port. 3. The name of your PlanetLab/VINI slice. 4. Whether to enable IP forwarding in the Click FEA (true | false). This option is only meaningful with the XORP router. 5. What router software to run ("XORP" | "Quagga" | "none"). * $node = Node.new('<node DNS name>', $slice, 'label') * $node.hostinfo('<real IP>', '<tap0 IP>', '<tap0 MAC>') * $link = Link.new($node0, $node1, <OSPF cost>, <loss rate>) * $node.add_nat_dests(['<prefix1>', '<prefix2>', ...]) - Adds the specified prefixes to those for which the node will serve as a NAT egress
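Putting these calls together, a minimal configuration file in the style of config/vini.rb might look as follows. The hostnames, IP addresses, MACs, and port numbers are placeholders, not real deployment values.

```ruby
# Hypothetical PL-VINI configuration sketch: two XORP routers on one link.
$slice = Slice.new(10001, 33000, 'princeton_vini', true, 'XORP')

$node0 = Node.new('node1.example.org', $slice, 'nodeA')
$node0.hostinfo('128.112.0.1', '10.0.0.1', '02:00:00:00:00:aa')

$node1 = Node.new('node2.example.org', $slice, 'nodeB')
$node1.hostinfo('128.112.0.2', '10.0.0.2', '02:00:00:00:00:bb')

$link = Link.new($node0, $node1, 1, 0)   # OSPF cost 1, no artificial loss
```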
* <Backbone Node: Requirements and Architecture, GDD-06-26>
CS744 Sub Area Introduction: GENI Facility (Testbed) 2007.10.22 Yu Yeongjae [email_address]
➤ configuration management ➤ development tools ➤ diagnostics & monitoring ➤ data logging
1. GENI Architecture & Facility Design (Cont’d) * A “portal” is an interface that defines an “entry point” through which users access GENI components.
1. GENI Architecture & Facility Design (Cont’d) [Peterson 07] (1) Researcher Portal (2) Operator Portal * Both portals serve as “front-ends” for a set of “infrastructure services” that researchers engage to help them manage their slices, and operators engage to help them monitor and diagnose the components.
− VINI is a virtual network infrastructure that allows network researchers to evaluate their protocols and services in a realistic environment that also provides a high degree of control over network conditions.
− PL-VINI is a prototype of VINI that runs on the public PlanetLab. PL-VINI enables arbitrary virtual networks, consisting of software routers connected by tunnels, to be configured within a PlanetLab slice.
− VINI is an early prototype of GENI [Rexford and Peterson 07]
3.1 PlanetLab & VINI (Cont’d)
<Deploying and Initializing the Virtual Network> 1. Make a slice on PlanetLab nodes 2. Change configuration file 3. Generate the configuration files for all nodes 4. Copy everything (necessary scripts, configuration files, RPMs, etc.) into each PlanetLab node 5. Install any required RPMs on the boxes, install the file system image that UML runs from 6. Start up the overlay
[Peterson 07] Larry Peterson, John Wroclawski, "Overview of the GENI Architecture", GDD-06-11, January 2007
[Peterson 07] Larry Peterson, Tom Anderson, Dan Blumenthal, "GENI Facility Design", GDD-07-44, March 2007
[Turner 06] Jonathan Turner, "A Proposed Architecture for the GENI Backbone Platform", GDD-06-09, March 2006
[Blumenthal 06] Dan Blumenthal, Nick McKeown, "Backbone Node: Requirements and Architecture", GDD-06-26, November 2006
[Peterson 06] Larry Peterson, Steve Muir, Timothy Roscoe, Aaron Klingaman, "PlanetLab Architecture: An Overview", PDN-06-031, May 2006
[Bavier 06] Andy Bavier, Nick Feamster, Mark Huang, Larry Peterson, Jennifer Rexford, "In VINI Veritas: Realistic and Controlled Network Experimentation", SIGCOMM, October 2006
[St.Arnaud 06] Bill St.Arnaud, "UCLP Roadmap for creating User Controlled and Architected Networks using Service Oriented Architecture", January 2006
Appendix A. PlanetLab A.1 Chain of Responsibility [Peterson 05]
1. Join Request: PI submits Consortium paperwork and requests to join
2. PI Activated: PLC verifies PI, activates account, enables site (logged)
3. User Activated: Users create accounts with keys, PI activates accounts (logged)
4. Slice Created: PI creates slice and assigns users to it (logged)
5. Nodes Added to Slices: Users add nodes to their slice (logged)
6. Slice Traffic Logged: Experiments run on nodes and generate traffic (logged by Netflow)
7. Traffic Logs Centrally Stored: PLC periodically pulls traffic logs from nodes
* PI: Principal Investigator * PLC: PlanetLab Central
Appendix A. PlanetLab A.2 Slice Creation Mechanism (1)
* PI: SliceCreate( ), SliceUsersAdd( )
* User: SliceNodesAdd( ), SliceAttributeSet( ), SliceInstantiate( )
* NM on each node: SliceGetAll( ) retrieves slices.xml from PLC (SA), then creates the corresponding VMs on the VMM
* NM: Node Manager * VM: Virtual Machine * VMM: Virtual Machine Monitor
Appendix A. PlanetLab A.2 Slice Creation Mechanism (2)
* PI: SliceCreate( ), SliceUsersAdd( )
* User: SliceAttributeSet( ), SliceGetTicket( ) (distributes the ticket to a slice creation service)
* Slice creation service: SliverCreate(ticket) at the NM on each node; the NM creates the VM on the VMM
* NM: Node Manager * VM: Virtual Machine * VMM: Virtual Machine Monitor
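The ticket-based mechanism can be sketched as follows: PLC signs a slice description into a "ticket", and the slice creation service later redeems it at each Node Manager with SliverCreate(ticket). The cryptography is faked with an HMAC and all names are hypothetical; the real system uses PLC's public-key signatures.

```python
# Toy sketch of ticket issue and redemption (HMAC stands in for signatures).
import hmac, hashlib, json

PLC_KEY = b"plc-secret"    # shared secret standing in for PLC's signing key

def slice_get_ticket(slice_spec: dict) -> dict:
    """PLC side: sign a slice description into a redeemable ticket."""
    data = json.dumps(slice_spec, sort_keys=True).encode()
    sig = hmac.new(PLC_KEY, data, hashlib.sha256).hexdigest()
    return {"data": data, "sig": sig}

def sliver_create(ticket: dict, node_key=PLC_KEY):
    """NM side: verify PLC's signature, then create the sliver."""
    expected = hmac.new(node_key, ticket["data"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(ticket["sig"], expected):
        return None                      # forged or tampered ticket: refuse
    return json.loads(ticket["data"])["name"]

ticket = slice_get_ticket({"name": "princeton_vini", "cpu_share": 10})
```

The point of the indirection is that slice creation no longer requires PLC to be online: any service holding a valid ticket can create slivers, and each NM can verify the ticket independently.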
A virtual network's topology and routing protocols are specified by means of a configuration file. The basic idea is to define a Node object for each router in your experiment, and a Link object between two Node objects for each virtual link.
Next slide shows an example configuration file.
Here, we categorize different experimenters and how they might use GENI.
Type 1: Network researchers who want a stable network of standard routers (e.g., IPv4/IPv6 routers with standard features) operating over a topology of their choosing, but using links that are statistically shared with other users and experiments.
Type 2: Network researchers who want a network of stable, standard routers (e.g., IPv4/IPv6 routers with standard features) operating over a topology with dedicated and private bandwidth (e.g., a topology of circuits), with stable (possibly standard/default) framing.
Type 3: Network researchers who want to deploy their own packet processing elements and protocols in a private or shared slice, running over shared or dedicated bandwidth links within a topology. The experimenter has complete control of how data passes over the network (including framing and packet format).
=> Type 1-3 researchers are the networking research community who will need a stable substrate over which to perform their experiments
Appendix C. GENI Programmable Router C.2 Summary of Requirements to Support Multiple Layers of Research [Blumenthal 06]