PlanetLab is a geographically distributed overlay network designed to support the deployment and evaluation of planetary-scale network services.
580 machines spanning 275 sites and 30 countries; nodes within a LAN-hop of > 2M users
Supports distributed virtualization: each of 425 network services runs in its own slice
PlanetLab services and applications run in a slice of the platform: a set of nodes on which the service receives a fraction of each node's resources, in the form of a virtual machine (VM)
[PlanetLab deployment map: U.S. sites including Seattle, Sunnyvale, LA, San Diego, Denver, Chicago, Pittsburgh, Washington DC, Raleigh, Jacksonville, Atlanta, KC, Baton Rouge, El Paso-Las Cruces, Phoenix, Pensacola, Dallas, San Antonio, Houston, Albuquerque, Tulsa, New York, Cleveland]
Minimize the functionality subsumed by the PlanetLab OS
Maximize the opportunity for services to compete with each other on a level playing field: the interface between the OS and these infrastructure services must be sharable, and hence require no special privilege.
It must allocate and schedule node resources (cycles, bandwidth, memory, and storage) so that the runtime behavior of one slice on a node does not adversely affect the performance of another on the same node.
It must either partition or contextualize the available name spaces (network addresses, file names, etc.) to prevent one slice from interfering with another, or from gaining access to information in another slice.
It must provide a stable programming base that cannot be manipulated by code running in one slice in a way that negatively affects another slice.
Linux Vserver provides system-call-level virtualization. Vservers are the principal mechanism in PlanetLab for providing virtualization on a single node and contextualization of name spaces, e.g., user identifiers and files.
The chroot utility is used to provide file system isolation. Vserver extends the non-reversible isolation that chroot provides for filesystems to other operating system resources, such as processes and SysV IPC.
The Hierarchical Token Bucket (htb) queuing discipline of the Linux Traffic Control facility (tc) is used to cap the total outgoing bandwidth of a node, cap per-vserver output, and to provide bandwidth guarantees and fair service among vservers.
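The bandwidth caps above are built on token buckets. As a minimal sketch (not PlanetLab's actual htb configuration, and with illustrative rates), the mechanism can be simulated like this: a bucket refills with tokens at the capped rate, and a packet is sent only if enough tokens are available.

```python
# Minimal token-bucket sketch of how an htb-style cap limits outgoing
# bandwidth. All rates and sizes are illustrative, not PlanetLab's.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # tokens (bytes) added per second
        self.burst = burst_bytes      # bucket capacity (max burst)
        self.tokens = burst_bytes     # start with a full bucket
        self.last = 0.0               # time of the last refill

    def allow(self, now, packet_bytes):
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# Simulate a sender offering ~2 Mbit/s against a 1 Mbit/s cap.
bucket = TokenBucket(rate_bps=125_000, burst_bytes=15_000)  # 1 Mbit/s cap
sent = 0
t = 0.0
for _ in range(1000):              # 1000 ticks of 1 ms = 1 second
    t += 0.001
    if bucket.allow(t, 250):       # offer one 250-byte packet per tick
        sent += 250
# sent stays near 125 KB (the cap) plus the initial burst, well below
# the 250 KB that was offered.
```

The same idea, applied hierarchically, lets htb cap the node's total output while also giving each vserver a guaranteed share of that total.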
CPU scheduling is implemented by the SILK kernel module, which leverages Scout to provide vservers with CPU guarantees and fairness.
PlanetLab’s CPU scheduler uses a proportional sharing (PS) scheduling policy to fairly share the CPU. It incorporates the resource container abstraction and maps each vserver onto a resource container that possesses some number of shares.
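The share-to-CPU mapping can be sketched in a few lines: each runnable slice's fraction of the CPU is its container's shares divided by the total outstanding shares. The slice names and share counts below are hypothetical.

```python
# Sketch of proportional-share (PS) CPU allocation: each slice's resource
# container holds some number of shares, and a runnable slice receives CPU
# time in proportion to its shares. Names and numbers are illustrative.

def cpu_fractions(shares):
    """Map {slice: shares} to {slice: fraction of CPU} among runnable slices."""
    total = sum(shares.values())
    return {name: s / total for name, s in shares.items()}

containers = {"sliceA": 1, "sliceB": 1, "sliceC": 2}
fractions = cpu_fractions(containers)
# sliceC holds half of all shares, so it receives half the CPU;
# sliceA and sliceB each receive a quarter.
```

Note that the fractions are relative: if a slice blocks and leaves the runnable set, its share of the CPU is redistributed among the remaining slices in the same proportions.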
The scalability of vservers is primarily determined by disk space for vserver root filesystems and service-specific storage: 508 MB of disk is required for each VM. The PlanetLab OS adopts copy-on-write technology to reduce disk usage.
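A back-of-the-envelope calculation shows why copy-on-write matters here. The 508 MB per-VM figure is from the text; the fraction of each root filesystem that is shareable (450 MB below) and the slice count are hypothetical assumptions for illustration only.

```python
# Illustrative disk arithmetic for vserver scalability. PER_VM_MB is from
# the text; SHARED_MB and N_SLICES are hypothetical assumptions.

PER_VM_MB = 508            # full root filesystem per vserver (from text)
SHARED_MB = 450            # assumed portion shareable via copy-on-write
N_SLICES = 100             # assumed number of slices on one node

# Without sharing, every slice pays for a full root filesystem.
without_cow = N_SLICES * PER_VM_MB                       # 50,800 MB

# With copy-on-write, the shared base is stored once and each slice
# pays only for its private (diverged) portion.
with_cow = SHARED_MB + N_SLICES * (PER_VM_MB - SHARED_MB)  # 6,250 MB
```

Under these assumed numbers, sharing the common base cuts disk usage by roughly a factor of eight, which is what makes hundreds of vservers per node feasible.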
Kernel resource limits are a secondary factor in the scalability of vservers; reconfiguring kernel options and recompiling the kernel is one way to raise them.
Scout is a modular, configurable, communication oriented operating system developed for small network appliances. Scout was designed around the needs of data-centric applications with particular attention given to networking.
SILK stands for Scout In the Linux Kernel; it is a port of the Scout operating system that runs as a Linux kernel module.