Transcript of "CIS788.11J: Introduction to Wireless Sensor Networks (Testbeds)"

1. Lecture 8: Testbeds
   Anish Arora
   CIS788.11J Introduction to Wireless Sensor Networks
   Material uses slides from Larry Peterson, Jay Lepreau, and GENI.net

2. References
   - Emulab: artifact-free, auto-configured, fully controlled
     - A configurable Internet emulator
     - 2001: 200 nodes, 500 wires, 2x BFS (switch)
     - 2006: 350 PCs, 7 IXPs, 40 WAN nodes, 27+ 802.11 nodes
   - PlanetLab: real environment
     - 670 machines spanning 325 sites and 35 countries
     - nodes within a LAN-hop of > 3M users
     - supports distributed virtualization: each of 600+ network services runs in its own slice
   - GENI

3. Emulab philosophy
   - Live-network experimentation
     - Achieves realism
     - Surrenders repeatability
     - e.g., MIT "RON" testbed, PlanetLab
   - Pure emulation
     - Introduces controlled packet loss and delay
     - Requires tedious manual configuration
   - Emulab approach
     - Brings simulation's efficiency and automation to emulation
     - Artifact-free environment
     - Arbitrary workload: any OS, any "router" code, any program, for any user
     - So the default resource allocation policy is conservative: allocate a full real node and link, with no multiplexing, assuming the maximum possible traffic

4. Emulab
   - Allow the experimenter complete control, i.e., bare hardware with lots of tools for common cases
     - OSes, disk loading, state management tools, IP, traffic generation, batch, ...
   - Virtualization
     - of all experimenter-visible resources: topology, links, software, node names, network interface names, network addresses
     - allows swap-in/swap-out
   - Remotely accessible
   - Persistent state maintenance (in a database)
   - Separate control network
   - Configuration language: ns

5. Experiment Life Cycle (figure)
   Stages: Specification (e.g., $ns duplex-link $A $B 1.5Mbps 20ms), Parsing (stored in the DB), Swap In, Global Resource Allocation, Node Self-Configuration, Experiment Control, Swap Out

6. assign: Mapping Local Cluster Resources
   - Maps virtual resources to local nodes and VLANs
   - General combinatorial-optimization approach to an NP-complete problem
   - Based on simulated annealing (a toy sketch follows below)
   - Minimizes inter-switch links, the number of switches, and other constraints
   - All experiments mapped in less than 3 seconds [100 nodes]
   - wanassign maps global (wide-area) resources, using a genetic algorithm

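To make the mapping idea concrete, here is a minimal, self-contained sketch of a simulated-annealing mapper that tries to keep virtually-linked nodes on the same switch. This is not Emulab's actual assign; the topology, cost function, and cooling schedule are invented for illustration.

```python
import math
import random

# Hypothetical inputs: four virtual nodes in a ring, four physical PCs
# split across two switches. Names and numbers are illustrative only.
virtual_links = [("v0", "v1"), ("v1", "v2"), ("v2", "v3"), ("v3", "v0")]
physical_switch = {"pc1": "sw-A", "pc2": "sw-A", "pc3": "sw-B", "pc4": "sw-B"}

def cost(mapping):
    """Count virtual links whose endpoints land on different switches."""
    return sum(1 for a, b in virtual_links
               if physical_switch[mapping[a]] != physical_switch[mapping[b]])

def anneal(iters=20000, temp=2.0, cooling=0.999):
    vnodes = sorted({v for link in virtual_links for v in link})
    mapping = dict(zip(vnodes, random.sample(list(physical_switch), len(vnodes))))
    current = cost(mapping)
    best, best_cost = dict(mapping), current
    for _ in range(iters):
        a, b = random.sample(vnodes, 2)
        mapping[a], mapping[b] = mapping[b], mapping[a]   # propose a swap
        proposed = cost(mapping)
        # Always accept downhill moves; accept uphill moves with a probability
        # that shrinks as the temperature cools.
        if proposed <= current or random.random() < math.exp((current - proposed) / temp):
            current = proposed
            if current < best_cost:
                best, best_cost = dict(mapping), current
        else:
            mapping[a], mapping[b] = mapping[b], mapping[a]  # reject: undo the swap
        temp *= cooling
    return best, best_cost

print(anneal())
```
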
7. Frisbee: Disk Loading
   - Loads full disk images (bulk download)
   - Performance techniques:
     - Overlaps block decompression and device I/O (sketched below)
     - Uses a domain-specific algorithm to skip unused blocks
     - Delivers images via a custom reliable multicast protocol
   - 13 GB generic IDE 7200 rpm drives
   - Was 20 minutes for a 6 GB image; now 88 seconds

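The pipelining trick is the easiest to illustrate. Below is a toy sketch, assuming zlib-compressed (offset, data) chunks; it overlaps decompression with disk writes and skips unwritten regions by seeking, but it omits Frisbee's real image format and multicast delivery.

```python
import queue
import threading
import zlib

def load_image(chunks, device_path):
    """Toy pipeline in the spirit of Frisbee: one thread decompresses chunks
    while another writes them to the target disk, so CPU work and device I/O
    overlap. `chunks` yields (byte offset, zlib-compressed data) pairs; gaps
    between offsets are simply skipped, mimicking free-block skipping."""
    work = queue.Queue(maxsize=8)      # bounded queue keeps memory use modest

    def writer():
        with open(device_path, "r+b") as dev:
            while True:
                item = work.get()
                if item is None:       # sentinel: no more chunks
                    return
                offset, data = item
                dev.seek(offset)       # seek past unused regions
                dev.write(data)

    t = threading.Thread(target=writer)
    t.start()
    for offset, compressed in chunks:
        work.put((offset, zlib.decompress(compressed)))
    work.put(None)
    t.join()
```
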
8. IDE planned for Emulab
   - Evolve Emulab into the network-device-independent control and integration center for experimentation, research, development, debugging, measurement, data management, and archiving
     - Collaboratory: Emulab's project abstraction
     - Workbench: Emulab's experiment abstraction
     - Device-independent: Emulab's built-in abstractions for all things network-related

9. Collaboratory Subsystems
   - Source repository: SourceForge, CVS, Subversion
   - Datapository
   - "My Wikis"
   - Mailing list(s)
   - Bug database
   - Chat/IM, chatroom management
   - Moodle?
   - Approach
     - Transparently handle authentication, authorization, and membership management: "single sign-on"
     - Use a separate server for information and resource security and management
     - Support flexible access policies: the default is project-private, but the project leader can change it per subsystem (private, public read-only, public read/write)

10. Experimentation Workbench
   - Four types:
     - Workflow management (processes), including measurement and feedback steps and mandatory pipelines
     - Experiment management
     - Data management
     - Analyses

11. Workbench: "Time Travel" and Stateful Swapout
   - Time-travel of distributed systems for debugging
     - Generalize disk image format and handling
     - Periodic disk checkpointing
     - Full state-save on swapout
     - Xen-based virtual machines
     - Challenge: network state (packets in flight)
       - Pragmatic approach: quiesce senders, flush buffers
   - Stateful swapout/swapin [easier]
     - Allows transparent pre-emption of experiments
   - Related to workbench history and tree traversal
     - Can share some mechanisms and some UI

12. PlanetLab: Requirements
   - It must provide a global platform that supports both short-term experiments and long-running services
     - services must be isolated from each other
     - multiple services must run concurrently
     - must support real client workloads
   - Key ideas
     - Slices
     - Virtualization: multiple architectures on a shared infrastructure
     - Programmable: virtually no limit on new designs
     - Opt-in on a per-user / per-application basis: attracts real users; demand drives deployment and adoption

13. PlanetLab: Slices (figure)

14. Slices (figure)

15. Slices (figure)

16. User Opt-in (figure: Server, NAT, Client)

17. Virtualization: Per-Node View (figure)
   - Virtual Machine Monitor (VMM): Linux kernel (Fedora Core) + Vservers (namespace isolation) + Schedulers (performance isolation) + VNET (network virtualization)
   - Above the VMM: a Node Mgr, an Owner VM, and VM 1, VM 2, ..., VM n
   - Services hosted in VMs: auditing service, monitoring services, brokerage services, provisioning services

18. Global View (figure: PLC with nodes at many sites)

19. Requirements
   - It must be available now, even though no one knows for sure what "it" is
     - deploy what we have today, and evolve over time
     - make the system as familiar as possible (e.g., Linux)
     - accommodate third-party management services

20. Brokerage Service (figure)
   - The User calls BuyResources( ) on a Broker
   - The broker contacts the relevant nodes and calls Bind(slice, pool) at each node manager (NM), attaching resources from its pool to the slice's VM on that node's VMM (see the toy sketch below)
   - PLC acts as the slice authority (SA)

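A toy rendering of this interaction, keeping only the call names from the slide (BuyResources, Bind); the classes and the slice/pool representations are hypothetical.

```python
# Illustrative sketch of the brokerage interaction, not a real PlanetLab API.
class NodeManager:
    def __init__(self, name):
        self.name = name
        self.bindings = {}                  # slice -> resource pool bound to its VM

    def bind(self, slice_name, pool):
        self.bindings[slice_name] = pool

class Broker:
    def __init__(self, node_managers):
        self.node_managers = node_managers  # nodes whose pools this broker controls

    def buy_resources(self, slice_name, pool):
        for nm in self.node_managers:       # broker contacts the relevant nodes
            nm.bind(slice_name, pool)

broker = Broker([NodeManager("node-1"), NodeManager("node-2")])
broker.buy_resources("example_slice", "pool-7")
```
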
21. Requirements
   - Convince sites to host nodes running code written by unknown researchers from other organizations
     - protect the Internet from PlanetLab traffic
     - must get the trust relationships right
     - trusted intermediary: PLC
   - Trust relationships among Node Owner, PLC, and Service Developer (User):
     1) PLC expresses trust in a user by issuing credentials to access a slice
     2) Users trust PLC to create slices on their behalf and inspect credentials
     3) The owner trusts PLC to vet users and map network activity to the right user
     4) PLC trusts the owner to keep nodes physically secure

22. Requirements
   - Sustaining growth depends on support for site autonomy and decentralized control
     - sites have the final say over the nodes they host
     - must minimize (eliminate) centralized control
   - Owner autonomy
     - owners allocate resources to favored slices
     - owners selectively disallow unfavored slices
   - Delegation
     - PLC grants tickets that are redeemed at nodes
     - enables third-party management services
   - Federation
     - create "private" PlanetLabs using MyPLC
     - establish peering agreements

23. Requirements
   - It must scale to support many users with minimal resources available
     - expect under-provisioning to be the norm
     - shortage of logical resources too (e.g., IP addresses)
   - Decouple slice creation and resource allocation
     - a slice gets a "fair share" by default when created
     - it can acquire additional resources, including guarantees
   - Fair share with protection against thrashing (illustrated below)
     - 1/N of the CPU
     - 1/N of the link bandwidth
       - the owner limits the peak rate
       - an upper bound on the average rate (protects campus bandwidth)
     - disk quota

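As a back-of-the-envelope illustration of the fair-share scheme (all numbers are invented for the example):

```python
# With N active slices, each gets 1/N of the CPU and link bandwidth,
# subject to owner-set caps on peak and long-term average rates.
def slice_share(n_slices, link_mbps, peak_cap_mbps, avg_cap_mbps):
    fair = link_mbps / n_slices
    return {"cpu_fraction": 1.0 / n_slices,
            "peak_mbps": min(fair, peak_cap_mbps),
            "avg_mbps": min(fair, avg_cap_mbps)}

# e.g., 40 slices sharing a 100 Mbps uplink; the owner caps peak rate at
# 10 Mbps and average rate at 1.5 Mbps to protect campus bandwidth.
print(slice_share(40, 100, 10, 1.5))
```
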
24. Slice Creation (figure)
   - A PI calls SliceCreate( ) and SliceUsersAdd( ) at PLC (the slice authority)
   - A User/Agent calls GetTicket( ) and redeems the ticket with plc.scs, the slice-creation service
   - plc.scs calls CreateVM(slice) at each node manager (NM), and the VMM instantiates the slice's VM (a toy sketch of this flow follows)

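A minimal sketch of this flow, keeping the call names from the slide (SliceCreate, SliceUsersAdd, GetTicket, CreateVM) but with invented classes and a plain dict standing in for the signed ticket; this is not the real PLC or plc.scs interface.

```python
class PLC:
    def __init__(self):
        self.slices = {}

    def slice_create(self, pi, name):
        self.slices[name] = {"pi": pi, "users": []}

    def slice_users_add(self, name, users):
        self.slices[name]["users"].extend(users)

    def get_ticket(self, name):
        # In PlanetLab a ticket is a signed statement of what a slice may do;
        # here it is just a dict handed to the slice-creation service.
        return {"slice": name, "users": list(self.slices[name]["users"])}

class NodeManager:
    def __init__(self):
        self.vms = {}

    def create_vm(self, ticket):
        # plc.scs redeems the ticket at each node's manager, which asks the
        # VMM to instantiate a VM for the slice.
        self.vms[ticket["slice"]] = {"users": ticket["users"]}

plc = PLC()
plc.slice_create("pi@example.edu", "example_slice")
plc.slice_users_add("example_slice", ["alice", "bob"])
ticket = plc.get_ticket("example_slice")
for nm in (NodeManager(), NodeManager()):
    nm.create_vm(ticket)
```
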
25. Combining PlanetLab and Emulab: "Pelab"
   - Motivation:
     - PlanetLab (sort of) sees the "real Internet"
       - But its hosts are hugely overloaded and unpredictable
       - Internet and host variability
         - it takes many, many runs to get statistical significance
         - hard to debug
     - Emulab provides predictable, dedicated host resources and a controlled, repeatable environment
       - But its network model is completely fake

26. Approach
   - The application runs on Emulab nodes
     - Chosen PlanetLab nodes are peered with Emulab nodes
     - Application-traffic generation and measurement stubs start up on PlanetLab
     - They send real-time network conditions to Emulab (a toy monitor is sketched below)
     - Develop and continuously run an adaptive PlanetLab path-condition monitor
       - Pour results into the Datapository
       - Use them for initial conditions, or when the application goes idle on certain pairs

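A toy path-condition monitor in this spirit might look like the following. The ping-based probe and the report callback are assumptions for illustration; the real Pelab stubs run on the PlanetLab hosts themselves and measure bandwidth and loss as well as latency.

```python
import re
import subprocess
import time

def measure_rtt(host):
    """One-shot RTT measurement using the system ping (Unix-style output)."""
    out = subprocess.run(["ping", "-c", "1", host],
                         capture_output=True, text=True)
    match = re.search(r"time=([\d.]+) ms", out.stdout)
    return float(match.group(1)) if match else None

def monitor(pairs, report, interval=5.0):
    """Periodically probe each (src, dst) pair and hand the latency to
    `report`, which stands in for the stub that feeds Emulab's link shaping.
    This toy probes from the local host only."""
    while True:
        for src, dst in pairs:
            rtt = measure_rtt(dst)
            if rtt is not None:
                report(src, dst, rtt)
        time.sleep(interval)

# Example usage (hypothetical hosts):
# monitor([("planetlab1.example.org", "planetlab2.example.org")],
#         lambda s, d, r: print(f"{s} -> {d}: {r} ms"), interval=30)
```
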
27. GENI Design
   - Key idea
     - Slices embedded in a substrate of networking resources
   - Two central pieces
     - Physical network substrate
       - expandable collection of building-block components
       - nodes / links / subnets
     - Software management framework
       - knits building blocks together into a coherent facility
       - embeds slices in the physical substrate

28. National Fiber Facility (figure)

29. + Programmable Routers (figure)

30. + Clusters at Edge Sites (figure)

31. + Wireless Subnets (figure)

32. + ISP Peers (figure: MAE-West, MAE-East)

33. Closer Look (figure labels: Internet, backbone wavelength, backbone switch, Sensor Network, Edge Site, Wireless Subnet, Customizable Router, Dynamically Configurable Switch)

34. Summary of Substrate
   - Node components
     - edge devices
     - customizable routers
     - optical switches
   - Bandwidth
     - national fiber facility
     - tail circuits (including tunnels)
   - Wireless subnets
     - urban 802.11
     - wide-area 3G/WiMax
     - cognitive radio
     - sensor net
     - emulation

35. Management Framework (figure: the GMC sits between Management Services and Substrate Components)
   - name space for users, slices, and components
   - set of interfaces ("plug in" new components)
   - support for federation ("plug in" new partners)

36. GENI Management Core (GMC) (figure)
   - Management services: Resource Controller, Auditing Archive, Slice Manager
   - The GMC carries node control and sensor data between these services and each component's CM (component manager)
   - Each substrate component runs a CM on top of virtualization software over substrate hardware (an illustrative sketch follows)

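To make the division of labor concrete, here is an illustrative sketch of a slice manager driving component managers over a node-control path while audit events flow to an archive. All class and method names are hypothetical; these slides do not specify GMC's actual interfaces.

```python
class AuditingArchive:
    def __init__(self):
        self.log = []

    def record(self, component, slice_name, event):
        self.log.append((component, slice_name, event))

class ComponentManager:
    def __init__(self, name, archive):
        self.name, self.archive, self.slivers = name, archive, {}

    def create_sliver(self, slice_name, resources):
        # "Node control" path: allocate a sliver of this component to the slice.
        self.slivers[slice_name] = resources
        # "Sensor data" path: report the event for auditing.
        self.archive.record(self.name, slice_name, "sliver created")

class SliceManager:
    def __init__(self, components):
        self.components = components

    def embed_slice(self, slice_name, resources):
        # Embed the slice across all substrate components it spans.
        for cm in self.components:
            cm.create_sliver(slice_name, resources)

archive = AuditingArchive()
sm = SliceManager([ComponentManager("router-1", archive),
                   ComponentManager("edge-7", archive)])
sm.embed_slice("sensor_experiment", {"cpu": 1, "bw_mbps": 10})
print(archive.log)
```
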