Talk Slides
 


  • This slide tries to lay out the overall goals. Make grids like PCs => OWNERSHIP. No need to limit yourself to a resource provider's configurations, or to specific resource providers that have the configuration you need. Get away from the batch processing model. This is "The Dell Model" => you order up a virtual machine, or a set of machines with a specific configuration. It arrives, you plug it into your network, power it up, configure it as you need, etc. Lowering the level of abstraction (as compared to today's grid computing, which raises it) simplifies making this goal possible. Our work focuses on hiding the complexity involved in this approach from the user.
  • Platform agnostic => *any* protocol that runs on top of Ethernet will work here. Mobility => easier to hide mobility details at such a low layer; MAC, IP, etc. addresses don't change. An assumption is made here that both the user's machine and the destination machine have Ethernet interfaces. The first reaction to anyone suggesting a layer 2 approach to this is that it won't scale. However, we only need it to scale to the size of a user's group of virtual machines. Furthermore, unlike on a regular Ethernet network, we get to *assign* addresses here, so we can, in principle, assign them hierarchically and do hierarchical routing. (2^24 addresses per Ethernet vendor)
  • The point here is that while we start with a star topology centered on the user’s client machine, we will be able to bootstrap to more complex and efficient topologies that reflect the movement of virtual machines and the security concerns at each site. We are just starting on this.

Presentation Transcript

  • Towards Virtual Networks for Virtual Machine Grid Computing Ananth I. Sundararaj Peter A. Dinda Prescience Lab Department of Computer Science Northwestern University http://virtuoso.cs.northwestern.edu
  • Outline
    • Virtual machine grid computing
    • Virtuoso system
    • Networking challenges in Virtuoso
    • Enter VNET
    • VNET Adaptive virtual network
    • Related Work
    • Conclusions
    • Current Status
  • Aim: deliver arbitrary amounts of computational power to perform distributed and parallel computations. The traditional grid computing paradigm multiplexes resources using OS-level mechanisms. Problem 1: complexity from the resource user's perspective. Problem 2: complexity from the resource owner's perspective. Solution, the new paradigm: grid computing using virtual machines. Virtual machines: what are they, and how to leverage them? [Diagram with steps 1-6b.]
  • Virtual Machines: virtual machine monitors (VMMs)
      • Raw machine is the abstraction
      • VM represented by a single image
      • VMware GSX Server
  • Virtual machine grid computing
    • Approach: Lower level of abstraction
      • Raw machines, not processes, jobs, RPC calls
    • R. Figueiredo, P. Dinda, J. Fortes, A Case For Grid Computing on Virtual Machines, ICDCS 2003
    • Mechanism: Virtual machine monitors
    • Our Focus: Middleware support to hide complexity
      • Ordering, instantiation, migration of machines
      • Virtual networking
      • Remote devices
      • Connectivity to remote files, machines
      • Information services
      • Monitoring and prediction
      • Resource control
  • The Simplified Virtuoso Model: the user orders a raw machine with specific hardware and performance characteristics and a basic software installation. The VM appears on the user's LAN; virtual networking ties the machine back to the user's home network, and Virtuoso continuously monitors and adapts.
  • User's View in the Virtuoso Model [diagram: the user simply sees the VM on the user's own LAN].
  • Outline
    • Virtual machine grid computing
    • Virtuoso system
    • Networking challenges in Virtuoso
    • Enter VNET
    • VNET Adaptive virtual network
    • Related Work
    • Conclusions
    • Current Status
  • Why VNET? A Scenario: the user has just bought a virtual machine, which sits on a foreign, hostile LAN, reachable from the user's friendly LAN only across an IP network.
  • Why VNET? A Scenario: VNET is a bridge with long wires. With VNET running on the Host and on a Proxy on the user's LAN, the VM's traffic is carried across the IP network back to the user's LAN instead of going out on the foreign LAN.
    • A machine is suddenly plugged into a foreign network. What happens?
        • Does it get an IP address?
        • Is it a routeable address?
        • Does the firewall let its traffic through? To any port?
  • Outline
    • Virtual machine grid computing
    • Virtuoso system
    • Networking challenges in Virtuoso
    • Enter VNET
    • VNET Adaptive virtual network
    • Related Work
    • Conclusions
    • Current Status
  • A Layer 2 Virtual Network for the User’s Virtual Machines
    • Why Layer 2?
      • Protocol agnostic
      • Mobility
      • Simple to understand
      • Ubiquity of Ethernet on end-systems
    • What about scaling?
      • Number of VMs limited (~1024 per user)
      • One VNET per user
      • Hierarchical routing possible because MAC addresses can be assigned hierarchically
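The hierarchical-assignment idea above can be sketched concretely. Since VNET assigns the virtual NICs' MAC addresses itself, it can encode a site ID and a host ID into each address and route on the site prefix alone. This is a hypothetical encoding for illustration, not VNET's actual scheme:

```python
def make_mac(site: int, host: int) -> str:
    """Encode a site and host ID into a locally administered MAC address.

    The first octet sets the locally-administered bit (0x02); the next two
    octets carry the site ID and the last three the host ID, giving the
    2^24 addresses per vendor-style prefix noted in the talk.
    """
    assert 0 <= site < 2**16 and 0 <= host < 2**24
    octets = [0x02, (site >> 8) & 0xFF, site & 0xFF,
              (host >> 16) & 0xFF, (host >> 8) & 0xFF, host & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)

def site_of(mac: str) -> int:
    """Hierarchical routing: forward on the site prefix alone."""
    o = [int(x, 16) for x in mac.split(":")]
    return (o[1] << 8) | o[2]
```

A forwarder would then need one rule per site rather than one per VM, which is what makes the per-user scale (~1024 VMs) comfortable.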
  • VNET operation [diagram]: traffic outbound from the user's LAN. An Ethernet packet on the client LAN is captured by a promiscuous packet filter, tunneled over a TCP/SSL connection from the Proxy's VNET to the Host's VNET across the IP network, and injected directly into the VM's interface ("eth0") via the "Host Only" network (vmnet0).
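Because TCP is a byte stream, tunneling discrete Ethernet frames over the TCP/SSL connection requires some framing on the wire. A minimal length-prefixed scheme might look like the following; this is an assumption for illustration, as the talk does not describe VNET's actual wire format:

```python
import struct

def encode_frame(frame: bytes) -> bytes:
    """Prefix an Ethernet frame with its 4-byte big-endian length."""
    return struct.pack("!I", len(frame)) + frame

def decode_frames(stream: bytes):
    """Split a received TCP byte stream back into whole frames.

    Returns only complete frames; a trailing partial frame is left
    for the caller to retry once more bytes arrive.
    """
    frames, off = [], 0
    while off + 4 <= len(stream):
        (n,) = struct.unpack_from("!I", stream, off)
        if off + 4 + n > len(stream):
            break  # partial frame; wait for more bytes
        frames.append(stream[off + 4 : off + 4 + n])
        off += 4 + n
    return frames
```

The receiver can then hand each decoded frame to the injection side unchanged, preserving MAC and IP addresses as the slide describes.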
  • Performance Evaluation. Main goal: convey the network management problem induced by VMs to the home network of the user. However, VNET's performance should be:
    • In line with the physical network
    • Comparable to other options
    • Sufficient for the scenarios
    Metrics:
    • Latency (why: small transfers, interactivity; how: ping over hour-long intervals)
    • Bandwidth (why: large transfers, low throughput; how: ttcp with tuned socket buffers, 1 GB of data)
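The latency figures on the slides that follow are means and standard deviations over repeated ping samples; the computation is the standard one (the sample values below are hypothetical, for illustration only):

```python
import statistics

# Hypothetical round-trip times in ms from repeated pings
samples = [37.4, 37.6, 37.5, 38.1, 37.2]

mean_rtt = statistics.mean(samples)
stdev_rtt = statistics.stdev(samples)  # sample standard deviation
print(f"mean={mean_rtt:.2f} ms, stdev={stdev_rtt:.2f} ms")
```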
  • VNET test configuration [diagrams].
    • Local area configuration: Client, 100 mbit switch, Firewall 1, Router, 100 mbit switches, Proxy; Firewall 2, 100 mbit switch, Host, and VM; all local.
    • Wide area configuration: Client and Proxy at Northwestern University, IL; Host and VM at Carnegie Mellon University, PA; connected over the IP network (14 hops via Abilene).
  • Average latency over WAN (physical network) [chart: Client-Proxy, Proxy-Host, and Host-VM latencies between Northwestern University, IL and Carnegie Mellon University, PA, over the IP network].
  • Standard deviation of latency over WAN (physical network). What: VNET increases variability in latency. Why: the TCP connection between VNET servers trades packet loss for increased delay.
  • Bandwidth over WAN. Expectation: VNET achieves throughput comparable to the physical network. What we see: VNET achieves lower than expected throughput. Why: VNET's TCP connection is tricking TTCP's TCP connection.
  • Outline
    • Virtual machine grid computing
    • Virtuoso system
    • Networking challenges in Virtuoso
    • Enter VNET
    • VNET Adaptive virtual network
    • Related Work
    • Conclusions
    • Current Status
  • [Diagram] VNET overlay across the IP network: VMs 1-4 run on Hosts 1-4 (each with VNET) on foreign hostile LANs 1-4; a Proxy with VNET sits on the user's friendly LAN.
  • Bootstrapping the Virtual Network
    • Star topology always possible
    • Topology may change
        • Links can be added or removed on demand
        • Virtual machines can migrate
    • Forwarding rules can change
        • Forwarding rules can be added or removed on demand
    [Diagram: VMs on Hosts + VNETd, connected through a Proxy + VNETd]
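A layer 2 forwarding table with on-demand rule changes and a star-topology fallback can be sketched as follows; this is a sketch under assumptions, as the talk does not show VNET's actual rule format:

```python
class ForwardingTable:
    """Maps destination MAC -> next-hop link, with a default link."""

    def __init__(self, default_link: str):
        # In the star topology, the default is the Proxy at the center
        self.default_link = default_link
        self.rules: dict[str, str] = {}

    def add_rule(self, mac: str, link: str) -> None:
        """Add (or overwrite) a forwarding rule on demand."""
        self.rules[mac] = link

    def remove_rule(self, mac: str) -> None:
        """Remove a rule on demand, e.g. after a VM migrates away."""
        self.rules.pop(mac, None)

    def next_hop(self, dst_mac: str) -> str:
        # Fall back to the star's center when no specific rule exists
        return self.rules.get(dst_mac, self.default_link)
```

Because unknown destinations fall back to the default link, the star topology keeps working while direct host-to-host links are added or removed underneath it.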
  • [Diagram: VM layer, VNETd layer, physical layer] The VM layer exposes the application communication topology and traffic load, and the application processor load; the physical layer exposes network bandwidth and latency, and sometimes topology. The VNETd layer can collect all this information as a side effect of packet transfers and invisibly act:
      • Reservation
      • Routing change
      • VM migration
      • Topology change
  • Outline
    • Virtual machine grid computing
    • Virtuoso system
    • Networking challenges in Virtuoso
    • Enter VNET
    • VNET Adaptive virtual network
    • Related Work
    • Conclusions
    • Current Status
  • Related Work
    • Collective / Capsule Computing (Stanford)
      • VMM, Migration/caching, Hierarchical image files
    • Denali (U. Washington)
      • Highly scalable VMMs (1000s of VMMs per node)
    • SODA and VIOLIN (Purdue)
      • Virtual Server, fast deployment of services
    • VPN
    • Virtual LANs, IEEE
    • Overlay Networks: RON, Spawning networks, Overcast
    • Ensim
    • Virtuozzo (SWSoft)
      • Ensim competitor
    • Available VMMs: IBM’s VM, VMWare, Virtual PC/Server, Plex/86, SIMICS, Hypervisor, VM/386
  • Conclusions
    • There exists a strong case for grid computing using virtual machines
    • Challenging network management problem induced by VMs in the grid environment
    • Described and evaluated a tool, VNET, that solves this problem
    • Discussed the opportunities that the combination of VNET and VMs presents to exploit an adaptive overlay network
  • Current Status
    • Application traffic load measurement and topology inference [Ashish Gupta]
    • Support for arbitrary topologies and forwarding rules
    • Dynamic adaptation to improve performance
  • Current Status: Snapshots [screenshots of the pseudo proxy]
    • For More Information
      • Prescience Lab (Northwestern University)
        • http://plab.cs.northwestern.edu
      • Virtuoso: Resource Management and Prediction for Distributed Computing using Virtual Machines
        • http://virtuoso.cs.northwestern.edu
    • VNET is publicly available from
        • http://virtuoso.cs.northwestern.edu
  • Isn't It Going to Be Too Slow? Experimental setup: physical: dual Pentium III 933MHz, 512MB memory, RedHat 7.1, 30GB disk; virtual: VMware Workstation 3.0a, 128MB memory, 2GB virtual disk, RedHat 2.0. NFS-based grid virtual file system between UFL (client) and NWU (server). Small relative virtualization overhead for these compute-intensive workloads; relative overheads < 5%.

    Application                        Resource              ExecTime (10^3 s)   Overhead
    SpecHPC Seismic (serial, medium)   Physical              16.4                N/A
                                       VM, local             16.6                1.2%
                                       VM, Grid virtual FS   16.8                2.0%
    SpecHPC Climate (serial, medium)   Physical              9.31                N/A
                                       VM, local             9.68                4.0%
                                       VM, Grid virtual FS   9.70                4.2%
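The overhead column is just the relative slowdown versus the physical run; as a quick check against the slide's own execution times:

```python
def overhead_pct(vm_time: float, physical_time: float) -> float:
    """Relative virtualization overhead as a percentage."""
    return (vm_time - physical_time) / physical_time * 100

# SpecHPC Seismic: VM-local 16.6 vs physical 16.4 (x10^3 s)
print(round(overhead_pct(16.6, 16.4), 1))  # ~1.2
# SpecHPC Climate: VM-local 9.68 vs physical 9.31
print(round(overhead_pct(9.68, 9.31), 1))  # ~4.0
```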
  • Isn't It Going To Be Too Slow? Synthetic benchmark: exponentially distributed arrivals of compute-bound tasks; background load provided by playback of traces from PSC. Relative overheads < 10%.
  • Isn’t It Going To Be Too Slow?
    • Virtualized NICs have very similar bandwidth, slightly higher latencies
      • J. Sugerman, G. Venkitachalam, B-H Lim, "Virtualizing I/O Devices on VMware Workstation's Hosted Virtual Machine Monitor", USENIX 2001
    • Disk-intensive workloads (kernel build, web service): 30% slowdown
      • S. King, G. Dunlap, P. Chen, “OS support for Virtual Machines”, USENIX 2003
    • However: May not scale with faster NIC or disk
  • Average latency over WAN: comparison with options (Client = C, Proxy = P, Host = H).
    • VNET = 37.535 ms; 35.525 ms with SSL
    • VMware = 35.625 ms (NAT); 37.435 ms (bridged)
    • In line with physical? Physical = C-P + P-H + H-VM = 0.34 + 36.993 + 0.189 = 37.522 ms, vs. VNET = 37.535 ms (35.525 ms with SSL)
  • Standard deviation of latency over WAN (Client = C, Proxy = P, Host = H). In line with physical? Physical = C-P + P-H + H-VM = 1.11 + 18.702 + 0.095 = 19.907 ms, vs. VNET = 77.287 ms (40.763 ms with SSL). What: VNET increases variability in latency. Why: the TCP connection between VNET servers trades packet loss for increased delay.
  • Bandwidth over WAN. In line with physical? Physical (VMware bridged networking) = 1.93 MB/s, vs. VNET = 1.22 MB/s (0.94 MB/s with SSL). What: VNET achieves lower than expected throughput. Expectation: VNET to achieve throughput comparable to the physical network. Why: VNET's TCP connection is tricking TTCP's TCP connection.