TH1_3-8O
Transcript

  • 1. A study of the introduction of virtualization technology into operator consoles. T. Ohata, M. Ishii / SPring-8. ICALEPCS 2005, October 10-14, 2005, Geneva, Switzerland
  • 2. Contents: virtualization technology overview; categories of virtualization technologies; performance evaluation (how many virtual machines can run on a server); introduction into the control system; system setup; conclusion. ICALEPCS 2005 in Geneva, Switzerland
  • 3. What is virtualization technology?
  • 4. Overview of virtualization technology. It originated with the IBM System/360 mainframe. It enables consolidating many computers onto a small number of host computers. Each virtual machine (VM) has independent resources (CPU, disks, MAC address, etc.), like a stand-alone computer. [Diagram: a mainframe's CPU, network card, memory, and disk shared by many VMs on one host computer.]
  • 5. Why do we need virtualization technology?
  • 6. Problems of the present control system. Network-distributed computing is the standard method and lets us construct an efficient control system, but it leads to computer proliferation: we have over 200 computers in the beamline control system alone. This increases maintenance tasks such as version upgrades and patching, and we face increasing hardware failures, yet only a few staff maintain all of these machines.
  • 7. Virtualization technology has been revived. Through consolidation onto general-purpose servers, we can reduce the number of computers and cut hardware and maintenance costs drastically.
  • 8. Categories of virtualization technology: three approaches, with typical products. Resource multiplexing: Xen*, LPAR (IBM), nPartition (HP). Emulation: VMware*, VirtualPC, QEMU, Bochs, User-Mode-Linux*, coLinux. Application shielding: Solaris container*, jail, chroot. (* evaluated products)
  • 9. 1. Resource multiplexing. Originated on mainframes; major UNIX vendors have released several products. A layer (called a hypervisor or virtual machine monitor) multiplexes hardware resources such as CPU and memory. The guest OS needs a small kernel patch to suit the layer interface, but overhead is low. [Diagram: several OS/software stacks running on a multiplexing layer above the hardware.]
  • 10. 2. Emulation. Many emulators exist for PC/AT, 68K, and game machines. Suitable for development and debugging, and an unmodified OS can be used, but translating instructions adds some overhead. [Diagram: guest OS/software stacks on an emulation layer above the host operating system and hardware.]
  • 11. 3. Application shielding. Developed for web hosting at ISPs (Internet service providers) to provide separate computing environments. A partition makes each computing space invisible to the others. No overhead. [Diagram: software partitions on a single operating system and hardware.]
  • 12. Performance evaluation: how many VMs can run on one server computer?
  • 13. Evaluated products (host OS / guest OS; comments). VMware Workstation 4.5: Linux-2.6.8 host, Linux-2.6.8 guest; commercial, supports many OSes. User-Mode-Linux (UML): Linux-2.6.8 host, Linux-2.4.26um guest; Linux only, on x86. Solaris container: Solaris 10; SPARC and x86; FSS*, CPU pinning*. Xen 2.0.6: Linux-2.6-xen0 host, Linux-2.6-xenU guest; FSS, CPU pinning, live migration*. (* explained on the next slide)
  • 14. Special functions. Fair Share Scheduler (FSS): a scheduling policy that distributes CPU usage equally among tasks. CPU pinning: pins a VM to a specific CPU (effective in SMP environments); Linux has an "affinity" function, but it can pin only a process. Live migration: VMs migrate to another host dynamically and can keep running during the migration. [Diagram: VMs moving from Host 1 to Host 2 by live migration.]
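The per-process "affinity" function mentioned above is exposed on Linux through the `sched_setaffinity` system call. A minimal sketch in Python (assuming a Linux host; this pins the current process, the per-process analogue of pinning a whole VM):

```python
import os

# Pin the current process to CPU 0 -- the per-process analogue of the
# per-VM CPU pinning described on the slide (Linux-only API).
os.sched_setaffinity(0, {0})

# Read back the affinity mask to confirm the pinning took effect.
print(sorted(os.sched_getaffinity(0)))
```

A hypervisor's CPU pinning works at the VM level instead, so every process inside the guest inherits the placement.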
  • 15. Measurement procedure. We measured the response time between a virtual machine and a VME computer using a MADOCA application. The VM uses a message queue (SysV IPC) and the ONC-RPC (Remote Procedure Call) network communication protocol. The message size is 350 bytes, including the RPC header and the Ethernet frame header. MADOCA: Message And Database Oriented Control Architecture.
  • 16. Measurement bench. 1~10 VMs run on a single server computer (dual Xeon 3.0 GHz), a MADOCA client runs on each VM, and 1~10 MADOCA servers sit on the network; we measure the response time. [Diagram: VMs with MADOCA clients on one server, communicating over the network with the MADOCA servers.]
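As a rough illustration of this kind of round-trip measurement (not the MADOCA/ONC-RPC stack itself, which the talk does not show), one can time a 350-byte request/response against a local TCP echo service:

```python
import socket
import threading
import time

MSG = b"x" * 350  # the slide's message size, headers included


def echo_server(sock: socket.socket) -> None:
    """Accept one connection and echo everything back."""
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)


# Stand-in for the remote MADOCA server: a local TCP echo service.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
samples = []
for _ in range(100):
    t0 = time.perf_counter()
    cli.sendall(MSG)
    received = 0
    while received < len(MSG):  # read until the full echo is back
        received += len(cli.recv(4096))
    samples.append(time.perf_counter() - t0)

print(f"avg round trip: {sum(samples) / len(samples) * 1e3:.3f} ms")
```

On a loopback interface this measures only OS and scheduling overhead, which is exactly the component that grows as VMs contend for the host CPU.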
  • 17. Average response time vs. number of VMs. The HP B2000, our present operator console, serves as the reference. VMware and UML become worse as the number of VMs grows; at 5~6 VMs, Solaris containers and Xen are comparable to the HP workstation. [Plot: average response time (sec) vs. number of VMs.]
  • 18. Statistics of response time at 10 VMs. [Chart: maximum, minimum, average, and standard deviation of the response time (msec) for VMware, UML, Solaris container, Xen, and HP B2000; lower is better.]
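The four summary statistics on that chart can be computed with Python's standard library; the sample values below are made up for illustration and are not taken from the measurements:

```python
import statistics

# Hypothetical response-time samples in msec (illustration only,
# not the paper's data), summarized as on the slide: Max., Min., Ave., SD.
samples = [11.2, 12.8, 10.9, 13.5, 11.7, 12.1]

summary = {
    "max": max(samples),
    "min": min(samples),
    "ave": statistics.mean(samples),
    "sd": statistics.stdev(samples),  # sample standard deviation
}
print(summary)
```

Reporting the spread alongside the average matters here: a console with a good mean but a large maximum or SD would still feel unresponsive to operators.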
  • 19. Limit of hardware resources: CPU utilization. For the Solaris container, the host's CPU has no idle time left at 5~6 VMs, so 5~6 VMs are the optimum. [Plot: host CPU utilization (%) vs. number of VMs.]
  • 20. Limit of hardware resources: network interface card (NIC) utilization. For the Solaris container, traffic on the GbE network interface card is only a few percent of the full bandwidth; the saturation comes from CPU overload, not the network. [Plot: NIC utilization (MB/s) vs. number of VMs.]
  • 21. Limit of hardware resources: page-fault frequency. For the Solaris container, page faults waste CPU time and degrade performance; the saturation comes from TLB misses and swap-out. [Plot: page-fault frequency vs. number of VMs (1~10).]
  • 22. How many VMs are optimal? 5~6 VMs on a dual Xeon 3.0 GHz. To run more VMs, large pages on a large-address-space architecture are important (Physical Address Extension (PAE) or a 64-bit architecture), and many-core CPUs are attractive, since one CPU core is enough for 2~3 VMs.
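The talk's rule of thumb (one core for 2~3 VMs) gives a simple capacity estimate. The helper below is a hypothetical illustration of that arithmetic, not part of the original work:

```python
# Rule-of-thumb VM capacity from the talk: each CPU core can carry
# roughly 2~3 VMs. max_vms() is a hypothetical helper, not from the paper.
def max_vms(cores: int, vms_per_core: int = 3) -> int:
    return cores * vms_per_core


# The dual-CPU Xeon test machine: 2 cores x 3 VMs/core = 6 VMs,
# consistent with the measured optimum of 5~6 VMs.
print(max_vms(2))
```

The conservative end of the range (2 VMs per core) leaves headroom for page-fault and scheduling overhead, which the preceding slides identified as the real bottleneck.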
  • 23. Introduction into the control system. We installed virtualization technology in beamline control: Xen on Linux PC servers replaces the HP operator consoles, the control application programs were ported to the VMs (Linux), and we installed a pair of Xen hosts with an NFS server to hold the VM image files.
  • 24. System setup and live migration. The X server (thin client) can be used continuously during maintenance: control programs in VMs migrate from the primary Xen host to the secondary Xen host in a few hundred milliseconds, which enables shutting the primary down. The VM images are kept on an NFS server reached over Gigabit Ethernet. [Diagram: VMs with control programs migrating between the two Xen hosts, both mounting the VM images from the NFS server.]
  • 25. Future plan: high-availability cluster. We are studying a high-availability Single System Image (SSI) cluster configuration with Xen, because Xen's migration function is not effective when a host computer dies suddenly. [Diagram: structure of OpenSSI with Xen — software on an SSI cluster of VMs spanning several Xen hypervisors.]
  • 26. Future plan (cont'd): redundant storage. The NFS server is a single point of failure, so we will introduce a redundant storage system such as SAN, iSCSI, or NAS. [Diagram: primary and secondary Xen hosts connected to SAN storage through duplicated FC switches and fibers.]
  • 27. Cost estimation. About 50 HP-UX workstations will be replaced by 8 PC-based servers plus redundant storage, with 6 VMs running on each PC server. 75% of the total cost can be saved (hardware only).
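The consolidation figures on this slide are internally consistent; the script below just restates the slide's own numbers:

```python
# Figures from the slide: 8 PC servers, 6 VMs each, replacing
# about 50 HP-UX workstations.
servers = 8
vms_per_server = 6

capacity = servers * vms_per_server
print(capacity)  # 48 VMs, covering the ~50 consoles being replaced
```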
  • 28. Conclusion. We studied several virtualization technologies for use as operator consoles. We measured the performance of several virtualization environments and verified that they are stable; 5~6 VMs are optimal on one server computer. We introduced Xen, which has a live-migration function, into the beamline control system, and we plan to apply Xen to more beamlines.
  • 29. Thank you for your attention.
  • 30. [Screenshots: consoles running on the primary Xen host and on the secondary Xen host.]
