Virtualization in the Real World
  • Slide notes — Typical Unix Environment: dedicated processors and disks, complex system management, "heavy lifting" required for new servers, increased network complexity, non-shared system resources. Linux on z/VM: shared resources, simplified system management, new servers online in minutes, network consolidation, economies of scale.

Transcript

  • 1. Virtualization in the Real World A customer experience Session 9214 Speaker: Mike Reeves [email_address] Fidelity Investments One Destiny Way MZ CC2O Westlake, Tx 76262
  • 2. Virtualization in the Real World A customer experience Virtualization in the Real World
  • 3. zSeries & s/390 Linux The zSeries Linux Implementation Formula Unix versus z/VM & Linux Infrastructure Reduction Grid on zSeries Support Model Practical Examples TCO Model Workload management
  • 4.
    • PIT = ((((RIT × PT)^NITP)^M²)^V³)^NTW
    • ISVP^TABNYD
    zSeries & s/390 Linux — The zSeries Linux Implementation Formula: PIT - Project Implementation Time; RIT - Real Implementation Time; PT - Project Time; NITP - Needlessly Involved Technical People; M - Managers; V - Vice Presidents; NTW - Number of Turf Wars; ISVP - ISV Products; TABNYD - Talked About But Not Yet Delivered
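Read as a formula, one plausible rendering of this tongue-in-cheek slide (assuming the stray digits are exponents, and that the ISVP term multiplies the rest — the slide does not actually say how it combines) is:

```latex
\[
PIT = \Bigl(\bigl(\bigl((RIT \times PT)^{NITP}\bigr)^{M^{2}}\bigr)^{V^{3}}\Bigr)^{NTW}
      \cdot ISVP^{\,TABNYD}
\]
```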
  • 5. Unix versus z/VM & Linux
  • 6. Unix versus z/VM & Linux IEEE VLAN Discrete compared to virtualized with z/VM z/OS Network Server UNIX App Cable Server AIX App Server Win.. App Cable Cable Typical Open environment Linux App Linux App Linux App Virtual Cables Shared Disks z/VM processors, memory, channels... Cable Linux on z/VM
  • 7. Linux Linux Linux Manageability of the Virtual Environment CMS VM OPER (REXX) Linux CP Hypervisor operations CP Monitor
    • Virtual Consoles
    • Single Console Image Facility
    • PROP(CA) or VM/Oper (CA)
    • Performance Toolkit
    • Standard VM monitor data
    • MICS and/or Merrill’s MXG
    • Integrate with z/OS data
    • RMF LPAR reporting
    • RMF for Linux
    Virtual Console SCIF Monitor Data z/VM CMS Perf. Toolkit Workload Management
  • 8. zSeries Hardware zSeries Hardware layer Automation z/VM Virtualization layer Management Unix versus z/VM & Linux On Demand is there today!! Dynamic addition of resources is possible for certain resources and is expanding rapidly in the zSeries infrastructure. Linux OS Middleware Application Linux OS Middleware Application Linux OS Middleware Application Linux OS Middleware Application Linux OS Middleware Application Linux OS Middleware Application
  • 9. Typical Open environment Unix versus z/VM & Linux Discrete compared to virtualized with VMWare Server UNIX App Cable Server AIX App Server Win.. App Cable Cable Windows & Linux on VMWare Network Virtual Cables Cable Win.. App Linux App VMWare cpu-mem-I/O Win.. App Linux App VMWare cpu-mem-I/O Virtual Cables Win.. App Linux App VMWare cpu-mem-I/O Cable Blade Frame Blade Frame z/OS
  • 10. Unix versus z/VM & Linux What are some differences? Virtualization with 3 decades of IBM software and hardware experience behind it.
    • Instruction based Virtualization
    • End-to-End Error Recovery
    • Workload Management
    • Dynamic pathing to Disk
    • Hipersockets between LPARs
    • On Demand Infrastructure
    • Simplified Administration, Monitoring and Automation
    • Infrastructure simplification
    • Shared Segments & Disk Sharing
    • Maintenance and Upkeep
    zSeries zSeries Hardware layer z/VM Virtualization layer
  • 11. Unix versus z/VM & Linux Virtualization Considerations for Mainframe Users ✓ Environment requires high degree of sharing ✓ Dynamic provisioning (creating & manipulating guests on the fly) ✓ Total Cost of Ownership (infrastructure for power/cooling) ✓ Dynamic “On Demand” resource allocation ✓ Disk I/O subsystem dynamic pathing ✓ Centralized Administration and Capacity Management ✓ Linux automation capability ✓ Total Cost of Acquisition (initial cost for small implementation) ✓ Hipersocket connectivity to other LPARs ✓ Virtualization of Windows ✓ Autonomic Workload Management ✓ End-to-End Error Recovery z/VM Consideration
  • 12. Infrastructure Reduction Firewall Web Server eMail Server Application Server Database Server Security Server Backup Server Security Server Web Server Web Server eMail Server Application Server Application Server Database Server Database Server Backup Server Firewall Firewall Firewall Typical Server Environment – What are the Problems? What is Missing? Internet Intranet
  • 13.
    • Something will always be broken or malfunctioning
    • Something in this infrastructure needs upgrade
      • Hardware/software upgrade
      • Upgrade (technology exchange) is very disruptive
      • No provision for dynamic upgrades
    • The majority of this infrastructure will be underutilized
      • But when processing spikes occur, there will always be a bottleneck somewhere
      • Unknown SPOFs
    • End-to-End management is difficult to impossible
      • Monitoring, management & control do not span silos
      • Administration is difficult and requires too many levels of interaction to solve problems
    • No real way to achieve significant infrastructure & administrative cost reduction
    • Automation is difficult so autonomic computing and Disaster Recovery are nearly impossible to achieve
    Infrastructure Reduction What are the problems with the distributed Infrastructure?
  • 14. Infrastructure Reduction What’s Missing? → Support Infrastructure! Firewall Web Server eMail Server Application Server Database Server Security Server Backup Server Security Server Web Server Web Server eMail Server Application Server Application Server Database Server Database Server Backup Server Firewall Firewall Firewall Intranet This configuration contains 50+ levels of infrastructure Internet
  • 15. In this configuration, 14+ levels of infrastructure have been eliminated Infrastructure Reduction Reduce Infrastructure with Linux on zSeries Firewall Web Server eMail Server Application Server Database Server Security Server Backup Server Security Server Web Server Web Server eMail Server Application Server Application Server Database Server Database Server Backup Server Firewall Firewall Firewall Intranet Internet Backup server Application server DB2 Connect z/VM-a File server tape z9xx z/OS-a Backup server Application server DB2 Connect z/VM-b File server z9xx DB2 CICS Tape mgt z/OS-b DB2 CICS Tape mgt MQSeries MQSeries
  • 16.
    • Virtualization simplifies the infrastructure
    • Common software provides for simpler upgrades and hardware can be transparently upgraded
    • Administration and management simplified
    • Real cost savings can be achieved because levels are moved from real to virtual
    • Resources can be better utilized
    • On Demand dynamic addition of resources
    • Better automation, autonomic computing
    • Disaster recovery actually possible
    Infrastructure Reduction Consolidation on zSeries – What are the Benefits?
  • 17. Grid on zSeries
    • JES2 MAS
      • Jobs processed where resources are available
    • CICS MRO
      • Function shipping throughout sysplex based on available resources
      • Transactions routed based on available resources or transaction affinity
    • DB2 Data Sharing
      • Data-sharing allows any CICS region to access data as though it were local
    • VSAM Record Level Sharing
      • Allows access to VSAM from individual regions across a sysplex rather than from file owning regions
    • On Demand resource addition
    Workload Manager Parallel Sysplex Network CICS CICS CICS CICS CICS CICS CICS CICS CICS CICS CICS CICS Java Java Java Java Java DB2 VSAM
  • 18. WebSphere – A Grid?
    • Sysplex Websphere grid
      • Servers dynamically added & quiesced
      • Resources balanced across sysplex
    • WebSphere Application Server
      • Can take advantage of z/OS security, crypto and zAAP features
    • Work Load Manager
      • Dynamic Management of WAS application servers
      • Work loads prioritized and balanced
      • Running hardware at 100% with heterogeneous workloads
    • On Demand resource addition
      • Activate standard processors, zAAPs, IFLs and Memory dynamically
      • Deactivate resources dynamically
    Grid on zSeries Work Load Manager Parallel Sysplex Network Servelet Java EJB
  • 19. DB2 Data Sharing – A Grid?
    • WebSphere & CICS
      • CICS Web Server
      • J2EE, Java transactions
      • Business transformation logic
    • DB2 Data Sharing
      • Enterprise Java Beans
      • Stored Procedures
      • DB2 Connect
    • VSAM Record Level Sharing
      • Sysplex wide sharing of VSAM files
      • Web enabled VSAM connectors
    • On Demand resource addition
      • Add resources manually or automatically
      • Scale up and/or out
    Grid on zSeries Workload Manager Parallel Sysplex Network DB2 VSAM Java Java Java Java CICS CICS CICS CICS CICS CICS CICS CICS
  • 20. Job Manager Grid on zSeries Data Grid Exploitation with zSeries We could do this, but our application groups would have to recode all of our applications to fit this model. Eventually this will happen, but not in the short term. Resource Library Process Process Process Process Client Gatekeeper Security Infrastructure Open Grid Services Architecture Resource Manager Hosting Environment Grid Service Container User-Defined Services Base Services System-Level Services OGSI Spec Implementation Security Infrastructure Web Service Engine Security admin. RSL admin.
  • 21. z/VM 1 z/OS 1 OSA/X z/VM 2 z/OS 2 OSA/X Data Grid Exploitation with zSeries Linux & DB2 Connect hipersockets DB2 Conn Guest 1 DB2 Conn Guest 2 DB2 Conn Guest 3 DB2 Conn Guest ..n DB2/DS DB2/DS DB2/DS hipersockets DB2/DS DB2/DS DB2/DS DB2 Conn Guest 1 DB2 Conn Guest 2 DB2 Conn Guest 3 DB2 Conn Guest ..n sysplex Compute environment taking advantage of zSeries data grid to provide a high speed connection to DB2 data on the zSeries sysplex. Low network latency & high data rates can be achieved with hipersockets. Example of this configuration in “Practical Example”. Grid on zSeries Compute Intensive Processing
  • 22. Middleware & DBMS Support OS support with IBM for level 1 & 2 – level 3 support with Red Hat. Support Model How we do zSeries Linux installation & support zSeries Hardware z/VM & Virtual Guests WAS WAS MQ DB DB Test/Dev/QA – Mainframe z/OS Support Production – UNIX Technical Support Mainframe Hardware & Storage Management WAS DB Java
  • 23. Linux MQSeries Linux DB2 Conn z/VM (ZVMx) Linux WAS 5 z/OS (CPUx) LPAR1 LPAR2 Server Creation Servers can be provisioned through “Server Central”. Once the request is received it takes about ½ hour to create the server and in many cases the server can be completely provisioned in less than one day. Test/Dev/QA supported by z/OS support group. Production supported by UNIX Technical Support group. Middleware & DBMS supported by Open Systems DBMS support. Server Creation Support Model YOUR New Linux
  • 24. Linux mail Linux Java Linux c++/ftp Linux MQSeries Linux DB2 Conn z/VM Linux WAS 5 z/OS zSeries – Test/QA Dev – Test – QA Intranet TestPlex QA Plex Support Model
  • 25. Linux mail Linux Java Linux c++/ftp Linux MQSeries Linux DB2 Conn z/VM Linux WAS 5 z/OS zSeries – Prod Intranet Other Plexes Site 1 Site 2 Other Plexes Production zOS/zVM zOS/zVM zOS/zVM zOS/zVM Support Model
  • 26.
    • DB2 Connect
    • AIX Servers in a High Availability multi-site configuration resulting in unused capacity
    • Maintenance difficult to schedule because all connects share the same DB2 binaries
    • Multiple network hops increase latency resulting in higher response times
    • Memory configuration limited to total memory available on hardware
    z/OS 2 DBMS DBMS DBMS DBMS z/OS 1 DBMS DBMS DBMS DBMS IP Dist. Site Dist. Site 1 AIX C1 C2 C3 Site 2 AIX C1 C2 C3 Old Configuration Practical Examples
  • 27.
    • DB2 Connect
    • Shares hardware in a continuous availability configuration
    • Maintenance can be easily scheduled because each instance has its own DB2 binaries
    • One network hop reduces network latency to near zero
    • Memory can be customized for each server guest
    z/VM 1 z/VM 2 IP Dist. New Configuration Practical Examples z/OS 1 DBMS DBMS DBMS DBMS z/OS 2 DBMS DBMS DBMS DBMS
  • 28.
    • WAS 5.1.0 applications
    • CSC Hostbridge & EOS
    • High availability configuration
    • Mainframe centric applications with low utilization.
    • One network hop reduces network latency to near zero (except in failover)
    • Both Hostbridge and EOS are running on a single guest to reduce server costs.
    Practical Examples
  • 29.
    • DB2 Connect/Java
    • High availability configuration
    • Maintenance can be easily scheduled because each instance has its own DB2 binaries
    • One network hop reduces network latency to near zero (except in failover)
    • Low utilization server allows for consolidation, simplification and low network latency
    Merrimack z/VM 1 z/OS 1 DBMS IP Dist. z/VM 1 z/OS 1 DBMS Dallas High Availability Failover Practical Examples
  • 30. SNA WAN OSA/e z/OS EE SNA App SNA Remote Data Center TCP/IP SNA environments will be around for some time and have evolved to become a complex infrastructure. SNA over IP requires many levels of infrastructure. DLSw and EE gateway technologies are not always compatible and when a problem occurs, diagnosis is very difficult. Channel Attached CIP Routers TN3270 37xx 37xx 37xx z9xx z9xx TCP/IP SNA 37xx Load Balancing SNA Apps SNA Apps SNA Elimination – Current Environment Practical Examples DLSw DLSw EE GW CIP CIP CIP
  • 31. OSA/e z/OS SNA App Remote Data Center zSeries Linux Communications Server, Communications Controller, and SSL server provide the ability to collapse the SNA infrastructure back into the mainframe platform eliminating the need for distributed SNA appliance technology which is reaching end-of-life status over the next 12-24 months. TN3270 SNA SNA Apps SNA Apps CSCC SNA Elimination – Future Environment Practical Examples TCP/IP
  • 32. “ We project that improving UNIX/Intel workload management will drive average utilization rates from the 15% to 20% to 40% to 50% within three years. When the significant Intel/zSeries annual price/performance improvement gap is overlaid on these projections, it becomes clear that any business case for mainframe Linux will evaporate by 2005/06, in the face of the Linux on Intel juggernaut.” (Meta Group, “Mainframe Linux Server Consolidation: The Near-Term Business Case”, Delta 2107 Mar 03) TCO versus TCA!
  • 33. “ Action Item: Investigate all options to consolidate. Closely evaluate the migration costs, all assumptions (including staffing efficiency and over-provisioning for peak workloads), availability requirements and alternative mechanisms for reducing TCO. Those who dismissed Linux on the zSeries two years ago may wish to revisit it because IBM has made progress.” (Gartner, “The IBM Mainframe: 40 Years, Now What?”, 30 November – 2 December 2004 ) TCO versus TCA!
  • 34.
    • Long Term costs versus initial cost!
      • How long before hardware push-pull is required?
    • Total Infrastructure costs versus server hardware cost!!
      • How much infrastructure does the server require?
    • How much capacity will go unused?
      • Low utilization equals poor ROI
      • Utilization only during certain time frames
    • Downtime does have a cost!
      • Server outages should include appropriate resolution costs
      • Business outages do cost real dollars
    • Ongoing maintenance, monitoring and capacity planning costs real dollars!
      • What real networking, monitoring, admin & capacity planning costs are visible to the project?
    TCO versus TCA!
  • 35.
    • Benchmarks are not real workloads!!
      • Benchmarks don’t represent real production workloads
    • One-to-One hardware comparisons don’t work!!
      • Single application hardware comparison: ex. blade-IFL $$$
    • Sharing not considered as part of the model!!
      • Workload sharing is becoming a necessity in all environments
      • 24X7 utilization
    • Downtime not considered as part of the model!!
      • Outages should include appropriate resolution costs
    • Infrastructure reduction not considered!!
      • Networking, monitoring, admin & capacity planning cost $$$
    • On Demand versus excess capacity is a reality on zSeries!
      • Add and remove resources dynamically
      • No unused infrastructure for capacity is required
    TCO versus TCA!
  • 36. zSeries 20% 2 IFL One-to-One Comparisons are Misleading Cost Comparison Intel 40% 3 Ghz The comparison is done on one box but the deployment is implemented in the standard high availability configuration which is much more costly. TCO versus TCA! zSeries 10% 2 IFL Actual Implementation Intel 10% 3 Ghz zSeries 10% 2 IFL Intel 10% 3 Ghz Intel 10% 3 Ghz Intel 10% 3 Ghz zSeries Test zSeries qa Intel test Intel qa Shared resources
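To make the slide's point concrete, here is a minimal back-of-the-envelope sketch. Every number in it (box counts, prices, utilizations) is an invented placeholder, not a figure from the presentation; the only claim is the shape of the arithmetic — at low utilization most of the capacity you buy sits idle, so the effective cost of the work actually done rises.

```python
# Hypothetical cost-per-used-CPU-hour comparison. All prices, counts and
# utilization figures below are illustrative placeholders only.

def cost_per_used_cpu_hour(total_cost, cpus, avg_utilization, hours=8760):
    """Annual cost divided by the CPU-hours that actually did work."""
    used_cpu_hours = cpus * hours * avg_utilization
    return total_cost / used_cpu_hours

# Discrete model: prod, failover, test and QA each get their own box,
# and each box idles most of the day (the 10% boxes on this slide).
discrete = cost_per_used_cpu_hour(
    total_cost=4 * 15_000,   # 4 hypothetical boxes at $15k/yr each
    cpus=4,
    avg_utilization=0.10,
)

# Shared model: the same four environments stacked on shared IFLs,
# which lets average utilization climb (hypothetical 35%).
shared = cost_per_used_cpu_hour(
    total_cost=45_000,       # hypothetical annual cost of the shared capacity
    cpus=2,
    avg_utilization=0.35,
)

print(f"discrete: ${discrete:,.2f} per used CPU-hour")
print(f"shared:   ${shared:,.2f} per used CPU-hour")
```

The absolute numbers are meaningless; the comparison only illustrates why utilization, failover sprawl and shared capacity belong in the cost model.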
  • 37. So far all of the testing has focused on “Primary Shift” projects. This only takes advantage of a window of resources available on zSeries Linux. This leaves more than 60% of the resources available for other application deployments. 8:00 5:00 5:00 8:00 WAS Oracle UDB Java/DB2 Connect Web Portal Offshore development Extracts and reporting Other exploitation of unused timeframe Great area of opportunity lies between end and start of primary shift TCO versus TCA! What are the Opportunities
  • 38. Prime Shift Applications Non-Prime Hours Applications TCO versus TCA! App Server App Server App Server DBMS Server DBMS Server DBMS Server Web Server Web Server Web Server Web Server Web Server Web Server Report Extract Database Server Database Server Database Server Report Extract Report Extract DBMS Server Report Extract Report Extract z/VM-a App Server tape z9xx z/OS-a Database Server CICS MQSeries CICS DBMS Server App Server Database Server Report Extract DBMS Server App Server MQSeries zSeries is designed for sharing so scaling can be accomplished both vertically and horizontally
  • 39. TCO versus TCA! z/VM z/OS Heterogeneous workloads can reduce costs Use the workload management capability of z/VM to allow production peaks to utilize the test & development resources.
    • Provision Test/Dev
      • Build as many test/development guests as you can to fill unused resources
      • Set the priority of the test/dev guests low
    • Provision Production
      • Build production guests with the intent of satisfying peaks by stealing resources from test/dev
      • Set the priority of the production guests high
    • Configure the LPAR with sufficient resources to run both
    Linux DB2 Conn Linux WAS 5 Linux Java Linux DB2 Conn Linux WAS 5 Linux WAS 5 Linux WAS 5 Test Production Linux Java Linux DB2 Conn Linux WAS 5 Linux WAS 5 Linux WAS 5 Test Linux DB2 Conn Linux WAS 5 Production
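The mechanism behind this slide can be sketched as simple proportional-share allocation. This is only an illustration of the idea, not z/VM's actual scheduler, and the guest names, shares and demands are hypothetical: each guest gets CPU in proportion to its relative share, capped at what it actually demands, and whatever the low-share test/dev guests leave unused flows to the high-share production guests during peaks.

```python
# Toy proportional-share allocator, illustrating how low-priority test/dev
# guests yield unused capacity to high-priority production guests.
# Guest names, relative shares and demands are hypothetical.

def allocate(capacity, guests):
    """guests: name -> (relative_share, cpu_demand). Returns name -> cpu."""
    alloc = {name: 0.0 for name in guests}
    active = dict(guests)
    remaining = capacity
    while remaining > 1e-9 and active:
        total_share = sum(share for share, _ in active.values())
        granted = {}
        for name, (share, demand) in active.items():
            fair = remaining * share / total_share      # proportional slice
            granted[name] = min(fair, demand - alloc[name])
        for name, extra in granted.items():
            alloc[name] += extra
        remaining -= sum(granted.values())
        # Guests whose demand is satisfied drop out; their share is recycled.
        active = {n: sd for n, sd in active.items() if alloc[n] < sd[1] - 1e-9}
    return alloc

guests = {
    "PRODWAS1": (400, 1.5),   # production peak wants 1.5 engines
    "PRODDB1":  (400, 1.0),
    "TESTWAS1": (100, 0.05),  # test/dev mostly idle
    "TESTDB1":  (100, 0.05),
}
for name, cpu in allocate(capacity=2.0, guests=guests).items():
    print(f"{name}: {cpu:.2f} engines")
```

With everything busy, the 400-share guests get four times the CPU of the 100-share guests; with test/dev idle, production absorbs nearly the whole two engines, which is the "satisfy peaks by stealing from test/dev" behavior described above.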
  • 40. Current Unix Life Cycle Strategy TCO versus TCA! The Boundless Proliferation loop!
    • Provision server
      • Floor space, Power & Hardware
      • OS, Network & Middleware
    • Test the configuration
    • Install the application
    • QA the configuration
    • Run parallel to validate the application
    • Cutover to production
    • Decommission the old server
    Network Server UNIX App Cable h/w s/w OLD disk App Cable UNIX s/w Server h/w NEW disk
  • 41. zSeries Linux Life Cycle Strategy OLD TCO versus TCA! Ending the loop with zSeries Linux! Network Linux App Linux App Linux App Virtual Cables Shared Disks z/VM processors, memory, channels... Cable z/VM processors, memory, channels... NEW Linux App Linux App Linux App Virtual Cables Shared Disks Cable
  • 42. TCO versus TCA! What works in zSeries Linux!
  • 43. How do you decide what works? TCO versus TCA! ✓ Dynamic “On Demand” resource allocation ✓ CPU intensive workloads (where CPU is not I/O related) ✓ Non-primary shift workloads ✓ Low CPU utilization ✓ High I/O activity ✓ Infrastructure Simplification/Reduction ✓ Time to Market ✓ Mainframe reliability requirements ✓ Test & Development ✓ Scalability beyond 4 CPUs ✓ DB2 Connect & MQSeries Concentration ✓ Needs access to mainframe data or application zLinux Workload Characteristics
  • 44. Linux Linux Linux Manageability of the Virtual Environment CMS VM OPER (REXX) Linux CP Hypervisor operations CP Monitor
    • Virtual Consoles
    • Single Console Image Facility
    • PROP(CA) or VM/Oper (CA)
    • Performance Toolkit
    • Standard VM monitor data
    • MICS and/or Merrill’s MXG
    • Integrate with z/OS data
    • RMF LPAR reporting
    • RMF for Linux
    Virtual Console SCIF Monitor Data z/VM CMS Perf. Toolkit Workload Management
  • 45. Workload Management Linux mail Linux Java Linux c++/ftp Linux MQSeries Linux DB2 Conn z/VM (ZVMx) IFL Memory Disk OSA/X Linux WAS 5 z/OS (CPUx) CPU Memory Disk zSeries hardware z/VM & z/OS hipersockets FMN PROD CICS / DBMS QPRD CICS / DBMS RPRD CICS / DBMS CPRD CICS / DBMS Local Remote z/VM #1 z900 z/VM #2 z990 z990 z990 Test – 25 Prod – 10 Test – 22 Prod – 12 Test – 0 Prod – 2 Test – 0 Prod –0
  • 46. z900 All z/VM accounting data was pulled for the week of 2005/01/31 and 2005/02/04. Only records between the hours of 09:00 and 17:00 EST were included. The data was summarized for 15 minute intervals. The graph below reflects the average cpu utilization for the week between 09:00 and 17:00 EST normalized to 100%. z/VM #1 Average CPU Usage for 01/31-02/24 Workload Management
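The summarization described on this slide is straightforward to reproduce. The sketch below assumes the z/VM accounting records have already been exported to a CSV with hypothetical timestamp, guest and cpu_seconds columns; the original analysis worked from raw z/VM accounting data, so the field names here are placeholders.

```python
# Sketch of the slide's method: keep 09:00-17:00 records, bucket them into
# 15-minute intervals, and report average CPU utilization of one engine.
# Assumes a CSV export with hypothetical columns: timestamp, guest, cpu_seconds.
import csv
from collections import defaultdict
from datetime import datetime

def average_prime_shift_utilization(path, start_hour=9, end_hour=17,
                                    bucket_minutes=15):
    buckets = defaultdict(float)   # (date, bucket index) -> CPU-seconds
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            if not (start_hour <= ts.hour < end_hour):
                continue           # prime shift only, as on the slide
            bucket = (ts.date(), (ts.hour * 60 + ts.minute) // bucket_minutes)
            buckets[bucket] += float(row["cpu_seconds"])
    interval_seconds = bucket_minutes * 60
    utilizations = [secs / interval_seconds for secs in buckets.values()]
    return sum(utilizations) / len(utilizations) if utilizations else 0.0

# Hypothetical usage:
# print(f"{average_prime_shift_utilization('zvm_accounting.csv'):.1%}")
```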
  • 47. All z/VM accounting data was pulled for the week of 2005/01/31 and 2005/02/04. Only records between the hours of 09:00 and 17:00 EST were included. The data was summarized for 15 minute intervals. The graph below reflects the average server size for production and test as well as the demand paging rate for each. z/VM #1 Average Paging for 01/31-02/24 Workload Management
  • 48. z990 All z/VM accounting data was pulled for the week of 2005/01/31 and 2005/02/04. Only records between the hours of 09:00 and 17:00 EST were included. The data was summarized for 15 minute intervals. The graph below reflects the average cpu utilization for the week between 09:00 and 17:00 EST normalized to 100%. z/VM #2 Average CPU Usage for 01/31-02/24 Workload Management
  • 49. All z/VM accounting data was pulled for the week of 2005/01/31 and 2005/02/04. Only records between the hours of 09:00 and 17:00 EST were included. The data was summarized for 15 minute intervals. The graph below reflects the average server size for production and test as well as the demand paging rate for each. z/VM #2 Average Paging for 01/31-02/24 Workload Management
  • 50. Based on the current usage patterns of the ZVM5 infrastructure the average production utilization is ~.5% and that of test is ~.25%. The chart below shows how that utilization would scale across 8+ CPUs in one or more environments. This would allow for ~.5 hours of CPU utilization for each production guest. ** Workload Management
  • 51. Assuming estimated usage of 10% for production guests and .5% for test guests the chart below shows how that utilization would scale across 31+ CPUs in one or more environments. This would allow for ~2.5 hours of CPU utilization for each production guest. ** Workload Management
  • 52. Workload Management Linux Guest Chat Server Chat Client Start Server 1-16 JVMs Start Client Exit Test Collect Results No Yes Test Configuration for all tests: Each user added increases the thread count by 16.
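The deck does not include the benchmark code itself, so the sketch below only mirrors the shape of the test loop described above: a chat-style server, client threads added 16 at a time per simulated user, and results collected when the clients finish. The host, port, message and counts are all hypothetical.

```python
# Hypothetical echo "chat" load test: one server thread, plus 16 client
# threads per simulated user, mirroring the loop on this slide.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050        # hypothetical endpoint
MESSAGES_PER_THREAD = 100

def serve():
    srv = socket.create_server((HOST, PORT))
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=echo, args=(conn,), daemon=True).start()

def echo(conn):
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

def chat_client():
    with socket.create_connection((HOST, PORT)) as s:
        for _ in range(MESSAGES_PER_THREAD):
            s.sendall(b"hello\n")
            s.recv(1024)

def run_test(users):
    # Each simulated user adds 16 threads, as noted on the slide.
    clients = [threading.Thread(target=chat_client) for _ in range(users * 16)]
    start = time.time()
    for t in clients:
        t.start()
    for t in clients:
        t.join()
    print(f"{users * 16} threads finished in {time.time() - start:.2f}s")

if __name__ == "__main__":
    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.5)                    # give the server a moment to listen
    run_test(users=2)
```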
  • 53. zLinux 1 Chat Client/Server zLinux 2 Chat Client/Server zLinux 3 Chat Client/Server zLinux 4 Chat Client/Server Start Server 1-16 JVMs Start Client Exit Test Collect Results No Yes Quickdsp Share 400 Quickdsp Share 400 Quickdsp Share 400 Quickdsp Share 400 Test was run on all four guests simultaneously to simulate multiple high priority workloads. Each guest was set at a relative share of 400 and quick dispatch. z/VM Workload Management - Example 1 Workload Management
  • 54. BASELine - z/VM Workload Management - Example 1 Workload Management
  • 55. RESULTS - z/VM Workload Management - Example 1 Workload Management
  • 56. zLinux 1 Chat Client/Server zLinux 2 Chat Client/Server zLinux 3 Chat Client/Server zLinux 4 Chat Client/Server Start Server 16 JVMs Start Client Exit Test Collect Results No Yes Quickdsp Share 400 Quickdsp Share 300 Quickdsp Share 200 Quickdsp Share 100 Test was run on all four guests simultaneously to simulate multiple high priority workloads. Guests were set at relative shares of 400, 300, 200 and 100 with quick dispatch. z/VM Workload Management - Example 2 Workload Management
  • 57. RESULTS - z/VM Workload Management - Example 2 Workload Management
  • 58. zLinux 1 Chat Client/Server zLinux 4-20 Chat Client/Server Start Server 16 JVMs Start Client Exit Test Collect Results No Yes Share 400 down Test was run on one guest with 4 and 8 CPUs. 1-16 JVMs were started and each test was run with 2-16 Threads per JVM. z/VM Workload Management - Example 3 Workload Management
  • 59. Benchmark results in the Poughkeepsie benchmark center showed scalability to 8 processors and beyond for a single guest. The chart below shows that as the number of JVMs and the number of threads per JVM increases, scalability increases dramatically until the processor capacity reaches 100 percent as indicated by the red shade. Workload Management
  • 60. Benchmark results in the Poughkeepsie benchmark center showed scalability to 8 processors and beyond for a single guest. The chart below shows that as the number of JVMs and the number of threads per JVM increases, scalability increases dramatically until the processor capacity reaches 100 percent as indicated by the red shade. Workload Management
  • 61. zLinux 1-3 Chat Client/Server zLinux 4-6 Chat Client/Server zLinux 7-20 Chat Client/Server Start Server 16 JVMs Start Client Exit Test Collect Results No Yes Share 400 Share 200 Share 100 Test was run on all twenty guests simultaneously to simulate multiple high priority workloads. Guests 1-3 share at 400, guests 4-6 share at 200 and the remaining guests share was at 100. z/VM Workload Management - Example 4 Workload Management
  • 62. Benchmark results in the Poughkeepsie benchmark center showed scalability to 8 processors and beyond for multiple guests. The chart below contrasts a single guest (orange) 1-16 JVM with 2 Threads per JVM versus 20 guests with varying workloads. Workload Management
  • 63. Benchmark results in the Poughkeepsie benchmark center showed scalability to 16 processors and beyond for multiple guests. The chart below contrasts a single guest (orange) 1-16 JVM with 2 Threads per JVM versus 20 guests with varying workloads. Workload Management
  • 64. Benchmark results in the Poughkeepsie benchmark center showed scalability to 16 processors and beyond for multiple guests. The chart below contrasts varying workloads across 20 guests with 8 and 16 CPUs. Workload Management
  • 65. Virtualization in the Real World A customer experience
  • 66. Virtualization in the Real World A customer experience