Virtualizing and Tuning Large Scale Java Platforms

Speakers: Emad Benjamin and Guillermo Tantachuco
The session covers various GC tuning techniques, with a particular focus on tuning large-scale JVM deployments. Come to this session to learn a GC tuning recipe that can give you the best configuration for latency-sensitive applications. While most enterprise-class Java workloads fit into a scaled-out set of JVM instances with heaps smaller than 4GB, there are workloads in the in-memory database space that require fairly large JVMs.
In this session we take a deep dive into the issues and the optimal configurations for tuning large JVMs in the range of 4GB to 128GB. The GC tuning recipe shared here is a refinement from 15 years of GC engagements, adapted in recent years for tuning some of the largest JVMs in the industry using plain HotSpot and the CMS GC policy. You should walk away with the ability to commence a decent GC tuning exercise on your own. The session summarizes the techniques and the JVM options needed to accomplish this task. Naturally, when tuning large-scale JVM platforms the underlying hardware cannot be ignored, so the session takes a detour from traditional GC tuning talks and dives into how to optimally size a platform for memory-intensive workloads. Lastly, the session covers the vFabric reference architecture, for which a comprehensive performance study was done.


Virtualizing and Tuning Large Scale Java Platforms

  1. 1. Virtualizing and Tuning Large Scale Java Platforms By Guillermo Tantachuco & Emad Benjamin © 2013 SpringOne 2GX. All rights reserved. Do not distribute without permission.
  2. 2. About the Speaker – Guillermo Tantachuco  Over 18 years of experience as a software engineer and architect  Has been at Pivotal and VMware for the past 3 years as a Sr. Field Engineer – received the 2012 President’s Club Award!  At Pivotal, Guillermo works with customers to understand their business needs and challenges and helps them seize new opportunities by leveraging Pivotal solutions to modernize their IT architecture  Guillermo is passionate about his family, business, technology and soccer.
  3. 3. About the Speaker – Emad Benjamin  I have been with VMware for the last 8 years, working on Java and vSphere – received the 2011 VMware President’s Club Award  20 years of experience as a Software Engineer/Architect, with the last 15 years focused on Java development  Open source contributions  Prior work with Cisco, Oracle, and banking/trading systems  Authored the following books
  4. 4. Agenda  Overview  Design and Sizing Java Platforms  Performance Best Practices and Tuning (& GC Tuning)  Tuning Demo  Customer Success Stories  Questions
  5. 5. Java Platforms Overview 5
  6. 6. Conventional Java Platforms  Java platforms are multi-tier and multi-org  Load Balancer Tier: Load Balancers, owned by the IT Operations Network Team  Web Server Tier: Web Servers, owned by the IT Operations Server Team  Java App Tier: Java Applications, owned by the IT Apps / Java Dev Team  DB Server Tier: DB Servers, owned by IT Ops & Apps Dev Team  These are the organizational key stakeholder departments
  7. 7. Middleware Platform Architecture on vSphere  High uptime, scalable, and dynamic enterprise Java applications  Application services: load balancers as VMs, web servers, Java application servers, and DB servers  Running on VMware vSphere shared infrastructure services (shared, always-on infrastructure) with capacity on demand and dynamic high availability
  8. 8. Java Platforms Design and Sizing 8
  9. 9. Design and Sizing of Java Platforms on vSphere  Step 1 – Establish Load Profile: from production logs/monitoring reports measure Concurrent Users, Requests Per Second, Peak Response Time, and Average Response Time; establish your response time SLA  Step 2 – Establish Benchmark: iterate through the benchmark test until you are satisfied with the load profile metrics and your intended SLA; after each benchmark iteration you may have to adjust the application configuration; adjust the vSphere environment to scale out/up in order to achieve your desired number of VMs, number of vCPUs, and RAM configuration  Step 3 – Size Production Environment: the size of the production environment will have been established in Step 2, hence either roll out the environment from Step 2 or build a new one based on the numbers established
  10. 10. Step 2 – Establish Benchmark  Scale Up Test (establish the building block VM and vertical scalability): establish how many JVMs fit on a VM, and how large the VM should be in terms of vCPU and memory  Scale Out Test (determine how many VMs and establish horizontal scalability): how many VMs do you need to meet your response time SLAs without exceeding 70%-80% CPU saturation? Establish your horizontal scalability factor before bottlenecks appear in your application  If the SLA is not met, investigate the bottlenecked layer (network, storage, application configuration, or vSphere); if it is a building block app/VM configuration problem, adjust and iterate; if the scale-out bottleneck is removed, iterate the scale out test; once the SLA is met, the test is complete
  11. 11. Design and Sizing HotSpot JVMs on vSphere  VM Memory = Guest OS Memory + JVM Memory  JVM Memory comprises the JVM Max Heap (-Xmx, the “Heap”, with Initial Heap -Xms), Perm Gen (-XX:MaxPermSize), Java thread stacks (-Xss per thread), and other memory: direct and non-direct native memory “off-the-heap”
  12. 12. Design and Sizing of HotSpot JVMs on vSphere VM Memory = Guest OS Memory + JVM Memory JVM Memory = JVM Max Heap (-Xmx value) + JVM Perm Size (-XX:MaxPermSize) + NumberOfConcurrentThreads * (-Xss) + “other Mem”  Guest OS Memory approx 1G (depends on OS/other processes)  Perm Size is an area additional to the –Xmx (Max Heap) value and is not GC-ed because it contains class-level information.  “other mem” is additional mem required for NIO buffers, JIT code cache, classloaders, Socket Buffers (receive/send), JNI, GC internal info  If you have multiple JVMs (N JVMs) on a VM then: • VM Memory = Guest OS memory + N * JVM Memory 12
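As a quick worked example of the N-JVMs case above (the figures are the deck’s own Category-1 numbers of roughly 1GB for the guest OS and about 4.5GB per Java process; the choice of 3 JVMs per VM is purely illustrative):

  VM Memory ≈ 1GB + 3 * 4.5GB ≈ 14.5GB

so the VM’s memory reservation would be set to roughly 14.5GB for three such JVMs.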
  13. 13. Sizing Example – set memory reservation to 5088m  VM Memory (5088m) = Guest OS Memory (approx. 500m used by OS) + JVM Memory (4588m)  JVM Memory (4588m) comprises JVM Max Heap -Xmx (4096m), Perm Gen -XX:MaxPermSize (256m), Java stacks (-Xss 256k per thread * 100 threads), and other mem (=217m)  Initial heap -Xms is set equal to -Xmx (4096m)
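A launch line consistent with this sizing example might look as follows (a sketch only: the flags are the standard HotSpot options used elsewhere in this deck, while the application jar name myapp.jar is a hypothetical placeholder):

  java -Xms4096m -Xmx4096m -XX:MaxPermSize=256m -Xss256k -jar myapp.jar

The 5088m memory reservation itself is set on the VM in vSphere, not on the java command line.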
  14. 14. Larger JVMs for In-Memory Data Management Systems – set memory reservation to 34g  VM Memory for SQLFire (34g) = Guest OS Memory (0.5-1g used by OS) + JVM Memory for SQLFire (32g)  JVM Memory (32g) comprises JVM Max Heap -Xmx (30g), Perm Gen -XX:MaxPermSize (0.5g), Java stacks (-Xss 1M per thread * 500 threads), and other mem (=1g)  Initial heap -Xms is set equal to -Xmx (30g)
  15. 15. NUMA Local Memory with Overhead Adjustment  NUMA local memory per node ≈ [Physical RAM on vSphere host × (1 − Number of VMs on vSphere host × 1% RAM overhead) − approximately 1GB of vSphere RAM overhead] / Number of Sockets on vSphere host (see the worked example on the next slide)
  16. 16. Middleware ESXi Cluster  Example host: 96GB RAM, 2 sockets, 8 pCPU per socket  Memory available for all VMs = 96*0.98 - 1GB => approx. 94GB  Per-NUMA-node memory => 94/2 => 47GB  Middleware component VMs sized at 47GB RAM with 8 vCPU  Locator/heartbeat processes for the middleware: DO NOT vMotion
  17. 17. ESX Scheduler  With 96GB RAM on the server, each NUMA node has 94/2 => 47GB  Size VMs at 8 vCPU and less than 47GB RAM each  If a VM is sized greater than 47GB or 8 CPUs, NUMA interleaving occurs and can cause a 30% drop in memory throughput performance
  18. 18. ESXi Scheduler  On a server with 128GB RAM, a 4 vCPU VM with 40GB RAM is split by ESXi into 2 NUMA clients (available in ESX 4.1), effectively 2 vCPU and less than 20GB RAM on each NUMA client
  19. 19. Java Platform Categories – Category 1  Smaller JVMs: < 4GB heap, about 4.5GB per Java process, and about 5GB per VM  vSphere hosts with < 96GB RAM are more suitable, because by the time you stack many JVM instances you are likely to reach the CPU boundary before you can consume all of the RAM; for example, if you instead chose a vSphere host with 256GB RAM, then 256/4.5GB => 57 JVMs, which would clearly hit the CPU boundary  Use 4-socket servers to get more cores  Multiple JVMs per VM  Use resource pools (e.g. Gold for LOB 1, Silver for LOB 2) to manage LOBs  Category 1: 100s to 1000s of JVMs
  20. 20. Most Common Sizing and Configuration Question  Option 1 – Scale out VM and JVM (best): e.g. JVM-1 (2GB) and JVM-2 (2GB), each on its own 2 vCPU VM  Option 2 – Scale up JVM heap size (2nd best): e.g. JVM-1A (4GB) and JVM-2A (4GB), each on a 2 vCPU VM  Option 3 – Scale up VM and JVM (3rd best): e.g. JVM-1/JVM-3 (2GB each) and JVM-2/JVM-4 (2GB each) stacked onto the same VMs
  21. 21. What Else to Consider When Sizing?  Mixed workloads: a job scheduler vs. a web app require different GC tuning  Job schedulers care about throughput  Web apps care about minimizing latency and response time  You can’t have both reduced response time and increased throughput without compromise; it is best to separate the concerns (vertically or horizontally across JVMs) for optimal tuning, as in the sketch below
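As an illustrative sketch of separating the two concerns (the collector choices follow the GC policy discussion later in the deck; heap sizes and jar names are hypothetical):

  Job scheduler JVM, tuned for throughput:
  java -Xms4g -Xmx4g -XX:+UseParallelGC -XX:+UseParallelOldGC -jar batch-jobs.jar

  Web application JVM, tuned for latency:
  java -Xms4g -Xmx4g -Xmn1350m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -jar webapp.jar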
  22. 22. Java Platform Categories – Category 2  Fewer JVMs, < 20  Very large JVMs, 32GB to 128GB  Always deploy 1 VM per NUMA node and size it to fit perfectly  1 JVM per VM  Choose 2-socket vSphere hosts to get larger NUMA nodes, and install ample memory, 128GB to 512GB  Examples are in-memory databases like SQLFire and GemFire  Apply latency-sensitive best practices: disable interrupt coalescing on pNIC and vNIC, use a dedicated vSphere cluster  Category 2: a dozen very large JVMs
  23. 23. Java Platform Categories – Category 3  Category 3: Category-1 workloads (managed in resource pools, e.g. Gold LOB 1 and Silver LOB 2) accessing data from Category-2 in-memory data systems
  24. 24. Java Platforms Performance 24
  25. 25. Performance Perspective See the Performance of Enterprise Java Applications on VMware vSphere 4.1 and SpringSource tc Server at http://www.vmware.com/resources/techresources/10158 . 25
  26. 26. Performance Perspective See the Performance of Enterprise Java Applications on VMware vSphere 4.1 and SpringSource tc Server at http://www.vmware.com/resources/techresources/10158 (chart plots % CPU and R/T against an 80% CPU threshold)
  27. 27. SQLFire vs. Traditional RDBMS  SQLFire scaled 4x compared to RDBMS  Response times of SQLFire are 5x to 30x faster than RDBMS  Response times on SQLFire are more stable and constant with increased load  RDBMS response times increase with increased load 27
  28. 28. Load Testing SpringTrader Using Client-Server Topology  Application tier: 4 Application Services VMs running the SpringTrader Application Service, plus SpringTrader Integration Services (integration patterns)  Data tier: redundant locators with SQLFire Member 1 and SQLFire Member 2
  29. 29. vFabric Reference Architecture Scalability Test  Maximum passing users and scaling with this topology: 10,400 user sessions and 3,300 transactions per second  (Chart plots number of users and scaling relative to 1 App Services VM against the number of Application Services VMs, 1 to 4)
  30. 30. 10k Users Load Test Response Time  Operation 90th-percentile response time with four Application Services VMs: approximately 0.25 seconds at 10,400 user sessions  (Chart plots response time in seconds against number of users, 0 to 12,000, for the operations HomePage, Login, DashboardTab, PortfolioTab, TradeTab, GetHoldingsPage, GetOrdersPage, SellOrder, GetQuote, BuyOrder, Register, Logout, and MarketSummary)
  31. 31. Java Platforms Best Practices and Tuning 31
  32. 32. Most Common VM Size for Java Workloads  2 vCPU VM with 1 JVM, for tier-1 production workloads  Maintain this ratio as you scale out or scale-up, i.e. 1 JVM : 2vCPU  Scale out preferred over Scale-up, but both can work  You can diverge from this ratio for less critical workloads 2 vCPU VM 1 JVM (-Xmx 4096m) Approx 5GB RAM Reservation 32
  33. 33. However for Large JVMs + CMS  Start with a 4+ vCPU VM with 1 JVM for tier-1 in-memory data management production workloads  Likely increase the JVM size, instead of launching a second JVM instance  A 4+ vCPU VM allows ParallelGCThreads to be allocated 50% of the available vCPUs, i.e. 2+ GC threads  The ability to increase ParallelGCThreads is critical to YoungGen scalability for large JVMs  ParallelGCThreads should be allocated 50% of the vCPUs available to the JVM and no more; you want to ensure other vCPUs remain available for application transactions  For large JVMs: 4+ vCPU VM, 1 JVM (8-128GB)
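For instance, on a 4 vCPU VM the 50% guideline gives -XX:ParallelGCThreads=2 (a sketch only; the heap figures are placeholders within the 8-128GB range described above):

  java -Xms16g -Xmx16g -Xmn5g -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ParallelGCThreads=2 ...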
  34. 34. Which GC? ESX doesn’t care which GC you select, because of the degree of independence of Java to OS and OS to Hypervisor 34
  35. 35. GC Policy Types  Serial GC: mark, sweep and compact algorithm; both minor and full GC are stop-the-world (the application is stopped while GC executes); not a very scalable algorithm; suited for smaller <200MB JVMs such as client machines  Throughput GC (Parallel GC): similar to Serial GC, but uses multiple worker threads in parallel to increase throughput; both young and old generation collections are multi-threaded, but still stop-the-world; the number of threads is set by -XX:ParallelGCThreads=<nThreads>; NOT concurrent, meaning when the GC worker threads run they pause your application threads; if this is a problem, move to CMS, where GC threads are concurrent
  36. 36. GC Policy Types  Concurrent GC (CMS, Concurrent Mark and Sweep): no compaction; concurrent means that while GC is running it does not pause your application threads, which is the key difference from throughput/parallel GC; suited to applications that care more about response time than throughput; CMS uses more heap compared to throughput/Parallel GC; CMS works on the old generation concurrently, but the young generation is collected using ParNewGC, a version of the throughput collector; it has multiple phases: initial mark (short pause), concurrent mark (no pause), pre-cleaning (no pause), re-mark (short pause), and concurrent sweep (no pause)  G1: only in Java 7 and mostly experimental; roughly equivalent to CMS plus compaction
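For reference, the standard HotSpot flags that select the collectors described in the two tables above:

  Serial GC:            -XX:+UseSerialGC
  Throughput/Parallel:  -XX:+UseParallelGC (plus -XX:+UseParallelOldGC for a parallel old generation), threads set via -XX:ParallelGCThreads=<nThreads>
  CMS:                  -XX:+UseConcMarkSweepGC (typically with -XX:+UseParNewGC for the young generation)
  G1:                   -XX:+UseG1GC (Java 7)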
  37. 37. Tuning GC – Art Meets Science!  You either tune for throughput or for latency, one at the cost of the other  Reduce latency (Web): improved R/T, reduced latency impact, slightly reduced throughput  Increase throughput (Job): improved throughput, longer R/T, increased latency impact
  38. 38. Parallel Young Gen and CMS Old Gen  Young generation (-Xmn, with survivor spaces S0 and S1) is collected by minor GC, run in parallel using -XX:+UseParNewGC and -XX:ParallelGCThreads  Old generation (Xmx minus Xmn) is collected by major GC, run concurrently with the application threads using -XX:+UseConcMarkSweepGC (concurrent mark and sweep GC threads alongside application threads)
  39. 39. What to measure when tuning GC?  Young gen: minor GC frequency and minor GC duration  Old gen: major GC frequency and major GC duration
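One way to capture these four measurements is verbose GC logging (standard HotSpot 6/7 flags; the log file path is an assumption for illustration):

  java ... -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/var/log/app/gc.log

Each minor and major collection is then logged with a timestamp and pause time, from which frequency and duration can be read directly.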
  40. 40. Why are Duration and Frequency of GC Important?  We want to ensure regular application user threads get a chance to execute in between GC activity  Young gen: minor GC frequency and duration  Old gen: major GC frequency and duration
  41. 41. Further GC Tuning Considerations  General approach to investigating latency: determine minor GC duration, minor GC frequency, worst full GC duration, and worst full GC frequency  Minor GC measurements drive young generation refinements  Full GC measurements drive old generation refinements  The decision to switch to -XX:+UseConcMarkSweepGC: if the throughput collector’s worst-case full GC duration/frequency is not tolerable compared with the application’s latency requirements
  42. 42. High Level GC Tuning Recipe  Step A – Young Gen Tuning: measure minor GC duration and frequency; adjust -Xmn (young gen size) and/or ParallelGCThreads  Step B – Old Gen Tuning: measure major GC duration and frequency; adjust heap space -Xmx, or adjust CMSInitiatingOccupancyFraction  Step C – Survivor Spaces Tuning: adjust -Xmn and/or the survivor spaces
  43. 43. Impact of Increasing Young Generation (-Xmn)  Young gen: less frequent minor GC but longer duration; you can mitigate the increase in minor GC duration by increasing ParallelGCThreads  Old gen: potentially increased major GC frequency; you can mitigate the increase in major GC frequency by increasing -Xmx
  44. 44. Impact of Reducing Young Generation (-Xmn)  Young gen: more frequent minor GC but shorter duration  Old gen: potentially increased major GC duration; you can mitigate the increase in major GC duration by decreasing -Xmx
  45. 45. Survivor Spaces  Survivor Space Size = -Xmn / (-XX:SurvivorRatio + 2 ) • Decrease Survivor Ratio causes an increase in Survivor Space Size • Increase in Survivor Space Size causes Eden space to be reduced hence • MinorGC frequency will increase • More frequent MinorGC causes Objects to age quicker • Use –XX:+PrintTenuringDistribution to measure how effectively objects age in survivor spaces. 45
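Worked example using figures that appear elsewhere in this deck (-Xmn of 1350m from the heap-sizing slide and -XX:SurvivorRatio=8 from the CMS example):

  Survivor Space Size = 1350m / (8 + 2) = 135m per survivor space
  Eden = 1350m - (2 * 135m) = 1080m

Dropping the SurvivorRatio to 6 would raise each survivor space to 1350/(6+2) ≈ 169m and shrink Eden to about 1012m, which is why minor GC then runs more frequently.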
  46. 46. Sizing The Java Heap  JVM Max Heap -Xmx (4096m) = Old Generation (2746m) + Young Generation -Xmn (1350m)  Old generation: slower full GC  Young generation (Eden space, Survivor Space 1, Survivor Space 2): quick minor GC
  47. 47. Decrease Survivor Spaces by Increasing Survivor Ratio  Increasing the survivor ratio reduces the survivor spaces (S0/S1) and enlarges Eden, hence minor GC frequency is reduced, with a slight increase in minor GC duration
  48. 48. Increasing Survivor Ratio Impact on Old Generation  Increased tenuring/promotion into the old generation, hence increased major GC activity
  49. 49. Why are Duration and Frequency of GC Important?  We want to ensure regular application user threads get a chance to execute in between GC activity  Young gen: minor GC frequency and duration  Old gen: major GC frequency and duration
  50. 50. CMS Collector Example  java -Xms30g -Xmx30g -Xmn10g -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+ScavengeBeforeFullGC -XX:TargetSurvivorRatio=80 -XX:SurvivorRatio=8 -XX:+UseBiasedLocking -XX:MaxTenuringThreshold=15 -XX:ParallelGCThreads=4 -XX:+UseCompressedOops -XX:+OptimizeStringConcat -XX:+UseCompressedStrings -XX:+UseStringCache  This JVM configuration scales up and down effectively  -Xmx equals -Xms, and -Xmn is 33% of -Xmx  -XX:ParallelGCThreads=<n>, where n is a minimum of 2 but less than 50% of the vCPUs available to the JVM. NOTE: ideally use it on 4+ vCPU VMs; if used on 2 vCPU VMs, drop the -XX:ParallelGCThreads option and let Java select it
  51. 51. IBM JVM – GC Choice (-Xgcpolicy options)  -Xgcpolicy:optthruput (default): performs the mark and sweep operations during garbage collection while the application is paused, to maximize application throughput; mostly not suitable for multi-CPU machines. Example: apps that demand high throughput but are not very sensitive to the occasional long garbage collection pause  -Xgcpolicy:optavgpause: performs the mark and sweep concurrently while the application is running to minimize pause times; this provides the best application response times; there is still a stop-the-world GC, but the pause is significantly shorter; after GC, the app threads help out and sweep objects (concurrent sweep). Example: apps sensitive to long latencies, transaction-based systems where response times are expected to be stable  -Xgcpolicy:gencon: treats short-lived and long-lived objects differently to provide a combination of lower pause times and high application throughput; before the heap fills up, each app thread helps out and marks objects (concurrent mark). Example: latency-sensitive apps where objects in the transaction don’t survive beyond the transaction commit
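A launch line selecting the generational concurrent policy on the IBM JVM might look like the following (a sketch; the heap size and jar name are placeholders, and the -Xgcpolicy values are exactly those named in the table above):

  java -Xms8g -Xmx8g -Xgcpolicy:gencon -verbose:gc -jar webapp.jar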
  52. 52. Demo 52
  53. 53. Load Testing SpringTrader Using Client-Server Topology jConsole SpringTrader jMeter 53
  54. 54. Results  Executing JMeter scripts 10 times, 5,000 samples per JMeter run (10 runs, then Average, then percentage improvement of 164M over 10M)
    1st scenario
      10M Latency:       61  32  20  18  13  16  18  17  20  16 | Average 23.1
      10M Throughput:   270 411 491 562 541 540 507 537 486 511 | Average 485.6
      164M Latency:      45  10   7   5   6   6   6   6   7   6 | Average 10.4  | 54.978355%
      164M Throughput:  345 521 597 605 541 548 530 544 558 545 | Average 533.4 | 9.84349259%
    2nd scenario
      164M Latency:      64  12   6   5   6   5   5   6   5   6 | Average 12    | 49.790795%
      164M Throughput:  248 570 612 626 583 614 632 617 590 595 | Average 568.7 | 13.0166932%
      10M Latency:       56  33  22  18  18  18  17  17  19  21 | Average 23.9
      10M Throughput:   297 416 515 526 554 560 555 559 544 506 | Average 503.2
  55. 55. Results  55% better R/T and 10% better throughput with 164M vs. 10M  (Charts plot 10M latency, 164M latency, 10M throughput, and 164M throughput across the 10 JMeter runs)
  56. 56. Middleware on VMware – Best Practices Enterprise Java Applications on VMware Best Practices Guide http://www.vmware.com/resources/techresources/1087 Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs http://www.vmware.com/resources/techresources/10220 vFabric SQLFire Best Practices Guide http://www.vmware.com/resources/techresources/10327 vFabric Reference Architecture http://tinyurl.com/cjkvftt 56
  57. 57. Middleware on VMware – Best Practices Summary  Follow the design and sizing examples we discussed thus far  Set an appropriate memory reservation  Leave HT enabled; size based on vCPU = 1.25 pCPU if needed (worked example below)  RHEL6 and SLES 11 SP1 have a tickless kernel that does not rely on a high-frequency interrupt-based timer, and is therefore much friendlier to virtualized latency-sensitive workloads  Do not overcommit memory  Locator/heartbeat processes should not be vMotion® migrated, as that could otherwise lead to network split-brain problems  vMotion over 10Gbps when doing scheduled maintenance  Use affinity and anti-affinity rules to avoid placing redundant copies on the same VMware ESX®/ESXi host
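For example, with the 2-socket, 8-pCPU-per-socket host used in the earlier sizing examples (16 pCPU), the vCPU = 1.25 pCPU guideline allows up to 16 * 1.25 = 20 vCPUs to be allocated across the VMs on that host when hyper-threading is left enabled.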
  58. 58. Middleware on VMware – Best Practices  Disable NIC interrupt coalescing on the physical and virtual NIC; this is extremely helpful in reducing latency for latency-sensitive virtual machines  Disable virtual interrupt coalescing for VMXNET3; note it can lead to some performance penalties for other virtual machines on the ESXi host, as well as higher CPU utilization to deal with the higher rate of interrupts from the physical NIC  This implies it is best to use a dedicated ESX cluster for middleware platforms: all hosts are configured the same way for latency sensitivity, and this ensures non-middleware workloads, such as other enterprise applications, are not negatively impacted  This is applicable to category 2 workloads
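A sketch of the virtual-NIC side of this setting (the .vmx option below is the one described in the latency-sensitivity tuning paper referenced earlier; the NIC index ethernet0 is an assumption about your VM’s configuration):

  ethernet0.coalescingScheme = "disabled"

The physical-NIC side is driver specific and is applied through the NIC driver’s interrupt-moderation parameters on the ESXi host.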
  59. 59. Middleware on VMware – Benefits  Flexibility to change compute resources, VM sizes, and add more hosts  Ability to apply hardware and OS patches while minimizing downtime  Create a more manageable system through reduced middleware sprawl  Ability to tune the entire stack within one platform  Ability to monitor the entire stack within one platform  Ability to handle seasonal workloads: commit resources when they are needed and then remove them when not needed
  60. 60. Customer Success Stories 60
  61. 61. NewEdge  Virtualized GemFire workload  Multiple geographic active-active datacenters  Multiple terabytes of data kept in memory  1000s of transactions per second  Multiple vSphere clusters  Each cluster has 4 vSphere hosts and 8 large 98GB+ JVMs http://www.vmware.com/files/pdf/customers/VMware-Newedge-12Q4-EN-Case-Study.pdf
  62. 62. Cardinal Health Virtualization Journey
    2005 – 2008 (Centralized IT Shared Service):  Virtual DC: Consolidation, <40% virtual, <2,000 VMs, <2,355 physical  HW: Data Center Optimization (30 DCs to 2 DCs), Power Remediation, P2Vs on refresh, Transition to Blades, <10% utilization, <10:1 VM/physical  SW: Low Criticality Systems, 8x5 applications
    2009 – 2011 (Capital Intensive, High Response):  Virtual DC: Internal cloud, >58% virtual, >3,852 VMs, <3,049 physical  HW: HW Commoditization, 15% utilization, 30:1 VM/physical  SW: Business Critical Systems, SAP ~382, WebSphere ~290
    2012 – 2015 (Variable Cost Subscription Services):  Virtual DC: Cloud Resources, >90% virtual, >8,000 VMs, <800 physical; Optimizing DCs with internal disaster recovery and metered service offerings (SaaS, PaaS, IaaS)  HW: Shrinking HW Footprint, >50% utilization, >60:1 VM/physical  SW: Heavy Lifting Systems, database servers
  63. 63. Virtualization – Why Virtualize WebSphere on VMware  DC strategy alignment: pooled resources capacity (~15% utilization), elasticity for changing workloads, Unix to Linux, disaster recovery  Simplification and manageability: high availability for thousands of instances instead of thousands of high availability solutions, network & system management in the DMZ  Five-year cost savings ~ $6 million: hardware savings ~ $660K, WAS licensing ~ $862K, Unix to Linux ~ $3.7M, DMZ ports ~ >$1M
  64. 64. Thank you! Any questions?  Emad Benjamin, ebenjamin@vmware.com  You can get the book here: https://www.createspace.com/3632131
  65. 65. Second Book  Emad Benjamin, ebenjamin@vmware.com  Preview chapter available at VMworld bookstore You can get the book here: Safari: http://tinyurl.com/lj8dtjr Later on Amazon  http://tinyurl.com/kez9trj 65
  66. 66. Why have Java developers chosen Spring?  J(2)EE usability  Deployment flexibility  Powerful service abstractions  Application portability  A testable, lightweight programming model built on a core model of DI, AOP, and TX
  67. 67. Spring  Deploy to cloud or on premise  Big, fast, flexible data (GemFire)  Core model  Web, integration, batch
  68. 68. Spring Stack Spring Data Spring for Apache Hadoop Redis HBase GemFire JPA QueryDSL HDFS MapReduce Hive MongoDB Neo4j Solr JDBC Splunk Pig Cascading SI/Batch Google App Eng. AWS Beanstalk Spring Batch Spring AMQP Spring XD Spring Web Flow Spring Web Services Spring Social Spring Integration Twitter Heroku LinkedIn Facebook Cloud Foundry OpenShift Spring Security Spring Security OAuth Spring Framework DI AOP TX JMS JDBC ORM OXM Scheduling MVC REST HATEOAS JMX Testing Caching Profiles Expression JTA JDBC 4.1 JMX 1.0+ Tomcat 5+ GlassFish 2.1+ WebLogic 9+ WebSphere 6.1+ Java EE 1.4+/SE5+ JPA 2.0 JSF 2.0 JSR-250 JSR-330 JSR-303
  69. 69. Learn More. Stay Connected.  Twitter: twitter.com/springsource  YouTube: youtube.com/user/SpringSourceDev  Google +: plus.google.com/+springframework  LinkedIn: springsource.org/linkedin  Facebook: facebook.com/groups/springsource
