As a starting point, it is worth noting that cloud is replacing a very large structure that was developed over many years. CIOs built their IT capacity from the ground up, usually including buildings and facilities such as power and cooling. Access control to the physical server space, and its auditing, was also a task under their control. From there, the stack built up from hardware infrastructure, including servers, switches, routers and disk, to telecommunications equipment for WAN and LAN connectivity. What a large task! And predicting actual demand so that enough IT resources were available, without spending too much, was the balancing trick that IT had to pull off. That balancing trick relied on the underlying performance of the infrastructure. As we move to the cloud, many of these functions are now available as an “on-demand” service. And while the market continues to evolve across IaaS, PaaS and SaaS, the underlying performance of the infrastructure remains critical. In fact, in the cloud world, with all services metered, the cost a user pays is directly related to how much work can be accomplished within a time period.
Savvis is an IT outsourcing provider delivering visionary enterprise-class cloud and IT solutions and proactive service, enabling enterprises to gain a competitive advantage through IT.
• Nearly 2,500 unique clients
• Deep expertise in technical operations, client support, engineering and consulting
• $1.039B USD* revenue and growing
• Infrastructure extends to 45 countries
• More than 50 global data centers with ~2 million square feet of raised floor space
• Ironclad security and compliance
Savvis began the project with CloudHarmony in March of 2012, and the results were completed several weeks later. Savvis’s only input was to select the vendors we would like the benchmark run against, not the specific compute instances or configurations; those were all up to the vendor. In the official benchmark results, CloudHarmony split the compute instances into two groups, Large and Small (see the last bullet on the slide). There were no games or benefit to Savvis in splitting the results into three groups based on memory configurations. The splits were similar to the 2011 results and also ensured that just about every vendor had at least one instance in each grouping. I felt the three groups, Small, Medium and Large, provide more detail relative to sizing. Many large instances can now be found with 60GB or more of memory, so the additional middle group helps target instance sizes for the viewer.
These specific points discuss the methodology that CloudHarmony uses. Savvis did not have input into any of the methodology, tests or results; we only selected who we wanted tested against VPDC and which instances of VPDC to test. The goal is for the viewer to walk away believing that there is a solid methodology behind the benchmarks run by CloudHarmony.
The goal is to spend a minute on this slide so the viewer walks away understanding that the tests are fair and complete. The purpose of the baseline configuration is to measure all results relative to it. If a baseline score was 50, then a compute instance with an aggregate result of 100 would be considered twice as fast relative to the baseline. This is also the time to point out that all the results are calculated into 5 single measurements shown on the “unit” line. The rest of the presentation focuses on the results of the CCU, IOP, MIOP, Encode and Lang metrics. For this benchmark, the tests add up to 44 (19+7+7+7+4), so 44 tests were run, each multiple times on each instance until the runs fell within a standard deviation threshold.
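The baseline-relative scoring described above is simple enough to sketch. This is an illustrative helper, not CloudHarmony's actual code; the baseline is pinned at 100 and every instance is scored as a ratio against it:

```python
# Illustrative sketch of baseline-relative scoring (not CloudHarmony's
# actual implementation). The baseline instance is pinned at 100; every
# other instance is scored proportionally against it.
def relative_score(instance_result, baseline_result):
    """Return a score where 100 means 'equal to the baseline' and
    200 means 'twice the baseline's aggregate result'."""
    return 100.0 * instance_result / baseline_result
```

So an instance with an aggregate result of 100 against a baseline result of 50 scores 200, i.e. twice as fast, matching the example above.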
The CPU performance metric (CCU) is based on the Amazon ECU (EC2 Compute Unit). Amazon uses the ECU to define how much CPU resource is allocated to different server configurations in their cloud. These configurations range in size from their smallest 1.7 GB m1.small with 1 ECU to their high-end 22 GB cc1.4xlarge with 33.5 ECUs. The CCU metric is based on a compilation of 19 different CPU benchmarks. Approximately 50% of the CCU is based on multi-core-aware benchmarks (benchmarks that take advantage of multiple CPU cores if they are present). Because EC2 is the baseline for this benchmark metric, there is a direct correlation between CCU and ECU scores for each of the EC2 configurations. To calculate the CCU, CloudHarmony selected 19 CPU benchmarks that showed clear upward scaling from smaller to larger sized EC2 instances. They used the average of the highest scores in the US East region to produce a 100% baseline for each of these 19 benchmarks. Each instance is then assigned a relative score as a ratio of that instance's score to the highest average score. The relative scores for all 19 benchmarks are then aggregated to produce a CPU comparison score (CCS) for each instance. In calculating the CCS, some benchmarks are weighted higher than others. For example, the Geekbench and Unixbench results are weighted 200 points for the baseline, while opstone and john-the-ripper are weighted 33 points each (the remaining benchmarks are all weighted 100). They then use these results to create a CCU evaluation table where the left column in the table is the number of CCUs, and the right column is the average CCS corresponding to that CCU value. This table was populated with 5 rows, one for each of the EC2 instance sizes m1.small, m1.large, m2.xlarge, m2.2xlarge and m2.4xlarge, using an average of the CCS for those instances in all 4 EC2 regions. Once the comparison table was populated, they used the same algorithm to compute CCS values for every cloud server benchmarked.
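The weighted aggregation into a CCS can be sketched as follows. This is a hedged illustration: the weights follow the values cited above (Geekbench and Unixbench at 200, opstone and john-the-ripper at 33, the rest at 100), but the benchmark names and relative scores in the usage example are placeholders:

```python
# Illustrative sketch of the weighted CCS aggregation described above.
# Each relative score is a benchmark result expressed as a fraction of
# the baseline's 100% score. Weights follow the values cited in the text.
WEIGHTS = {"geekbench": 200, "unixbench": 200,
           "opstone": 33, "john-the-ripper": 33}  # all others weigh 100

def ccs(relative_scores):
    """Weighted average of per-benchmark relative scores (fractions),
    scaled so a server matching the baseline everywhere scores 100."""
    total = weighted = 0.0
    for bench, rel in relative_scores.items():
        w = WEIGHTS.get(bench, 100)
        total += w
        weighted += w * rel
    return 100.0 * weighted / total
```

For example, with a (hypothetical) geekbench relative score of 1.0 and a sudokut relative score of 0.5, the higher geekbench weight pulls the CCS up toward the geekbench result.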
To translate from CCS to CCU, they determine the closest matching row(s) to the CCS for a given cloud server using the CCS column, and then read off a CCU value from the CCU column (if the CCS falls between 2 rows, a proportional average CCU value is calculated).
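The row-matching and proportional interpolation just described can be sketched like this. The table values below are hypothetical placeholders, not CloudHarmony's actual calibration data:

```python
# Hypothetical sketch of the CCS-to-CCU lookup described above. Each row
# maps a CCU value to the average CCS observed for an EC2 instance size;
# the numbers here are illustrative placeholders only.
CCU_TABLE = [
    (1.0, 25.0),    # e.g. m1.small-class CCS
    (4.0, 55.0),    # e.g. m1.large-class CCS
    (6.5, 70.0),
    (13.0, 85.0),
    (26.0, 100.0),  # e.g. m2.4xlarge-class CCS
]

def ccs_to_ccu(ccs):
    """Find the closest matching rows for a CCS and interpolate a CCU
    proportionally when the CCS falls between two rows."""
    if ccs <= CCU_TABLE[0][1]:
        return CCU_TABLE[0][0]
    if ccs >= CCU_TABLE[-1][1]:
        return CCU_TABLE[-1][0]
    for (lo_ccu, lo_ccs), (hi_ccu, hi_ccs) in zip(CCU_TABLE, CCU_TABLE[1:]):
        if lo_ccs <= ccs <= hi_ccs:
            frac = (ccs - lo_ccs) / (hi_ccs - lo_ccs)
            return lo_ccu + frac * (hi_ccu - lo_ccu)
```

With these placeholder rows, a CCS of 62.5 falls halfway between the 4.0-CCU and 6.5-CCU rows and would interpolate to 5.25 CCUs.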
The idea here is to look at just the performance of a single core: the raw power of the processors in the compute instance under test. Based on the compute instances available for testing, there was not a single-core instance for IBM or Rackspace. While their best score for the smallest number of cores is shown in the graph, those bars are gray in color to point out that they were not single-core tests.
A viewer may ask, does this really matter? This slide points out some relevant data to suggest that small performance differences do matter. Take for example the Google quote: a 400 millisecond delay in response, faster than the blink of an eye, produced roughly a half-percentage-point reduction in searches per user. Or take the Shopzilla example, where a 5 second reduction in page load times helped the company increase revenue by up to double digits! So performance does affect revenue!
Disk IO performance combines the results of 7 disk IO benchmarks to measure how fast and efficiently compute instances can read from and write to disk. Disk IO is an important performance characteristic and a common performance bottleneck for many workloads, including database and web servers. With respect to cloud compute services, there are generally two block storage architectures:

Local storage – The first method for compute storage utilizes shared local storage on the physical host server. This method is generally inexpensive and easy to deploy, and provides fast throughput because disk IO does not cross the network. However, the downside to this architecture is that compute instances cannot be easily migrated from one physical host to another. If any part of the physical host fails (disks, power supply, motherboard, etc.), all compute instances running on that host will fail and cannot be restored until the host system is repaired. GoGrid and the Rackspace cloud utilize this storage architecture exclusively, while EC2 utilizes both local and off-instance storage.

Off-instance storage – The second method for compute storage utilizes a shared, external storage system such as a SAN (Storage Area Network). In this model, compute instance storage is managed externally to the physical host system it runs on. The advantage of this approach is that it allows compute instances to be migrated easily between physical hosts, and it generally provides better reliability and high availability as long as the storage system remains operational. SANs typically utilize higher-end and redundant components such as RAID, power supplies, controllers, etc., so they are also less prone to failure than a typical local storage system. The disadvantages of this method are generally higher cost, greater complexity, and occasionally lower performance due to network overhead.
Savvis, EC2, IBM, BlueLock, Terremark, OpSource and SoftLayer utilize this storage architecture. It should be pointed out that in addition to OpSource winning the Large and Medium class tests with off-instance storage, GoGrid won the Small class with local storage. Also note that in our comparisons between the 5 vendors highlighted in this deck, Savvis was beaten by IBM in the Small class; the results (98 to 91) are noted on the slide.
Just as before, slight performance differences can change how customers feel about the business. The data from AutoAnything shows that with a 50% improvement in web performance, from 14 seconds down to 7, gains across all kinds of customer satisfaction metrics can be achieved. Performance matters because it can influence customer satisfaction!
This metric measures the performance of memory IO; in other words, how fast the CPU is able to read and write data to memory. The server we chose as the baseline is a physical server containing dual Intel E5504 quad-core 2.00 GHz processors and 48GB DDR3 ECC RAM. We assigned the baseline server a score of 100. All server configurations were assigned a score proportional to the performance of that server, where greater than 100 represents better results and less than 100 represents poorer results. For example, a VM with a score of 50 scored 50% lower than the baseline server overall, while a VM with a score of 125 scored 25% higher. See the slide “Benchmark Details”, where this information was presented prior to reviewing this results slide. To compute the score, the results from each of the 7 benchmarks on the baseline server are compared to the same benchmark results for each server configuration. The baseline server benchmark score represents 100% for each benchmark. If the server scores higher than the baseline it receives a score higher than 100% (based on how much higher the score is), and vice versa for a lower score. A point to make here, adding to the credibility, is the fact that AWS beat Savvis by 0.7 in the Large class; but looking at the details, they had twice the number of cores (16) and 7GB more RAM.
This metric is based on the performance of 4 common interpreted (full or byte-code) programming languages: Java, Ruby, Python and PHP. These are common languages used to develop server-side software applications. The server we chose as the baseline for this metric is a dual-processor Intel E5506 2.13 GHz (8 cores total) with 4 x 15K RPM SAS drives configured in hardware-managed RAID 1+0. This is a fairly high-end server with many cores and fast IO. We assigned this server a score of 100. All tested instances were assigned a score proportional to the performance of that server, where greater than 100 represents better results and less than 100 represents poorer results. For example, a VM with a score of 50 scored 50% lower than the baseline server overall, while a VM with a score of 125 scored 25% higher. To compute the score, the results from each of the 4 language benchmarks on the baseline server are compared to the same results from a tested server. If the VM scored higher than the baseline it receives a score higher than 100%, and vice versa for a lower score.
This metric measures how fast servers are capable of performing standard encoding and encryption operations. The server we chose as the baseline is a physical server containing dual Intel E5504 quad-core 2.00 GHz processors and 48GB DDR3 ECC RAM. We assigned the baseline server a score of 100. All server configurations were assigned a score proportional to the performance of that server, where greater than 100 represents better results and less than 100 represents poorer results. For example, a VM with a score of 50 scored 50% lower than the baseline server overall, while a VM with a score of 125 scored 25% higher. To compute the score, the results from each of the 7 benchmarks on the baseline server are compared to the same benchmark results for each server configuration. The baseline server benchmark score represents 100% for each benchmark. If the server scores higher than the baseline it receives a score higher than 100% (based on how much higher the score is), and vice versa for a lower score.
In the Cloud, Performance Matters
Jamie Tyler, Solutions Engineering Manager, Savvis EMEA
Today’s agenda
• IT challenges for the enterprise
• About Savvis and Virtual Private Data Centre (VPDC)
• CloudHarmony Benchmarking Report
• Summary & further information
Savvis Proprietary & Confidential
Enterprise IT Challenge – Cloud Deployment Models
Application portfolio (based on IDC’s Workload Category Taxonomy, 2008): Business Processing (ERP, OLTP, Batch, CRM), Collaborative & Web (Email, Workgroup, Web Serving & Streaming), Decision Support (Data Analysis & Mining, DW & BI), Shared Services (App Dev, Systems Mgt, File & Print)
Deployment models: Software-as-a-Service, Platform-as-a-Service, Infrastructure-as-a-Service
Infrastructure domain: Compute, Storage, LAN, Data Center, WAN
Pricing is Metered. Performance Matters
• Infrastructure: IP addresses, load balancers, firewalls, server class, virtualisation software, drive options
• Storage: per-GB costs, I/O fees above a specific rate, backup, snapshot, retention and vaulting
• Security: VPNs, threat management service, intrusion detection, log management, URL filtering
• Compute: hourly usage based on CPU, RAM, OS, location
• Networking: inbound/outbound Internet, network-to-network, IP addresses, hourly per-SLB fee per load balancer, bandwidth fees
• Support: business day email, optional phone support, additional 24/7, monitoring
Performance and Availability of the Cloud
Performance in the cloud is key – compute, memory, IO, network etc.
To differentiate services and service profiles, look at your cloud provider’s model for over-subscription – what is the guarantee of availability?
You need to consider:
• Absolute performance levels
• Sizing cloud in the context of noisy neighbours – can apps tolerate performance impact due to other consumers in the cloud?
• Service levels – infrastructure and network
Remember – you need a great performing cloud, but not at the expense of security
The Recognised Leader
The Gartner Magic Quadrant for Managed Hosting; the Gartner Magic Quadrant for Public Cloud Infrastructure as a Service
Gartner, Inc., Magic Quadrant for Managed Hosting, Lydia Leong, Ted Chamberlin, March 5, 2012. Gartner, Inc., Magic Quadrant for Public Cloud Infrastructure as a Service, Lydia Leong, Ted Chamberlin, December 8, 2011. Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This Magic Quadrant graphic was published by Gartner, Inc. as part of a larger research note and should be evaluated in the context of the entire report. The Gartner report is available upon request from Savvis.
Global Infrastructure – North America, Europe, Asia Pacific
Savvis is a global leader in cloud infrastructure and hosted IT solutions for enterprises
• 54 data centers globally by end of Q3 2012
• 2 million sq. ft. of raised floor
Map legend: Savvis Data Center, New Savvis Data Center, Core Node, Metro Ring, Point of Presence (as of 8/29/12)
Savvis Symphony VPDC
An enterprise-class Virtual Private Data Centre (VPDC) built on Cisco UCS and Nexus technology
• Pay-as-you-go, on-demand cloud – hourly based pricing model; flex your IT resources in line with business need
• Multi-tiered service profiles – environments suitable for Dev/Test, web hosting and mission-critical environments
• Savvis security & reliability – best-of-breed infrastructure (VMware, Cisco, Compellent); secure, enterprise-grade security components & principles
• Global availability – available in multiple Savvis Data Centres including UK, US East & West Coast, Singapore and Canada
Cloud Harmony Benchmark
CloudHarmony’s (www.cloudharmony.com) intent is to be the go-to source for independent, unbiased and objective performance metrics for cloud services
• CloudHarmony is not affiliated with, owned or funded by any cloud provider
• Conducts extensive benchmarking of public clouds
• Developed a suite including 44 measurements covering both synthetic (CPU, disk IO, memory IO) and real-world performance testing
• Benchmark tests grouped into 5 aggregated metrics
CloudHarmony Project – June 2012
Savvis asked CloudHarmony to run benchmarks including Symphony VPDC solutions against the following vendors:
• Amazon EC2 • Rackspace
• IBM SmartCloud Enterprise • BlueLock
• Terremark vCloud Express • GoGrid
• OpSource Cloud • SoftLayer
The results were grouped into three general server sizes based on memory:
• Small: less than or equal to 2GB RAM
• Medium: more than 2GB and less than or equal to 8GB RAM
• Large: greater than 8GB RAM
CloudHarmony Benchmark Methodology
Multiple weighted-average tests grouped to produce a single metric for:
• CPU • Interpreted language processing
• Memory • Encryption and encoding
• Disk I/O
To improve accuracy, tests are run multiple times until the standard deviation between each execution achieves a minimum threshold
A common baseline compute instance is utilized so that the results indicate how much better or worse the tested compute instance performed relative to the baseline
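The repeat-until-stable rule on this slide can be sketched as follows. This is an illustrative assumption-laden sketch, not CloudHarmony's harness: `run_benchmark`, the threshold and the run counts are placeholders chosen for demonstration.

```python
import statistics

# Illustrative sketch (not CloudHarmony's harness) of running a benchmark
# repeatedly until the relative standard deviation across runs falls
# below a chosen threshold, then reporting the mean.
def stable_result(run_benchmark, threshold=0.05, min_runs=3, max_runs=20):
    """Repeat run_benchmark until the coefficient of variation across
    runs drops to the threshold (or max_runs is reached); return the mean."""
    results = []
    while len(results) < max_runs:
        results.append(run_benchmark())
        if len(results) >= min_runs:
            mean = statistics.mean(results)
            if statistics.stdev(results) / mean <= threshold:
                break
    return statistics.mean(results)
```

A perfectly deterministic benchmark would stop after the minimum number of runs, since its standard deviation is zero; a noisy one keeps accumulating runs until the spread settles.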
Benchmark Details
Units: CPU = CCU; Memory = MIOP; Disk = IOP; Encryption = Encode; Language = Lang
Baseline configs:
• CPU: Amazon EC2 ECU
• Memory and Encryption: dual Intel E5504 quad-core 2.00 GHz processors with 48GB DDR3 ECC RAM
• Disk and Language: dual Intel E5506 2.13 GHz (8 cores) with 4 x 15K RPM SAS drives in hardware RAID 1+0
Number of tests: CPU 19, Memory 7, Disk 7, Encryption 7, Language 4
Test names:
• CPU (19): c-ray, crafty, dcraw, espeak, geekbench, graphics-magick, hmmer, john-the-ripper (blowfish, des, md5), mafft, nero2d, openssl, opstone (svd, svsp, vsp), sudokut, tscp, unixbench
• Memory (7): CacheBench, Geekbench, hdparm, RAMspeed, Redis Benchmark, Stream, Unixbench
• Disk (7): Blogbench, Bonnie++, Dbench, Flexible IO Tester, hdparm buffered disk reads, IOzone, Threaded I/O Tester
• Encryption (7): Monkey Audio Encoding, WAV to FLAC, WAV to MP3, WAV to Ogg, WAV to WavPack, FFmpeg AVI to NTSC VCD, GnuPG
• Language (4): SPECjvm, Ruby, Python, PHP
CPU Results – Single Core CCU
Configs: Savvis 1 core / 2GB; Amazon 1 core / 3.75GB; IBM 2 cores / 2GB; Rackspace 4 cores / 2GB; Terremark 1 core / 1GB
Trying to compare a single core between the vendors. In the benchmark the smallest IBM configuration was a 2-core and the smallest Rackspace was a 4-core.
(Chart: Savvis VPDC, AWS EC2, IBM SmartCloud Enterprise, Rackspace, Terremark vCloud Express)
Why Performance Matters – Revenue (companies shown by logo on slide)
• One company found that a 2 second slowdown caused a 4.3% reduction in revenue per user
• One stated that a 400 millisecond delay produced 0.59% fewer searches per user
• One noticed that users who experience the fastest page load times view 50% more pages per visit than users experiencing the slowest page load times
• One reduced page load times from ~7 seconds to ~2 seconds, leading to a 7–12% increase in revenue
Source: Steve Souders @ Velocity Conference 2009
Disk Results – IOP
GoGrid and Rackspace tested with local storage; all other vendors tested with off-instance storage such as SAN.
(Chart, Small/Medium/Large: Savvis VPDC, AWS EC2, IBM SmartCloud Enterprise, Rackspace, Terremark vCloud Express; in the Small class IBM scored 98 vs Savvis 91)
OpSource had the best IOP in the Large class with 198.26 and the best in the Medium class with 190.24. GoGrid had the best IOP in the Small class with 150.72.
Performance Matters – Customer Satisfaction
Web performance improvement from 14 to 7 seconds
Source: Microsoft Tech-Ed, May 2011
Why does performance matter? Awareness
Source: Microsoft Tech-Ed, May 2011
Interpreted Programming Language Results – Lang
(Chart, Large/Medium/Small: Savvis VPDC, AWS EC2, IBM SmartCloud Enterprise, Rackspace, Terremark vCloud Express)
Tests include SPECjvm, Ruby, Python and PHP
Encoding and Encryption Results – Encode
(Chart, Large/Medium/Small: Savvis VPDC, AWS EC2, IBM SmartCloud Enterprise, Rackspace, Terremark vCloud Express)
Additional Performance Considerations – Network connectivity and performance concerns
• Speed – Quality of Service (QoS) policies ensure maximum utilization of available bandwidth and allow key applications to be prioritized over ones deemed less vital
• Visibility and control – web-based tools provide visibility into traffic composition and performance on all network links, allowing for proactive network administration
• Security – multiple levels of network-based and premises-based solution options address compliance and privacy concerns
Summary
• Performance matters in the cloud
• Savvis has partnered with Cisco to provide enterprise-class Virtual Private Data Centre (VPDC) services
• VPDC performed at or near the top of all five categories in the recent CloudHarmony Benchmarking report
• VPDC is available today from Savvis Data Centres in the UK, US, Canada and Singapore
Savvis Symphony Cloud Services
● Visit Savvis at Booth G311 during the show
● Pick up a copy of the Cisco CloudHarmony Performance Benchmark White Paper
● Request a demo of our Virtual Private Data Centre service
● Visit savvis.com/cloud for product overviews and customer case studies
● Email firstname.lastname@example.org for further information after the show
Savvis Symphony: The Transformational Cloud