This document summarizes benchmark testing of Container-Native Storage (CNS) for Red Hat OpenShift Container Platform on two different storage media: solid-state drives (SSDs) and hard disk drives (HDDs). The testing exercised CNS under both IO-intensive and CPU-intensive workloads. On the IO-intensive workload, the SSD configuration significantly outperformed the HDD configuration, achieving more than five times the maximum performance. On the CPU-intensive workload, however, the two configurations reached similar maximum performance levels, with the SSDs showing less than a 5 percent improvement. The testing demonstrated that CNS can provide scalable storage, and that choosing storage hardware depends on understanding how its strengths match the characteristics of the workload.
Performance of persistent apps on Container-Native Storage for Red Hat OpenShift Container Platform
March 2018

Introduction

Companies are constantly seeking ways to improve application and services delivery while using flexible technologies at the service, application, and storage layers. Open-source software-defined storage (SDS) and container technologies are evolving to enable this. Meanwhile, companies are searching for ways to use new, advanced solid-state storage hardware while also leveraging their investments in both licensing and expertise on other enterprise technologies, such as VMware vSphere.

The Red Hat OpenShift Container Platform is a container application platform that brings containers and Kubernetes to the enterprise. Container-Native Storage (CNS) for Red Hat OpenShift Container Platform is software-defined storage built upon Red Hat Gluster Storage. It provides a platform-agnostic storage layer for persistent applications running on OpenShift. With this integrated solution, Red Hat aims to provide a flexible and scalable solution for companies looking to adopt a container-based platform for apps with persistent data needs. By virtualizing CNS and OpenShift on VMware vSphere, companies can avoid having to abandon existing virtualization technologies. In this paper, we discuss the results of benchmarking several web application workloads with this Container-Native Storage solution. We also demonstrate performance and sizing using two Western Digital storage offerings: the Ultrastar SS200 solid-state drive (SSD) and the Ultrastar He10 hard disk drive (HDD).

A Principled Technologies proof-of-concept study: Hands-on work. Real-world results.
Executive summary of testing

Test matrix

We tested the Container-Native Storage for Red Hat OpenShift Container Platform solution in de-coupled mode, running on Red Hat Enterprise Linux (RHEL) Atomic Host VMs in a VMware cluster, with two hardware configurations that differ in their persistent-storage media:

• Ultrastar SS200 solid-state drives
• Ultrastar He10 hard drives

On each configuration, we scaled many application instances running in OpenShift containers and executed two test workloads, which stressed different system components of the containers:

• an IO-intensive workload against a MySQL database
• a CPU-intensive workload against a Linux, Apache, MySQL, and PHP (LAMP) stack application

For the IO-intensive workload, we tested two CNS-node vCPU allocations. These choices yielded six permutations of test conditions: two storage media, each tested with the IO-intensive workload at two vCPU allocations and with the CPU-intensive workload. For each permutation, we first tested with a single app instance and then scaled to many app instances, stopping when app performance declined or app response time became unacceptably long.

Findings

• CNS operating in de-coupled mode successfully provided storage to the container-based apps in both the IO-intensive and CPU-intensive scenarios.
• Understanding your application’s performance profile is critical to the design of your CNS configuration. For example, IO-intensive apps backed by CNS may need additional CPU resources allocated to the CNS nodes, while CPU-intensive apps may perform evenly regardless of whether the backing storage uses hard drives or solid-state drives.
• CNS provided scalable storage to our container-based apps in several hardware and workload configurations, up to 128 app instances.
• On the IO-intensive, database-only workload, the Ultrastar SS200 solid-state drive configuration achieved maximum performance greater than five times that of the He10 hard drive configuration, showing that CNS on the solid-state drives was better suited to handle the IO-intensive workload.
• On the CPU-intensive, full-web app LAMP stack workload, the two configurations achieved similar maximum performance levels, with the solid-state drive configuration outperforming the hard drive configuration by less than 5 percent.
• The modest improvement the solid-state storage delivered over the hard drive configuration on the CPU-intensive workload illustrates the importance of understanding your workload when deciding among hardware options.
• Depending on a business’s needs, either storage option can be a good choice. Because the total cost of the solutions we tested (including hardware and software) differs by only 1.24 percent, choosing the solid-state drive configuration can offer added value.

Acronyms we use in this report

CNS: Container-Native Storage for Red Hat OpenShift Container Platform
CRS: container-ready storage
DS2: DVD Store 2
DS3: DVD Store 3
HDD: hard disk drive
LAMP: Linux, Apache, MySQL, and PHP/Python/Perl
OCI: Open Container Initiative
OLTP: online transaction processing
OPM: orders per minute
SDLC: software development lifecycle
SDS: software-defined storage
SSD: solid-state drive
VM: virtual machine
We worked with Red Hat and Western Digital Data Propulsion Lab technologists to configure the test bed and design the testing approach. For details on the roles the partners played and who executed each part of the study, see Appendix B.

Our two-phase approach

Our testing consisted of two phases. The first used an IO-intensive app scenario, and the second used a CPU-intensive LAMP stack app scenario. For our test workload, we selected DVD Store, an online transaction processing (OLTP) database workload generator. DVD Store, which is part of the VMware VMmark benchmark suite, is available in two versions, DVD Store 2 (DS2) and DVD Store 3 (DS3). Both versions model real-world, human-interactive applications. The versions have different database schemas and queries, resulting in a CPU-intensive workload profile for DS3 and an IO-intensive workload profile for DS2. We measured performance in the IO-intensive scenario using only the database tier in DS2, and in the CPU-intensive scenario using the full LAMP stack for DS3. We provide more detail about the two versions in the Workload discussion section.
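To make the IO-intensive scenario concrete, here is a minimal, hypothetical sketch of how a MySQL database instance of the kind DS2 drives can run as an OpenShift pod with its data directory on a CNS-backed persistent volume. The pod name, image tag, Secret, and claim name are illustrative placeholders, not our actual test configuration; the claim (mysql-data) is the kind of CNS-provisioned claim we sketch later in the CNS section.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ds2-mysql
  labels:
    app: ds2-mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7                 # placeholder image
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:                # password kept in a Secret, not inline
          name: mysql-secret
          key: password
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql      # MySQL data directory on CNS storage
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data          # claim bound to a CNS-provisioned volume
```

Each additional app instance of this kind adds both container load and storage load, which is why the scaling and bottleneck questions below matter.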
In each scenario, we compared how well the solution executed each workload profile when we used two different types of storage media: Ultrastar SS200 solid-state drives and Ultrastar He10 hard drives.

To summarize, we tested the following:

• Phase 1: IO-intensive, database-only workload
  - Phase 1a: 4 vCPUs per CNS node
  - Phase 1b: 16 vCPUs per CNS node
• Phase 2: CPU-intensive, full-web app LAMP stack workload (16 vCPUs per CNS node)

The following performance and sizing questions guided our test design:

• In both the IO-intensive scenario and the CPU-intensive scenario, how did total workload performance scale as the number of app and workload instances increased?
• Where did bottlenecks occur? How did cluster saturation manifest itself when additional instances caused response times to increase (e.g., CPU pressure, RAM pressure, NIC pressure, disk IO saturation)?
• How did performance and scalability change when additional resources eliminated the bottleneck?
• How many database app instances (scenario 1) or LAMP app instances (scenario 2) could we run on OpenShift container-ready storage backed by three nodes, with either 12 Ultrastar He10 hard drives per node or five Ultrastar SS200 solid-state drives per node?

Western Digital drive technologies

SSD technology

We used HGST Ultrastar SS200 solid-state drives. According to Western Digital, these enterprise-grade 12Gb/s SAS drives have a maximum read throughput of up to 1,800 MB/s and a capacity of 7.68 TB in a 2.5-inch small form factor. Western Digital positions the drives as a high-endurance option well suited to mixed-use and read-intensive workloads. To learn more, see the Ultrastar SS200 data sheet at https://www.hgst.com/sites/default/files/resources/Ultrastar-SS200-datasheet.pdf

HDD technology

We used HGST Ultrastar He10 hard disk drives, with 10 TB of capacity in a 3.5-inch form factor. According to Western Digital, these drives offer greater capacity, power efficiency, and reliability than 8TB air drives do, and are appropriate for enterprise and data center applications that demand high capacity density. The drives offer both SATA 6Gb/s and SAS 12Gb/s options; in our testing, we used SATA drives. To learn more, see the Ultrastar He10 data sheet at https://www.hgst.com/sites/default/files/resources/Ultrastar-He10-DS.pdf
Container-Native Storage for Red Hat OpenShift Container Platform

Red Hat OpenShift Container Platform

Red Hat OpenShift Container Platform is an offering based on Kubernetes that provides automation, orchestration, and self-service for application developers. OpenShift integrates several open-source technologies into an enterprise-class solution that can deploy and manage many applications. These technologies include the following:

• Linux containers, Docker, and Kubernetes for cluster management and automation
• Open vSwitch software-defined networking for container communication within the cluster
• Red Hat CloudForms, which gives infrastructure and operations teams a unified management experience across containers and the underlying bare-metal, virtual, or cloud infrastructure

The platform also offers a range of storage options for persistent application data and an internal container registry to store and manage container images for deployment. For example, application and data center administrators can create reproducible workflows that are designed to create efficiencies in the software development lifecycle (SDLC) and accelerate application delivery.

Scaling through automation

By automating many processes, OpenShift makes it practical to implement containers in a large-scale environment. OpenShift software-defined storage and networking capabilities support many types of cloud-native and stateful apps and make it possible to connect these applications.

OpenShift containers

Containers bundle software applications with all the OS resources they need to run (e.g., application dependencies on specific versions of system libraries). This ensures consistent app behavior across operating environments. The OpenShift Container Platform uses an Open Container Initiative (OCI)-compatible runtime for container execution.

Docker

Docker is an open-source containerization technology that adds flexibility to the creation and deployment of Linux containers. Docker lets you create, deploy, and copy containers.

Kubernetes

Kubernetes is a container orchestration solution. OpenShift, the Red Hat container application platform offering built on top of Kubernetes, can extend and enhance Kubernetes functionality by providing networking, routing, scaling, health checking, persistent storage connection, and other features that enable enterprise-scale implementation of containers.
[Diagram: An OpenShift Container Platform app node, running as a RHEL Atomic Host VM on a hypervisor and server, hosts application pods created from Docker images A and B, with Kubernetes and Docker providing the runtime layer.]
Container-Native Storage for Red Hat OpenShift Container Platform
Container-Native Storage for Red Hat OpenShift Container Platform is a software-defined storage solution for container environments. Built on Red Hat Gluster Storage, it pools direct-attached storage and presents it as a distributed file system. CNS features include data redundancy, tiering, read-only and read-write snapshots, and, as we show in this paper, strong integration with the Red Hat OpenShift Container Platform. CNS can run on physical machines, virtual machines, containers, or in a cloud environment. It can serve as a persistent backing store for applications, such as the database environments in this paper, and can run co-located with those applications on the same nodes. It can also provide storage for unstructured data such as VM images, backup images, and video.
CNS operating in de-coupled mode
CNS can run in either full CNS mode or de-coupled mode. The primary difference between full CNS and de-
coupled modes is where the storage services are running. In full CNS mode, the storage services run co-located
with applications on the same OpenShift-managed nodes. In de-coupled mode, the storage services run on
separate nodes (virtual or bare metal). Our testing used de-coupled mode for all configurations. The OpenShift app developer experience is the same in either mode; in both, persistent storage is dynamically provisioned by OpenShift when a persistent volume claim is submitted.
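To make that claim-driven flow concrete, here is a minimal sketch of a claim against one of the CNS-backed storage classes from our test bed (crs-ssd, per Appendix B); the claim name and size are illustrative, not from our test runs:

# Submit a persistent volume claim against a CNS-backed StorageClass;
# OpenShift dynamically provisions a Gluster volume to satisfy it.
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim          # hypothetical claim name
spec:
  storageClassName: crs-ssd # CNS-backed class, as in Appendix B
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi         # illustrative size
EOF
# Confirm the claim has bound to a dynamically provisioned volume.
oc get pvc demo-claim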
Test bed configuration
Western Digital assembled hardware to create a common configuration that would support the goal of investigating Container-Native Storage for Red Hat OpenShift Container Platform performance, scaling, and sizing, and showcase the capacity of the two Ultrastar storage offerings.
Hardware details
The hardware configuration we used for our testing consisted of nine physical servers. Three had Ultrastar He10
hard drives and six had Ultrastar SS200 solid-state drives. All servers had a single additional SSD used for the OS
(OS SSD). The placement of virtual machines varied based on which configuration we tested, per the diagrams below.
In each configuration, for testing and pricing, we deployed the VMs for OpenShift components and test clients
only on the OS SSD, and not on either the five SSDs or the 12 HDDs. We used these latter drives only for CNS
pooled storage. The CNS server specifications were as follows:
• Three HPE™ ProLiant DL380 Gen9 small form factor (SFF) servers, each with
  ◦ Two Intel® Xeon® E5-2697A v4 processors
  ◦ 256 GB of RAM
  ◦ Five Ultrastar SS200 solid-state drives
  ◦ HPE SmartHBA H240ar
• Three HPE ProLiant DL380 Gen9 large form factor (LFF) servers, each with
  ◦ Two Intel Xeon E5-2697 v4 processors
  ◦ 256 GB of RAM
  ◦ 12 Ultrastar He10 hard drives
  ◦ HPE SmartArray P440ar
  ◦ HPE SmartHBA H240ar
  ◦ One RAID5 volume containing the 12 disks
Virtual machine details and topology
Through the phases of the testing, the servers ran 24 VMs (also referred to as nodes) comprising the OpenShift
and CNS infrastructure, and up to 24 VMs for test-client nodes, as follows:
Virtual machine nodes | Role
12 OpenShift app nodes | Run OpenShift apps for both test cases. In Phase 1, we ran apps each composed of one container running one MySQL database. In Phase 2, we ran apps each composed of two paired containers: a PHP web service and a MySQL database.
3 CNS storage nodes | Run CNS in de-coupled mode to present persistent storage for OpenShift apps. We provisioned six CNS VMs, three for the SSD-backed storage and three for the HDD-backed storage. For each test configuration, we used only the three appropriate VMs and shut down the three unneeded ones.
9 OpenShift infra nodes | OpenShift infrastructure nodes and other supporting services
1 to 24 test-client nodes | Generate the workload targeting the test apps in each phase
This diagram shows one of the Ultrastar SS200 solid-state drive test configurations we used for both DVD Store 2
and DVD Store 3. We used the 16 test clients, shown below, to generate workloads for 128 apps.
[Diagram: SSD test configuration. OpenShift infra nodes (NFS, performance monitoring, HA proxy), three OpenShift masters, and 12 OpenShift app nodes run alongside three CNS storage (SSD) nodes and 16 test-client VMs, distributed across the HPE ProLiant DL380 Gen9 small form factor and large form factor servers.]
This diagram shows one of the Ultrastar He10 hard drive test configurations we used for both DVD Store 2 and DVD Store 3. We used the 16 test clients, shown below, to generate workloads for 128 apps.
Note that the DVD Store clients and CNS nodes are co-resident on their physical servers, as the diagrams show. This proximity did not skew our results, because the end-to-end data flow of the benchmark utility requests was as follows:
• The client utility initiated the application request.
• The client sent the request to the applications running on the OpenShift app nodes, which ran on remote physical servers.
  ◦ In the IO-intensive, database-only DS2 scenario, the client sent the request directly to the MySQL database running on the OpenShift app node.
  ◦ In the full-web app DS3 scenario, the client sent the request to the Apache web server running as part of the LAMP stack, which communicated with its MySQL server.
• In both scenarios, the networked backing store for MySQL data resided on remote CNS nodes, so the IO requests traveled in the opposite direction.
• The IO returned to the application.
• The application signaled to the client that the application request was complete.
[Diagram: HDD test configuration. OpenShift infra nodes (NFS, HA proxy, performance monitoring), three OpenShift masters, and 12 OpenShift app nodes run alongside three CNS storage (HDD) nodes and 16 test-client VMs, distributed across the HPE ProLiant DL380 Gen9 small form factor and large form factor servers.]
VM node configurations
Note: As we discuss in the Phase 1 test results, we used two different VM configurations for the CNS nodes in the IO-intensive workload testing. We sized VMs to avoid oversubscription of the underlying physical hardware.

Role | # of vCPUs | RAM (GB) | Virtualized NICs
CNS de-coupled nodes, configuration 1 (IO-intensive only) | 4 | 8 | 2x 10GbE
CNS de-coupled nodes, configuration 2 | 16 | 32 | 2x 10GbE
OpenShift app nodes | 12 | 96 | 2x 10GbE
OpenShift infrastructure nodes | 2 | 8 | 2x 10GbE
OpenShift masters | 8 | 32 | 1x 10GbE
OpenShift HA proxy | 4 | 4 | 1x 10GbE
OpenShift NFS | 4 | 32 | 1x 10GbE
Performance monitoring server | 2 | 8 | 1x 10GbE
DVD Store clients | 2 | 2 | 1x 10GbE
Software details
• VMware vSphere 6.5
• VMware vCenter Server™ 6.5
• Red Hat Enterprise Linux 7.3
• Container-Native Storage 3.6 for Red Hat OpenShift Container Platform (based on Red Hat Gluster Storage 3.3)
• Red Hat OpenShift Container Platform 3.6, deployed with the following subcomponents:
  ◦ Features: Basic-Auth GSSAPI Kerberos SPNEGO
  ◦ openshift v3.6.173.0.5
  ◦ kubernetes v1.6.1+5115d708d7
Workload discussion
DVD Store
This workload generator, two versions of which are available online at https://github.com/dvdstore, tests online
transaction processing systems and has been a part of benchmark suites, notably VMware VMmark.
DVD Store simulates an ecommerce store and reports results primarily in terms of number of orders per minute
(OPM) and response time. We used DVD Store 2 for the IO-intensive scenario and DVD Store 3 for the full-web
app scenario. Due to the different underlying structure of each version’s database, and the procedures they
execute, we have found they simulate significantly different systems. For this reason, the results from the two
scenarios should not be compared to one another.
DVD Store 2
The DVD Store version 2 workload generator involves simulated customers creating accounts, logging in, searching for items, and placing orders. It is available for MySQL, Microsoft® SQL Server®, Oracle, and PostgreSQL. In this study, we used MySQL 5.7 as shipped from the Red Hat Container Registry. While it is possible to run DVD Store 2 with a web front end, we did not do so in our testing.
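As a hypothetical illustration of this database-only deployment pattern (the image path and database name here are assumptions; our actual deployments used the templates in Appendix C):

# Deploy a standalone MySQL 5.7 container from the Red Hat Container
# Registry; the credentials mirror the web/web account in grants.sql.
oc new-app registry.access.redhat.com/rhscl/mysql-57-rhel7 \
  -e MYSQL_USER=web -e MYSQL_PASSWORD=web -e MYSQL_DATABASE=DS2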
The DS2 clients, which run on separate virtual machines, simulate a specified number of users or customers. After invoking the workload, each user initiates a connection to the database and attempts to log into the application. If the customer does not exist, DS2 creates a new customer; by default, this occurs 20 percent of the time. If the customer does exist, the process continues: the customer searches for items a specified number of times, places an order with multiple order lines, and the order process completes. The client then waits for a specified period of think time.
DVD Store 3
DVD Store version 3 builds on the logic and operations of DVD Store 2, adding features that customers typically
expect in more modern applications. In DS3, customers can add product reviews and rate the helpfulness of other
users’ reviews on a scale of 1 to 10. To facilitate these additions, this version includes seven new stored procedures.
The product review and helpfulness-rating logic runs just before the order is placed. Additionally, during the new-customer creation step of the DS3 workload, logic enables the selection of membership tiers, such as bronze, silver, or gold.
The addition of these features shifts the workload profile from an IO focus toward a more CPU-intensive focus.
Because of this shift, we determined that DVD Store 3 was the more appropriate choice for the full-web app
scenario and used the full LAMP stack with the web front end. In our testing, the front-end application resided in
one containerized app while the back-end MySQL store resided in another containerized app.
About response time
In both phases of our testing, our goal was to determine the maximum performance, in terms of OPM, that
each configuration could achieve without an excessively long response time, as reported by the application.
While opinions vary on what constitutes an acceptable threshold, faster is always better. In our analysis, we
used an upper cutoff of one second (1,000 milliseconds), though our solutions delivered sub-200ms response
times under many conditions.
Test phases and results
Phase 1: Testing storage performance under IO-intensive conditions with DVD Store 2
Overview
In IO-intensive environments, one criterion for success is an IO subsystem that can keep up with a very large number of concurrent app requests. We designed the first phase of our testing with this IO-intensive scenario in mind. We tested both the Ultrastar SS200 solid-state drive and the Ultrastar He10 hard drive configurations to demonstrate the impact and scaling of a heavy IO-intensive workload. To determine the maximum performance within acceptable response-time limits, we tested scaling with the following quantities of concurrent application instances: 1, 2, 8, 32, 96, and 128. We stopped at 128 app instances because performance began to decline, or response time reached unacceptable levels, at that point.
Both versions of DVD Store let you adjust the number of users and think time to have the tool simulate a heavier
or lighter load. In this phase of testing, we used DVD Store 2 with the following configuration parameters to
create a heavy, IO-intensive load:
• Database size: 20 GB
• Think time: 10 ms
• User count (per instance): 8
• Test mode: Database only
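With these parameters, a single client's driver invocation follows the pattern below; this is a sketch with illustrative target IPs, and the exact command we ran appears in Appendix B:

.\ds2mysqldriver.exe --db_size=20GB --run_time=5 --warmup_time=2 `
  --think_time=0.01 --n_threads=8 --target='172.29.0.1;172.29.0.2'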
Test results
Phase 1a: Four vCPUs per CNS node
During these first test runs, we assigned our CNS nodes four vCPUs each. We started our testing with a single app
instance, and then scaled up to use 2, 8, 32, 96, and 128 app instances. The following figures show the effect of
this scaling on orders per minute and response time as reported by DVD Store 2 for the IO-intensive scenario. (The
total OPM is the sum of OPM from each DVD Store client, and the response time is the average response time from
each DVD Store client.)
[Chart: DS2 scale out: Total OPM with 4 vCPUs per CNS node (higher is better); OPM vs. number of app instances for the 4 vCPUs HDD and SSD tests.]
[Chart: DS2 scale out: Average response time with 4 vCPUs per CNS node (lower is better); response time (ms) vs. number of app instances for the 4 vCPUs HDD and SSD tests.]
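The aggregation behind these totals is straightforward; as a sketch (the per-client log format here is an assumption, not DVD Store's documented output), summing OPM and averaging response time across per-client result files might look like:

# Hypothetical post-processing: assumes each client log ends with
# lines of the form "opm=<value>" and "rt=<value>".
awk -F= '/^opm=/ {opm += $2} /^rt=/ {rt += $2; n++}
  END {print "total OPM:", opm; print "avg response (ms):", rt/n}' client*.txt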
As the charts above show, when the number of app instances increased from 32 to 96, performance in terms of
OPM ceased to scale optimally and response time increased dramatically. To understand why this happened, we
performed resource utilization analysis, which revealed that the limiting resource was CPU cycles on the CNS nodes.
We present resource utilization data in Appendix D. As those data show, overall system throughput (orders per
minute) was indeed constrained by the number of vCPUs assigned to the CNS node.
Phase 1b: 16 vCPUs per CNS node
Keeping in mind our goal of identifying bottlenecks, we wanted to confirm that the number of vCPUs allocated
to each CNS node was in fact constraining throughput. To do so, we increased this number from four to 16, and
repeated the testing above. The following figures show the effect of increasing the vCPU allocation to the CNS
nodes on OPM and response time as reported by DVD Store 2. (The total OPM is the sum of OPM from each DVD
Store client, and the response time is the average response time from each DVD Store client.)
[Chart: DS2 scale out: Total OPM with 4 vCPUs and 16 vCPUs per CNS node (higher is better); OPM vs. number of app instances for the 4 vCPUs and 16 vCPUs HDD and SSD tests.]
Analysis
As the OPM figure above shows, the additional vCPUs had little effect on Ultrastar He10 hard drive performance at lower app-instance quantities. With 128 app instances, however, response time on this configuration increased so much that the workload timed out before returning a result.
In contrast, the additional vCPUs allowed the performance of the Ultrastar SS200 solid-state drives to increase as we scaled from 96 to 128 app instances. This confirms our hypothesis that the number of vCPUs allocated to each CNS node had been constraining throughput. It also shows that even when the vCPU allocation is adequate, a slower disk subsystem causes latencies to rise and throughput to fall, as the 16-vCPU HDD test results show.
The response time graph below shows that the additional vCPUs reduced the already low response times of the Ultrastar SS200 solid-state drive configuration. Being able to deliver sub-200ms response times for more app instances, and by extension more end users, can have a direct impact on a company's bottom line.
[Chart: DS2 scale out: Average response time with 4 vCPUs and 16 vCPUs per CNS node (lower is better); response time (ms) vs. number of app instances for the 4 vCPUs and 16 vCPUs HDD and SSD tests.]
Phase 2: Testing performance in a full-web app environment with the DVD Store 3 workload
Overview
Many modern applications put a heavier emphasis on CPU processing than on IO. To see how our solution handled this kind of workload, we tested both the Ultrastar SS200 solid-state drive and the Ultrastar He10 hard drive configurations to demonstrate the impact and scaling of a CPU-intensive, full-web app OLTP workload. We used the same 16-vCPU-per-CNS-node allocation as in Phase 1b. We tested scaling at six app instance counts: 1, 2, 8, 32, 96, and 128.
To create the CPU-intensive workload, we used the following parameters:
• Database size: 5 GB
• Think time: 300 ms
• User count (per instance): 6
• Test mode: HTTP API
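With these parameters, each client's driver invocation follows the pattern below; this is a sketch with placeholder app URLs, and the exact command appears in Appendix B:

.\ds3webdriver.exe --db_size=5GB --run_time=10 --warmup_time=5 `
  --think_time=0.3 --n_threads=6 --target='<app-url-1>;<app-url-2>'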
Test results
The following figures show the OPM and response time as reported by DVD Store 3 as we scaled from 1 to 128
app instances on the two configurations. (The total OPM is the sum of OPM from each DVD Store client, and the
response time is the average response time from each DVD Store client.)
[Chart: DS3 scale out: OPM (higher is better); OPM vs. number of app instances for the 16 vCPUs HDD and SSD tests.]
Analysis
While running the CPU-intensive DVD Store 3 workload, the Ultrastar He10 hard drive configuration and the Ultrastar SS200 solid-state drive configuration performed comparably, with maximum OPM counts of 35,786 and 37,424 respectively. From analyzing utilization data, we concluded that the OpenShift-based app itself was constrained by CPU.
This difference of less than 5 percent was in marked contrast to the dramatic performance differential we
saw between the two configurations on the IO-intensive workload. These results highlight the importance of
understanding your application behavior so that you can select the appropriate hardware.
[Chart: DS3 scale out: Average response time (lower is better); response time (ms) vs. number of app instances for the 16 vCPUs HDD and SSD tests.]
Price/performance analysis
Whenever companies select an IT solution, they must consider the performance value they will reap from different levels of investment. In previous sections of this document, we presented findings from our proof-of-concept study of a Container-Native Storage implementation using Red Hat OpenShift Container Platform and Red Hat Container-Native Storage on VMware vSphere. We looked at two hardware configurations using Western Digital media for CNS storage. Depending on a company's needs, either of these storage options, the Ultrastar SS200 solid-state drive or the Ultrastar He10 hard drive, can be an appropriate choice.
In this section, we consider the total pricing of the two configurations we tested (hardware and software) alongside the performance results from our study to analyze the price/performance they deliver. Note that the pricing for each configuration includes the list prices of the nine HPE ProLiant DL380 Gen9 servers in each solution (including three-year support), the Ultrastar drives used for CNS storage, and licensing for the VMware and Red Hat software we ran on the solution.
For ease of comparison, we present the total price for each solution and the maximum total performance each
achieved in normalized values. For actual pricing, and a discussion of our methods for determining it, see
Appendix E. For actual performance results, see Appendix D. (Note that we used the maximum performance
each configuration achieved while delivering a response time of less than one second.)
As the figure below shows, in the IO-intensive scenario, although the Ultrastar SS200 solid-state drive configuration costs only 1.24 percent more than the Ultrastar He10 hard drive configuration, it delivered more than five times the performance on this workload.
In the CPU-intensive scenario, the Ultrastar SS200 solid-state drive configuration delivered 1.05 times the performance of the Ultrastar He10 hard drive configuration, an improvement of 4.6 percent, while again costing only 1.24 percent more. For companies whose applications follow a similar workload profile in terms of resource usage, this can make the Ultrastar solid-state drive-based solution a wise investment. Note, however, that the hard drive configuration provides more than double the total storage capacity of the SSD configuration.
Normalized comparison of pricing and maximum performance

Metric | Ultrastar SS200 solid-state drive configuration | Ultrastar He10 hard drive configuration
Total price of solution (hardware and software; lower is better) | 1.012 | 1.000
Performance on IO-intensive database-only workload (higher is better) | 5.402 | 1.000
Performance on full-web app workload (higher is better) | 1.046 | 1.000
Another way to look at the relationship between pricing and performance is to divide the total cost of the
solution by the maximum number of OPM it achieved. The charts below show a normalized comparison of cost
per OPM with the two solutions on the two workloads.
As the figures below show, those running workloads similar in nature to those we used in the IO-intensive phase of testing could see 81.3 percent better price/performance by choosing the SSD configuration over the HDD configuration. Those whose workloads follow the more balanced profile we tested in the DS3 phase could see 3.2 percent better price/performance by making the same choice.
Normalized cost to perform one OPM (lower is better)

Workload | Ultrastar SS200 solid-state drive configuration | Ultrastar He10 hard drive configuration
DS2 IO-intensive | 1.00 | 5.34
DS3 full-web app | 1.00 | 1.03
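As a quick arithmetic check on the percentages cited above: the SSD configuration's normalized cost per OPM on the IO-intensive workload is 1.012 / 5.402 ≈ 0.187, and 1 − 0.187 ≈ 0.813, or about 81.3 percent lower than the HDD configuration's. On the full-web app workload, 1.012 / 1.046 ≈ 0.968, or about 3.2 percent lower.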
Conclusion
For companies in need of a comprehensive strategy for containers and software-defined storage, Red Hat Container Ready Storage paired with Red Hat OpenShift Container Platform offers a solution that lets them leverage their investment in VMware vSphere. In our proof-of-concept study, we explored the scaling capabilities of a CNS implementation using two types of Western Digital storage media: Ultrastar He10 hard drives and the new Ultrastar SS200 solid-state drives. We tested the solutions under a variety of conditions, using both IO-intensive and CPU-intensive workloads, multiple vCPU allocation counts, and a range of app-instance quantities. In this document, we have presented some of the many resulting data points, including price/performance metrics, which can assist IT professionals implementing CNS to meet the unique needs of their businesses.
In September 2017, Western Digital's Data Propulsion Lab, which hosted the hardware, finalized most of the hardware we used for this testing. The lab added RAID controllers to the HDD servers on November 29, 2017. On November 11, 2017, Red Hat finalized the CNS software configurations we tested. Updates for current and recently released hardware and software appear often, so these configurations may not represent the latest versions available when this report appears. We concluded hands-on testing on December 8, 2017.
Appendix A: System configuration information

Server configuration information | 3x HPE ProLiant DL380 Gen9 (small form factor) | 3x HPE ProLiant DL380 Gen9 (large form factor) | 3x HPE ProLiant DL380 Gen9 (small form factor, OpenShift only)
BIOS name and version | P89 v2.40 (02/17/2017) | P89 v2.40 (02/17/2017) | P89 v2.40 (02/17/2017)
Non-default BIOS settings | Disabled C States, Enabled Hyper Threading, Set to UEFI | Disabled C States, Enabled Hyper Threading, Set to UEFI | Disabled C States, Enabled Hyper Threading, Set to UEFI
Operating system name and version/release number | ESXi 6.5 4564106-HPE-650.9.6.0.28 | ESXi 6.5 4564106-HPE-650.9.6.0.28 | ESXi 6.5 4564106-HPE-650.9.6.0.28
Power management policy | Max Performance Profiles, Max Cooling | Max Performance Profiles, Max Cooling | Max Performance Profiles, Max Cooling

Processor
Number of processors | 2 | 2 | 2
Vendor and model | Intel Xeon E5-2697A v4 | Intel Xeon E5-2697 v4 | Intel Xeon E5-2697A v4
Core count (per processor) | 16 | 18 | 16
Core frequency (GHz) | 2.60 | 2.30 | 2.60

Memory module(s)
Total memory in system (GB) | 256 | 256 | 256
Number of memory modules | 8 | 8 | 8
Size (GB) | 32 | 32 | 32
Type | DDR4 | DDR4 | DDR4
Speed (MHz) | 2,400 | 2,400 | 2,400

Storage controller 1
Vendor and model | HPE SmartHBA H240ar | HPE SmartHBA H240ar | HPE SmartHBA H240ar

Storage controller 2
Vendor and model | n/a | HPE SmartArray P440ar | n/a
Cache size (GB) | n/a | 2 | n/a
Firmware version | n/a | 6.06 | n/a

OS drives
Number of drives | 1 | 1 | 1
Drive vendor and model | SanDisk® Lightning Ascend™ Gen II | SanDisk Lightning Ascend Gen II | SanDisk Lightning Ascend Gen II
Drive size (GB) | 800 | 800 | 800
Drive information (speed, interface, type) | 12 Gb/s, SAS, SSD | 12 Gb/s, SAS, SSD | 12 Gb/s, SAS, SSD
Controller | HPE SmartHBA H240ar | HPE SmartHBA H240ar | HPE SmartHBA H240ar

Backing drives for CNS
Number of drives | 5 | 12 | n/a
Drive vendor and model | HGST Ultrastar SS200 SSD | HGST Ultrastar He10 | n/a
Drive size (TB) | 1.92 | 10 | n/a
Drive information (speed, interface, type) | 12 Gb/s, SAS, SSD | 6 Gb/s, SATA, HDD | n/a
Controller | HPE SmartHBA H240ar | HPE SmartHBA H240ar | n/a
RAID configuration | n/a | 12 disks, RAID 5 | n/a

Network adapter
Vendor and model | ConnectX-3 HP LOM 764285-B21 | ConnectX-3 HP LOM 764285-B21 | ConnectX-3 HP LOM 764285-B21
Number and type of ports | 2 | 2 | 2

Power supplies
Vendor | HPE | HPE | HPE
Number of power supplies | 2 | 2 | 2
Wattage of each (W) | 800 | 800 | 800

The small form factor servers and the large form factor servers had slightly different processor models due to availability. Virtual configurations were sized as equally as possible.
Appendix B: How we tested
We worked with Red Hat and Western Digital engineers to configure the test bed and design the testing approach. Hardware resided
onsite at a Western Digital data center. Western Digital engineers performed hardware configuration, maintenance, and VMware vSphere
installation and configuration. Western Digital and Red Hat engineers installed virtual machines running Red Hat Enterprise Linux. Red
Hat engineers performed configuration of the Red Hat OpenShift Container Platform and the Container Ready Storage configuration. PT
engineers worked with Red Hat engineers on configuration changes through the course of the project and performed the scenario test runs.
Phase 1: IO-intensive tests
Preparing for testing
Note: We performed steps 1 through 8 on one of the OpenShift master nodes and steps 9 and 10 on all DVD Store client nodes.
1. We created one 20GB DS2 dataset, using the DS2 tools, and compressed the CSV tables. We deleted an unneeded copy of the prod.csv file in the orders directory.
2. We created two shared persistent volumes via storage claims. We used one volume for the SSD tests and the second for the HDD tests. They held the DS2 scripts and data that we would load into each application. We executed the command below, using the template create-aux-storage.yml (see Appendix C), setting the claim size to 20GB and the storage class to crs-ssd and then crs-hdd.
oc create -f create-aux-storage.yml
On each volume, we performed steps 3 through 6.
3. From each persistent volume's metadata, we identified the name of the Gluster volume and temporarily mounted it on one OpenShift app node.
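As a sketch of step 3 (the claim name matches create-aux-storage.yml; the CNS node hostname and mount point are placeholders):

# Find the Gluster volume backing the sample-data claim, then mount it
# temporarily on one OpenShift app node.
PV=$(oc get pvc sample-data -o jsonpath='{.spec.volumeName}')
VOL=$(oc get pv "$PV" -o jsonpath='{.spec.glusterfs.path}')
mount -t glusterfs <cns-node>:/"$VOL" /mnt/tmpvol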
4. We modified the DS2 database load SQL files to reference data tables in the directory GB20.
5. We created the directory GB20 and copied the DS2 data directories from step 1 to it.
6. We copied the script mysqlds2_create_all.zip.sh (see Appendix C) to this volume and unmounted it.
7. We created the app instances needed for this test run by running the script deploy-ds2-apps.sh, which references the template
ds2-only-mysql-on-openshift3.yml (see Appendix C for both files). We modified the script to set the number of app instances
in the loop.
8. After checking the logs from the MySQL pods to confirm that the app instances were ready, we ran the script reset-ds2.sh (see
Appendix C) to load the DS2 data. Note: You can see the progress of the data loading by checking the MySQL status on each pod
via remote command.
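For example, one way to issue that remote command (credentials mirror grants.sql; this is a sketch, not our exact check):

# Show active MySQL work on every DS2 database pod.
for p in $(oc get pod | awk '/-database-/ {print $1}'); do
  oc rsh -T "$p" mysql -u web -pweb -e 'SHOW PROCESSLIST;'
done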
9. We opened one PowerShell window on each DVD Store client VM required for this run. (We needed one DVD Store client VM for every eight app instances.)
10. On each DVD Store client node, we created static routes to the exposed IP addresses of the eight app instances that would receive requests from that node. Each MySQL database uses an IP address on an internal OpenShift network that is exposed on the OpenShift master nodes. These nodes act as gateways, as they are on the same network as the DVD Store clients. We specified these IP addresses in the app-creation template so that App1, App2, App3, and so on received IP addresses of 172.29.0.1, 172.29.0.2, 172.29.0.3, and so on. We manually balanced the static routes between the three master nodes (IP addresses 172.0.10.202 through 172.0.10.204). For example,
On DS2 client 1, we created these routes:
route add 172.29.0.1 mask 255.255.255.255 172.0.10.202
route add 172.29.0.2 mask 255.255.255.255 172.0.10.203
route add 172.29.0.3 mask 255.255.255.255 172.0.10.204
route add 172.29.0.4 mask 255.255.255.255 172.0.10.202
route add 172.29.0.5 mask 255.255.255.255 172.0.10.203
route add 172.29.0.6 mask 255.255.255.255 172.0.10.204
route add 172.29.0.7 mask 255.255.255.255 172.0.10.202
route add 172.29.0.8 mask 255.255.255.255 172.0.10.203
…
On DS2 client 4, we created these routes:
route add 172.29.0.1 mask 255.255.255.255 172.0.10.202
route add 172.29.0.2 mask 255.255.255.255 172.0.10.203
route add 172.29.0.3 mask 255.255.255.255 172.0.10.204
route add 172.29.0.4 mask 255.255.255.255 172.0.10.202
route add 172.29.0.5 mask 255.255.255.255 172.0.10.203
route add 172.29.0.6 mask 255.255.255.255 172.0.10.204
route add 172.29.0.7 mask 255.255.255.255 172.0.10.202
route add 172.29.0.8 mask 255.255.255.255 172.0.10.203
Executing the test
Note: The following step performs a single test run. We conducted three runs.
1. To start the test, we ran the following command, with the targets set to the IP addresses of the app instances used by that client. For example, on DVD Store client 1, using four app instances, we ran:
.\ds2mysqldriver.exe --db_size=20GB --run_time=5 --warmup_time=2 --think_time=0.01 `
--n_threads=8 --target='172.29.0.1;172.29.0.2;172.29.0.3;172.29.0.4'
Phase 2: CPU-intensive tests
Preparing for testing
Note: We performed steps 1 through 8 on one of the OpenShift master nodes and step 9 on all DVD Store client nodes.
1. We created one 5GB DS3 dataset, using the DS3 tools, and compressed the CSV tables. We deleted an unneeded copy of the prod.csv
file in the orders directory.
2. We created two shared persistent volumes via storage claims. We used one volume for the SSD tests and the second for the HDD tests. They held the DS3 scripts and data that would be loaded into each application. We executed the command below, using the template create-aux-storage.yml (see Appendix C), setting the claim size to 11GB and the storage class to crs-ssd and then crs-hdd.
oc create -f create-aux-storage.yml
On each volume, we performed steps 3 through 6.
3. From each persistent volume's metadata, we identified the name of the Gluster volume and temporarily mounted it on one OpenShift app node.
4. We modified the DS3 database load SQL files to reference data tables in the directory GB5.
5. We created the directory GB5 and copied the DS3 data directories from step 1 to it.
6. We copied the script mysqlds3_create_all.zip.sh (see Appendix C) to this volume and unmounted it.
7. We created the app instances needed for this test run by running the script deploy-ds3-apps.sh, which references the template
ds3-web-mysql-on-openshift3.yml (see Appendix C for both files). We modified the script to set the number of app instances in
the loop.
8. After checking the logs from the MySQL pods to confirm that the app instances were ready, we ran the script reset-ds3.sh (see
Appendix C) to load the DS3 data. Note: You can see the progress of the data loading by checking the MySQL status on each pod via
remote command.
9. We opened one PowerShell window on each DVD Store client VM needed for this run. (We needed one DVD Store client VM for every eight
app instances.)
Executing the test
Note: The following step performs a single test run. We conducted three runs.
1. To start the test, we simultaneously ran the following command, with the targets set to the URLs that OpenShift assigned automatically to the apps used by that client. For example, on DVD Store client 1, using four apps, we ran:
.\ds3webdriver.exe --db_size=5GB --run_time=10 --warmup_time=5 `
--think_time=0.3 --n_threads=6 --virt_dir='' `
--target='ds3-mysql-web-1-bbb.apps.dpl.local;ds3-mysql-web-2-bbb.apps.dpl.local;ds3-mysql-web-3-bbb.apps.dpl.local;ds3-mysql-web-4-bbb.apps.dpl.local' 2>&1 >> ds3_test-01_001.txt
Appendix C: Scripts
mysqlds2_create_all.zip.sh
#!/bin/bash
# directories for fifos and networked data
zip=/tmp/zip
dat=/tmp/data
mysql -u root < ${dat}/grants.sql
export MYSQL_PWD=web
# create DB and schema
mysql -u web < ${dat}/mysqlds2_create_db.sql
mysql -u web < ${dat}/mysqlds2_create_ind.sql
mysql -u web < ${dat}/mysqlds2_create_sp.sql
# create and prime fifos
mkdir -p ${zip}/GB20/{cust,orders,prod}
for file in ${dat}/GB20/*/*.csv.gz ; do
pipe=${file%.gz}
pipe=${zip}/${pipe#$dat}
mkfifo --mode=0660 ${pipe}
gzip -d < ${file} > ${pipe} &
done
printf "Starting data load at "; date
cd ${zip}
mysql -u web < ${dat}/mysqlds2_load_cust.sql
mysql -u web < ${dat}/mysqlds2_load_orders.sql
mysql -u web < ${dat}/mysqlds2_load_orderlines.sql
mysql -u web < ${dat}/mysqlds2_load_cust_hist.sql
mysql -u web < ${dat}/mysqlds2_load_prod.sql
mysql -u web < ${dat}/mysqlds2_load_inv.sql
wait
printf "Ending data load at "; date
# clean up fifos
rm -f ${zip}/GB20/*/*.csv > /dev/null 2>&1
rmdir -p ${zip}/GB20/* > /dev/null 2>&1
grants.sql
GRANT ALL ON *.* TO 'web'@'%' IDENTIFIED by 'web';
GRANT ALL ON *.* TO 'web'@'localhost' IDENTIFIED by 'web';
mysqlds3_create_all.zip.sh
#!/bin/bash
# directories for fifos and networked data
zip=/tmp/zip
dat=/tmp/data
# create DB and schema
mysql -u root < ${dat}/mysqlds3_create_db.sql
mysql -u root -e 'SET GLOBAL sql_mode = "NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" '
export MYSQL_PWD=toomanysecrets
mysql -u ds3 DS3 < ${dat}/mysqlds3_create_ind.sql
mysql -u ds3 DS3 < ${dat}/mysqlds3_create_sp.sql
# create and prime fifos
mkdir -p ${zip}/GB5/{cust,membership,orders,prod,reviews}
for file in ${dat}/GB5/*/*.csv.gz ; do
pipe=${file%.gz}
pipe=${zip}/${pipe#$dat}
mkfifo --mode=0660 ${pipe}
gzip -d < ${file} > ${pipe} &
done
printf "Starting data load at "; date
cd ${zip}
mysql -u ds3 DS3 < ${dat}/mysqlds3_load_cust.sql
mysql -u ds3 DS3 < ${dat}/mysqlds3_load_orders.sql
mysql -u ds3 DS3 < ${dat}/mysqlds3_load_orderlines.sql
mysql -u ds3 DS3 < ${dat}/mysqlds3_load_cust_hist.sql
mysql -u ds3 DS3 < ${dat}/mysqlds3_load_prod.sql
mysql -u ds3 DS3 < ${dat}/mysqlds3_load_inv.sql
mysql -u ds3 DS3 < ${dat}/mysqlds3_load_member.sql
mysql -u ds3 DS3 < ${dat}/mysqlds3_load_reviews.sql
mysql -u ds3 DS3 < ${dat}/mysqlds3_load_review_helpfulness.sql
wait
printf "Ending data load at "; date
# clean up fifos
rm -f ${zip}/GB5/*/*.csv > /dev/null 2>&1
rmdir -p ${zip}/GB5/* > /dev/null 2>&1
deploy-ds2-apps.sh
#!/bin/bash
oc delete template ds2-only-mysql-on-openshift3 --ignore-not-found=true
for num in {1..2} ; do
  # Instantiate the DS2 template with per-app parameters, then create
  # the resulting objects; line continuations restore a runnable command.
  oc process -f ds2-only-mysql-on-openshift3.yml \
    -p APP_NAME=ds2-mysql-${num} \
    -p MYSQL_VOLUME_CAPACITY=40Gi \
    -p MYSQL_USER=web \
    -p MYSQL_PASSWORD=web \
    -p MYSQL_MEMORY_LIMIT=5Gi \
    -p EXTERNAL_IP=172.29.0.${num} \
    -p STORAGE_CLASS=crs-ssd \
    -p SAMPLE_DATA_PVC=sample-data \
    --labels app=ds2-mysql-${num} |
  oc create -f -
  sleep 1
done
exit
deploy-ds3-apps.sh
#!/bin/bash
oc delete template ds3-web-mysql-on-openshift3 --ignore-not-found=true
for num in {1..128}; do
  oc delete cm ds3-mysql-web-${num}-php-config >& /dev/null
  # Instantiate the DS3 template with per-app parameters, then create
  # the resulting objects; line continuations restore a runnable command.
  oc process -f ds3-web-mysql-on-openshift3.yml \
    -p APP_NAME=ds3-mysql-web-${num} \
    -p MYSQL_VOLUME_CAPACITY=15Gi \
    -p MYSQL_MEMORY_LIMIT=5Gi \
    -p STORAGE_CLASS=crs-hdd \
    -p SAMPLE_DATA_PVC=sample-data-small \
    --labels app=ds3-mysql-web-${num} |
  oc create -f -
  sleep 1
done
exit
reset-ds2.sh
#!/bin/bash
job_limit () {
# Test for single positive integer input
if (( $# == 1 )) && [[ $1 =~ ^[1-9][0-9]*$ ]] ; then
# Check number of running jobs
joblist=($(jobs -rp))
while (( ${#joblist[*]} >= $1 )) ; do
# Wait for any job to finish
command='wait '${joblist[0]}
for job in ${joblist[@]:1} ; do
command+=' || wait '$job
done
eval $command
joblist=($(jobs -rp))
done
fi
}
for i in $(oc get pod | awk '/-database-/ {print $1}') ; do
oc rsh -T $i bash /tmp/data/mysqlds2_create_all.zip.sh < /dev/null &
job_limit 8
done
reset-ds3.sh
#!/bin/bash
job_limit () {
# Test for single positive integer input
if (( $# == 1 )) && [[ $1 =~ ^[1-9][0-9]*$ ]] ; then
# Check number of running jobs
joblist=($(jobs -rp))
while (( ${#joblist[*]} >= $1 )) ; do
# Wait for any job to finish
command='wait '${joblist[0]}
for job in ${joblist[@]:1} ; do
command+=' || wait '$job
done
eval $command
joblist=($(jobs -rp))
done
fi
}
for i in $(oc get pod | awk '/-database-/ {print $1}') ; do
oc rsh -T $i bash /tmp/data/mysqlds3_create_all.zip.sh < /dev/null &
job_limit 8
done
create-aux-storage.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # name: sample-data-small
  name: sample-data
spec:
  # storageClassName: crs-ssd
  storageClassName: crs-hdd
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      # storage: 11Gi
      storage: 20Gi
ds2-only-mysql-on-openshift3.yml
apiVersion: v1
kind: Template
metadata:
  name: ds2-only-mysql-on-openshift3
  annotations:
    description: "DS2 MySQL on OpenShift 3"
    tags: "instant-app"
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${APP_NAME}-database
  spec:
    replicas: 1
    selector:
      deploymentconfig: ${APP_NAME}-database
    template:
      metadata:
        labels:
          deploymentconfig: ${APP_NAME}-database
      spec:
  value: toomanysecrets
- description: The name assigned to all of the frontend objects defined in this template.
  displayName: Name
  name: APP_NAME
  required: true
  value: ds2-mysql
- description: The exposed hostname that will route to the DS2 app.
  displayName: Application Hostname
  name: APPLICATION_DOMAIN
- description: The size of the volume for MySQL data files.
  displayName: MySQL Volume Capacity
  name: MYSQL_VOLUME_CAPACITY
  required: true
  value: 10Gi
- description: Maximum amount of memory the MySQL container can use.
  displayName: MySQL Memory Limit
  name: MYSQL_MEMORY_LIMIT
  required: true
  value: 1Gi
- description: Maximum amount of memory the DS2 container can use.
  displayName: DS2 App Memory Limit
  name: DS3_MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The StorageClass to use to request the volume
  displayName: StorageClass for MySQL backend
  name: STORAGE_CLASS
  required: true
  value: glusterfs-storage
- description: Existing PersistentVolumeClaim to mount to MySQL on /tmp/sampledata
  displayName: Sample Data PVC
  name: SAMPLE_DATA_PVC
  required: true
  value: sample-data
- description: The external IP the MySQL service should use (always port 3306).
  displayName: External IP
  name: EXTERNAL_IP
  required: true
ds3-web-mysql-on-openshift3.yml
apiVersion: v1
kind: Template
metadata:
  name: ds3-web-mysql-on-openshift3
  annotations:
    description: "DS3 Web interface and MySQL on OpenShift 3"
    tags: "instant-app"
objects:
- apiVersion: v1
  kind: ImageStream
  metadata:
    name: ${APP_NAME}
- apiVersion: v1
  kind: BuildConfig
  metadata:
    name: ${APP_NAME}
  spec:
    triggers:
    - type: ImageChange
    - type: ConfigChange
    source:
      type: Git
      git:
        uri: https://github.com/dmesser/ds3-on-openshift.git
        ref: no-data-files
      contextDir: "ds3/mysqlds3/web/php5"
  spec:
    ports:
    - name: mysql
      port: 3306
      protocol: TCP
      targetPort: 3306
    selector:
      deploymentconfig: ${APP_NAME}-database
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${APP_NAME}-database-storage
  spec:
    storageClassName: ${STORAGE_CLASS}
    accessModes:
    - "ReadWriteOnce"
    resources:
      requests:
        storage: ${MYSQL_VOLUME_CAPACITY}
parameters:
- name: MYSQL_PASSWORD
  value: toomanysecrets
- description: The name assigned to all of the frontend objects defined in this template.
  displayName: Name
  name: APP_NAME
  required: true
  value: ds3-mysql-web
- description: The exposed hostname that will route to the DS3 app.
  displayName: Application Hostname
  name: APPLICATION_DOMAIN
- description: The size of the volume for MySQL data files.
  displayName: MySQL Volume Capacity
  name: MYSQL_VOLUME_CAPACITY
  required: true
  value: 10Gi
- description: Maximum amount of memory the MySQL container can use.
  displayName: MySQL Memory Limit
  name: MYSQL_MEMORY_LIMIT
  required: true
  value: 1Gi
- description: Maximum amount of memory the DS3 container can use.
  displayName: DS3 App Memory Limit
  name: DS3_MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The StorageClass to use to request the volume
  displayName: StorageClass for MySQL backend
  name: STORAGE_CLASS
  required: true
  value: glusterfs-storage
- description: Existing PersistentVolumeClaim to mount to MySQL on /tmp/data
  displayName: Sample Data PVC
  name: SAMPLE_DATA_PVC
  required: true
  value: sample-data
Appendix D: Detailed test results
In the charts that follow, we show the key CPU and IO performance data taken from representative nodes during our test runs. For the CNS nodes, we found CPU usage and IO to the disks used by CNS to be the most helpful metrics. For the OpenShift app nodes, we concentrated on CPU usage. In all cases, we exclude a 2-minute warmup period at the beginning of each run and present the performance data at 10-second intervals over the remainder of the run.
Note: The CPU usage charts use units based on one CPU, so usage can exceed 100 percent on multi-CPU nodes. For example, the nominal maximum CPU usage for the 12-vCPU OpenShift app nodes is 1,200 percent. As a guide, we have included marks at the right of the graphs to indicate the maximum.
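This report does not name the collection tooling; as one hedged example of gathering comparable data, sar can sample CPU usage at the same 10-second interval:

# 10-second CPU samples for 15 minutes (90 samples) on a node.
sar -u 10 90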
Phase 1: IO-intensive database-only workload
Each of the graphs below uses data from the 96-app-instance test.
[Chart: vCPU usage on one CNS node; 4 vCPUs and 16 vCPUs HDD and SSD tests, with marks at the 4-vCPU and 16-vCPU maximums.]
[Chart: Read IOPS on one CNS node; 4 vCPUs and 16 vCPUs HDD and SSD tests.]
Phase 2: CPU-intensive, full-web app workload
Each of the graphs below uses data from the 128-app-instance test.
[Chart: vCPU usage on one OpenShift app node; 4 vCPUs and 16 vCPUs HDD and SSD tests, with a mark at the 12-vCPU maximum.]
[Chart: vCPU usage on one CNS node; 16 vCPUs HDD and SSD tests, with a mark at the 16-vCPU maximum.]
[Chart: Write IOPS on one CNS node; 4 vCPUs and 16 vCPUs HDD and SSD tests.]
[Chart: Write IOPS on one CNS node; 16 vCPUs HDD and SSD tests.]
[Chart: vCPU usage on one OpenShift app node; 16 vCPUs HDD and SSD tests, with a mark at the 12-vCPU maximum.]
[Chart: Read IOPS on one CNS node; 16 vCPUs HDD and SSD tests.]
OPM and response time results

DS2 scale out: Total orders per minute
vCPUs | Drive type | 1 app | 2 apps | 8 apps | 32 apps | 96 apps | 128 apps
4 | HDD | 11,347 | 19,707 | 44,226 | 99,262 | 79,150 | 73,541
4 | SSD | 12,612 | 23,738 | 73,939 | 223,441 | 297,010 | 238,396
16 | HDD | 12,473 | 24,566 | 57,860 | 89,489 | 71,738 | timed out
16 | SSD | 12,537 | 24,629 | 75,732 | 258,926 | 495,000 | 536,220

DS2 scale out: Average response time (ms)
vCPUs | Drive type | 1 app | 2 apps | 8 apps | 32 apps | 96 apps | 128 apps
4 | HDD | 24 | 30 | 68 | 141 | 571 | 823
4 | SSD | 19 | 22 | 31 | 49 | 135 | 174
16 | HDD | 20 | 20 | 46 | 103 | 629 | timed out
16 | SSD | 19 | 20 | 30 | 39 | 71 | 91

DS3 scale out: Total orders per minute
vCPUs | Drive type | 1 app | 2 apps | 8 apps | 32 apps | 96 apps | 128 apps
16 | HDD | 722 | 1,430 | 5,307 | 19,035 | 35,786 | 38,494
16 | SSD | 698 | 1,420 | 5,295 | 19,995 | 37,424 | 39,186

DS3 scale out: Average response time (ms)
vCPUs | Drive type | 1 app | 2 apps | 8 apps | 32 apps | 96 apps | 128 apps
16 | HDD | 323 | 328 | 361 | 443 | 898 | 1,180
16 | SSD | 343 | 330 | 374 | 343 | 873 | 1,208
Appendix E: Solution pricing
For both solutions, we include prices for the nine HPE ProLiant DL380 Gen9 servers, HGST Ultrastar drives for CNS storage, and the VMware and Red Hat software we ran on the solution. The two solutions included the same software and compute servers; they differed in the configuration and costs of the storage servers. VMware software prices include license cost plus three-year support, and Red Hat software prices include three-year subscriptions. Hardware prices do not include rack or networking costs, taxes, or shipping. We gathered this cost data in late December 2017 and early January 2018.

Item | HGST Ultrastar He10 hard drive configuration | HGST Ultrastar SS200 solid-state drive configuration
Hardware
Six HPE ProLiant DL380 Gen9 small form factor servers (base configuration with one SanDisk solid-state drive for operating system) | $142,410 | $142,410
Three HPE ProLiant DL380 Gen9 large form factor servers (base configuration with one SanDisk solid-state drive for operating system) | $69,522 | $69,522
15 Ultrastar SS200 1.92TB solid-state drives (five for each of three HPE ProLiant DL380 Gen9 small form factor servers) | n/a | $23,040
36 Ultrastar He10 10TB hard drives (12 for each of three HPE ProLiant DL380 Gen9 large form factor servers) | $15,123 | n/a
Hardware total | $227,055 | $234,972
Software
Three-year VMware software licensing and support for nine servers | $120,598 | $120,598
Three-year Red Hat software licensing and support for nine servers | $290,700 | $290,700
Software total | $411,298 | $411,298
Total solution cost | $638,353 | $646,270
Software costs
The two solutions have the same software costs. We include costs for the following:
• 18 licenses with three years of production support for VMware vSphere Enterprise Plus (one license per processor for the nine two-processor servers in the solution)
• One license and three years of standard support for one instance of VMware vCenter Server to support the solution
• Six three-year subscriptions for Red Hat OpenShift Container Platform, Premium (one subscription for each of the six two-processor servers running the software)
• One three-year subscription covering three nodes for Red Hat Container Storage Add-On for OpenShift Container Platform, Premium
The solution requires Red Hat Enterprise Linux, which is included with the OpenShift Container Platform subscription.
Prices for VMware software are from https://www.vmware.com/products/vsphere.html. Red Hat provided us with the subscription prices for the OpenShift Container Platform software.
Software pricing | One-time license | Annual support or subscription | Three-year total | License count | Three-year solution total
VMware
VMware vSphere Enterprise Plus with Production Support | $3,495 | $874 | $6,117 | 18 | $110,106
VMware vCenter Server Standard | $5,995 | $1,499 | $10,492 | 1 | $10,492
Red Hat
Red Hat OpenShift Container Platform, Premium | n/a | $15,000 | $42,750 | 6 | $256,500
Red Hat Container Storage Add-On for OpenShift Container Platform, Premium (3 nodes) | n/a | $12,000 | $34,200 | 1 | $34,200
Total | | | | | $411,298
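As a check on the VMware line items: $3,495 + (3 × $874) = $6,117 per vSphere license, and 18 licenses total $110,106; $5,995 + (3 × $1,499) = $10,492 for vCenter Server. Adding the Red Hat subscriptions ($256,500 + $34,200) brings the software total to $411,298.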
Hardware costs
A third-party IT service provider provided the list prices for the HPE ProLiant DL380 Gen9 servers. These configurations matched the servers in our solution test bed as closely as possible but did not include the drives with which we tested. Red Hat provided the list prices for the five HGST Ultrastar SS200 1.92TB drives we added to each of the three storage servers in the solid-state drive configuration and for the 12 HGST Ultrastar He10 10TB drives we added to each of the three storage servers in the hard drive configuration.
We searched online for the price of the SanDisk Lightning Ascend Gen. II SAS solid-state drive that served as the OS drive for all servers. The solid-state drive configuration we priced differed slightly from our tested configuration: we tested with a small form factor chassis that supported 24 solid-state drives, but because we used at most six of those slots, we instead priced a chassis that supports eight solid-state drives.
Six HPE ProLiant DL380 Gen9 small form factor servers
The two solutions used the same six SFF servers, each configured with a single SanDisk Lightning Ascend Gen. II SAS solid-state drive to serve as the OS drive. We estimate the cost of those six servers, not including the OS drive, at $22,385 each based on list price quotes.
Qty Product description
1 HPE ProLiant DL380 Gen9 8SFF configure-to-order server
2 HPE DL380 Gen9 Intel Xeon E5-2697Av4 (2.6GHz/16-core/40MB/145W) processors
8 HPE 32GB (1x32GB) Dual Rank x4 DDR4-2400 CAS-17-17-17 Registered Memory Kit
1 HPE H240ar 12Gb 2-ports Int FIO Smart Host Bus Adapter
1 HPE InfiniBand FDR/Ethernet 10Gb/40Gb 2-port 544+FLR-QSFP Adapter
2 HPE 800W Flex Slot Platinum Hot Plug Power Supply Kit
1 HPE iLO Advanced including 3yr 24x7 Tech Support and Updates 1-server LTU
1 HPE 2U Small Form Factor Easy Install Rail Kit
1 HPE 3Y Foundation Care NBD SVC
1 HPE ProLiant DL380 Gen9 Support
We added $1,350 for the OS drive.[1] The total list price for the compute servers was $23,735 each, or $142,410 for all six.
[1] https://www.disctech.com/SanDisk-SDLTODKM-800G-5CA1-800GB-SAS-SSD
Three HPE ProLiant DL380 Gen9 large form factor servers
We tested with three HPE ProLiant DL380 Gen9 LFF servers that supported a SanDisk Lightning Ascend Gen. II SAS solid-state drive to serve as the OS drive. A third-party IT service provider quoted $22,385 per server, not including the drives.
Qty Product description
1 HPE ProLiant DL380 Gen9 12LFF configure-to-order server
2 HPE DL380 Gen9 Intel Xeon E5-2697v4 (2.3GHz/18-core/45MB/145W) processors
8 HPE 32GB (1x32GB) Dual Rank x4 DDR4-2400 CAS-17-17-17 Registered Memory Kit
1 HPE H240ar 12Gb 2-ports Int FIO Smart Host Bus Adapter
1 HPE InfiniBand FDR/Ethernet 10Gb/40Gb 2-port 544+FLR-QSFP Adapter
2 HPE 800W Flex Slot Platinum Hot Plug Power Supply Kit
1 HPE iLO Advanced including 3yr 24x7 Tech Support and Updates 1-server LTU
1 HPE DL380 Gen9 12LFF SAS Cable Kit
1 HPE 2U Large Form Factor Easy Install Rail Kit
1 HPE 3Y Foundation Care NBD SVC
1 HPE ProLiant DL380 Gen9 Support

HGST Ultrastar SS200 solid-state drive configuration
We added five Ultrastar SS200 1.92TB drives to each of the three storage servers for this configuration. Western Digital provided the $1,536-per-drive list price for the Ultrastar SS200 1.92TB solid-state drives. The drives for all three servers added $23,040 to the price of the configuration.

HGST Ultrastar He10 hard drive configuration
Western Digital provided the $366-per-unit list price for the Ultrastar He10 10TB hard drives with which we tested. To support the drives, we also added an HPE Smart Array P440ar/2GB FBWC 12Gb 2-ports Int FIO SAS Controller to each of the three storage servers, priced at $649. These components for all three servers added $15,123 to the price of the configuration.

Principled Technologies is a registered trademark of Principled Technologies, Inc. All other product names are the trademarks of their respective owners.

DISCLAIMER OF WARRANTIES; LIMITATION OF LIABILITY:
Principled Technologies, Inc. has made reasonable efforts to ensure the accuracy and validity of its testing; however, Principled Technologies, Inc. specifically disclaims any warranty, expressed or implied, relating to the test results and analysis, their accuracy, completeness or quality, including any implied warranty of fitness for any particular purpose. All persons or entities relying on the results of any testing do so at their own risk, and agree that Principled Technologies, Inc., its employees and its subcontractors shall have no liability whatsoever from any claim of loss or damage on account of any alleged error or defect in any testing procedure or result.
In no event shall Principled Technologies, Inc. be liable for indirect, special, incidental, or consequential damages in connection with its testing, even if advised of the possibility of such damages. In no event shall Principled Technologies, Inc.'s liability, including for direct damages, exceed the amounts paid in connection with Principled Technologies, Inc.'s testing. Customer's sole and exclusive remedies are as set forth herein.

This project was commissioned by Red Hat.