The Dell EMC PowerMax 8000 outperformed a competitor's storage array in several key areas:
- It delivered 25% more IOPS on an OLTP workload simulation.
- It stored data more efficiently, requiring 44% less storage capacity for migrated compressible data thanks to a higher compression ratio of 2.3:1.
- Admin tasks like provisioning new storage required 45% fewer steps and 51% less time with the PowerMax management interface than with the competitor's.
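The relationship between compression ratio and capacity savings in the list above can be sanity-checked with simple arithmetic. In the sketch below, the competitor's ratio is an assumed illustrative value (the summary does not state it), not a figure from the study:

```python
def capacity_needed(logical_tb: float, ratio: float) -> float:
    """Physical capacity required to store a logical data set at a given compression ratio."""
    return logical_tb / ratio

logical_tb = 100.0                                # illustrative data set size
powermax_tb = capacity_needed(logical_tb, 2.3)    # 2.3:1 ratio reported for the PowerMax 8000
competitor_tb = capacity_needed(logical_tb, 1.3)  # assumed competitor ratio (hypothetical)

savings = 1 - powermax_tb / competitor_tb
print(f"{savings:.0%} less capacity")             # ~43% with these assumed inputs
```

With a hypothetical competitor ratio of 1.3:1, the math lands close to the 44% figure reported, which is how a higher compression ratio translates into capacity savings.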
Store data more efficiently and increase I/O performance with lower latency w... (Principled Technologies)
Compared to an array from another vendor, the PowerMax 8000 offered a better inline data reduction ratio and better performance during simulated OLTP and data extraction workloads
OLTP with Dell EqualLogic hybrid arrays: A comparative study with an industry... (Principled Technologies)
The effectiveness of your OLTP database environment can depend to an enormous degree on the storage system you select. We compared a database server solution using the Dell EqualLogic PS6210XS with one using competing industry-leading SAN storage.
The EqualLogic PS6210XS solution was overwhelmingly superior in all areas we tested. It delivered twice the performance with half the response time, and used a fraction of the power.
These factors make it clear that any business that relies on its database servers and wants to get the greatest return on its storage investment must consider the Dell EqualLogic PS6210XS.
Handle transaction workloads and data mart loads with better performance (Principled Technologies)
Database work is a big deal—in terms of its importance to your company, and the sheer magnitude of the work. Our tests with the Dell EMC PowerEdge R930 server and Unity 400F All-Flash storage array demonstrated that it could perform comparably to an HPE ProLiant DL380 Gen9 server and 3PAR array during OLTP workloads, with a better compression ratio (3.2-to-1 vs. 1.3-to-1). For loading large sets of data, the Dell EMC Unity finished 22 percent faster than the HPE 3PAR, which can result in less hassle for the administrator in charge of data marts. When running both OLTP and data mart workloads in tandem, the Unity array outperformed the HPE 3PAR in terms of orders processed per minute by 29 percent. For additional product information concerning the Unity 400F storage array, visit DellEMC.com/Unity.
White Paper: Hadoop on EMC Isilon Scale-out NAS (EMC)
This White Paper details how EMC Isilon can be used to support an enterprise Hadoop data analytics workflow. Core architectural components are covered as well as how an enterprise can gain reliable business insight quickly and efficiently while maintaining simplicity to meet the storage requirements of an evolving Big Data analytics workflow.
Slow performance and unavailable critical applications can impede a company’s progress. You can apply patches and updates to improve application quality and user experience, but these changes need to be tested in resource-intensive environments before deployment. Keeping these applications connected to their data is vital, too, as on-premises events can put availability at risk.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. As we added VMs designed for test/dev environments, the production workload maintained an acceptable level of IOPS and achieved an average storage latency of less than a millisecond. The solution also kept data highly available, with no downtime and no performance drop when we initiated a lost host connection for the primary storage. To run your company’s critical database applications, consider the Dell EMC VMAX 250F for your datacenter.
Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas... (Principled Technologies)
Oracle single instance database VMs need plenty of storage capacity and performance to handle the increased workload demands users place on them. Whether your organization uses DBaaS or traditional Oracle 12c instances, consider the reliable performance and scaling flexibility that the EMC XtremIO storage array can offer. We found that IOPS levels stayed consistent as we scaled up to eight Oracle single instance VMs, with performance scaling by an average of 14,700 IOPS per VM (totaling 118,067). In addition, we found that the inline deduplication, compression, and thin provisioning capabilities on the XtremIO array resulted in an overall efficiency ratio of 51 to 1 and a data reduction ratio of 14.6 to 1. With this level of consistency, users can expect the array to meet high demand for IOPS in a DBaaS environment.
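The scaling claim above is easy to check for linearity: eight VMs at the quoted per-VM average should land close to the reported total. A quick sketch, using only figures from the summary:

```python
vms = 8
avg_iops_per_vm = 14_700   # average IOPS gain per added VM, from the study
total_reported = 118_067   # total IOPS at eight VMs, from the study

linear_estimate = vms * avg_iops_per_vm
deviation = abs(linear_estimate - total_reported) / total_reported
print(linear_estimate, f"{deviation:.1%}")   # 117600, within ~0.4% of the reported total
```

A deviation under half a percent is consistent with the "IOPS levels stayed consistent" claim, i.e. near-linear scaling across the eight VMs.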
IBM DB2 Analytics Accelerator high availability and disaster recovery
This document discusses high availability (HA) and disaster recovery (DR) strategies for IBM DB2 Analytics Accelerator. It describes built-in HA capabilities of the accelerator like redundant Netezza performance server hosts, S-Blades, networking, and disk arrays. It also discusses ways to integrate the accelerator into existing HA architectures, including workload balancing across multiple accelerators, maintaining consistent data across accelerators, and using incremental updates. The goal is to provide the desired recovery time objectives (RTOs) and recovery point objectives (RPOs) for analytical query workloads processed by the accelerator.
Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appli... (Principled Technologies)
Backing up data is a key component in data protection. However, long backup windows can cause headaches for IT and users while slowing down the network. We found that using source-side deduplication and Rapid CIFS technology to back up data to the Dell DR6000 Disk Backup Appliance was faster—with the average rate of data backup at 8.99 TB per hour. The backup to the DR6000 completed in two-thirds the time that the backup to the industry-leading deduplication appliance completed. Backing up to the DR6000 consumed less than one-sixth the bandwidth needed to back up to the industry-leading deduplication appliance. In addition, the DR6000 needed less rack space and cost a third less than the competition. The solution to lengthy backup windows is clear: Save time and network bandwidth with source-side deduplication built into the Dell DR6000 Disk Backup Appliance.
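To see what an 8.99 TB/hour ingest rate means for a backup window, here is a minimal sketch; the 10 TB nightly backup size is an assumed illustrative value, and the competitor's time is derived from the "two-thirds the time" figure above:

```python
backup_tb = 10.0               # assumed nightly backup size (illustrative)
dr6000_rate_tb_per_hr = 8.99   # average ingest rate from the study

dr6000_hours = backup_tb / dr6000_rate_tb_per_hr
competitor_hours = dr6000_hours / (2 / 3)   # DR6000 finished in two-thirds the competitor's time
print(f"DR6000: {dr6000_hours:.2f} h, competitor: {competitor_hours:.2f} h")
# DR6000: 1.11 h, competitor: 1.67 h
```

Scaling the assumed backup size up or down scales both windows proportionally, so the roughly half-hour difference per 10 TB grows with larger data sets.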
1. The document discusses distributed databases, which involve spreading a single logical database across multiple physical locations connected by a network.
2. Key aspects covered include definitions of distributed and decentralized databases, reasons for using distributed databases, options for distributing data including homogeneous/heterogeneous systems and data replication/partitioning approaches.
3. The document also outlines objectives of distributed databases like location and failure transparency and challenges related to maintaining data integrity and performance across distributed systems.
Watch your transactional database performance climb with Intel Optane DC pers... (Principled Technologies)
The document discusses testing of Intel Optane DC persistent memory in a Dell EMC PowerEdge R740xd server. It found that using Intel Optane DC persistent memory delivered over 2 times the transactions per minute of a configuration with two NVMe drives and over 11 times the transactions per minute of a configuration with 12 SATA SSDs. Intel Optane DC persistent memory can significantly improve transactional database performance over traditional storage solutions like SSDs.
Offer faster access to critical data and achieve greater inline data reductio... (Principled Technologies)
Compared to a solution from another vendor (“Vendor B”), the PowerStore 7000T delivered a better inline data reduction ratio and better performance during simulated OLTP and other I/O workloads
Watch your transactional database performance climb with Intel Optane DC pers... (Principled Technologies)
Dell EMC PowerEdge R740xd servers with Intel Optane DC persistent memory handled more transactions per minute than configurations with NAND flash NVMe drives or SATA SSDs
Moving your legacy database workloads to the Dell PowerEdge R930 can help you realize the benefits of consolidation, which can include savings in management costs, power usage, and cable management costs. More importantly, the licensing costs of the database application itself may be reduced by the consolidation effort. In addition to these benefits, greater database transactions per minute can keep your orders flowing smoothly.
We found that the Dell PowerEdge R930, powered by the Intel Xeon processor E7 v3 series, could consolidate three legacy servers running four Oracle Database 12c VMs each. The Dell PowerEdge R930 outperformed the legacy servers, delivering 4.4 times the overall database performance and an average of 47.1 percent more performance per VM. By consolidating that many legacy servers, you can save up to 67 percent in rack space, 25 percent in database licenses, and even reduce other operating costs to improve your bottom line.
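The per-VM figure follows from the consolidation ratio: 4.4 times the aggregate throughput of three consolidated servers, with the VM count unchanged, works out to roughly 47 percent more per VM. A quick check:

```python
legacy_servers_consolidated = 3
overall_speedup = 4.4   # R930 vs. the aggregate of the three legacy servers

# Same number of VMs, one server: the per-VM gain is the overall gain split three ways.
per_vm_gain = overall_speedup / legacy_servers_consolidated - 1
print(f"{per_vm_gain:.1%}")   # 46.7%, in line with the ~47.1% reported
```

The small gap between 46.7% and the reported 47.1% is consistent with rounding in the published 4.4x aggregate figure.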
The document discusses optimizing Oracle and Siebel applications on the Sun UltraSPARC T1 platform. It describes how Siebel's multi-threaded architecture is well-suited to the T1 processor's ability to run multiple threads in parallel. It provides examples of consolidating Siebel environments and optimizing performance through Solaris, Siebel, and Oracle database tuning. Metrics show Siebel performing well with low CPU utilization on T1 systems.
The document discusses several applications of parallel processing:
1. Airfoil design optimization utilizes parallel processors to simultaneously evaluate the aerodynamic performance of multiple airfoil geometries, greatly decreasing computational time.
2. Parallel databases can distribute queries across multiple processors, improving response time for online transactions and reducing time for complex queries through intra-query parallelism.
3. Weather forecasting uses parallel computers to quickly process huge volumes of environmental data through thousands of computational iterations needed to generate useful forecasts.
White Paper: Backup and Recovery of the EMC Greenplum Data Computing Applian... (EMC)
This White Paper explores backing up EMC Greenplum Data Computing Appliance data to Data Domain systems and how to effectively exploit Data Domain's leading-edge technology.
Update your private cloud with 14th generation Dell EMC PowerEdge FC640 serve... (Principled Technologies)
The document discusses testing of Dell EMC PowerEdge FC640 servers running Apache Cassandra databases for private clouds. It finds that a solution using the 14th generation Dell EMC PowerEdge FC640 servers handled over 4.7 times the workload of a previous generation solution using Dell PowerEdge R710 servers, while using half the rack space. This allows supporting more user requests while potentially saving on datacenter expansion costs.
Upgrading to Windows Server 2019 on Dell EMC PowerEdge servers: A simple proc... (Principled Technologies)
Using Dell EMC PowerEdge R740xd servers with Intel Xeon Scalable processors, we upgraded from Windows Server 2016 and saw data compression ratios of up to 9.8:1 thanks to new Storage Spaces Direct features
IBM eX5 Workload Optimized x86 Servers (Cliff Kinard)
Learn how these IBM eX5 servers are purpose-built for workloads. This presentation shows how IBM's pre-configured solutions can reduce deployment time from months to weeks while saving clients over $100,000 in installation and setup costs.
The document discusses how the Cast Iron integration platform can help maximize the value of integrating BigMachines applications. It summarizes Cast Iron's capabilities, including its speed and ease of integration across cloud and on-premise systems. Examples are given showing how Cast Iron delivered real-time integrations for a customer between Salesforce and Oracle in just days without any custom coding.
Dell PowerEdge M820 blades: Balancing performance, density, and high availabi... (Principled Technologies)
Finding a server that can deliver the right balance of high workload performance, density, and RAS features can help you meet both infrastructure and business goals at the same time.
In our tests, the single-width Dell PowerEdge M820 blade delivered 19.3 percent better Oracle Database 12c performance than the HP ProLiant BL680c G7 in half the space, meaning it could deliver 2.38 times more transactions per U. The value of the denser Dell PowerEdge M820 was clear in our cost analysis of the two systems. Because the Dell PowerEdge M820 takes up less space, you need fewer enclosures, less rack space, and can save on port costs. In our sample comparison of two performance-equivalent solutions, we found that the Dell PowerEdge M820 solution could save up to 42.1 percent compared to an HP ProLiant BL680c G7 solution. That’s money that you can use to buy even more servers for greater performance or to innovate elsewhere. We also found that the Dell PowerEdge M820 took high availability into account by utilizing key RAS features to help increase your workload uptime.
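The 2.38x transactions-per-U figure is the product of the performance and density advantages; a quick check of the arithmetic, using only figures from the summary:

```python
perf_ratio = 1.193   # M820 delivered 19.3% more transactions than the BL680c G7
space_ratio = 0.5    # in half the rack space

transactions_per_u_ratio = perf_ratio / space_ratio
print(round(transactions_per_u_ratio, 3))   # 2.386, which the study reports as 2.38x
```

In other words, halving the space alone would double transactions per U; the 19.3% performance advantage lifts the ratio the rest of the way.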
If you’re looking for a dense blade solution to lower costs with the power to handle your important workloads and keep them running, our study shows that the Dell PowerEdge M820 blade addresses all those concerns.
Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst... (Principled Technologies)
A workstation that runs coolly and uses less power is a great asset to workers and the companies they work for. In our tests, both when idle and when under load, the Lenovo ThinkStation P500 generally ran at lower surface temperatures and used less power than the HP Z440 Workstation. These findings show that the Lenovo ThinkStation P500 could meet the needs of those who want to provide a reliable, comfortable work environment while using less power.
Achieve up to 80% better throughput and increase storage efficiency with the ... (Principled Technologies)
Compared to a competing array, Dell’s high-end PowerMax 8500 storage array offered better I/O performance for simulated OLTP and data workloads, saved more space through more efficient data reduction, and performed snapshots with no performance impact
The document summarizes testing of a Dell EMC VMAX 250F all-flash storage array and Dell EMC PowerEdge servers to support both production and test/development Oracle Database 12c workloads. Key findings include:
1) The solution maintained low latency of less than 1 millisecond even when adding 7 test/dev database snapshots to the production workload.
2) The production workload saw less than 2% degradation in IOPS despite increasing overall storage IOPS by adding test/dev workloads.
3) Using SRDF/Metro replication, the solution provided uninterrupted access to data with no downtime or performance drop when one array was unavailable, ensuring high availability.
A company’s success depends on critical application performance and availability. Upgrades and patches can improve application efficiency and user experience, but making the necessary changes requires resource-intensive environments to test updates before deploying them. What’s more, these applications need to continue accessing data even in the event of an on-premises crisis.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. Storage latency for the VMAX 250F peaked at a millisecond in our testing while IOPS stayed within an acceptable range. The solution also kept data highly available with no downtime or performance drop when we initiated a lost host connection for the primary storage. Consider the Dell EMC VMAX 250F array for your datacenter to support the critical database applications that drive your company.
Get higher performance for your MySQL databases with Dell APEX Private Cloud (Principled Technologies)
Featuring 3rd Gen Intel Xeon Scalable processors, the Dell APEX Private Cloud solution processed more new orders per minute in a transactional database workload than a comparable AWS solution
Using the TPROC-C workload from the HammerDB benchmark suite, we compared the MySQL database performance of a four-server Dell APEX Private Cloud solution supporting 16 client VMs to that of 16 similarly configured VMs on AWS. The Dell APEX Private Cloud solution processed 25.4 percent more NOPM than the AWS solution. These results indicate that, to maximize the performance of MySQL databases and the responsiveness of critical applications, the Dell APEX Private Cloud solution is the better choice compared to a similarly configured solution running on AWS.
By upgrading from the legacy solution we tested to the new Intel processor-based Dell and VMware solution, you could do 18 times the work in the same amount of space. Imagine what that performance could mean to your business: Consolidate workloads from across your company, lower your power and cooling bills, and limit datacenter expansion in the future, all while maintaining a consistent user experience—the list of potential benefits is huge.
Try running DPACK, which can help you identify bottlenecks in your environment and inform you about your current performance needs. Then consider how the consolidation ratio we proved could be helpful for your company. The Intel processor-powered Dell PowerEdge R730 solution with VMware vSphere and Dell Storage SC4020, also powered by Intel, could be the right destination for your upgrade journey.
The document announces the new IBM zEnterprise EC12 hybrid computing system. Key points:
- It provides up to 50% more total system capacity and 101 configurable cores compared to prior models.
- Performance is improved through a new hexa-core 5.5GHz chip design and features like transactional execution.
- New capabilities include flash storage support, encryption, and pattern recognition analytics for system health.
- It supports various workloads and platforms through the zBX blade infrastructure and management tools.
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1... - Principled Technologies
Upgrading the hardware running your SQL Server to a space-efficient, modular, modern Dell EMC environment can help your company achieve a great deal of database work in a small amount of space. With Dell Express Flash technology, adding a caching solution such as Samsung AutoCache can make the environment even more efficient.
In the PT labs, we ran a mixed database workload on six Dell EMC PowerEdge FC630 servers, powered by Intel Xeon E5-2667 processors, in three PowerEdge FX2 enclosures. The solution included the QLogic QLE2692 16Gb FC adapter with StorFusion Technology, Dell EMC Storage SC9000 all-flash storage, and Dell EMC PowerEdge Express Flash NVMe Performance PCIe SSDs.
With no caching solution, the 36 SQL Server 2016 VMs on the six servers achieved a total of 431,839 orders per minute while an Oracle workload ran on 12 VMs. When we added a caching solution to accelerate the SQL database volumes, the performance across the 36 SQL Server 2016 VMs roughly doubled, to 871,580. These numbers show the power of server-side caching to alleviate pressure on the storage array, allowing you to get even more out of the Dell EMC modern environment.
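As a quick arithmetic check on the stated doubling, using only the two orders-per-minute figures above, the with-cache result is just over 2x the baseline:

```python
# Orders per minute reported in the test, without and with server-side caching.
baseline_opm = 431_839
cached_opm = 871_580

# Speedup from adding the cache: slightly more than 2x.
speedup = cached_opm / baseline_opm
print(f"{speedup:.2f}x")
```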
The document compares the performance of migrating virtual machines between chassis using VMware vMotion on three modular server solutions: Dell EMC PowerEdge MX, HPE Synergy, and Cisco UCS. Testing showed the Dell EMC solution moved VMs between chassis up to 42.3% faster than the other solutions, with throughput up to 1.7x higher and latency up to 73.0% lower. This allows maintenance to be completed more quickly and with less impact on workloads.
Get stronger SQL Server performance for less with Dell EMC PowerEdge R6515 cl... - Principled Technologies
When it comes to hardware, getting greater performance often requires spending more. In our virtualized SQL Server 2019 testing of two current-generation servers in Hyper-V clusters, however, the less expensive option delivered stronger performance on our OLTP workload.
Dell APEX Private Cloud can deliver better OLTP performance in a Kubernetes e... - Principled Technologies
Relative to a comparable AWS EC2 instance
In an increasingly data-driven world, the ability to quickly process online transactions and to handle more orders could give your business a competitive edge. As the scale of your databases grows, it is important to evaluate which available cloud solution is right for you. In our tests, the Dell APEX Private Cloud solution processed 24.9 percent more OLTP transactions per minute than the AWS solution we compared it to. The Dell APEX Private Cloud instance also processed 24.7 percent more NOPM than the AWS solution. Based on these findings, organizations that process OLTP transactions in a Kubernetes cloud environment might consider choosing Dell APEX Private Cloud with VMware Tanzu.
Reap better SQL Server OLTP performance with next-generation Dell EMC PowerEd... - Principled Technologies
The document summarizes testing that compared the performance of new Dell EMC PowerEdge MX750c servers to current Dell EMC PowerEdge MX740c servers. The testing found that:
- A cluster of the new servers achieved 36.1% more SQL Server transaction processing work (orders per minute) than a cluster of the current servers, while also lowering application response time by 8%.
- With the same number of virtual machines, the new server cluster achieved 18.7% more transactions than the current cluster and an 18.4% lower response time.
- By adding one more virtual machine per new server, the new cluster achieved a 36.1% higher transaction rate than the current cluster.
This white paper provides an overview of EMC's data protection solutions for the data lake - an active repository to manage varied and complex Big Data workloads
The Dell EMC PowerEdge MX solution required 89.4 percent less admin time to deploy multiple server nodes and 15 fewer steps to update firmware on multiple systems
As our tests show, investing in the powerful new Dell PowerEdge R920 running Oracle VM Server 3.2.8 with Oracle Database 12c VMs achieves cost savings without compromising performance. In our testing, a single Dell PowerEdge R920 could perform five times the work of a single HP ProLiant DL385 G6 server; the costs to power and cool the Dell PowerEdge would be 43 percent less than the five servers it could replace. The three-year software licensing costs of the Dell PowerEdge R920 server would be 22 percent lower than the licensing costs for the five-server solution. These dramatic savings—which come out to $212,091 for our single test environment — could grow to millions of dollars in a larger consolidation effort.
Demartek: Lenovo Storage S3200 in a mixed workload environment (2016-01) - Lenovo Data Center
This document evaluates the Lenovo S3200 storage array's ability to support multiple workloads simultaneously. Testing showed that while an all-HDD configuration met performance requirements, one application suffered high latency. Enabling SSD caching or tiering significantly improved performance for that application specifically, reducing latency by 70% and increasing bandwidth by up to 7x, without impacting other applications. The Lenovo S3200 is suitable for consolidating diverse workloads due to its flexibility to configure HDDs with SSDs for optimized performance tailored to each use case.
Ensure greater uptime and boost VMware vSAN cluster performance with the Del... - Principled Technologies
The Dell EMC PowerEdge MX with VMware vSAN Ready Nodes delivered a 55.9% faster response time than a Cisco UCS solution and a 41.3% faster response time than an HPE Synergy solution
Symantec NetBackup 7.6 benchmark comparison: Data protection in a large-scale... - Principled Technologies
The footprint of a VM can grow quickly in an enterprise environment, and large-scale VM deployments in the thousands are common. As the number of deployed systems grows, so does the risk of failure. Critical failures can become unavoidable, and data protection from a backup solution promotes business continuity. Elongated protection windows requiring multiple jobs of different types can create resource contention with production environments and consume valuable IT admin time, so completing system backups within a finite window is important.
In our hands-on SAN backup testing, the Symantec NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 66.8 percent less time than Competitor “E” did. In addition, the Symantec NetBackup Integrated Appliance with NetBackup 7.6 created backup images that offered granular recovery without additional steps. These time and effort savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
Help skilled workers succeed with Dell Latitude 7030 and 7230 Rugged Extreme ... - Principled Technologies
Instead of equipping consumer-grade tablets with rugged cases
Conclusion
In our hands-on testing, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets showed that they are better equipped to help skilled workers than consumer-grade Apple iPad Pro and Samsung Galaxy Tab S9 tablets in multiple ways. They provide more built-in capabilities and features than the consumer-grade tablets we tested. And, while they were more expensive than the rugged-case fortified consumer-grade options we tested, their rugged claims were more than skin deep.
In our performance and durability tests, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets performed better in demanding manufacturing, logistics, and field service environments than consumer-grade tablets with rugged cases. Both Rugged Extreme Tablets, with their greater thermal range, suffered less performance degradation in extreme temperatures, never failed and were merely scuffed after 26 hard drops, survived a 10-minute drenching with no ill effects, and were easier to view in direct sunlight than Apple iPad Pro and Samsung Galaxy Tab S9 tablets.
Bring ideas to life with the HP Z2 G9 Tower Workstation - Infographic - Principled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to a similarly configured Dell Precision 3660 Tower Workstation in its out-of-box performance mode
Investing in GenAI: Cost-benefit analysis of Dell on-premises deployments vs.... - Principled Technologies
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration for how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel... - Principled Technologies
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
Enable security features with no impact to OLTP performance with Dell PowerEd... - Principled Technologies
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security—without paying a large performance cost—consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
Improving energy efficiency in the data center: Endure higher temperatures wi... - Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and experienced component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe... - Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... - Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation (VCF)
For organizations running clusters of moderately configured, older Dell PowerEdge servers with a previous version of VCF, upgrading to better-configured modern servers can provide a significant performance boost and more.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back... - Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be—you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data-intensive applications and workloads while reducing data center power costs.
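The performance-per-watt advantage follows directly from the headline numbers: 2.1x the transactions at 0.8x the power works out to roughly 2.6x the work per watt. A simple arithmetic sketch (using only the claimed figures, not the report's exact methodology):

```python
# Headline figures from the comparison: 2.1x the performance at 20% less power.
relative_performance = 2.1
relative_power = 1.0 - 0.20  # 20 percent less power than the Intel-based cluster

# Performance per watt scales as performance divided by power draw.
perf_per_watt_ratio = relative_performance / relative_power
print(f"{perf_per_watt_ratio:.3f}x")  # about 2.6x the work per watt
```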
Improve performance and gain room to grow by easily migrating to a modern Ope... - Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Boost PC performance: How more available memory can improve productivity - Principled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg... - Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... - Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms. For reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operations adapter provided useful infrastructure insights.
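Expressed as a single figure, the vSAN latency numbers above correspond to roughly a 57 percent reduction in max write latency. A quick check using only the reported values:

```python
# Max write latency reported for each cluster, in milliseconds.
r750_latency_ms = 8.9  # PowerEdge R750 cluster on VCF 4.5
r760_latency_ms = 3.8  # PowerEdge R760 cluster on VCF 5.1

# Percentage reduction relative to the older cluster.
reduction_pct = (r750_latency_ms - r760_latency_ms) / r750_latency_ms * 100
print(f"{reduction_pct:.1f}% lower max write latency")
```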
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man... - Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than the Supermicro tools did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ... - Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a full NVMe backed configuration, but Vendor A doesn’t—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS - Principled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with capacity of up to 8 PB—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
AppSec PNW: Android and iOS Application Security with MobSF - Ajin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers identify security vulnerabilities, malicious behaviours, and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario-based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
"Choosing proper type of scaling", Olena Syrota - Fwdays
Imagine an IoT processing system that is already quite mature and production-ready, for which client coverage is growing and scaling and performance are life-and-death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, we will first analyze scaling approaches and then select the proper ones for our system.
Getting the Most Out of ScyllaDB Monitoring: ShareChat's Tips - ScyllaDB
ScyllaDB monitoring provides a lot of useful information. But sometimes it’s not easy to find the root of the problem if something is wrong, or even to estimate the remaining capacity from the load on the cluster. This talk shares our team's practical tips on:
- How to find the root of the problem by metrics if ScyllaDB is slow
- How to interpret the load and plan capacity for the future
- Compaction strategies and how to choose the right one
- Important metrics which aren’t available in the default monitoring setup
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that provide the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
"$10 thousand per minute of downtime: architecture, queues, streaming and fin... - Fwdays
Direct losses from one minute of downtime run $5-$10 thousand. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will pay special attention to the architectural patterns used in designing the fintech system, such as microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency across the entire system.
"NATO Hackathon Winner: AI-Powered Drug Search", Taras KlobaFwdays
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
inQuba Webinar Mastering Customer Journey Management with Dr Graham HillLizaNolte
HERE IS YOUR WEBINAR CONTENT! 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find the webinar recording both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
What is an RPA CoE? Session 2 – CoE RolesDianaGray10
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Keywords: AI, Containeres, Kubernetes, Cloud Native
Event Link: https://meine.doag.org/events/cloudland/2024/agenda/#agendaId.4211
GlobalLogic Java Community Webinar #18 “How to Improve Web Application Perfor...GlobalLogic Ukraine
Під час доповіді відповімо на питання, навіщо потрібно підвищувати продуктивність аплікації і які є найефективніші способи для цього. А також поговоримо про те, що таке кеш, які його види бувають та, основне — як знайти performance bottleneck?
Відео та деталі заходу: https://bit.ly/45tILxj
Introducing BoxLang : A new JVM language for productivity and modularity!Ortus Solutions, Corp
Just like life, our code must adapt to the ever changing world we live in. From one day coding for the web, to the next for our tablets or APIs or for running serverless applications. Multi-runtime development is the future of coding, the future is to be dynamic. Let us introduce you to BoxLang.
Dynamic. Modular. Productive.
BoxLang redefines development with its dynamic nature, empowering developers to craft expressive and functional code effortlessly. Its modular architecture prioritizes flexibility, allowing for seamless integration into existing ecosystems.
Interoperability at its Core
With 100% interoperability with Java, BoxLang seamlessly bridges the gap between traditional and modern development paradigms, unlocking new possibilities for innovation and collaboration.
Multi-Runtime
From the tiny 2m operating system binary to running on our pure Java web server, CommandBox, Jakarta EE, AWS Lambda, Microsoft Functions, Web Assembly, Android and more. BoxLang has been designed to enhance and adapt according to it's runnable runtime.
The Fusion of Modernity and Tradition
Experience the fusion of modern features inspired by CFML, Node, Ruby, Kotlin, Java, and Clojure, combined with the familiarity of Java bytecode compilation, making BoxLang a language of choice for forward-thinking developers.
Empowering Transition with Transpiler Support
Transitioning from CFML to BoxLang is seamless with our JIT transpiler, facilitating smooth migration and preserving existing code investments.
Unlocking Creativity with IDE Tools
Unleash your creativity with powerful IDE tools tailored for BoxLang, providing an intuitive development experience and streamlining your workflow. Join us as we embark on a journey to redefine JVM development. Welcome to the era of BoxLang.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Lee Barnes - Path to Becoming an Effective Test Automation Engineer.pdfleebarnesutopia
So… you want to become a Test Automation Engineer (or hire and develop one)? While there’s quite a bit of information available about important technical and tool skills to master, there’s not enough discussion around the path to becoming an effective Test Automation Engineer that knows how to add VALUE. In my experience this had led to a proliferation of engineers who are proficient with tools and building frameworks but have skill and knowledge gaps, especially in software testing, that reduce the value they deliver with test automation.
In this talk, Lee will share his lessons learned from over 30 years of working with, and mentoring, hundreds of Test Automation Engineers. Whether you’re looking to get started in test automation or just want to improve your trade, this talk will give you a solid foundation and roadmap for ensuring your test automation efforts continuously add value. This talk is equally valuable for both aspiring Test Automation Engineers and those managing them! All attendees will take away a set of key foundational knowledge and a high-level learning path for leveling up test automation skills and ensuring they add value to their organizations.
Lee Barnes - Path to Becoming an Effective Test Automation Engineer.pdf
1. The Dell EMC PowerMax 8000 outperformed another vendor's array on an OLTP-like workload
It also stored data more efficiently, leaving room for growth
When selecting enterprise-class storage, companies seek a platform that delivers strong application performance while using capacity efficiently. They also look for an underlying architecture that offers flexibility as their needs evolve, and a graphical user interface (GUI) that simplifies routine tasks.

At Principled Technologies, we conducted hands-on testing of two storage platforms for the enterprise data center: the Dell EMC™ PowerMax 8000 and a storage array from a competitor that we will refer to as Vendor A. Like the PowerMax 8000, the Vendor A array is an all-flash offering in the high-end storage market that uses Non-Volatile Memory Express (NVMe) storage. Our test tool simulated the input/output operations of an online transaction processing (OLTP) workload, and we then used the same tool to simulate a migration of a compressible data set to a new array. We also explored the architectural differences of the two platforms and used each one's GUI to perform an administrative task.

In all of the areas we investigated, the Dell EMC PowerMax 8000 platform had strong advantages over the platform from Vendor A.
Strong performance: 25% more IOPS on an OLTP-type workload*
Store data more efficiently: 44% less storage required after migrating compressible data*
Faster and simpler: 45% fewer steps and 51% less admin time to provision new storage*
*Compared to the Vendor A array
The Dell EMC PowerMax 8000 outperformed another vendor’s array on an OLTP-like workload February 2019
A Principled Technologies report: Hands-on testing. Real-world results.
2. How well does the All-Flash Dell EMC PowerMax 8000 meet enterprise storage requirements?

If you're in the market for enterprise storage, you're probably weighing a number of different factors. Your platform must deliver strong application performance, use storage capacity efficiently to keep ownership costs down and leave room for growth, and minimize headaches and hassle for your storage admins so they can focus on more important projects and initiatives. Each storage platform has its own unique architecture that affects how well it addresses these needs. The platforms vary widely, so it's essential to do your homework.

We compared the Dell EMC PowerMax 8000 and the Vendor A array on a number of fronts. First, we used the Vdbench storage benchmarking tool to conduct simulations that shed light on the performance and storage efficiency capabilities of the two platforms:
• We executed an online transaction processing (OLTP)-type workload and measured input/output operations per second (IOPS) and latency while maintaining a 2.3:1 compression ratio.
• We simulated the migration of a previously compressible data set from an old array to a new one to see how well each platform maintained the compression ratio.

Next, we looked into the design of the two platforms to assess how they could help or hinder planning and everyday administration. Finally, we tested each platform's GUI by performing the routine task of provisioning a new LUN.

Read on for the details of the advantages the Dell EMC PowerMax 8000 demonstrated over the Vendor A array in all of these areas.
About the Dell EMC PowerMax array
Dell EMC PowerMax storage arrays use NVMe for customer data. According to Dell EMC, "The NVMe-based PowerMax was specifically created to fully unlock the bandwidth, IOPS, and latency performance benefits that NVM media offers to host-based applications which are unattainable using the current generation all-flash storage arrays."1
To learn more about Dell EMC PowerMax, visit DellEMC.com/PowerMax.
3. The Dell EMC PowerMax array delivered greater performance on an OLTP-like workload

Keeping critical business applications running at a fast clip is often the most important job for a storage platform. To compare the performance of the Dell EMC PowerMax platform and the Vendor A platform, we used Vdbench to measure input/output on a workload similar to online transaction processing.

Companies rely on OLTP systems for business-critical activities in a variety of areas, including finance, ecommerce, and customer relationship management (CRM). They can't afford to deliver a sluggish experience to their users. To get a glimpse into how each storage platform would perform on such applications, we used the OLTP-like workload in Vdbench. (See Appendix E for the configuration files we used.)

As the chart below shows, the Dell EMC PowerMax 8000 array achieved 25 percent more IOPS than the Vendor A array. The Dell EMC PowerMax solution achieved this win while maintaining a 2.3:1 data compression ratio after a simulated migration, while the Vendor A array was unable to compress all the migrated data.
OLTP-like IOPS (higher is better)
Dell EMC PowerMax 8000: 275,301
Vendor A array: 219,571
25% more IOPS on an OLTP-type workload
What do IOPS have to do with it?
When a storage array can handle more IOPS, that indicates its ability to support periods of heavy activity. To measure the number of IOPS a solution is capable of sustaining, Vdbench generates an I/O workload.
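Vdbench drives I/O according to a plain-text parameter file of storage, workload, and run definitions. Our actual configuration files appear in Appendix E; purely as an illustration of the format, a hypothetical parameter file for a small OLTP-style run might look like the following (the device path, read percentage, and duration are placeholders, not our test settings):

```
compratio=2.3                                        # generate 2.3:1 compressible data
sd=sd1,lun=/dev/sdb,openflags=o_direct               # storage definition (placeholder device)
wd=oltp,sd=sd*,xfersize=8k,rdpct=70,seekpct=100      # 8 KB random I/O, 70% reads
rd=run1,wd=oltp,iorate=max,elapsed=600,interval=10   # run at maximum rate for 10 minutes
```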
4. The Vendor A array fell short of the expected compression ratio, but the Dell EMC PowerMax 8000 exceeded it

Because NVMe storage space is expensive, storage platforms typically employ strategies to compress data so that less space is required. Compression ratios vary depending on the type of data, but consider a data set that totaled 60 TB and was 2:1 compressible. In this case, the data would require roughly 30 TB of physical disk capacity. The advantages for the company's bottom line are obvious: their precious resources could go twice as far.

What happens when the time comes to refresh an older storage array and the company must migrate a compressible data set to new hardware? Ideally, the new platform would maintain the compression ratio, and the data would take up roughly the same amount of space as it did prior to migration. However, this doesn't always happen. Even if the company's storage administrators allowed for a certain amount of expansion in their capacity planning, the resource consumption of the new platform could be an unpleasant surprise. The company could find themselves on the hook for investments in additional storage capacity, and could even have their data go completely offline if the new platform's usage hit 100 percent.

To gain insight into how our enterprise storage platforms would behave when migrating compressible data, we simulated a migration using Vdbench. This simulation wrote 60 TB of 2:1 compressible data to the Dell EMC PowerMax 8000 array and the Vendor A array, both of which had a capacity of about 48 TB. The chart below presents the results.

The leftmost bar shows the volume of the compressible data set prior to the simulated migration. As the middle bar shows, the Dell EMC PowerMax platform achieved a compression ratio of 2.3:1, which is better than the expected 2:1 ratio (see the sidebar on the next page for an explanation). Because the migrated data set uses less than 26 TB, more than 22 TB remains available in the array. As the rightmost bar shows, we saw a dramatically different outcome in our simulated migration to the Vendor A platform. The data set used 46.48 TB, leaving less than 2 TB free on the array and jeopardizing the integrity of the data.
Capacity needed for a 2:1 compressible data set of 60 TB after migration (lower is better)
Before move: 2:1 compressible data set of 60 TB (each array offers 48 TB of physical capacity)
After move, Dell EMC PowerMax 8000: 25.8 TB used, 22.2 TB available
After move, Vendor A array: 46.5 TB used, 1.5 TB available
44% less storage required after migrating compressible data
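The capacity figures above follow directly from the compression arithmetic: physical space is logical size divided by the compression ratio. A quick sketch in Python, using the ratios from our results:

```python
def physical_tb(logical_tb, ratio):
    """Physical capacity consumed by a data set compressed at ratio:1."""
    return logical_tb / ratio

# 60 TB of 2:1 compressible data nominally needs 30 TB.
print(round(physical_tb(60, 2.0), 1))  # 30.0
# At the 2.3:1 ratio the PowerMax achieved, it needs only ~26.1 TB.
print(round(physical_tb(60, 2.3), 1))  # 26.1
# At Vendor A's 1.3:1 ratio, it needs ~46.2 TB, close to the measured 46.5 TB.
print(round(physical_tb(60, 1.3), 1))  # 46.2
```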
5. Compression ratios

2:1 compressible data set (logical size): ~60 TB
Dell EMC PowerMax 8000 space used: 25.8 TB (2.3:1 ratio)
Vendor A array space used: 46.5 TB (1.3:1 ratio)
The chart above presents the compression ratios for the data we showed on the previous page in terms of physical and logical space used. As it shows, the Dell EMC PowerMax 8000 array achieved a ratio of 2.3:1 while the Vendor A array achieved a ratio of 1.3:1.

The Dell EMC PowerMax platform was able to write the data in the simulated migration at up to its peak rate and still achieve the 2.3:1 compression ratio. On the Vendor A array, we initially ran the simulated migration at 100 percent of its peak rate but found that it reached its maximum capacity and all storage LUNs went offline, effectively hanging the test before completion. We subsequently throttled the array to reduce its speed by 75 percent. Even when operating at 25 percent of its maximum I/O rate, the Vendor A array was still unable to achieve the expected 2:1 data compression ratio using adaptive compression. This is because when the system gets busy, the Vendor A array turns off data reduction to speed up access, whereas the PowerMax platform uses intelligent data reduction to maintain both performance and efficiency.

Another limitation of Vendor A's inline adaptive compression is its inability to perform background compression during maintenance windows or idle periods. Once it performs a migration, even if there is compressible data on the array, the admin is stuck with the data as is and can't apply additional compression.
What these results could mean for your company
By achieving a 2.3:1 data compression ratio, the Dell EMC PowerMax platform would reduce the amount of space needed to store the data after the migration by 44 percent compared to the Vendor A platform, and would leave a comfortable amount of space on the new array. If a company were anticipating a 15 percent annual growth rate in storage capacity needs, the PowerMax array could easily meet those needs for years, but the Vendor A array would be out of space in year one.
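The year-one claim is easy to check with a back-of-the-envelope growth model (a sketch, assuming 15 percent compound annual growth from each array's post-migration footprint):

```python
def years_until_full(used_tb, capacity_tb, growth=0.15):
    """Return the first year in which compounded usage exceeds capacity."""
    years = 0
    while used_tb <= capacity_tb:
        used_tb *= 1 + growth
        years += 1
    return years

print(years_until_full(25.8, 48))  # PowerMax: 5 -> headroom for years
print(years_until_full(46.5, 48))  # Vendor A: 1 -> out of space in year one
```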
How the Dell EMC PowerMax 8000 exceeded the expected compression ratio
In our testing, we used a Vdbench 2:1 compressible data set of 60 TB, which you would expect to take up 30 TB of physical capacity after compression. However, as we note above, the Dell EMC PowerMax 8000 needed only 25.8 TB of physical capacity. This was possible because of the different compression algorithms involved in the testing: Vdbench uses one algorithm (LZJB) to create compressible data, while the Dell EMC PowerMax 8000 compresses with a different algorithm (DEFLATE). Because DEFLATE achieves a higher compression ratio than LZJB, the Dell EMC PowerMax 8000 achieved a compression ratio of 2.3:1.
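You can observe DEFLATE behavior directly with Python's zlib module, which implements that algorithm. This is only a sketch, not our test tooling; the synthetic buffer below loosely imitates a roughly 2:1 compressible data set by mixing incompressible random bytes with zeros:

```python
import random
import zlib

random.seed(0)
# Half incompressible (random) bytes, half zeros: roughly 2:1 compressible overall.
data = bytes(random.randrange(256) for _ in range(64 * 1024)) + bytes(64 * 1024)

ratio = len(data) / len(zlib.compress(data, 6))
print(f"DEFLATE ratio: {ratio:.2f}:1")  # close to 2:1 on this buffer
```

A different algorithm (such as LZJB) run over the same buffer would generally land at a different ratio, which is the effect described above.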
6. How the shared architecture of the Dell EMC PowerMax platform can reduce upfront planning time and let your company respond to fluctuating needs in real time

While the storage platforms we tested both have two controllers and approximately the same number of disks and raw storage capacity, they differ dramatically in how these resources interact. Simply put, the Dell EMC PowerMax platform treats all of the disks as a single pool, which means both controllers can access any disk at any time. This eliminates the need for upfront estimating and planning, and offers a great deal of flexibility as storage needs fluctuate.

To ensure load balancing of controller resources, the Vendor A platform divides the disks into two or more primary pools, each of which is linked exclusively to one of the controllers. It subdivides each pool into volumes, then creates one or more LUNs within each volume. This federated approach could require more effort in both planning and ongoing maintenance. In contrast, the PowerMax architecture is truly active-active, allowing full load balancing of drives across both controllers.

For example, say a storage admin needed to migrate a workload to balance I/O resources across disks or controllers on the Vendor A platform. They would have to identify the LUNs the workload was running on, identify the parent volumes of those LUNs, and then execute one or more volume copies from one pool/controller to another. This manual process would require real-time alerting, planning, available capacity, and administrator time. On the Dell EMC PowerMax platform, none of this would be necessary. Notably, after Vdbench simulated our migration, both of the pools on the Vendor A array were at greater than 95 percent capacity utilization because of the compression shortfall. This indicates that in a real-world situation, there would be insufficient space on either pool to perform the kind of volume move we describe above, introducing the risk of an outage or data loss.
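A toy model makes the practical difference concrete. This is a simplified sketch with hypothetical pool sizes, assuming the federated design splits the same raw capacity into two controller-owned pools:

```python
def fits_shared_pool(request_tb, free_tb=48.0):
    """Shared-pool design: any controller can place data anywhere in one pool."""
    return request_tb <= free_tb

def fits_federated(request_tb, pool_free=(24.0, 24.0)):
    """Federated design: the request must fit inside a single controller's pool."""
    return any(request_tb <= free for free in pool_free)

print(fits_shared_pool(30.0))  # True
print(fits_federated(30.0))    # False: fits the array total, but no one pool can hold it
```

The same total capacity can therefore reject a placement in the federated layout, which is why upfront pool sizing matters there and not in the shared-pool layout.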
[Diagram: Vendor A array disk ownership architecture. Controller 1 and Controller 2 each exclusively own an aggregate, which is subdivided into volumes containing LUNs.]
[Diagram: Dell EMC PowerMax 8000 disk ownership architecture. Controller 1 and Controller 2 share access to a single pool of volumes.]
7. With the Unisphere for PowerMax management tool, the routine task of provisioning a new LUN took half the time it did on the Vendor A platform

If you're a storage administrator, spending some of your time at work on routine tasks is unavoidable. However, the right tools can simplify and shorten these tasks, freeing you to focus on higher-value activities. To learn about the GUI management tools of the two storage platforms we investigated, we provisioned a single LUN on each and measured the time and steps required.

As the charts below illustrate, executing this task with the Unisphere for PowerMax tool took six steps and 30 seconds. That's roughly half of the eleven steps and 62 seconds the same task required using the management tool of the Vendor A array. If your administrators could cut the time they devote to routine tasks in half, think of what they could accomplish.
Steps to provision a new LUN (lower is better): Dell EMC PowerMax 8000: 6; Vendor A array: 11 (45% fewer steps)
Time to provision a new LUN (min:sec, lower is better): Dell EMC PowerMax 8000: 0:30; Vendor A array: 1:02 (51% less admin time)
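The headline percentages are straightforward to reproduce from the raw step and time counts (truncating fractions of a percent):

```python
def pct_reduction(old, new):
    """Percent reduction from old to new, truncated to a whole percent."""
    return int(100 * (old - new) / old)

print(pct_reduction(11, 6))   # 45 -> 45% fewer steps
print(pct_reduction(62, 30))  # 51 -> 51% less admin time
```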
About Unisphere for PowerMax
Dell EMC positions Unisphere for PowerMax as an intuitive management interface that lets IT managers spend less time provisioning, managing, and monitoring PowerMax storage assets. According to Dell EMC, "Unisphere delivers the simplification, flexibility, and automation that are key requirements to accelerate the transformation to the modern data center. For customers who frequently build up and tear down storage configurations, Unisphere® for PowerMax makes reconfiguring the array even easier by reducing the number of steps required to delete and repurpose volumes."2
8. Conclusion

Our testing indicated several areas where the Dell EMC PowerMax 8000 platform offered advantages over the Vendor A array. The Dell EMC PowerMax array delivered 25 percent more IOPS on an OLTP-like workload. In a simulated migration of a compressible data set, the Dell EMC PowerMax exceeded the expected compression ratio while the Vendor A array failed to meet it. In contrast to the federated architecture of Vendor A, the Dell EMC PowerMax architecture lets both controllers access every disk, which could increase flexibility and reduce upfront planning and ongoing maintenance. Finally, the Unisphere for PowerMax management tool let us execute a routine task in half the time we needed on the Vendor A array. Each of these individual advantages can directly benefit your enterprise; together, they make a very strong case for the Dell EMC PowerMax 8000.
1 Dell EMC, "The Dell EMC PowerMax Family Overview," accessed January 2, 2019, https://www.dellemc.com/resources/en-us/asset/white-papers/products/storage/h17118_dell_emc_powermax_family_overview.pdf.
2 Dell EMC, "Dell EMC PowerMax Family," accessed January 2, 2019, https://www.dellemc.com/resources/en-us/asset/data-sheets/products/storage/h16891-powermax-family-ds.pdf.
To learn more about Dell EMC PowerMax, visit DellEMC.com/PowerMax.
9. We concluded our hands-on testing on December 6, 2018. During testing, we determined the appropriate hardware and software configurations and applied updates as they became available. The results in this report reflect configurations that we finalized on September 27, 2018 or earlier. Unavoidably, these configurations may not represent the latest versions available when this report appears.

Appendix A: Our results
The table below presents our findings in detail.
Metric | Dell EMC™ PowerMax 8000 | Vendor A array | Percentage difference
Space savings: space needed for 2:1 compressible data set of 60 TB after migration | 25.8 TB | 46.5 TB | 44% less space needed
Performance: IOPS on OLTP-like workload | 275,301 | 219,571 | 25% more IOPS
Management task: admin steps to provision one new host (see page 14 for steps) | 6 | 11 | 45% fewer steps
Management task: admin time to provision one new host (min:sec) | 0:30 | 1:02 | 51% less time
10. Appendix B: System configuration information
The table below presents detailed information on the systems we tested.

Storage
Storage configuration information | Dell EMC PowerMax 8000 | Vendor A array
Operating system | 5978.221.221 | 9.4
Number of storage shelves | 2 | 2
Total drives | 33 | 36
Drive vendor and model number | HGST Ultrastar SN200 | Samsung® PM1723a 1.92TB SSD
Drive size (TB) | 1.92 | 1.92
Usable capacity | 48 TB | 48 TB

Servers
Server configuration information | 12 x Dell EMC PowerEdge R740 servers
BIOS name and version | Dell EMC PowerEdge™ R740 1.4.9
Non-default BIOS settings | Virtualization enabled
Operating system name and version/build number | VMware ESXi™ 6.5.0 build-10175896
Date of last OS updates/patches applied | 10/1/2018
Power management policy | Performance

Processor
Number of processors | 2
Vendor and model | Intel® Xeon® Platinum 8168
Core count (per processor) | 24
Core frequency (GHz) | 2.70

Memory module(s)
Total memory in system (GB) | 256
Number of memory modules | 8
Vendor and model | Samsung M393A4K40CB2-CTD
Size (GB) | 32
Type | DDR4
Speed (MHz) | 2,666
Speed running in the server (MHz) | 2,666
11. Server configuration information | 12 x Dell EMC PowerEdge R740 servers

Storage controller
Vendor and model | Dell EMC PERC H740p
Cache size (GB) | 8
Firmware version | 50.3.0-1512
Driver version | 7.7.03.18.00

Local storage
Number of drives | 2
Drive vendor and model | Dell/Toshiba PGNY6
Drive size (GB) | 120
Drive information (speed, interface, type) | 6Gb SATA, SSD

Network adapter
Vendor and model | Intel Ethernet Converged Network Adapter X520-2
Number and type of ports | 2 x 10GbE
Driver version | 4.1.1.1-iov

Storage adapter
Vendor and model | QLogic QLE2742 32Gb FC Adapter
Number and type of ports | 2 x 32Gb
Firmware version | 8.07.80

Power supplies
Vendor and model | Dell EMC DD1100E-S0
Number of power supplies | 2
Wattage of each (W) | 1,100
12. Appendix C: Testbed diagrams

Dell EMC PowerMax platform
[Diagram: 12 x Dell EMC PowerEdge R740 servers, each hosting two CentOS VMs with two 2.6TB RDM LUNs apiece, connected through a top-of-rack 1 Gbps Ethernet switch for management and a Dell EMC Connectrix DS-6620B switch (24 x 16 Gbps Fibre Channel) to the Dell EMC PowerMax 8000.]
13. Vendor A platform
[Diagram: the same 12 x Dell EMC PowerEdge R740 servers, each hosting two CentOS VMs with two 2.6TB RDM LUNs apiece, connected through the top-of-rack 1 Gbps Ethernet switch and the Dell EMC Connectrix DS-6620B switch (16 Gbps Fibre Channel to the servers) to the Vendor A array over 16 x 32 Gbps Fibre Channel.]
14. Appendix D: How we tested

We created 48 thin-provisioned 2.6TB LUNs on the Dell EMC PowerMax 8000. We installed 12 Dell EMC PowerEdge R740 servers with vSphere 6.5. Each server had the following:
• Four LUNs mapped and zoned for exclusive use
• One dual-port 32Gb QLogic adapter
• Two VMs, with each VM having exclusive control of its two respective LUNs as RDMs (Raw Device Mappings). Each VM was installed with CentOS, Java, and Vdbench version 50407.

We ran coordinated Vdbench jobs across all servers and storage on the Dell EMC PowerMax, and then repeated the process on the Vendor A array using the same servers and infrastructure. We used publicly available best practices from Dell EMC and Vendor A for pathing, queue depth, and other storage-specific settings.

The remainder of this section provides the steps we followed to configure the environment.

Configuring and provisioning storage
We zoned and optimized server paths per vendor instructions to maximize HBA throughput. For the PowerMax and the similarly configured all-flash Vendor A array, we created 12 storage groups and included their respective HBA WWNNs. For the Vendor A array, which had 16 x 32 Gbps ports on the array to connect to 24 server HBA ports, we zoned three server HBAs for every two initiator ports to balance the paths. For the PowerMax 8000, which had 24 x 16 Gbps ports, we zoned one server HBA to each of the array's initiator ports.
Creating a single volume and mapping on the Vendor A array
1. Log onto the administrative portal as admin.
2. Click Storage, then Volumes, then Create.
3. Select the appropriate SVM, and click Select.
4. At the Create LUN wizard screen, click Next.
5. Name the LUN, select VMware, and click Next.
6. Select Create a new flexible volume, and select the appropriate aggregate. Click Next.
7. Name the volume. For size, type 2.6 TB, and click Create.
8. Select the server you wish to attach, and click Next.
9. Leave Manage Storage Quality of Service unselected, and click Next.
10. Review the summary, and click Next.
11. Click Finish.
Repeat steps 2 through 11 as necessary to create and map 48 LUNs and 48 volumes.
Creating volumes and mapping on the Dell EMC PowerMax array
1. Open Unisphere, and log in as smc.
2. Click Hosts. Select host1, and click Provision storage.
3. For the storage group name, type host1. For the number of volumes, type 4. For size, type 2.6 TB. Click Next.
4. Select the appropriate port group, and click Next.
5. Click Run Now.
6. Click Close.
Repeat steps 2 through 6 as necessary to create and map 48 LUNs and 48 volumes.
15. Installing VMware vSphere Hypervisor (ESXi) 6.5

We installed and configured 12 servers under test and a single server for infrastructure. For each of the 13 hosts, we installed VMware vSphere® Hypervisor (ESXi) 6.5, accepting defaults and naming each server appropriately. We used the following steps for installation:
1. Attach the installation media to the server.
2. Boot the server.
3. At the VMware Installer screen, press Enter.
4. At the EULA screen, to Accept and Continue, press F11.
5. Under Storage Devices, select the appropriate disk, and press Enter.
6. Select US as the keyboard layout, and press Enter.
7. Enter a root password twice, and press Enter.
8. To start the installation, press F11.
9. To reboot the server, remove the installation media, and press Enter.
10. After the server reboots, press F2, and enter root credentials.
11. Select Configure Management Network, and press Enter.
12. Select IPv4 Configuration, and enter the desired configuration details. Press Enter.
13. Select DNS Configuration, and enter the Primary DNS Server. Press Enter.
14. Press Esc, and type Y to accept changes.
Deploying the VMware vCenter Server 6.5 Appliance
Once we installed ESXi on the 13 hosts, we created a vCenter Server Appliance on our infrastructure server and added the 12 hosts under
test to the vCenter.
1. On a Windows client, open the installation media folder.
2. Select vcsa-ui-installer, and right-click the installer application.
3. Click Run as Administrator.
4. Click Yes.
5. In the Appliance 6.5 Installer window, click Install.
6. At the Introduction, click Next.
7. Accept the terms of the license agreement, and click Next.
8. Select vCenter Server with an Embedded Platform Services Controller, and click Next.
9. Enter the IP address for the ESXi target server. Enter the username, the password, and click Next.
10. To accept the certificate, click Yes.
11. Enter and confirm a root password for the appliance, and click Next.
12. Select the deployment size (we selected Tiny and the default storage size), and click Next.
13. Check the box to enable thin disk mode, and click Next.
14. Enter the appropriate network information (including the IP address of the appliance, subnet, gateway, and DNS), and click Next.
15. Review all information, and click Finish.
16. To move on to stage two of deployment, click Continue.
17. At the introduction, click Next.
18. Enter the name of the NTP servers for synchronization, enable SSH, and click Next.
19. Enter a domain name, password, and site name, and click Next.
20. At the CEIP screen, click Next.
21. Review the stage two settings, and click Finish.
22. After setup completes, click Close.
23. Using the credentials you created in step 19, log onto http://vcenterip:8443.
24. On the left-hand pane, right-click the vCenter server, and select New Datacenter.
25. Right-click Datacenter, and select New Cluster.
26. Enter a Cluster name, and Click OK.
27. To create the additional cluster, repeat steps 25 and 26.
28. Right-click each newly created cluster, and add six of the ESXi servers under test to it.
Creating the client (Vdbench slave) VMs
We created a single master image and cloned it to create 24 additional client VMs. We installed the master VM on our infrastructure server.
The master VM orchestrated Vdbench testing and collected data from the 24 client (slave) VMs.
1. Navigate to the vSphere web client.
2. Log in as administrator@vsphere.local.
3. Select Create a new virtual machine.
4. Choose Custom, and click Next.
5. Name the virtual machine master, and click Next.
6. Select the host, and click Next.
7. Select the appropriate storage, and click Next.
8. Select Compatibility ESXi 6.5 and later, and click Next.
9. Select Linux for the Guest OS family, and choose CentOS 7 for the OS version.
10. For customized hardware, edit the following values:
• CPU: Four virtual processor sockets, and one core per virtual socket
• Memory: 16GB RAM (click Reserve all guest memory)
• New Hard disk: 30 GB
• New Network: Select VMXNET 3, connect to the VM network, and click Next
• Leave all other options as default
11. Click Finish.
12. Connect the VM virtual CD-ROM to the CentOS installation disk.
13. Start the VM.
14. Right-click the VM, and select Open Console.
15. At the splash screen, select Install CentOS 7, and press Enter.
16. Choose your desired language, and click Continue. We chose English (United States).
17. At the Installation Summary screen, configure the date and time to match your time zone.
18. Set the Software Selection to Minimal Install.
19. Set the Installation Destination to Automatic partitioning.
20. Configure the network and host name for your testing network.
21. Click Begin Installation.
22. During the installation process, set the root password. We elected not to create another user for this setup.
23. Once the installation completes, disconnect the installation media, and click Reboot.
Configuring the master VM
1. Log into CentOS as root.
2. Download and unpack JAVA ver 1.8.0_181 to /opt.
3. Download and unpack vdbench50407 to the root directory.
4. Edit the /etc/hosts file as indicated in Appendix E: The configuration files we used in our testing. You should edit the IP addresses to reflect your own scheme.
5. Type ssh-keygen
6. At the prompt that reads “Enter file in which to save the key (/root/.ssh/id_rsa):” press Enter.
7. At the prompt that reads “Enter passphrase (empty for no passphrase):” press Enter.
8. Confirm the empty passphrase.
9. Type ssh-copy-id -f -i /root/.ssh/id_rsa.pub root@master
10. Type yes to accept, and confirm with the root password.
11. To disable the firewall, type systemctl disable firewalld
12. To stop the firewall, type systemctl stop firewalld
Cloning the master VM to create 24 client VMs
1. Log into the vSphere client as administrator@vsphere.local.
2. Navigate to Home → Customization Specification Manager.
3. Click the icon for Create a new Specification.
4. Select Linux for target OS, and name the specification client. Click Next.
5. Select the radio button next to Use the virtual machine name. For domain name, type test.local and click Next.
6. Select the appropriate time zone, and click Next.
7. Select the appropriate network configuration, and click Next.
8. Enter the appropriate DNS, and click Next. (Note: The /etc/hosts file will override this.)
9. To create the custom specification, click Finish.
10. Select the VM named master, right-click, and select Clone → Clone to virtual machine.
11. Name the new VM client01, and click Next.
12. Select host1, and click Next.
13. For storage, for Format, select Thin Provisioned. For Datastore, select the server’s local datastore, and click Next.
14. Check the boxes next to the following:
• Customize the operating system
• Power on the virtual machine after creation
15. Click Next.
16. For the customization specification, select client, and click Next.
17. To deploy, click Finish.
18. Repeat steps 10 through 17, changing the VM name and target host each time, to create two VMs per host for a total of 24 clients.
Installing the PowerPath virtual appliance
To ensure optimization for the PowerMax 8000 and licensing for PowerPath, we installed the PowerPath virtual appliance on our
infrastructure server.
1. Download EMCPower.PPMA-2.3.0.00.00-43.ova from DellEMC.com.
2. Log into the vSphere client, and select the infrastructure server.
3. Right-click the infrastructure server, and select Deploy OVF template.
4. Browse to where you saved the EMCPower.PPMA-2.3.0.00.00-43.ova file in step 1.
5. Click Next.
6. Select the appropriate location, and click Next.
7. Select the appropriate server, and click Next.
8. Accept the license agreement, and click Next.
9. Select a storage location, and click Next.
10. Select a network, and click Next.
11. Enter TCP/IP information, and click Next.
12. To deploy, click Finish.
13. When deployment has completed, power on the VM.
14. Open a browser and navigate to https://<ipaddress from step 11>
15. Log in as root.
16. Navigate to Inventory → vCenter Servers.
17. Click Add vCenter, enter the appropriate address and credentials, and click Register Plugin.
18. Log out.
Deploying Virtual Storage Console (VSC) 7.2 for VMware vSphere
To ensure optimization for the Vendor A array, we installed the VSC virtual appliance on our infrastructure server.
1. Download unified-virtual-appliance-for-vsc-vp-sra-7.2.ova from Vendor A.
2. Log into vCenter Server.
3. Right-click Datacenter.
4. Click Deploy OVF Template.
5. Select Local file, and click Browse.
6. Select unified-virtual-appliance-for-vsc-vp-sra-7.2.ova, and click Open.
7. Click Next.
8. At the Select name and location screen, click Next.
9. Select the infrastructure host, and click Next.
10. At the Review details screen, click Next.
11. At the Accept license agreement screen, click Accept, and click Next.
12. At the Select storage screen, choose a datastore to host the appliance, and click Next.
13. At the Select network screen, select the destination network, and click Next.
14. At the Customize template screen, enter the administrator password, the NTP server, the vCenter IP, and the vCenter credential and
network settings, and click Next.
15. At the Ready to complete screen, click Finish.
16. Power on the VSC appliance.
17. Right-click the appliance, and click Install VMware tools.
18. To confirm vCenter plugin registration details, navigate to https://<appliance_ip>:8143/Register.html.
Configuring the Virtual Storage Console for VMware vSphere
1. Log into the vCenter server.
2. Click Home → Virtual Storage Console.
3. On the left-hand pane, select Storage Systems.
4. Click Add, and enter the storage array IP and credentials.
Optimizing ESXi hosts using the Virtual Storage Console
1. Log into the vCenter server.
2. Click Home → Virtual Storage Console.
3. Under Host Systems, click Edit Settings.
4. Select all the appropriate hosts.
5. Select the HBA/CNA, MPIO, and NFS settings checkboxes.
6. Click OK.
Attaching storage LUNs to VMs via Raw Device Mapping (RDM)
Each host had access to four 2.6TB LUNs per vendor. We toggled access to each vendor based on the active zone set on the Fibre
Channel switch. We attached two LUNs directly to each VM via RDMs. Each time we switched vendors, we removed the old RDMs and
attached new ones.
1. Log onto the vSphere client.
2. Select client01, and right-click Edit settings.
3. Click New device → RDM Disk, and click Add.
4. Select a fibre-attached LUN, and click OK.
5. Repeat steps 3 and 4 so the VM has two 2.6TB RDMs, and click OK.
6. Repeat steps 2 through 5 on all 24 client VMs.
How we used the Vdbench workloads
We ran Vdbench across all clients. We used several Vdbench workloads:
• Migrate. Running this workload simulates moving from an old array to a new one. We used it to create 60 TB of data with a 2.3:1
compression ratio. After running the migrate workload, we confirmed the compression ratios on the storage dashboard before continuing.
• Online transaction processing (OLTP). This workload puts stress on storage with different read and write patterns. Realistically, there
will be some variance from enterprise to enterprise. However, this workload has I/O patterns that are similar to those in a typical
enterprise running real-world OLTP.
• Clear cache. We ran clear cache between runs to eliminate leftover cache hits and to ensure reproducible results.
Note: Before we began testing, we created three configuration files: migrate.cfg, OLTP.cfg, and clearcache.cfg. See Appendix E for their
contents.
Running the tests
1. Log onto master as root.
2. Run the Migrate job using the following code:
- ./vdbench -f migrate.cfg -o migrate_out
3. Run the clearcache job using the following code:
- ./vdbench -f clearcache.cfg
4. Run the OLTP job using the following code:
- ./vdbench -f OLTP.cfg -o OLTP_out
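For unattended runs, the three invocations above can be chained in one small script. This is a convenience sketch, not part of the original methodology; it assumes vdbench and the three configuration files sit in the working directory on the master VM.

```shell
#!/bin/sh
# Command lines for the three Vdbench jobs from steps 2 through 4.
MIGRATE_CMD="./vdbench -f migrate.cfg -o migrate_out"
CLEARCACHE_CMD="./vdbench -f clearcache.cfg"
OLTP_CMD="./vdbench -f OLTP.cfg -o OLTP_out"

# Run them in order only if the vdbench launcher is actually present.
if [ -x ./vdbench ]; then
  $MIGRATE_CMD &&      # populate 60 TB of 2.3:1-compressible data
  $CLEARCACHE_CMD &&   # clear leftover cache hits between runs
  $OLTP_CMD            # the measured OLTP-like run
fi
```

Per the clearcache.cfg listing in Appendix E, the middle step stops after reading 1 TB (maxdata=1024g) or four hours, whichever comes first.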
Detaching storage RDMs from VMs
To switch storage vendors, we re-cabled our storage platform and changed zone sets. We rebooted all 12 hosts to ensure the correct paths.
We then removed the old RDMs on each VM and attached new LUNs as RDMs.
1. Log onto the vSphere client.
2. Select all 12 hosts, and right-click Power → Reboot.
a. Wait until the 12 servers reboot.
3. On Host 1, select the first client VM, and right-click Edit settings.
4. Click Manage other disks.
5. Click the delete icon next to disk 3, and click Close.
6. Click the delete icon next to disk 2.
7. Click New device → RDM Disk, and click Add.
8. Select the first new fibre-attached LUN, and click OK.
9. Click New device → RDM Disk, and click Add.
10. Select the second new fibre-attached LUN.
11. Click OK, and click Finish.
12. Repeat steps 3 through 11 on the second VM so the ESXi host has two VMs, both with the old RDMs deleted and the new RDMs added.
13. Repeat steps 3 through 11 on hosts 2 through 12, ensuring that you have reconfigured all 24 client VMs with new RDMs.
Appendix E: The configuration files we used in our testing
Hosts file
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.200 master
192.168.0.201 client01
192.168.0.202 client02
//continue to 24 clients
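The truncated client list above can be generated rather than typed. This is a convenience sketch assuming the 192.168.0.201 through .224 addressing shown above; redirect the output into /etc/hosts on the master before cloning.

```shell
# Print the 24 client entries for /etc/hosts:
# client01 at 192.168.0.201 through client24 at 192.168.0.224.
for i in $(seq 1 24); do
  printf '192.168.0.%d client%02d\n' $((200 + i)) "$i"
done
```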
Vdbench configuration files
Each of the configuration files had the same hd and sd variables:
messagescan=no
compratio=2
hd=default,vdbench=/vdbench-kit03.PT/vd,user=root,shell=ssh,jvms=1
hd=client01,system=192.#.#.201
hd=client02,system=192.#.#.202
//continue to 24 clients
sd=sd_0,host=client01,lun=/dev/sdb,openflags=o_direct
sd=sd_1,host=client01,lun=/dev/sdc,openflags=o_direct
sd=sd_2,host=client02,lun=/dev/sdc,openflags=o_direct
sd=sd_3,host=client02,lun=/dev/sdb,openflags=o_direct
// continue to 48 LUNs and 24 clients
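The hd= and sd= stanzas follow a regular pattern, so they too can be generated. In this sketch, NETWORK stands in for the masked octets in the listing above (an assumption; substitute your own scheme), and each client gets /dev/sdb then /dev/sdc in a fixed order, whereas the original listing swaps the device order on some clients.

```shell
# Print hd= definitions for 24 clients, then sd= definitions for their
# 48 LUNs (two per client: /dev/sdb and /dev/sdc).
NETWORK="192.168.0"   # assumption: the listing masks these octets as 192.#.#

for i in $(seq 1 24); do
  printf 'hd=client%02d,system=%s.%d\n' "$i" "$NETWORK" $((200 + i))
done

n=0
for i in $(seq 1 24); do
  for dev in sdb sdc; do
    printf 'sd=sd_%d,host=client%02d,lun=/dev/%s,openflags=o_direct\n' "$n" "$i" "$dev"
    n=$((n + 1))
  done
done
```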
clearcache.cfg
wd=wd_CACHEFLUSH_RRM,sd=*,rdpct=100,xfersize=128k,range=(0,2)
rd=rd_CACHEFLUSH,wd=wd_CACHEFLUSH_RRM,iorate=max,elapsed=14400,interval=60,maxdata=1024g
migrate.cfg
wd=wd_MIGRATE_SW,sd=*,seekpct=eof
rd=rd_MIGRATE,wd=wd_MIGRATE_SW,elapsed=144000,interval=30,forxfersize=(128k),forrdpct=(0),forthreads=(1),iorate=10000,maxdata=60000g
OLTP.cfg
wd=wd_OLTP2A_RRH,sd=*,rdpct=100,xfersize=8K,skew=20,range=(1728m,1760m)
wd=wd_OLTP2A_RM,sd=*,rdpct=100,xfersize=8k,skew=45,range=(2,6)
wd=wd_OLTP2A_RW,sd=*,rdpct=0,xfersize=8K,skew=15,range=(2,6)
wd=wd_OLTP2A_SR,sd=*,rdpct=100,seekpct=seqnz,range=(2,6),xfersize=128K,skew=10
wd=wd_OLTP2A_SW,sd=*,rdpct=0,seekpct=seqnz,range=(2,6),xfersize=128K,skew=10
rd=rd_OLTP2A,wd=wd_OLTP2A_*,iorate=curve,curve=(10-100,10),elapsed=300,interval=30,warmup=120,forhitarea=(20m),threads=16
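As a quick sanity check (not part of the original methodology): Vdbench requires the skew= percentages across the workload definitions in a run to total 100, and the five OLTP2A mixes above satisfy that.

```shell
# Skews from OLTP.cfg: 20 (hot random read) + 45 (random read)
# + 15 (random write) + 10 (sequential read) + 10 (sequential write).
total=$((20 + 45 + 15 + 10 + 10))
echo "$total"   # prints 100
```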
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
DISCLAIMER OF WARRANTIES; LIMITATION OF LIABILITY:
Principled Technologies, Inc. has made reasonable efforts to ensure the accuracy and validity of its testing, however, Principled Technologies, Inc. specifically disclaims
any warranty, expressed or implied, relating to the test results and analysis, their accuracy, completeness or quality, including any implied warranty of fitness for any
particular purpose. All persons or entities relying on the results of any testing do so at their own risk, and agree that Principled Technologies, Inc., its employees and its
subcontractors shall have no liability whatsoever from any claim of loss or damage on account of any alleged error or defect in any testing procedure or result.
In no event shall Principled Technologies, Inc. be liable for indirect, special, incidental, or consequential damages in connection with its testing, even if advised of the
possibility of such damages. In no event shall Principled Technologies, Inc.’s liability, including for direct damages, exceed the amounts paid in connection with Principled
Technologies, Inc.’s testing. Customer’s sole and exclusive remedies are as set forth herein.
This project was commissioned by Dell EMC.
Principled Technologies®
Facts matter.®