Database work is a big deal, both in its importance to your company and in the sheer magnitude of the work involved. Our tests with the Dell EMC PowerEdge R930 server and Unity 400F All-Flash storage array demonstrated that the Dell EMC solution could perform comparably to an HPE ProLiant DL380 Gen9 server and 3PAR array during OLTP workloads, with a better compression ratio (3.2-to-1 vs. 1.3-to-1). For loading large sets of data, the Dell EMC Unity finished 22 percent faster than the HPE 3PAR, which can mean less hassle for the administrator in charge of data marts. When running both OLTP and data mart workloads in tandem, the Unity array outperformed the HPE 3PAR by 29 percent in orders processed per minute. For additional product information about the Unity 400F storage array, visit DellEMC.com/Unity.
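The compression ratios above translate directly into raw capacity needs. The sketch below is my own arithmetic on the two quoted ratios, using a hypothetical dataset size; only the ratios come from the test:

```python
# Capacity implied by the two compression ratios quoted above.
unity_ratio = 3.2      # Dell EMC Unity 400F (3.2-to-1)
threepar_ratio = 1.3   # HPE 3PAR (1.3-to-1)

dataset_tb = 100.0     # hypothetical logical dataset, for illustration only

unity_capacity = dataset_tb / unity_ratio        # ~31.3 TB on disk
threepar_capacity = dataset_tb / threepar_ratio  # ~76.9 TB on disk

saving_pct = (1 - unity_capacity / threepar_capacity) * 100
print(f"Capacity saved by the higher ratio: {saving_pct:.1f}%")  # ~59.4%
```

At these ratios, the same data would occupy roughly 59 percent less physical capacity on the array with 3.2-to-1 compression.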
Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workstation (Principled Technologies)
A workstation that runs coolly and uses less power is a great asset to workers and the companies they work for. In our tests, both when idle and when under load, the Lenovo ThinkStation P500 generally ran at lower surface temperatures and used less power than the HP Z440 Workstation. These findings show that the Lenovo ThinkStation P500 could meet the needs of those who want to provide a reliable, comfortable work environment while using less power.
Maximizing Oracle Database performance with Intel SSD DC P3600 Series NVMe SSDs (Principled Technologies)
If your organization runs critical, high-demand databases in environments such as Oracle Database, strong performance is not optional: it’s a must-have. Additionally, getting that necessary performance out of a single server can be essential for running a space- and cost-efficient datacenter. In the Principled Technologies labs, we found that the Dell PowerEdge R930 offered strong performance for such transactional databases when configured with SATA SSDs. When we upgraded the server to Intel SSD DC P3600 Series NVMe SSDs, performance more than doubled, reaching 2.17 times the SATA baseline (a 117 percent increase). If your datacenter needs a new powerhouse server, purchasing your Dell PowerEdge R930 with Intel NVMe SSDs for a cost increase of only 18 percent can double the performance you get from each server. This increases what your infrastructure can do within the same amount of space and ultimately saves money that would otherwise be spent on additional servers and software.
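The value claim above can be checked directly from the two stated figures, 2.17 times the performance for 1.18 times the cost. The sketch below is my own arithmetic on those numbers, not part of the original study:

```python
# Price/performance sanity check using the figures stated above.
perf_multiplier = 2.17   # NVMe config delivered 2.17x the SATA baseline
cost_multiplier = 1.18   # NVMe config cost 18% more than the SATA baseline

# Performance per dollar, relative to the SATA-configured server
perf_per_dollar_gain = perf_multiplier / cost_multiplier

print(f"Performance gain:            {(perf_multiplier - 1) * 100:.0f}%")
print(f"Performance-per-dollar gain: {(perf_per_dollar_gain - 1) * 100:.0f}%")
# A 2.17x speedup at 1.18x the cost works out to roughly 84% more
# performance per dollar spent on the server.
```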
Store data more efficiently and increase I/O performance with lower latency w... (Principled Technologies)
Compared to an array from another vendor, the PowerMax 8000 offered a better inline data reduction ratio and better performance during simulated OLTP and data extraction workloads
Demartek: Lenovo Storage S3200 in a mixed workload environment, 2016-01 (Lenovo Data Center)
This document evaluates the Lenovo S3200 storage array's ability to support multiple workloads simultaneously. Testing showed that while an all-HDD configuration met performance requirements, one application suffered high latency. Enabling SSD caching or tiering significantly improved performance for that application specifically, reducing latency by 70% and increasing bandwidth by up to 7x, without impacting other applications. The Lenovo S3200 is suitable for consolidating diverse workloads due to its flexibility to configure HDDs with SSDs for optimized performance tailored to each use case.
Get higher transaction throughput and better price/performance with an Amazon... (Principled Technologies)
In addition, the EBS gp3-backed EC2 r5b.16xlarge instance delivered a lower average transaction latency to offer more consistent transactional database performance than two Microsoft Azure E64ds_v4 VM configurations
The document provides an overview of a reference design using Lenovo servers and storage, Brocade fibre channel networking, and Emulex fibre channel host bus adapters. It summarizes the key components and features, including the Lenovo Storage S3200 array, Brocade 6505/6510 fibre channel switches, Lenovo System x3550 M5 server, and Emulex 16Gb fibre channel HBAs. It also provides guidance on sizing the servers, network, and storage capacity for a virtualized environment based on analyzing current usage and allowing for future growth.
The Dell EMC PowerMax 8000 outperformed another vendor's array on an OLTP-like... (Principled Technologies)
The Dell EMC PowerMax 8000 outperformed a competitor's storage array in several key areas:
- It delivered 25% more IOPS on an OLTP workload simulation.
- It stored compressed data more efficiently, requiring 44% less storage capacity after migrating compressible data and achieving a higher 2.3:1 compression ratio.
- Admin tasks like provisioning new storage took half the time (45% fewer steps and 51% less time) using the PowerMax management interface compared to the competitor.
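The two compression figures in the list above tell a consistent story. Working backward (my own arithmetic, with a hypothetical dataset size), a 44 percent capacity saving at 2.3:1 implies a competitor ratio of about 1.3:1:

```python
# Back-of-the-envelope check of the compression figures quoted above.
powermax_ratio = 2.3     # PowerMax 8000 compression ratio (2.3:1)
capacity_saving = 0.44   # PowerMax needed 44% less capacity than the competitor

logical_data_tb = 100.0  # hypothetical dataset size, for illustration only

powermax_capacity = logical_data_tb / powermax_ratio             # ~43.5 TB
competitor_capacity = powermax_capacity / (1 - capacity_saving)  # ~77.6 TB

# The competitor compression ratio implied by those two numbers:
implied_competitor_ratio = logical_data_tb / competitor_capacity

print(f"Implied competitor compression ratio: {implied_competitor_ratio:.2f}:1")
# ~1.29:1, i.e. the 44% saving and the 2.3:1 ratio are mutually consistent.
```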
The Dell PowerEdge R920 could replace nine older Oracle database servers and provide nine times the performance while reducing power consumption by 64% and lowering software licensing costs by 17%. Testing showed the R920 running Oracle Database 12c with pluggable databases provided significant savings over three years in power, cooling, and licensing costs compared to running the databases on nine older servers.
A single-socket Dell EMC PowerEdge R7515 solution delivered better value on a... (Principled Technologies)
If your company is running important business applications in VMware vSAN clusters of servers that are several years old, chances are good that you’re considering upgrading to newer hardware. Our testing demonstrated that our clusters of single-socket Dell EMC PowerEdge R7515 servers and clusters of dual-socket HPE ProLiant DL380 Gen10 servers could both improve upon the database performance of a legacy cluster with five-year-old servers by more than 50 percent, with the Dell EMC cluster achieving 93.4 percent of the performance of the HPE cluster.
This presentation provides an introduction to the current activities leading to software architectures and methodologies for new NVM technologies, including the activities of the SNIA Non-Volatile Memory (NVM) Technical Working Group. This session includes a review and discussion of the impacts of the SNIA NVM Programming Model (NPM). We will preview the current work on new technologies, including remote access, high availability, clustering, atomic transactions, error management, and current methodologies for dealing with NVM.
Learn how upcoming changes in the persistent memory market will affect deployments of in-memory computing and traditional applications. Using software innovations from SanDisk and the broad portfolio of flash storage hardware options, customers and developers can optimize applications for “flash extended memory”, the intersection of in-memory computing and persistent memory technologies.
Watch your transactional database performance climb with Intel Optane DC persistent memory (Principled Technologies)
Dell EMC PowerEdge R740xd servers with Intel Optane DC persistent memory handled more transactions per minute than configurations with NAND flash NVMe drives or SATA SSDs
By upgrading from the legacy solution we tested to the new Intel processor-based Dell and VMware solution, you could do 18 times the work in the same amount of space. Imagine what that performance could mean to your business: Consolidate workloads from across your company, lower your power and cooling bills, and limit datacenter expansion in the future, all while maintaining a consistent user experience—the list of potential benefits is huge.
Try running DPACK, which can help you identify bottlenecks in your environment and inform you about your current performance needs. Then consider how the consolidation ratio we proved could be helpful for your company. The Intel processor-powered Dell PowerEdge R730 solution with VMware vSphere and Dell Storage SC4020, also powered by Intel, could be the right destination for your upgrade journey.
Demartek evaluated the Lenovo S3200 SAN supporting multiple workloads and saw tremendous results. Read this report and find out why the S3200 should be considered for your SAN deployments!
Adding Intel Optane DC SSDs to an HPE ProLiant DL380 Gen10 server cluster improved response times by 26% and produced 52% more input/output operations per second while using one fewer server
Boosting performance with the Dell Acceleration Appliance for Databases (Principled Technologies)
If your business is expanding and you need to support more users accessing your databases, it’s time to act. Upgrading your database infrastructure with a flash storage-based solution is a smart way to improve performance without adding more servers or taking up very much rack space, which comes at a premium. The Dell Acceleration Appliance for Databases addresses this by providing strong performance when combined with your existing infrastructure or on its own.
We found that adding a highly available DAAD solution to our database application provided up to 3.01 times the Oracle Database 12c performance, which can make a big difference to your bottom line. Additionally, the DAAD delivered 3.14 times the database performance when replacing traditional storage completely, which could enable your infrastructure to keep up with your growing business’ needs.
Moving your legacy database workloads to the Dell PowerEdge R930 can help you realize the benefits of consolidation, which can include savings in management costs, power usage, and cabling. More importantly, the licensing costs of the database application itself may be reduced by the consolidation effort. In addition to these benefits, greater database transactions per minute can keep your orders flowing smoothly.
We found that the Dell PowerEdge R930, powered by the Intel Xeon processor E7 v3 series, could consolidate three legacy servers running four Oracle Database 12c VMs each. The Dell PowerEdge R930 outperformed the legacy server with 4.4 times the overall database performance, delivering an average of 47.1 percent more performance per VM. By consolidating that many legacy servers, you can save up to 67 percent in rack space, 25 percent in database licenses, and even reduce other operating costs to improve your bottom line.
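The per-VM figure is consistent with the headline numbers: the R930 delivered 4.4 times the performance of a legacy server while running three times as many VMs (12 vs. 4), which implies roughly 47 percent more performance per VM. The arithmetic below is only my own consistency check on the quoted figures:

```python
# Consistency check on the consolidation figures quoted above.
overall_gain = 4.4     # R930 delivered 4.4x a legacy server's performance
legacy_servers = 3     # legacy servers consolidated onto one R930
vms_per_server = 4     # Oracle Database 12c VMs per legacy server

r930_vms = legacy_servers * vms_per_server   # the R930 hosts all 12 VMs

# 4.4x one server's throughput, spread over 3x as many VMs:
per_vm_gain_pct = (overall_gain * vms_per_server / r930_vms - 1) * 100

print(f"Implied per-VM improvement: ~{per_vm_gain_pct:.1f}%")
# ~46.7%, in line with the 47.1% average the study reports.
```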
OLTP with Dell EqualLogic hybrid arrays: A comparative study with an industry... (Principled Technologies)
The effectiveness of your OLTP database environment can depend to an enormous degree on the storage system you select. We compared a database server solution using the Dell EqualLogic PS6210XS with one using competing industry-leading SAN storage.
The EqualLogic PS6210XS solution was overwhelmingly superior in all areas we tested. It delivered twice the performance with half the response time, and used a fraction of the power.
These factors make it clear that any business that relies on its database servers and wants to get the greatest return on its storage investment must consider the Dell EqualLogic PS6210XS.
Hardware upgrades to improve database, SharePoint, Exchange, and file server ... (Principled Technologies)
Legacy tower servers that cannot meet workload demands can restrict business growth. By upgrading to the Dell PowerEdge T630, you can obtain immediate benefits for current IT performance needs and implement upgrades that will expand server capabilities to help meet future demands. We found that replacing a legacy server with the new Dell PowerEdge T630 tower server offered up to 97.9 percent lower workload latency, 131.9 percent more IOPS, and 421.9 percent more OPM when running the same workload. With component upgrades, the PowerEdge T630 supported more Exchange, SharePoint, and file server users, and more database VM instances. Help ensure that your applications have sufficient hardware resources to keep up with the needs of today and the future by choosing to upgrade to the new Dell PowerEdge T630 tower server.
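Percentage claims like "X percent more" or "X percent lower" map to plain multipliers. This small helper is illustrative only (the measured values are the report's) and makes the T630 figures above easier to compare:

```python
# Translate the PowerEdge T630 percentage claims above into multipliers.
def pct_more_to_multiplier(pct_more):
    """'131.9% more' means 2.319x as many."""
    return 1 + pct_more / 100

def pct_lower_to_multiplier(pct_lower):
    """'97.9% lower' means 0.021x as much (here, latency)."""
    return 1 - pct_lower / 100

iops_x = pct_more_to_multiplier(131.9)     # 2.319x the IOPS
opm_x = pct_more_to_multiplier(421.9)      # 5.219x the OPM
latency_x = pct_lower_to_multiplier(97.9)  # 0.021x the latency

print(f"IOPS: {iops_x:.3f}x  OPM: {opm_x:.3f}x  latency: {latency_x:.3f}x")
```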
Whether you’re looking for the highest possible performance per rack unit or the strongest RAS-enabled server to run your mission-critical databases, Dell has a server to meet your needs. Factors such as performance per rack, expansion capabilities, and flash storage options will also drive your server decision.
In our hands-on tests, we found that the Dell PowerEdge R820 server could handle up to 382,397 database orders per minute and had 73.6 percent greater performance per U of rack space than the R910.
The Dell PowerEdge R910 processed 440,475 OPM. Its high number of logical processors, maximum expansion capabilities, and support for RAS technologies make the Dell PowerEdge R910 an excellent choice for your mission-critical data center applications.
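The 73.6 percent per-U claim above follows from the two OPM results and the servers' form factors. Assuming the standard 2U R820 and 4U R910 chassis, the arithmetic checks out:

```python
# Performance-per-rack-unit check using the OPM results quoted above.
# Form factors: PowerEdge R820 is a 2U server, PowerEdge R910 is 4U.
r820_opm, r820_units = 382_397, 2
r910_opm, r910_units = 440_475, 4

r820_opm_per_u = r820_opm / r820_units   # ~191,199 OPM per U
r910_opm_per_u = r910_opm / r910_units   # ~110,119 OPM per U

advantage = (r820_opm_per_u / r910_opm_per_u - 1) * 100
print(f"R820 advantage per U: {advantage:.1f}%")  # matches the 73.6% figure
```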
The document discusses using Dell EMC Isilon all-flash storage for SAS GRID workloads. It describes a test of the Isilon F810 node with hardware-accelerated compression using a multi-user SAS analytics workload. The testing focused on performance, scalability, compression benefits, deduplication savings, and cost when running the workload on an Isilon cluster with up to 12 grid nodes and comparing results with and without enabling various compression options.
20+ Million Records a Second - Running Kafka on Isilon F800 (Boni Bruno)
The document summarizes performance test results for running Apache Kafka with Dell EMC Isilon F800 All-Flash NAS storage compared to direct attached storage. In the first test, a single producer was able to write 50 million 100 byte records to a topic with no replication at a rate of over 1.2 million records/second on direct attached storage and over 1.4 million records/second on the Isilon storage. Subsequent tests showed the Isilon storage able to handle multiple producers and consumers at rates of over 20 million records/second, with lower latency than direct attached storage. The Isilon storage was also able to withstand stress testing at high throughput levels.
Watch your transactional database performance climb with Intel Optane DC persistent memory (Principled Technologies)
The document discusses testing of Intel Optane DC persistent memory in a Dell EMC PowerEdge R740xd server. It found that using Intel Optane DC persistent memory delivered over 2 times the transactions per minute of a configuration with two NVMe drives and over 11 times the transactions per minute of a configuration with 12 SATA SSDs. Intel Optane DC persistent memory can significantly improve transactional database performance over traditional storage solutions like SSDs.
Achieve up to 80% better throughput and increase storage efficiency with the ... (Principled Technologies)
Compared to a competing array, Dell’s high-end PowerMax 8500 storage array offered better I/O performance for simulated OLTP and data workloads, saved more space through more efficient data reduction, and performed snapshots with no performance impact
Offer faster access to critical data and achieve greater inline data reduction... (Principled Technologies)
Compared to a solution from another vendor (“Vendor B”), the PowerStore 7000T delivered a better inline data reduction ratio and better performance during simulated OLTP and other I/O workloads
Databases power your business, so the performance of the hardware behind your databases is crucial. In our datacenter, the Dell EMC VxRail P470F with VMware vSAN handled more orders per minute, delivered higher IOPS, responded more quickly, and utilized resources more efficiently than the HPE Hyper Converged 380 with HPE StoreVirtual VSA. Plus, scaling up and tuning performance for our specific workload was considerably easier with the Dell EMC VxRail solution than the HPE solution. These differences translate to real-world benefits. With the Dell EMC VxRail solution, you can achieve shorter wait times for customers and staff, which could increase sales and innovation at your company; reduce work for your IT staff, so they can spend more time on other critical tasks; and save space, potentially helping you delay capital expenditures. To find out more about the Dell EMC VxRail P470F, visit DellEMC.com/VxRail.
Upgrade to Dell EMC PowerEdge R6515 servers and gain better OLTP and VDI performance (Principled Technologies)
Additionally, PowerEdge R6515 servers with 3rd Gen AMD EPYC processors could lower licensing costs and also empower your business to explore Kubernetes with VMware Tanzu
Drive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solution... (Principled Technologies)
A Dell EMC XC Series cluster featuring Nutanix software and powered by Toshiba PX05S SAS SSDs delivered strong database performance with a blend of structured and unstructured data
Run compute-intensive Apache Hadoop big data workloads faster with Dell EMC P... (Principled Technologies)
Efficiently running compute-heavy Apache Hadoop big data workloads today might not translate to continued strong performance as data sets grow. Moving compute-intensive Hadoop big data workloads to current-generation Dell EMC PowerEdge R640 servers powered by 2nd Generation Intel Xeon Scalable processors could allow your organization to better meet the data analysis challenges of today and have the resources to support growth. In our data center, a Hadoop cluster of PowerEdge R640 servers completed three big data workloads in less time by delivering greater throughput than a cluster of previous-generation Dell EMC PowerEdge R630 servers. Faster analysis of large data sets means getting insight into your organization, products, and services sooner, which could help your organization grow and beat its competition.
Upgrading to Windows Server 2019 on Dell EMC PowerEdge servers: A simple proc... (Principled Technologies)
Using Dell EMC PowerEdge R740xd servers with Intel Xeon Scalable processors, we upgraded from Windows Server 2016 and saw data compression ratios of up to 9.8:1 thanks to new Storage Spaces Direct features
Continuing to run a legacy infrastructure may be possible, but it isn’t optimal—not when new technologies like the Dell and Nutanix solution are available. By upgrading to this new hyperconverged infrastructure, you could do eight times the work of a legacy solution in just 6U and scale for more work by simply adding another node. What’s more, eliminating the need for centralized SAN storage means more space to grow, less hardware to manage, and the potential for lower power and cooling bills.
Take the first step on the path to an upgraded environment. Run DPACK in your own datacenter, and discover your performance requirements and potential bottlenecks. Then consider how the increased mixed workload performance from the hyperconverged, Intel processor-powered Dell and Nutanix solution could help your business thrive.
Complete data analysis faster with Google Cloud C3 high CPU instances enabled...Principled Technologies
Compared to N2 high CPU instances with previous-gen processors, these instances sped up query completion times
Conclusion
Data collected, but not analyzed, offers only potential, not insights, to an organization. Sorting and analyzing that data can turn rows and tables into meaningful insights about what customers want or which business initiatives should proceed. The faster an organization gets insights from data, the quicker its leaders can make decisions. In our tests, where both instance types used Google Cloud Hyperdisk Extreme storage and were configured similarly, Google Cloud C3 high CPU instances with 4th Gen Intel Xeon Scalable processors delivered 1.88x the query completion speeds of N2 high CPU instances with previous-gen processors. Selecting these newer C3 high CPU instances with Hyperdisk can help ensure that data analysis completes within targeted windows and give decision makers the insights they need to quickly adjust to the changing tides of business and realize success.
Cost analysis for the acquisition of 250 terabytes of storage growing at 25 percent per year for five years. Products from EMC, NetApp, NEC, and Dot Hill were compared to a software-defined storage solution based on SUSE Enterprise Storage software.
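As a rough illustration of the growth assumption in that analysis (the 250 TB starting point and 25 percent annual rate come from the comparison; the code is our sketch, not the study's cost model), compounding at 25 percent per year implies roughly tripling capacity needs over five years:

```python
# Sketch of the compound-growth assumption behind the cost analysis:
# 250 TB of storage growing 25% per year for five years.
def projected_capacity_tb(start_tb: float, annual_growth: float, years: int) -> float:
    """Capacity needed after `years` years of compound growth."""
    return start_tb * (1 + annual_growth) ** years

if __name__ == "__main__":
    for year in range(6):
        print(f"Year {year}: {projected_capacity_tb(250, 0.25, year):,.1f} TB")
    # Year 5 works out to roughly 763 TB, about three times the starting capacity.
```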
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back...Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be: you could see better performance per watt with these clusters, potentially getting more from Redis and other data-intensive applications and workloads while reducing data center power costs.
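The performance-per-watt metric described above is a simple ratio of measured throughput to measured power draw. The sketch below illustrates the calculation; all figures are hypothetical placeholders, not measurements from these tests:

```python
# Hedged sketch of a performance-per-watt comparison.
# All numbers are hypothetical placeholders, not results from this study.
def perf_per_watt(requests_per_sec: float, avg_watts: float) -> float:
    """Throughput delivered per watt of average power draw."""
    return requests_per_sec / avg_watts

# Example: cluster A handles more requests on less power than cluster B.
cluster_a = perf_per_watt(requests_per_sec=200_000, avg_watts=800)
cluster_b = perf_per_watt(requests_per_sec=150_000, avg_watts=1_000)
# Cluster A delivers more requests per second per watt, even before
# factoring in its lower total power consumption.
```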
Dell APEX outperformed comparable Amazon EC2 instances on a decision-support ...Principled Technologies
A solution using Dell APEX Private Cloud and Dell APEX Data Storage Services Block generated data-driven insights earlier than Amazon EC2 c6i.16xlarge instances with Amazon Elastic Block Store (EBS) storage
For your company to reap the benefits of big data analysis can require a substantial investment, so it’s crucial to determine which platform can deliver the vital insights you need quickly. In our performance testing of two solutions with comparable specifications, we found that the Dell APEX solution took 13.6 percent less time to run a set of DSS queries on a 4TB database than a solution based on Amazon EC2 c6i.16xlarge instances. Throughput on the Dell APEX solution was also greater, with an advantage of 24.8 percent for read throughput and an advantage of 19.2 percent for write throughput. This increased performance could help your business by giving decision-makers access to critical information sooner.
The document provides information about the IBM PureData System for Analytics (Netezza). It discusses the components and architecture of the IBM PureData System models, including the N1001 and N2001 models. It explains the key hardware components like snippet blades, hosts, and storage arrays and how they work together using Netezza's Asymmetric Massively Parallel Processing architecture to optimize analytics workloads.
Comparing Dell Compellent network-attached storage to an industry-leading NAS...Principled Technologies
A flexible NAS solution addresses many organizational challenges from server backup to hosting production applications and databases. Advanced NAS solutions such as the Intel Xeon processor-based Dell Compellent FS8600 NAS provide flexibility and scalability, allowing various use options as well as drive options throughout its lifecycle. This scale and flexibility enables an organization to alleviate performance bottlenecks anywhere in the organization simply by reallocating or adding more disk resources.
We found that the Intel Xeon processor-based Dell Compellent FS8600 NAS solution backed up a small-file corpus up to 15.9 percent faster and a large-file corpus up to 17.1 percent faster than a similarly configured, industry-leading NAS solution. This means that selecting the Dell Compellent FS8600 NAS has the potential to help optimize an organization’s infrastructure.
Application Report: Big Data - Big Cluster Interconnects – IT Brand Pulse
ParAccel's leading analytics platform runs on industry-standard hardware and integrates industry-standard database tools and applications, so one of the company's biggest challenges is architecting and testing hardware (servers, storage, and interconnects) that makes its software perform at its peak. In this case, ParAccel achieved its mission of eliminating a cluster bottleneck by implementing 10GbE NICs, which provide the bandwidth needed today and well into the future.
Help skilled workers succeed with Dell Latitude 7030 and 7230 Rugged Extreme ...Principled Technologies
Instead of equipping consumer-grade tablets with rugged cases
Conclusion
In our hands-on testing, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets showed that they are better equipped to help skilled workers than consumer-grade Apple iPad Pro and Samsung Galaxy Tab S9 tablets in multiple ways. They provide more built-in capabilities and features than the consumer-grade tablets we tested. And, while they were more expensive than the rugged-case fortified consumer-grade options we tested, their rugged claims were more than skin deep.
In our performance and durability tests, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets performed better in demanding manufacturing, logistics, and field service environments than consumer-grade tablets with rugged cases. Both Rugged Extreme Tablets, with their greater thermal range, suffered less performance degradation in extreme temperatures, never failed and were merely scuffed after 26 hard drops, survived a 10-minute drenching with no ill effects, and were easier to view in direct sunlight than the Apple iPad Pro and Samsung Galaxy Tab S9 tablets.
Bring ideas to life with the HP Z2 G9 Tower Workstation - InfographicPrincipled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to a similarly configured Dell Precision 3660 Tower Workstation in its out-of-box performance mode
Investing in GenAI: Cost‑benefit analysis of Dell on‑premises deployments vs....Principled Technologies
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration for how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel...Principled Technologies
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
Enable security features with no impact to OLTP performance with Dell PowerEd...Principled Technologies
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security—without paying a large performance cost—consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
Improving energy efficiency in the data center: Endure higher temperatures wi...Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and experienced component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe...Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware...Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation (VCF)
For organizations running clusters of moderately configured, older Dell PowerEdge servers with a previous version of VCF, upgrading to better-configured modern servers can provide a significant performance boost and more.
Improve performance and gain room to grow by easily migrating to a modern Ope...Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware...Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms. For reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operation adapter provided useful infrastructure insights.
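Claims such as "78 percent more TPM" express relative improvement over the baseline cluster. As a quick sketch of that arithmetic (the input values below are made-up illustration figures, not the study's raw transaction counts):

```python
# Sketch of the relative-improvement arithmetic behind claims such as
# "78 percent more TPM." The inputs are made-up illustration values.
def percent_more(new_value: float, baseline: float) -> float:
    """How much larger `new_value` is than `baseline`, in percent."""
    return (new_value / baseline - 1) * 100

# e.g., a cluster processing 178,000 TPM vs. a 100,000 TPM baseline:
improvement = percent_more(178_000, 100_000)  # roughly 78 percent more TPM
```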
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man...Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than Supermicro servers did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ...Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a full NVMe backed configuration, but Vendor A doesn’t—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS – Principled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with up to 8 PB of capacity—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
Get in and stay in the productivity zone with the HP Z2 G9 Tower Workstation – Principled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to Dell Precision 3660 and 5860 tower workstations in optimized performance modes
Conclusion
HP Z2 G9 Tower Workstation users can change the BIOS settings to dial in the performance mode that best suits their needs: High Performance Mode, Performance Mode, or Quiet Mode. In good news for both creative and technical professionals, we found that an Intel Core i9-13900 processor-powered HP Z2 G9 Tower Workstation set to High Performance Mode received higher CPU-based benchmark scores than both a similarly configured Dell Precision 3660 and a Dell Precision 5860 equipped with an Intel Xeon w5-2455X processor. Plus, the HP Z2 G9 Tower Workstation was quieter while running CPU-intensive Cinebench 2024 and SPECapc for SolidWorks 2022 workloads than both Dell Precision tower workstations. This means HP Z2 G9 Tower Workstation users who prize performance over everything else can do so without sacrificing a quiet workspace.
HCL Notes and Domino License Cost Reduction in the World of DLAU – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course, we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep track of things. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can put into action immediately
Skybuffer SAM4U tool for SAP license adoption – Tatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, a complimentary SAP software asset management tool for customers.
SAM4U delivers a detailed and well-structured overview of license inventory and usage through a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional service through the SAP Fiori interface.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers – akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Building Production Ready Search Pipelines with Spark and Milvus – Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data to a serving stack for search. Milvus is a production-ready, open-source vector database. In this talk, we will show how to use Spark to process unstructured data to extract vector representations, then push the vectors to the Milvus vector database for search serving.
leewayhertz.com - AI in predictive maintenance: use cases, technologies, benefits ... – alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
FREE A4 Cyber Security Awareness Posters-Social Engineering part 3Data Hops
Free A4 downloadable and printable Cyber Security, Social Engineering Safety and security Training Posters . Promote security awareness in the home or workplace. Lock them Out From training providers datahops.com
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
“Temporal Event Neural Networks: A More Efficient Alternative to the Transfor...
Handle transaction workloads and data mart loads with better performance
June 2017 (Revised)

The Dell EMC Unity 400F All-Flash storage array offered solid performance compared to the HPE 3PAR 8400
When your company’s work demands a new storage array, you have the opportunity to invest in a solution that can support demanding workloads simultaneously—such as online transaction processing (OLTP) and data mart loading. At Principled Technologies, we compared Dell EMC™ PowerEdge™ R930 servers[1] with the Dell EMC Unity 400F All-Flash storage array to HPE ProLiant DL580 Gen9 servers with the HPE 3PAR 8400 array in three hands-on tests to determine how well each solution could serve a company during these database-intensive tasks.
When we ran an OLTP workload and data mart load in tandem, the Unity array performed better than the HPE 3PAR. In our data mart load test, the Unity array allowed us to import a large set of data in less time than with the 3PAR—an ability that could help companies begin analyzing their data sooner. Finally, in our online transaction processing test, the Unity array offered comparable database performance to the 3PAR array, enabling database applications to process a similar volume of customer orders.
Dell EMC Unity 400F All-Flash storage

Keep work moving during large data writes: up to 29% more orders per minute while also loading data

Save time on data imports: load files into a data mart up to 22% faster

A Principled Technologies report: Hands-on testing. Real-world results.
Keep your work going through intensive data mart loads

Writing files to a data mart database is a task usually left to the late hours of the night, or one that involves separate hardware. But what happens if your company needs to run both online transaction and data mart workloads from the same storage environment? With such heavy stress on a system, you might expect a large dip in performance here—large enough to force you to purchase completely separate systems in order to get your work done in a reasonable time. However, in our dual workload test, the Unity array was better able to handle the added stress.
For this test, we measured how long it took for each array to put data from many large text files into a single database while simultaneously fulfilling customer orders as part of an OLTP workload. The Unity array enabled the PowerEdge R930 server to process an average of 96,976 orders per minute—29 percent more than the 74,977 orders per minute the ProLiant DL580 Gen9 processed with the 3PAR array.
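The 29 percent figure follows directly from the two orders-per-minute averages; a quick arithmetic check:

```python
# Orders per minute (OPM) from the dual-workload test in this report.
unity_opm = 96_976  # Dell EMC Unity 400F
par_opm = 74_977    # HPE 3PAR 8400

# Relative improvement of the Unity result over the 3PAR result.
improvement = (unity_opm - par_opm) / par_opm
print(f"{improvement:.1%}")  # prints 29.3%, reported as 29 percent
```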
Save time during large data mart writes

While it’s good to know how much stress your system can handle, you may also choose to run your data mart and OLTP workloads separately. Loading data from various sources into a data mart is a critical step in gathering and organizing data for analysis, and this process often involves loading from flat files. The Dell EMC Unity loaded three terabytes’ worth of data into a data mart in 1 hour, 2 minutes—22 percent faster than the 1 hour, 16 minutes it took for the HPE array. Saving time here lets you begin analyzing your data sooner, so you can get business insights faster.
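A quick check of the 22 percent claim from the published load times (the reported figure was presumably computed from exact, unrounded test times, so the published minutes land slightly above it):

```python
# Data mart load times from this report, converted to minutes.
unity_minutes = 1 * 60 + 2   # 1h 2m
par_minutes = 1 * 60 + 16    # 1h 16m

# Time saved, and the Unity's relative speed advantage (how much more
# work per unit time the faster run represents).
saved = par_minutes - unity_minutes
speedup = par_minutes / unity_minutes - 1
print(saved, f"{speedup:.1%}")  # 14 minutes saved, roughly 22-23% faster
```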
[Chart: 29% more orders per minute during the data load. Dell EMC: 96,976 OPM; HPE: 74,977 OPM]

[Chart: 22% faster to import a large data set (less time is better). Dell EMC: 1h 2m; HPE: 1h 16m]
What’s a data mart?

Your enterprise business collects data from many different departments. Whether it's sales, marketing, or research and development, you will often want to bundle data from these disparate sources and load it into a single location for analysis and reporting. The data mart is a convenient place to store different departments’ information for processing and analysis.
Handle demanding online transaction processing (OLTP) work

Even without the added stress of a simultaneous data mart workload, OLTP work is one of the most stressful types for a storage array to handle—especially if it has to satisfy a large volume of users simultaneously. We used an OLTP workload benchmark called DVD Store 2 to test how well the Dell EMC and HPE storage arrays could handle many simulated users completing tasks such as browsing an online catalog and making final purchases.
The solutions handled a comparable number of database orders per minute (OPM); the Dell EMC Unity array enabled the PowerEdge R930 server to process 112,725 OPM, whereas the HPE ProLiant DL580 Gen9 fulfilled 111,761 OPM with the 3PAR array. Being able to process a large volume of customer requests in a timely manner can result in a better experience for the end user.
DVD Store 2

We used a benchmark called DVD Store 2 to test each storage array’s capabilities while handling OLTP workloads. DVD Store simulates an online video marketplace, mimicking the way thousands of users might shop in a real-life scenario. The more user-initiated orders a server can fulfill, the better its performance. To learn more about DVD Store, visit http://linux.dell.com/dvdstore.
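For orientation, a benchmark run like ours is driven from the DS2 client with a command line along these lines. This is a minimal sketch: the driver executable name and parameter names are typical of DS2's SQL Server driver, but treat them (and the target address) as assumptions to check against your DS2 distribution's documentation.

```python
# Sketch: assemble a DVD Store 2 driver invocation for one SQL Server target.
# Executable and parameter names are assumptions based on typical DS2 usage.

def ds2_command(target, threads=32, run_minutes=30, db_size="40GB"):
    """Build the command line for one benchmark run."""
    params = {
        "target": target,        # hostname/IP of the SQL Server VM under test
        "n_threads": threads,    # simulated concurrent shoppers
        "run_time": run_minutes, # steady-state duration in minutes
        "db_size": db_size,      # must match the size the test DB was built with
        "think_time": 0,         # no pause between simulated user actions
    }
    args = " ".join(f"--{k}={v}" for k, v in params.items())
    return f"ds2sqlserverdriver.exe {args}"

print(ds2_command("10.0.0.10"))
```

The driver reports orders per minute (OPM) as it runs, which is the metric quoted throughout this report.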
Storage fabric using Brocade® Gen 6 hardware

In our tests, we used Connectrix® DS-6620B switches, built on Brocade Gen 6 hardware, known as the Dell EMC Connectrix B-Series Gen 6 Fibre Channel by Brocade. The Connectrix B Series provides out-of-the-box tools for SAN monitoring, management, and diagnostics that simplify administration and troubleshooting for administrators.

Brocade offers Brocade Fabric Vision® Technology, which can provide further visibility into the storage network with monitoring and diagnostic tools. With the Monitoring and Alerting Policy Suite (MAPS), admins can proactively monitor the health of all connected storage using policy-based monitoring.

Brocade offers another tool to simplify SAN management for Gen 6 hardware: Connectrix Manager Converged Network Edition (CMCNE). This tool uses an intuitive GUI to help admins automate repetitive tasks and further simplify SAN fabric management in the datacenter.

To learn more about what Brocade Gen 6 has to offer, visit www.brocade.com.
The value of compression

Both the Unity and the 3PAR arrays employ a variety of techniques to save space so you can fit more data on your hardware. One of these techniques is compression. Compression algorithms reduce the number of bits needed to represent a set of data—the higher the compression ratio, the more space this particular data reduction technique saves.

During our OLTP test, the Unity array achieved a compression ratio of 3.2-to-1 on the database volumes, whereas the 3PAR array averaged a 1.3-to-1 ratio. In our data mart loading test, the 3PAR achieved a ratio of 1.4-to-1 on the database volumes, whereas the Unity array got 1.3-to-1.
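To make the ratios concrete: a compression ratio of r-to-1 means compressed data occupies 1/r of its logical size, so the fraction of space saved is 1 - 1/r. Applied to the OLTP ratios above:

```python
# Space savings implied by a compression ratio of r-to-1.
def space_savings(ratio):
    return 1 - 1 / ratio

for label, ratio in [("Unity, OLTP", 3.2), ("3PAR, OLTP", 1.3)]:
    print(f"{label}: {space_savings(ratio):.0%} less space")
# prints: Unity, OLTP: 69% less space / 3PAR, OLTP: 23% less space
```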
Conclusion

Database work is a big deal—in terms of its importance to your company, and the sheer magnitude of the work. Our tests with the Dell EMC PowerEdge R930 server and Unity 400F All-Flash storage array demonstrated that it could perform comparably to an HPE ProLiant DL580 Gen9 server and 3PAR array during OLTP workloads, with a better compression ratio (3.2-to-1 vs. 1.3-to-1). For loading large sets of data, the Dell EMC Unity finished 22 percent faster than the HPE 3PAR, which can result in less hassle for the administrator in charge of data marts. When running both OLTP and data mart workloads in tandem, the Unity array outperformed the HPE 3PAR in terms of orders processed per minute by 29 percent. For additional product information concerning the Unity 400F storage array, visit DellEMC.com/Unity.
[1] http://www.dell.com/en-us/work/shop/productdetails/poweredge-r930

http://www.principledtechnologies.com
On April 18, 2017, we finalized the hardware and software configurations we tested. Updates for current and recently released hardware and software appear often, so unavoidably these configurations may not represent the latest versions available when this report appears. For older systems, we chose configurations representative of typical purchases of those systems. We concluded hands-on testing on May 4, 2017.
Appendix A: System configuration information
Servers under test
Server configuration information | Dell EMC PowerEdge R930 | HPE ProLiant DL580 Gen9
BIOS name and version | Dell EMC 2.2.0 | U17 v2.30
Non-default BIOS settings | N/A | N/A
Operating system name and version/build number | VMware® ESXi™ 6.5.0 4564106 | VMware ESXi 6.5.0 4564106
Date of last OS updates/patches applied | 03/31/2017 | 03/31/2017
Power management policy | Maximum performance | Maximum performance

Processor
Number of processors | 4 | 4
Vendor and model | Intel® Xeon® E7-8860 v4 | Intel Xeon E7-8860 v4
Core count (per processor) | 18 | 18
Core frequency (GHz) | 2.20 | 2.20
Stepping | B0 | B0

Memory module(s)
Total memory in system (GB) | 512 | 512
Number of memory modules | 32 | 32
Vendor and model | Samsung® M393A2G40DB0-CPB | Samsung M393A2G40DB0-CPB
Size (GB) | 16 | 16
Type | PC4-17000 | PC4-17000
Speed (MHz) | 2,133 | 2,133
Speed running in the server (MHz) | 2,133 | 2,133

Local storage
Number of drives | 2 | 2
Drive vendor and model | Seagate® ST300MM0006 | Seagate ST300MM0006
Drive size (GB) | 300 | 300
Drive information (speed, interface) | 6Gbps, SAS | 6Gbps, SAS
Network adapter
Vendor and model | Intel Ethernet 10GbE 2P X520 Adapter | HP Ethernet 10Gb 2-port 560SFP+
Number and type of ports | 2 x 10GbE | 2 x 10GbE
Driver version | 17.5.10 | 0x8000088b

Fibre Channel HBA
Vendor and model | Emulex LightPulse LPe31002-M6 2-Port | Emulex LightPulse LPe31002-M6 2-Port
Number and type of ports | 2 x 16Gb Fibre Channel | 2 x 16Gb Fibre Channel
Firmware version | 11.1.212.0 | 11.1.212.0

Power supplies
Vendor and model | Dell EMC 0V1YJ6A00 | HPE 656364-B21
Number of power supplies | 2 | 2
Wattage of each (W) | 900 | 1,200
Storage solution

Storage configuration information | Dell EMC Unity 400F | HPE 3PAR 8400
Controller firmware revision | UnityOS 4.1.1.9121942 | HPE 3PAR OS 3.3.1.215
Number of storage controllers | 2 | 2
Number of storage shelves | 2 | 4
Drive vendor and model number | 24 x Dell EMC V4-2S6FXL-800 | 24 x HPE DOPM3840S5xnNMRI
Drive size (GB) | 800 | 3,840
Drive information (speed, interface, type) | 12Gbps, SAS, SSD | 12Gbps, SAS, SSD
Appendix B: How we tested
Our Dell EMC testbed consisted of two Dell EMC PowerEdge R930 servers. Our HPE testbed consisted of two HPE ProLiant DL580 Gen9 servers. On both testbeds, one server hosted the OLTP VMs, while the other hosted the data mart VM. We also used a two-socket, 2U server to host our client and infrastructure VMs. Each server had two 10Gb connections to a Dell Networking X1052 switch for network traffic. Each server under test had two 16Gb Fibre Channel connections to a redundant set of Dell EMC DS6620B Fibre Channel switches. Each array had two 16Gb connections to each Fibre Channel switch.
Configuring the Fibre Channel switches
We created Fibre Channel zones according to best practices for each all-flash array. We placed each server port in a zone with all storage ports. We used two Dell EMC DS6620B Fibre Channel switches for redundancy and multipathing.
Creating the Fibre Channel zones
1. Log into the Brocade switch GUI.
2. Click Configure > Zone Admin.
3. Click the Zone tab.
4. Click New Zone, and enter a zone name.
5. Select all storage WWNs and a single server port WWN.
6. Click the right arrow to add the WWNs to the zone.
7. Repeat steps 4-6 for all remaining server ports.
8. Click the Zone Config tab.
9. Click New Zone Config, and enter a zone configuration name.
10. Select all newly created zones.
11. Click the right arrow to add the zones to the zone config.
12. Click Zoning Actions > Save Config to save the changes.
13. Click Zoning Actions > Enable Config.
14. Click Yes.
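The same zoning can be scripted instead of clicked through. The sketch below generates Brocade FOS-style CLI commands matching the steps above (zonecreate, cfgcreate, cfgsave, cfgenable are standard Brocade commands, but the WWNs and names here are invented examples; adapt both to your fabric):

```python
# Sketch: generate Brocade FOS-style zoning commands equivalent to the GUI
# steps above. All WWNs and names below are hypothetical examples.

storage_wwns = ["50:06:01:60:c7:e0:01:23", "50:06:01:61:c7:e0:01:23"]
server_ports = {
    "r930_a_p1": "10:00:00:90:fa:00:00:01",
    "r930_a_p2": "10:00:00:90:fa:00:00:02",
}

commands = []
for zone_name, server_wwn in server_ports.items():
    # One zone per server port, containing that port plus every storage port.
    members = "; ".join([server_wwn] + storage_wwns)
    commands.append(f'zonecreate "{zone_name}", "{members}"')

zones = "; ".join(server_ports)
commands += [
    f'cfgcreate "prod_cfg", "{zones}"',  # one config containing every zone
    "cfgsave",                           # persist the zoning database
    'cfgenable "prod_cfg"',              # activate the configuration
]
print("\n".join(commands))
```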
Configuring the storage
On each storage array, we created a variety of LUNs to be used in testing. Each OLTP VM had its own LUN for the OS, eight separate LUNs for SQL data (two for each SQL instance), and four separate LUNs for SQL log data (one for each SQL instance). The data mart VM had its own LUN for the OS, eight separate LUNs for SQL data, and a single LUN for SQL logs. The data mart VM also had three PCIe NVMe SSDs on the data mart host to hold the source data. We enabled compression on both arrays for all tests, so all SQL database and log LUNs were compressed on both arrays.
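The per-VM LUN layout described above, collected in one place. The dictionary is just an illustrative restatement of the counts in the paragraph, not a tool we used:

```python
# LUN counts per VM, as described in the storage configuration above.
lun_layout = {
    "oltp_vm": {"os": 1, "sql_data": 8, "sql_log": 4},       # each OLTP VM
    "data_mart_vm": {"os": 1, "sql_data": 8, "sql_log": 1},  # the data mart VM
}

for vm, luns in lun_layout.items():
    print(f"{vm}: {sum(luns.values())} LUNs ({luns})")
# oltp_vm: 13 LUNs; data_mart_vm: 10 LUNs (plus 3 host-local PCIe NVMe SSDs)
```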
Configuring the Unity 400F array
We created a single RAID 5 storage pool using all available drives. We created one LUN per drive (see Table 1 for drive counts) sized to fit each drive. For each VM, we created a Consistency Group that contained all the LUNs necessary for each VM.
Creating the storage pool
1. Log into the Unity 400F array Unisphere GUI.
2. In the left-hand column, in the Storage category, click Pools.
3. To create a new storage pool, click the plus sign.
4. Enter a name and description, and click Next.
5. Choose Extreme Performance Tier, and check the box for Use FAST Cache.
6. Set the Extreme Performance Tier to RAID 5, and click Next.
7. Leave Create VMware Capability Profile for the Pool unchecked, and click Next.
8. Review your selection, and click Finish.
Creating the block LUNs
1. In the left-hand column, in the Storage category, click Block.
2. To create a new LUN, click the plus sign.
3. Select the number of LUNs to create, give them a name, choose the pool you created in the previous step, and leave the Tiering Policy set to Extreme Performance Tier.
4. Check the box for Compression for all OLTP LUNs, or leave unchecked for data mart LUNs. Click Next.
5. You will configure access at the Consistency Group level, so leave this blank, and click Next.
6. Leave Enable Automatic Snapshot Creation unchecked, and click Next.
7. Leave Enable Replication unchecked, and click Next.
8. Review your selection, and click Finish.
9. Repeat steps 1 through 8 as necessary to create all LUNs for the environment.
Adding ESXi hosts
1. Complete setup for installing VMware vCenter 6.5 before proceeding with the storage setup.
2. In the left-hand column, in the Access category, click VMware.
3. To add the vCenter server managing the hosts, click the plus sign.
4. Enter the IP address, username, and password for the vCenter server appliance, and click Find.
5. Check the boxes next to the hosts in the Dell EMC testbed, and click Next.
6. Verify the correct hosts have been selected, and click Finish.
Creating the Consistency Groups
1. In the left-hand column, in the Storage category, click Block.
2. Click the Consistency Groups tab.
3. To create a new Consistency Group, click the plus sign.
4. Add a name, and click Next.
5. Click the plus sign, click Add Existing LUNs to add the LUNs related to the Consistency Group, and click OK. Click Next.
6. Click the plus sign, add the host(s) to connect to, and click Next.
7. Leave Enable Automatic Snapshot Creation unchecked, and click Next.
8. Leave Enable Replication unchecked, and click Next.
9. Review your selection, and click Finish.
10. Repeat steps 1 through 9 until you have a Consistency Group for all three VMs on the servers under test (SUTs).
Configuring the HPE 3PAR 8400 array
We created a single RAID 5 CPG using all available drives. We created one LUN per drive (see Table 1 for drive counts) sized to fit each drive. For each VM, we created a Virtual Volume Set that contained all the LUNs necessary for each VM.
Creating the storage pool
1. Log into the HPE 3PAR SSMC GUI.
2. At the top, click 3PAR StoreServ, and click Common Provisioning Groups under Block Persona.
3. In the left-hand column, under Common Provisioning Groups, click Create CPG.
4. Under the General heading, enter a name, and verify the correct system is selected in the System drop-down menu.
5. Under the Allocation Settings heading, ensure the Device type is set to SSD 100K.
6. Verify the RAID type is set to RAID 5.
7. Change Availability to Cage (Default).
8. Click Create.
Creating the Virtual Volumes
1. At the top, click 3PAR StoreServ, and click Virtual Volumes under Block Persona.
2. In the left-hand column, under Virtual Volumes, click Create virtual volume.
3. Under the General heading, enter a name for the VVol.
4. Verify the correct system is selected in the System drop-down menu.
5. Set the size of the VVol in the Size field, and click Create.
6. Repeat steps 1 through 5 as necessary to create all VVols for the environment.
Creating the Virtual Volume Sets
1. At the top, click 3PAR StoreServ, and click Virtual Volume Sets under Block Persona.
2. In the left-hand column, under Virtual Volume Sets, click Create virtual volume set.
3. Under the General heading, enter a name for the Virtual Volume Set.
4. Verify the correct system is selected in the System drop-down menu.
5. Click Add virtual volumes.
6. Select all the virtual volumes created for the first VM, and click Add.
7. Click Create.
8. Repeat steps 1 through 7 until you have a virtual volume set for all three VMs on the SUTs.
Adding the ESXi hosts
1. Complete setup for installing VMware vCenter 6.5 before proceeding with the storage setup.
2. At the top, click 3PAR StoreServ, and click Hosts under Block Persona.
3. In the left-hand column, under Hosts, click Create host.
4. Under the General heading, enter a name for the host in the Name field.
5. Verify the correct system is selected in the System drop-down menu.
6. Set the Host OS to VMware (ESXi).
7. Under the Paths heading, click Add FC.
8. In the Add FC window, select the WWNs associated with the first host, and click Add.
9. Click Create.
10. Repeat steps 1 through 9 to add the second host.
Creating the host sets
1. At the top, click 3PAR StoreServ, and click Host Sets under Block Persona.
2. In the left-hand column, under Host Sets, click Create host set.
3. Under the General heading, enter a name for the host set in the Name field.
4. Under the Host Set Members heading, click on Add hosts.
5. Select both hosts, and click Add.
6. Click Create.
Assigning Virtual Volume Sets to Host Sets
1. At the top, click 3PAR StoreServ, and click Virtual Volume Sets under Block Persona.
2. In the left-hand column, under Virtual Volume Sets, select the first VVol set.
3. On the right-hand side, click Actions, and click Export.
4. Under the Export To heading, click Add.
5. At the top of the window, select the Host set option.
6. Select the Host set, and click Add.
7. Leave the Export With section at default values, and click Export.
VM and storage summary

Resource | OLTP workload VMs | Data mart workload VMs
vCPUs | 32 | 64
Memory | 64 GB | 130 GB
OS drive | 1x 40 GB | 1x 60 GB
Database drives | 8x 210 GB | 8x 1 TB
Log drives | 4x 110 GB | 1x 550 GB
PCIe NVMe drives | N/A | 3x 1.8 TB

Table 1: VM and storage summary
Installing VMware vSphere 6.5
We installed the VMware vSphere 6.5 hypervisor to a local RAID 1 disk pair. The RAID 1 virtual disk was created using the BIOS utilities on each server.
1. Boot the server to the installation media.
2. At the boot menu screen, choose the standard installer.
3. Press Enter.
4. Press F11 to accept the license terms.
5. Press Enter to install to the local virtual disk.
6. Press Enter to select the US Default keyboard layout.
7. Create a password for the root user, and press Enter.
8. Press F11 to begin the installation.
Installing VMware vCenter® 6.5
1. Mount the VCSA ISO to a Windows server that has connectivity to the target vCenter host.
2. Browse to <mount location>\vcsa-ui-installer\win32, and run installer.exe.
3. Click Install.
4. Click Next.
5. Check I accept the terms of the license agreement, and click Next.
6. Leave vCenter Server with an Embedded Platform Services Controller checked, and click Next.
7. Enter the IP address and credentials for the target vCenter host, and click Next.
8. Enter a name and password for the VCSA appliance, and click Next.
9. Select Tiny for the deployment size, and click Next.
10. Select a datastore to store the appliance, and click Next.
11. Enter network settings for the appliance, and click Next.
12. Review the summary, and click Finish.
13. Once the deployment is complete, click Continue.
14. Click Next.
15. Select Synchronize time with NTP servers from the drop-down menu, and enter the IP address or hostnames of your NTP servers. Select Enabled from the SSH drop-down menu. Click Next.
16. Enter a username and password for the vCenter SSO, and click Next.
17. Uncheck Join the VMware Customer Experience Improvement Program (CEIP), and click Next.
18. Review the summary, and click Finish.
Installing Microsoft® Windows Server® 2016 Datacenter Edition
1. Boot the VM to the installation media.
2. Press any key when prompted to boot from DVD.
3. When the installation screen appears, leave language, time/currency format, and input method as default, and click Next.
4. Click Install now.
5. When the installation prompts you, enter the product key.
6. Check I accept the license terms, and click Next.
7. Click Custom: Install Windows only (advanced).
8. Select Windows Server 2016 Datacenter Edition (Desktop Experience), and click Next.
9. Select Drive 0 Unallocated Space, and click Next. Windows installation begins, and the VM restarts automatically when installation completes.
10. When the Settings page appears, fill in the Password and Reenter Password fields with the same password.
11. Log in with the password you set up previously.
Installing SQL Server® 2016
1. Prior to installing, add the .NET Framework 3.5 feature to the server.
2. Mount the installation DVD for SQL Server 2016.
3. Click Run SETUP.EXE. If Autoplay does not begin the installation, navigate to the SQL Server 2016 DVD, and double-click it.
4. In the left pane, click Installation.
5. Click New SQL Server stand-alone installation or add features to an existing installation.
6. Select the Enter the product key radio button, and enter the product key. Click Next.
7. Click the checkbox to accept the license terms, and click Next.
8. Click Use Microsoft Update to check for updates, and click Next.
9. Click Install to install the setup support files.
10. If there are no failures displayed, click Next.
11. At the Setup Role screen, choose SQL Server Feature Installation, and click Next.
12. At the Feature Selection screen, select Database Engine Services, Full-Text and Semantic Extractions for Search, Client Tools
Connectivity, Client Tools Backwards Compatibility. Click Next.
13. At the Installation Rules screen, after the check completes, click Next.
14. At the Instance configuration screen, leave the default selection of default instance, and click Next.
15. At the Server Configuration screen, choose NT Service\SQLSERVERAGENT for SQL Server Agent, and choose NT Service\MSSQLSERVER for SQL Server Database Engine. Change the Startup Type to Automatic. Click Next.
16. At the Database Engine Configuration screen, select the authentication method you prefer. For our testing purposes, we selected Mixed
Mode.
17. Enter and confirm a password for the system administrator account.
18. Click Add Current user. This may take several seconds.
19. Click Next.
20. At the Error and usage reporting screen, click Next.
21. At the Installation Configuration Rules screen, check that there are no failures or relevant warnings, and click Next.
22. At the Ready to Install screen, click Install.
23. After installation completes, click Close.
24. Close the installation window.
25. Shut down the VM, and create a clone to be used for the data mart workload.
26. Power on the VM.
27. Repeat steps 2-26 for instances 2, 3, and 4, replacing the default instance name at the Instance configuration screen with
MSSQLSERVER2, MSSQLSERVER3, and MSSQLSERVER4 at step 14.
28. Shut down the VM, and create another clone to be used for the second OLTP workload.
Configuring the DVD Store 2 benchmark
Data generation overview
We generated our data using the Install.pl script included with DVD Store version 2.1 (DS2), providing the parameters for our 250GB
database size and the Microsoft SQL Server 2016 platform. We ran the Install.pl script on a utility system running Linux®. The Install.pl script
also generated the database schema.
After processing the data generation, we transferred the data files and schema creation files to a Windows-based system running SQL Server
2016. We built the 250GB database in SQL Server 2016, and then performed a full backup, storing the backup file on the C: drive for quick
access. We used that backup file to restore the server between test runs.
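As an illustrative sketch only (the database name DS2 and the backup path are placeholders, not the exact values from our scripts), restoring the database to a pristine state between runs takes roughly this form in T-SQL:

```sql
-- Hypothetical sketch: return the DS2 database to its pristine state between runs.
USE [master];
GO
-- Disconnect any lingering sessions before the restore
ALTER DATABASE DS2 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
-- Overwrite the database from the full backup stored on the C: drive
RESTORE DATABASE DS2 FROM DISK = N'C:\backups\DS2_250GB.bak' WITH REPLACE;
GO
ALTER DATABASE DS2 SET MULTI_USER;
GO
```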
The only modifications we made to the schema creation scripts were the specified file sizes for our database. We explicitly set the file
sizes higher than necessary to ensure that no file-growth activity would affect the outputs of the test. Besides this file size modification, the
database schema was created and loaded according to the DVD Store documentation. Specifically, we followed the steps below:
1. We generated the data and created the database and file structure using database creation scripts in the DS2 download. We made size
modifications specific to our 250GB database and the appropriate changes to drive letters.
2. We transferred the files from our Linux data generation system to a Windows system running SQL Server.
3. We created database tables, stored procedures, and objects using the provided DVD Store scripts.
4. We set the database recovery model to bulk-logged to prevent excess logging.
5. We loaded the data we generated into the database. For data loading, we used the import wizard in SQL Server Management Studio.
Where necessary, we retained options from the original scripts, such as Enable Identity Insert.
6. We created indices, full-text catalogs, primary keys, and foreign keys using the database-creation scripts.
7. We updated statistics on each table according to database-creation scripts, which sample 18 percent of the table data.
8. On the SQL Server instance, we created a ds2user SQL Server login using the following Transact-SQL (T-SQL) script:
USE [master]
GO
CREATE LOGIN [ds2user] WITH PASSWORD=N'',
DEFAULT_DATABASE=[master],
DEFAULT_LANGUAGE=[us_english],
CHECK_EXPIRATION=OFF,
CHECK_POLICY=OFF
GO
9. We set the database recovery model back to full.
10. We created the necessary full text index using SQL Server Management Studio.
11. We created a database user and mapped this user to the SQL Server login.
12. We then performed a full backup of the database. This backup allowed us to restore the databases to a pristine state relatively quickly
between tests.
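Steps 4, 9, and 11 above correspond to T-SQL along the following lines (a sketch, not our exact scripts; the database name DS2 is assumed):

```sql
-- Step 4: switch to bulk-logged recovery to minimize logging during the load
ALTER DATABASE DS2 SET RECOVERY BULK_LOGGED;
GO
-- (load the generated data here)
-- Step 9: return to the full recovery model after the load completes
ALTER DATABASE DS2 SET RECOVERY FULL;
GO
-- Step 11: create a database user mapped to the ds2user login
USE DS2;
GO
CREATE USER ds2user FOR LOGIN ds2user;
GO
```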
Logical name    Filegroup       Initial size (MB)
Database files
primary         PRIMARY         8
cust1           DS_CUST_FG      50,776
cust2           DS_CUST_FG      50,776
cust3           DS_CUST_FG      50,776
cust4           DS_CUST_FG      50,776
ind1            DS_IND_FG       20,118
ind2            DS_IND_FG       20,118
ind3            DS_IND_FG       20,118
ind4            DS_IND_FG       20,118
ds_misc         DS_MISC_FG      392
orders1         DS_ORDERS       25,004
orders2         DS_ORDERS       25,004
orders3         DS_ORDERS       25,004
orders4         DS_ORDERS       25,004
Log files
ds_log          Not applicable  90,152
Table 2: DS2 initial file size modifications
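Because we pre-sized each file (Table 2) so that no autogrow activity could affect results, an explicit size assignment takes a form like the following sketch (the logical file name cust1 and its 50,776 MB size come from Table 2; the database name DS2 is assumed):

```sql
-- Hypothetical sketch: explicitly size a data file so no file-growth events occur mid-test.
ALTER DATABASE DS2
MODIFY FILE (NAME = cust1, SIZE = 50776MB);
```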
Configuring the database workload client
For our testing, we used a virtual client for the Microsoft SQL Server client. To create this client, we installed Windows Server 2012 R2,
assigned a static IP address, and installed .NET 3.5.
Running the DVD Store tests
We created a series of batch files, SQL scripts, and shell scripts to automate the complete test cycle. DVD Store outputs an orders-per-minute
metric, which is a running average calculated through the test. In this report, we report the last OPM reported by each client/target pair.
Each complete test cycle consisted of the following general steps:
1. Clean up prior outputs from the target system and the client driver system.
2. Drop the databases from the target.
3. Restore the databases on the target.
4. Shut down the target.
5. Reboot the host and client system.
6. Wait for a ping response from the server under test and the client system.
7. Let the test server idle for 10 minutes.
8. Start the DVD Store driver on the client.
We used the following DVD Store parameters for testing:
ds2sqlserverdriver.exe --target=<target_IP> --ramp_rate=10 --run_time=180 --n_threads=32 --db_size=250GB
--think_time=0.1 --detailed_view=Y --warmup_time=15 --report_rate=1 --csv_output=<drive path>
Configuring the bulk load data mart
We used HammerDB to generate TPC-H-compliant source data at scale factor 3000, for a total of 3.31TB of raw data. The generated data
exists in the form of pipe-delimited text files, which we placed on NVMe PCIe SSDs for fast reads. We split the six largest tables into 32
separate files for parallel loading. Each chunk had its own table in SQL Server, for a total of 6x32 one-to-one streams. We used batch
scripting and SQLCMD to start 32 simultaneous SQL scripts. Each script contained BULK INSERT statements to load the corresponding
chunk for each table. For example, the 17th SQL script loaded ORDERS_17.txt into table ORDERS_17, and upon finishing, began loading
LINEITEM_17.txt into table LINEITEM_17, and so on through each table.
Generating the data
1. Download HammerDB v2.21, and run hammerdb.bat.
2. Click Options, then Benchmark.
3. Select the radio buttons for MSSQL Server and TPC-H, and click OK.
4. In the left pane, expand SQL Server, then TPC-H, then Datagen, and double-click Options.
FILEGROWTH = 100MB),
FILEGROUP DATA_FG_25 (NAME = tpch3000_data_25, FILENAME = 'F:\tpch\tpch_data_25.mdf', SIZE = 112640MB,
FILEGROWTH = 100MB),
FILEGROUP DATA_FG_26 (NAME = tpch3000_data_26, FILENAME = 'G:\tpch\tpch_data_26.mdf', SIZE = 112640MB,
FILEGROWTH = 100MB),
FILEGROUP DATA_FG_27 (NAME = tpch3000_data_27, FILENAME = 'H:\tpch\tpch_data_27.mdf', SIZE = 112640MB,
FILEGROWTH = 100MB),
FILEGROUP DATA_FG_28 (NAME = tpch3000_data_28, FILENAME = 'I:\tpch\tpch_data_28.mdf', SIZE = 112640MB,
FILEGROWTH = 100MB),
FILEGROUP DATA_FG_29 (NAME = tpch3000_data_29, FILENAME = 'J:\tpch\tpch_data_29.mdf', SIZE = 112640MB,
FILEGROWTH = 100MB),
FILEGROUP DATA_FG_30 (NAME = tpch3000_data_30, FILENAME = 'K:\tpch\tpch_data_30.mdf', SIZE = 112640MB,
FILEGROWTH = 100MB),
FILEGROUP DATA_FG_31 (NAME = tpch3000_data_31, FILENAME = 'L:\tpch\tpch_data_31.mdf', SIZE = 112640MB,
FILEGROWTH = 100MB),
FILEGROUP DATA_FG_32 (NAME = tpch3000_data_32, FILENAME = 'M:\tpch\tpch_data_32.mdf', SIZE = 112640MB,
FILEGROWTH = 100MB)
LOG ON
( NAME = tpch3000_log,
FILENAME = 'N:\LOG\tpch3000\tpch3000_log.ldf',
SIZE = 360000MB,
FILEGROWTH = 100MB)
GO
/*set db options*/
ALTER DATABASE tpch3000 SET RECOVERY SIMPLE
ALTER DATABASE tpch3000 SET AUTO_CREATE_STATISTICS OFF
ALTER DATABASE tpch3000 SET AUTO_UPDATE_STATISTICS OFF
ALTER DATABASE tpch3000 SET PAGE_VERIFY NONE
USE tpch3000
GO
create table CUSTOMER_1 ([c_custkey] [bigint] NOT NULL,[c_mktsegment] [char](10) NULL,[c_nationkey] [int] NULL,[c_name] [varchar](25) NULL,[c_address] [varchar](40) NULL,[c_phone] [char](15) NULL,[c_acctbal] [money] NULL,[c_comment] [varchar](118) NULL) on DATA_FG_01
create table CUSTOMER_2 ([c_custkey] [bigint] NOT NULL,[c_mktsegment] [char](10) NULL,[c_nationkey] [int] NULL,[c_name] [varchar](25) NULL,[c_address] [varchar](40) NULL,[c_phone] [char](15) NULL,[c_acctbal] [money] NULL,[c_comment] [varchar](118) NULL) on DATA_FG_02
…
create table CUSTOMER_32 ([c_custkey] [bigint] NOT NULL,[c_mktsegment] [char](10) NULL,[c_nationkey] [int] NULL,[c_name] [varchar](25) NULL,[c_address] [varchar](40) NULL,[c_phone] [char](15) NULL,[c_acctbal] [money] NULL,[c_comment] [varchar](118) NULL) on DATA_FG_32
create table LINEITEM_1 ([l_shipdate] [date] NULL,[l_orderkey] [bigint] NOT NULL,[l_discount] [money] NOT NULL,[l_extendedprice] [money] NOT NULL,[l_suppkey] [int] NOT NULL,[l_quantity] [bigint] NOT NULL,[l_returnflag] [char](1) NULL,[l_partkey] [bigint] NOT NULL,[l_linestatus] [char](1) NULL,[l_tax] [money] NOT NULL,[l_commitdate] [date] NULL,[l_receiptdate] [date] NULL,[l_shipmode] [char](10) NULL,[l_linenumber] [bigint] NOT NULL,[l_shipinstruct] [char](25) NULL,[l_comment] [varchar](44) NULL) on DATA_FG_01
create table LINEITEM_2 ([l_shipdate] [date] NULL,[l_orderkey] [bigint] NOT NULL,[l_discount] [money] NOT NULL,[l_extendedprice] [money] NOT NULL,[l_suppkey] [int] NOT NULL,[l_quantity] [bigint] NOT NULL,[l_returnflag] [char](1) NULL,[l_partkey] [bigint] NOT NULL,[l_linestatus] [char](1) NULL,[l_tax] [money] NOT NULL,[l_commitdate] [date] NULL,[l_receiptdate] [date] NULL,[l_shipmode] [char](10) NULL,[l_linenumber] [bigint] NOT NULL,[l_shipinstruct] [char](25) NULL,[l_comment] [varchar](44) NULL) on DATA_FG_02
…
create table LINEITEM_32 ([l_shipdate] [date] NULL,[l_orderkey] [bigint] NOT NULL,[l_discount] [money] NOT NULL,[l_extendedprice] [money] NOT NULL,[l_suppkey] [int] NOT NULL,[l_quantity] [bigint] NOT NULL,[l_returnflag] [char](1) NULL,[l_partkey] [bigint] NOT NULL,[l_linestatus] [char](1) NULL,[l_tax] [money] NOT NULL,[l_commitdate] [date] NULL,[l_receiptdate] [date] NULL,[l_shipmode] [char](10) NULL,[l_linenumber] [bigint] NOT NULL,[l_shipinstruct] [char](25) NULL,[l_comment] [varchar](44) NULL) on DATA_FG_32
create table ORDERS_1 ([o_orderdate] [date] NULL,[o_orderkey] [bigint] NOT NULL,[o_custkey] [bigint] NOT NULL,[o_orderpriority] [char](15) NULL,[o_shippriority] [int] NULL,[o_clerk] [char](15) NULL,[o_orderstatus] [char](1) NULL,[o_totalprice] [money] NULL,[o_comment] [varchar](79) NULL) on DATA_FG_01
create table ORDERS_2 ([o_orderdate] [date] NULL,[o_orderkey] [bigint] NOT NULL,[o_custkey] [bigint] NOT NULL,[o_orderpriority] [char](15) NULL,[o_shippriority] [int] NULL,[o_clerk] [char](15) NULL,[o_orderstatus] [char](1) NULL,[o_totalprice] [money] NULL,[o_comment] [varchar](79) NULL) on DATA_FG_02
…
create table ORDERS_32 ([o_orderdate] [date] NULL,[o_orderkey] [bigint] NOT NULL,[o_custkey] [bigint] NOT NULL,[o_orderpriority] [char](15) NULL,[o_shippriority] [int] NULL,[o_clerk] [char](15) NULL,[o_orderstatus] [char](1) NULL,[o_totalprice] [money] NULL,[o_comment] [varchar](79) NULL) on DATA_FG_32
create table PART_1 ([p_partkey] [bigint] NOT NULL,[p_type] [varchar](25) NULL,[p_size] [int] NULL,[p_brand] [char](10) NULL,[p_name] [varchar](55) NULL,[p_container] [char](10) NULL,[p_mfgr] [char](25) NULL,[p_retailprice] [money] NULL,[p_comment] [varchar](23) NULL) on DATA_FG_01
create table PART_2 ([p_partkey] [bigint] NOT NULL,[p_type] [varchar](25) NULL,[p_size] [int] NULL,[p_brand] [char](10) NULL,[p_name] [varchar](55) NULL,[p_container] [char](10) NULL,[p_mfgr] [char](25) NULL,[p_retailprice] [money] NULL,[p_comment] [varchar](23) NULL) on DATA_FG_02
…
create table PART_32 ([p_partkey] [bigint] NOT NULL,[p_type] [varchar](25) NULL,[p_size] [int] NULL,[p_brand] [char](10) NULL,[p_name] [varchar](55) NULL,[p_container] [char](10) NULL,[p_mfgr] [char](25) NULL,[p_retailprice] [money] NULL,[p_comment] [varchar](23) NULL) on DATA_FG_32
create table PARTSUPP_1 ([ps_partkey] [bigint] NOT NULL,[ps_suppkey] [int] NOT NULL,[ps_supplycost] [money] NOT NULL,[ps_availqty] [int] NULL,[ps_comment] [varchar](199) NULL) on DATA_FG_01
create table PARTSUPP_2 ([ps_partkey] [bigint] NOT NULL,[ps_suppkey] [int] NOT NULL,[ps_supplycost] [money] NOT NULL,[ps_availqty] [int] NULL,[ps_comment] [varchar](199) NULL) on DATA_FG_02
…
create table PARTSUPP_32 ([ps_partkey] [bigint] NOT NULL,[ps_suppkey] [int] NOT NULL,[ps_supplycost] [money] NOT NULL,[ps_availqty] [int] NULL,[ps_comment] [varchar](199) NULL) on DATA_FG_32
create table SUPPLIER_1 ([s_suppkey] [int] NOT NULL,[s_nationkey] [int] NULL,[s_comment] [varchar](102) NULL,[s_name] [char](25) NULL,[s_address] [varchar](40) NULL,[s_phone] [char](15) NULL,[s_acctbal] [money] NULL) on DATA_FG_01
create table SUPPLIER_2 ([s_suppkey] [int] NOT NULL,[s_nationkey] [int] NULL,[s_comment] [varchar](102) NULL,[s_name] [char](25) NULL,[s_address] [varchar](40) NULL,[s_phone] [char](15) NULL,[s_acctbal] [money] NULL) on DATA_FG_02
…
create table SUPPLIER_32 ([s_suppkey] [int] NOT NULL,[s_nationkey] [int] NULL,[s_comment] [varchar](102) NULL,[s_name] [char](25) NULL,[s_address] [varchar](40) NULL,[s_phone] [char](15) NULL,[s_acctbal] [money] NULL) on DATA_FG_32
Inserting the data into Microsoft SQL Server
We used 32 individual SQL scripts to create a BULK INSERT process on each filegroup. The first script is shown here as an example:
bulk insert tpch3000..CUSTOMER_1 from 'O:\CUSTOMER_1.tbl' with
(TABLOCK,DATAFILETYPE='char',CODEPAGE='raw',FieldTerminator='|',BATCHSIZE=14062500)
bulk insert tpch3000..LINEITEM_1 from 'O:\LINEITEM_1.tbl' with
(TABLOCK,DATAFILETYPE='char',CODEPAGE='raw',FieldTerminator='|',BATCHSIZE=562500000)
bulk insert tpch3000..ORDERS_1 from 'O:\ORDERS_1.tbl' with
(TABLOCK,DATAFILETYPE='char',CODEPAGE='raw',FieldTerminator='|',BATCHSIZE=140625000)
bulk insert tpch3000..PART_1 from 'O:\PART_1.tbl' with
(TABLOCK,DATAFILETYPE='char',CODEPAGE='raw',FieldTerminator='|',BATCHSIZE=18750000)
bulk insert tpch3000..PARTSUPP_1 from 'O:\PARTSUPP_1.tbl' with
(TABLOCK,DATAFILETYPE='char',CODEPAGE='raw',FieldTerminator='|',BATCHSIZE=75000000)
bulk insert tpch3000..SUPPLIER_1 from 'O:\SUPPLIER_1.tbl' with
(TABLOCK,DATAFILETYPE='char',CODEPAGE='raw',FieldTerminator='|',BATCHSIZE=937500)
Starting the SQL BULK INSERT scripts
We used Windows CMD and SQLCMD to start the 32 BULK INSERT scripts with CPU affinity. The /affinity values are hexadecimal CPU bitmasks, so each script is pinned to a single logical processor on the NUMA node that /node specifies:
start /node 0 /affinity 1 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_1.sql
start /node 0 /affinity 2 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_2.sql
start /node 0 /affinity 4 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_3.sql
start /node 0 /affinity 8 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_4.sql
start /node 0 /affinity 10 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_5.sql
start /node 0 /affinity 20 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_6.sql
start /node 0 /affinity 40 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_7.sql
start /node 0 /affinity 80 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_8.sql
start /node 0 /affinity 100 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_9.sql
start /node 0 /affinity 200 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_10.sql
start /node 0 /affinity 400 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_11.sql
start /node 0 /affinity 800 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_12.sql
start /node 0 /affinity 1000 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_13.sql
start /node 0 /affinity 2000 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_14.sql
start /node 0 /affinity 4000 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_15.sql
start /node 0 /affinity 8000 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_16.sql
start /node 1 /affinity 1 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_17.sql
start /node 1 /affinity 2 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_18.sql
start /node 1 /affinity 4 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_19.sql
start /node 1 /affinity 8 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_20.sql
start /node 1 /affinity 10 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_21.sql
start /node 1 /affinity 20 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_22.sql
start /node 1 /affinity 40 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_23.sql
start /node 1 /affinity 80 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_24.sql
start /node 1 /affinity 100 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_25.sql
start /node 1 /affinity 200 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_26.sql
start /node 1 /affinity 400 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_27.sql
start /node 1 /affinity 800 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_28.sql
start /node 1 /affinity 1000 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_29.sql
start /node 1 /affinity 2000 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_30.sql
start /node 1 /affinity 4000 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_31.sql
start /node 1 /affinity 8000 sqlcmd -S localhost -d tpch3000 -U sa -P ****** -i C:\Users\Administrator\Documents\gen_32.sql

Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
DISCLAIMER OF WARRANTIES; LIMITATION OF LIABILITY:
Principled Technologies, Inc. has made reasonable efforts to ensure the accuracy and validity of its testing; however, Principled Technologies, Inc. specifically disclaims
any warranty, expressed or implied, relating to the test results and analysis, their accuracy, completeness or quality, including any implied warranty of fitness for any
particular purpose. All persons or entities relying on the results of any testing do so at their own risk, and agree that Principled Technologies, Inc., its employees and its
subcontractors shall have no liability whatsoever from any claim of loss or damage on account of any alleged error or defect in any testing procedure or result.
In no event shall Principled Technologies, Inc. be liable for indirect, special, incidental, or consequential damages in connection with its testing, even if advised of the
possibility of such damages. In no event shall Principled Technologies, Inc.'s liability, including for direct damages, exceed the amounts paid in connection with Principled
Technologies, Inc.'s testing. Customer's sole and exclusive remedies are as set forth herein.
This project was commissioned by Dell EMC.
Principled Technologies®
Facts matter.®