In an enterprise environment, a data center VM footprint can grow quickly; large-scale deployments of thousands of virtual machines are becoming increasingly common. Risk of failure grows proportionally to the number of systems deployed, and critical failures are unavoidable. Your ability to offer data protection with a backup solution is critical to business continuity. Elongated, inefficient protection windows can create resource contention with production environments, making it critical to execute system backups in a finite window of time.
The Veritas NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 67.3 percent less time in SAN testing and used NetApp array-based snapshots to create recovery points in 54.1 percent less time than Competitor “V.” NetBackup performed application-consistent backups at 1,000 VMs, while Competitor “V” began to fail as the environment approached 300 VMs. Competitor “V” also failed to complete a concurrent restore of 24 VMs, while Veritas NetBackup succeeded. The ability to complete backups and recoveries at scale is one of the most critical factors in determining the right solution for you. These time savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
Symantec NetBackup 7.6 benchmark comparison: Data protection in a large-scale... — Principled Technologies
In an enterprise environment, a data center VM footprint can grow quickly; large-scale deployments of thousands of virtual machines are becoming increasingly common. Risk of failure grows proportionally to the number of systems deployed, and critical failures are unavoidable. Your ability to offer data protection with a backup solution is critical to business continuity. Elongated, inefficient protection windows can create resource contention with production environments; therefore, it is critical to execute system backups in a finite window of time.
The Symantec NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 80.3 percent less time in SAN testing and used NetApp array-based snapshots to create recovery points in 93.8 percent less time than Competitor “C.” In addition, the Symantec NetBackup Integrated Appliance with NetBackup 7.6 created backup images that offered granular recovery without additional steps and in a backup window 69.0 percent shorter than the backup window needed for Competitor “C.” These time savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
Veritas NetBackup benchmark comparison: Data protection in a large-scale virt... — Principled Technologies
A data center's VM footprint can grow quickly in an enterprise environment, and large-scale VM deployments in the thousands are common. As the number of deployed systems grows, so does the risk of failure. Critical failures can become unavoidable, and offering data protection with a backup solution promotes business continuity. Elongated protection windows requiring multiple jobs of different types can create resource contention with production environments and consume valuable IT admin time, so completing system backups within a finite window is important.
In our hands-on SAN backup testing, the Veritas NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 66.8 percent less time than Competitor “E” did. In addition, the Veritas NetBackup Integrated Appliance with NetBackup 7.6 created backup images that offered granular recovery without additional steps. These time and effort savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
Proper resource allocation is critical to achieving top application performance in a virtualized environment. Resource contention degrades performance and underutilization can lead to costly server sprawl.
We found that adding VMTurbo to a VMware vSphere 5.5 cluster and following its reallocation recommendations gave our application performance a big boost. After reducing vCPU count, increasing memory allocation to active databases, and moving VMs to more responsive storage as VMTurbo directed, online transactions increased by 23.7 percent while latency dropped significantly. Avoid the pitfalls of poorly allocated VM resources and give your virtualized application every advantage by gaining control of your environment at every level.
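For teams applying similar right-sizing changes by hand, here is a minimal pyVmomi sketch of reconfiguring a VM's vCPU count and memory. The vCenter host, credentials, VM name, and sizing values are hypothetical placeholders, and this illustrates the kind of reallocation described above rather than anything VMTurbo itself runs.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate validation; use verified certs in production.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",          # hypothetical vCenter
                  user="administrator@vsphere.local",  # hypothetical credentials
                  pwd="password",
                  sslContext=context)

# Locate the target VM by name (hypothetical name).
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql-db-01")

# Apply the recommendation: fewer vCPUs, more memory. Note that lowering the
# vCPU count generally requires the VM powered off unless hot-remove is enabled.
spec = vim.vm.ConfigSpec(numCPUs=4, memoryMB=32768)
task = vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```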
Component upgrades from Intel and Dell can increase VM density and boost perf... — Principled Technologies
As the needs of your business grow, so must the power of your server infrastructure. Rather than purchasing replacement servers with base configurations, consider upgrading key components to ensure you get the performance you need.
We found that upgrading to the Dell PowerEdge R730 with the Intel Xeon processor E5-2699 v3, Microsoft Windows Server 2012 R2 operating system, Intel SSD DC S3700 series drive, and Intel Ethernet CNA X520 series adapters supported an extra 16 VMs, 67 percent more VMs than the previous-generation Dell PowerEdge R720 solution.
When you purchase a server, wisely selecting these components offered by Dell and Intel can allow your business to hit the sweet spot of supporting all your users without breaking the bank. The option to upgrade server components can provide your infrastructure with room to grow in the future, as your business needs increase.
Finally, these select upgrades could translate to savings for your business—fewer servers you need to purchase now to meet performance demands and a longer lifespan for these servers as your business continues to grow.
The switching method you choose for your SBC environment can help determine performance and the experience that end-users have. We found that unifying switching with Cisco VM-FEX resulted in up to 29 percent lower latency than a solution using a traditional vSwitch when running a Citrix XenApp hosted shared desktop farm. Furthermore, the Cisco VM-FEX solution used up to 53 percent less CPU than the vSwitch solution did under extreme network conditions. In addition to these performance advantages, Cisco UCS Manager provides a central point of management and a simplified method to add vSphere hosts to the VM-FEX-enabled vSwitch, which can reduce management time and costs.
As our results show, switching to Cisco VM-FEX can provide your users with a more responsive environment.
A single-socket Dell EMC PowerEdge R7515 solution delivered better value on a... — Principled Technologies
If your company is running important business applications in VMware vSAN clusters of servers that are several years old, chances are good that you’re considering upgrading to newer hardware. Our testing demonstrated that our clusters of single-socket Dell EMC PowerEdge R7515 servers and clusters of dual-socket HPE ProLiant DL380 Gen10 servers could both improve upon the database performance of a legacy cluster with five-year-old servers by more than 50 percent, with the Dell EMC cluster achieving 93.4 percent of the performance of the HPE cluster.
Reap better SQL Server OLTP performance with next-generation Dell EMC PowerEd... — Principled Technologies
These new servers achieved up to 36.1 percent more OLTP database work than current-generation Dell EMC PowerEdge MX servers, while also lowering application response time
VMworld 2013: vSphere Data Protection 5.5 Advanced VMware Backup and Recovery... — VMworld
VMworld Europe 2013
Mauricio Barra, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Jeff Hunter, VMware
Prepare images for machine learning faster with servers powered by AMD EPYC 7... — Principled Technologies
A server cluster with 3rd Gen AMD EPYC processors achieved higher throughput and took less time to prepare images for classification than a server cluster with 3rd Gen Intel Xeon Platinum 8380 processors
By automating high-touch, routine tasks, Dell EMC OME integrations and plugins empower IT admins to deliver effective and efficient systems management from a single console.
Slow performance and unavailable critical applications can impede a company’s progress. You can apply patches and updates to improve application quality and user experience, but these changes need to be tested in resource-intensive environments before deployment. Keeping these applications connected to their data is vital, too, as on-premises events can put availability at risk.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. As we added VMs designed for test/dev environments, the production workload maintained an acceptable level of IOPS and achieved an average storage latency of less than a millisecond. The solution also kept data highly available with no downtime and no performance drop when we initiated a lost host connection for the primary storage. To run your company’s critical database applications, consider the Dell EMC VMAX 250F for your datacenter.
Dell PowerEdge R920 running Oracle Database: Benefits of upgrading with NVMe ... — Principled Technologies
Strong server performance is essential to companies running Oracle Database. The new Dell PowerEdge R920 provides strong performance in its base configuration with 24 SAS hard disks, but this performance gets an enormous boost in the configuration containing NVMe Express Flash PCIe SSDs. In our testing, the upgraded configuration of the Dell PowerEdge R920 delivered 14.9 times the database performance of the base configuration. In addition, when testing the raw I/O throughput of the NVMe Express Flash PCIe SSDs, we saw as much as 192.8 times the IOPS of the base configuration. Given that the storage subsystem is critical to servers generally and database applications specifically, the performance improvements offered by NVMe Express Flash PCIe SSDs can lead to great service improvements for your customers, making this upgrade a very wise investment.
AWS EC2 M6i instances with 3rd Gen Intel Xeon Scalable processors accelerated... — Principled Technologies
At multiple instance sizes, M6i instances classified more frames per second than M5n instances with previous-gen processors or M6a instances with 3rd Gen AMD EPYC processors
Boosting performance with the Dell Acceleration Appliance for Databases — Principled Technologies
If your business is expanding and you need to support more users accessing your databases, it’s time to act. Upgrading your database infrastructure with a flash storage-based solution is a smart way to improve performance without adding more servers or taking up very much rack space, which comes at a premium. The Dell Acceleration Appliance for Databases addresses this by providing strong performance when combined with your existing infrastructure or on its own.
We found that adding a highly available DAAD solution to our database application provided up to 3.01 times the Oracle Database 12c performance, which can make a big difference to your bottom line. Additionally, the DAAD delivered 3.14 times the database performance when replacing traditional storage completely, which could enable your infrastructure to keep up with your growing business’s needs.
Accelerate Your Signature Banking Applications with IBM Storage Offerings — Paula Koziol
Signature users can cut application run and response times by as much as 50% by applying the latest IBM Storage offerings. Hear about an example Signature user’s experience and benefits with IBM Flash, learn about IBM’s direction with the IBM i platform, and get answers to questions you may have about upgrading your IT infrastructure. Current data growth, analytics, and real-time access needs have changed the storage landscape for our clients, particularly in banking. IBM’s multi-billion-dollar investments in storage are making a significant impact on the speed, efficiency, and management of these needs. Offerings such as all-flash systems and software-defined storage have become especially attractive to our banking clients, who are both accelerating existing applications, such as core banking, and creating new applications demanding real-time access, such as cybersecurity and cognitive in payments. Learn how others in the financial services industry are addressing core banking, payments, and risk & compliance applications using IBM Storage offerings. In addition to Signature, other core banking examples applying flash storage within Fiserv include Premier, Precision, and XP2. Many of the same business benefits experienced within the banking industry could apply to you and your clients. Learn how you can easily implement these proven capabilities with your Signature application now.
TECHNICAL WHITE PAPER ▸ NetBackup 7.6 Plugin for VMware vCenter — Symantec
In NetBackup 7.6, the NetBackup plug-in for vCenter integrates with VMware’s vSphere Client user interface to provide new VMware virtual machine administration capabilities.
The plug-in enables VMware administrators…
▸ To monitor their virtual machine backups directly from the VMware vSphere Client UI.
▸ To export virtual machine backup reports from the vSphere Client UI.
▸ To initiate full virtual machine recovery directly from a Recovery Portal in the vSphere Client UI.
Store data more efficiently and increase I/O performance with lower latency w... — Principled Technologies
Compared to an array from another vendor, the PowerMax 8000 offered a better inline data reduction ratio and better performance during simulated OLTP and data extraction workloads
Historically, backups have been defined and referenced by the hostname of the physical system being protected. This worked well when the relationship between the physical host and the operating system was a direct, one-to-one relationship. Backup processing impact was limited to each physical client, and the biggest concern was saturating the network with backup traffic. This was easily managed by limiting the number of simultaneous client backups via a simple setting within the NetBackup policy.
Virtual machine technologies have changed this physical hardware dynamic. Dozens of operating systems (virtual machines) can now reside on a single physical (ESX) host connected to a single storage LUN with network access through a single NIC. When using traditional policy configurations, backup processing randomly occurs with no regard to the physical location of each virtual machine. As backups progress, a subset of ESX servers can be heavily impacted with active backups while other ESX systems sit idly waiting for their virtual machines to be protected. The effect of this is that backups tend to be slower than they need to be and backup processing impact on the ESX servers tends to be random and lopsided. Standard backup policy definitions simply do not translate well into virtual environments.
The NetBackup Virtual Machine Intelligent Policy (VIP) feature is designed to solve this problem and more. With Virtual Machine Intelligent Policy, backup processing can be automatically load balanced across the entire virtual machine environment. No ESX server is unfairly taxed with excessive backup processing, and backups can be significantly faster. Once configured, this load balancing automatically detects changes in the virtual machine environment and adjusts backup processing accordingly. Virtual Machine Intelligent Policy places virtual machine backups on autopilot.
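To make the load-balancing idea concrete, here is a minimal Python sketch of interleaving VM backups across ESX hosts rather than draining one host at a time. It illustrates the concept only and is not NetBackup code; the host and VM names are hypothetical.

```python
from itertools import zip_longest

def balanced_backup_order(vms_by_host):
    """Interleave VMs from each ESX host instead of draining one host at a time."""
    order = []
    for backup_round in zip_longest(*vms_by_host.values()):
        order.extend(vm for vm in backup_round if vm is not None)
    return order

# Hypothetical inventory: VMs grouped by the ESX host they reside on.
inventory = {
    "esx-01": ["vm-a", "vm-b", "vm-c"],
    "esx-02": ["vm-d", "vm-e"],
    "esx-03": ["vm-f", "vm-g", "vm-h", "vm-i"],
}
print(balanced_backup_order(inventory))
# ['vm-a', 'vm-d', 'vm-f', 'vm-b', 'vm-e', 'vm-g', 'vm-c', 'vm-h', 'vm-i']
```

Scheduling backups in this interleaved order keeps the number of simultaneous jobs per host roughly even, which is the effect the VIP feature automates.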
VMware vSphere vMotion: 5.4 times faster than Hyper-V Live Migration — VMware
Businesses using a virtualized infrastructure have many reasons to move active virtual machines (VMs) from one physical server to another. Whether the migrations are for routine maintenance, balancing performance needs, work distribution (consolidating VMs onto fewer servers during non-peak hours to conserve resources), or another reason, the best virtual infrastructure platform executes the move as quickly as possible and with minimal impact to end users.
We tested two competing features that move active VMs from one server to another, VMware vSphere 5 vMotion and Microsoft® Windows Server® 2008 R2 SP1 Hyper-V Live Migration. While both perform these moves with no VM downtime, in our testing the VMware solution did so faster, with greater application stability, and with less impact to application performance – clearly showing that not all live migration technologies are the same. VMware also holds an enormous advantage in concurrency: VMware vSphere 5 can move eight VMs at a time while a Microsoft Hyper-V cluster node can take part only as the source or destination in one live migration at a time. In our two test scenarios, the VMware vMotion solution was up to 5.4 times faster than the Microsoft Hyper-V Live Migration solution.
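As a back-of-the-envelope illustration of why concurrency dominates evacuation time, the sketch below compares eight-at-a-time migrations against one-at-a-time, using a hypothetical per-VM migration time; it deliberately ignores bandwidth contention and other real-world effects.

```python
import math

def evacuation_minutes(num_vms, concurrency, minutes_per_vm):
    """Wall-clock minutes to migrate num_vms, moving `concurrency` VMs at a time.
    Simplified: ignores bandwidth contention and per-migration variance."""
    return math.ceil(num_vms / concurrency) * minutes_per_vm

vms, per_vm = 64, 2.0  # 64 VMs at ~2 minutes each (illustrative placeholder)
print(f"vMotion, 8 concurrent:       {evacuation_minutes(vms, 8, per_vm):.0f} minutes")
print(f"Live Migration, 1 at a time: {evacuation_minutes(vms, 1, per_vm):.0f} minutes")
```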
Efficient Data Protection in VMware environments. You will learn about the basics of data protection in VMware environments and you will also find sample configurations and recommendations including Symantec Backup Exec / NetBackup, Fujitsu ETERNUS LT and Fujitsu ETERNUS CS800.
Business-critical applications on VMware vSphere 6, VMware Virtual SAN, and V... — Principled Technologies
Moving to the virtualized, software-defined datacenter can offer real benefits to today’s organizations. As our testing showed, virtualizing business-critical applications with VMware vSphere, VMware Virtual SAN, and VMware NSX not only delivered reliable performance in a peak utilization scenario, but also delivered business continuity during and after a simulated site evacuation.
Using this VMware Validated Design with QCT hardware and Intel SSDs, we demonstrated a virtualized critical Oracle Database application environment delivering strong performance, even when under extreme duress.
Recognizing that many organizations have multiple sites, we also proved that our environment performed reliably under a site evacuation scenario, migrating the primary site VMs to the secondary in just over eight minutes with no downtime.
With these features and strengths, the VMware Validated Design SDDC is a proven solution that allows for efficient deployment of components and can help improve the reliability, flexibility, and mobility of your multi-site environment.
Dynamic Data Centers - Taking it to the next level — sanvmibj
Delivering on the Promise of a Virtualized Dynamic Data Center
- Maximize economic value with end-to-end virtualization
- Break down your silos (silos of virtualization are still silos)
- Explore the potential of cloud services
Aventis Systems partnered with Veeam to provide a deep technical dive into the features of Veeam Backup Essentials v9. Check out the recording of the 45-minute live webinar now: https://youtu.be/67prVGb4Dwc
An extension of VMware vCenter that gives IT professionals disaster recovery, site migration, and non-disruptive testing capabilities.
White paper: IBM FlashSystems in VMware Environments — thinkASG
Drive performance in VMware environments with IBM FlashSystem. IBM flash storage delivers extreme, scalable performance for virtualized infrastructure.
A company’s success depends on critical application performance and availability. Upgrades and patches can improve application efficiency and user experience, but making the necessary changes requires resource intensive environments to test updates before deploying them. What’s more, these applications need to continue accessing data even in the event of an on-premises crisis.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. Storage latency for the VMAX 250F peaked at a millisecond in our testing while IOPS stayed within an acceptable range. The solution also kept data highly available with no downtime or performance drop when we initiated a lost host connection for the primary storage. Consider the Dell EMC VMAX 250F array for your datacenter to support the critical database applications that drive your company.
I/O performance comparison of VMware vCloud Hybrid Service and Amazon Web Ser... — Principled Technologies
Business computing is making its way to the cloud in a dramatic fashion. Selecting the right cloud service provider is a pivotal decision that could have a significant effect on how much your company benefits from this move.
Throughout our I/O tests, we found that VMware vCloud Hybrid Service instances performed dramatically better than the AWS instances, earning consistently higher Fio scores. On a 4K random workload, vCHS delivered performance that averaged 7 times that of the AWS solution, and on a 1M sequential workload, it delivered on average 9 times the performance of AWS. Across all scenarios, vCHS delivered at least 3 times greater performance than AWS.
By choosing a cloud service that can deliver greater throughput, you can boost the performance of your disk-intensive applications, which can help you make the most of your investment in the cloud platform.
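For readers who want to run a comparable test themselves, here is a minimal sketch that drives fio (the benchmark named above) with a 4K random read job and a 1M sequential read job. The file path, sizes, and runtimes are placeholders, and the exact job parameters the study used are not specified.

```python
import pathlib
import subprocess
import textwrap

# Two read workloads matching the patterns described above: 4K random and
# 1M sequential. Path, size, and runtime are placeholders.
job = textwrap.dedent("""\
    [global]
    ioengine=libaio
    direct=1
    size=4g
    runtime=120
    time_based=1
    filename=/data/fio.test

    [rand-4k]
    bs=4k
    rw=randread
    iodepth=32

    [seq-1m]
    stonewall
    bs=1m
    rw=read
    iodepth=8
    """)

pathlib.Path("cloud_io.fio").write_text(job)
subprocess.run(["fio", "cloud_io.fio", "--output=results.txt"], check=True)
```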
VMworld 2013: Maximize Database Performance in Your Software-Defined Data Center — VMworld
VMworld 2013
Mark Achtemichuk, VMware
Michael Webster, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Similar to Veritas NetBackup 7.6 benchmark comparison: Data protection in a large-scale virtual environment (Part 3)
Investing in GenAI: Cost‑benefit analysis of Dell on‑premises deployments vs.... — Principled Technologies
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration of how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
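The core of such an analysis is a three-year total-cost comparison. The sketch below shows the shape of that calculation with deliberately made-up placeholder inputs; none of the dollar figures come from the study.

```python
# Hypothetical three-year cost comparison of on-premises vs. cloud GenAI hosting.
# All inputs are illustrative placeholders, not figures from the study.

def three_year_cost_on_prem(hardware_capex, monthly_opex):
    """Total cost of an owned on-premises solution over 36 months."""
    return hardware_capex + 36 * monthly_opex

def three_year_cost_cloud(hourly_rate, hours_per_month):
    """Total cost of a continuously billed cloud ML instance over 36 months."""
    return hourly_rate * hours_per_month * 36

on_prem = three_year_cost_on_prem(hardware_capex=250_000, monthly_opex=3_000)
cloud = three_year_cost_cloud(hourly_rate=40.0, hours_per_month=730)

print(f"On-prem 3-year cost:    ${on_prem:,.0f}")
print(f"Cloud 3-year cost:      ${cloud:,.0f}")
print(f"Cloud-to-on-prem ratio: {cloud / on_prem:.1f}x")
```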
Workstations powered by Intel can play a vital role in CPU-intensive AI devel... — Principled Technologies
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
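As a concrete illustration of the 16-bit precision technique, here is a minimal PyTorch sketch running CPU inference under autocast with a 16-bit dtype. The toy model is a placeholder, and we use bfloat16 as the CPU-friendly 16-bit format; the exact precision mode the workflow used may differ.

```python
import torch
import torch.nn as nn

# Toy placeholder model and input; the study's actual models differ.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).eval()
x = torch.randn(256, 1024)

with torch.inference_mode():
    full = model(x)  # default float32
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        half = model(x)  # matmuls run in bfloat16, cutting memory traffic

print(full.dtype, half.dtype)  # torch.float32 torch.bfloat16
```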
Enable security features with no impact to OLTP performance with Dell PowerEd... — Principled Technologies
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security—without paying a large performance cost—consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
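As a quick sanity check, a Linux host's /proc/cpuinfo advertises these features when the processor and firmware expose them; the snippet below looks for the relevant flags. The flag names are the ones the Linux kernel uses for AMD memory encryption, though availability still depends on BIOS settings.

```python
# Check whether this Linux host's CPU advertises AMD memory-encryption features.
FEATURES = {"sme", "sev", "sev_es"}

with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            present = FEATURES & set(line.split())
            print("Supported memory-encryption flags:", sorted(present) or "none")
            break
```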
Improving energy efficiency in the data center: Endure higher temperatures wi... — Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and experienced component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe... — Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back... — Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be—you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data-intensive applications and workloads while reducing data center power costs.
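Performance per watt is the simple ratio behind that combined metric. The sketch below computes it for two hypothetical clusters; the throughput and power numbers are placeholders, not measurements from this study.

```python
# Combine throughput and power draw into a single efficiency metric.
def perf_per_watt(requests_per_sec, avg_watts):
    return requests_per_sec / avg_watts

cluster_a = perf_per_watt(requests_per_sec=1_050_000, avg_watts=1_600)  # hypothetical
cluster_b = perf_per_watt(requests_per_sec=500_000, avg_watts=2_000)    # hypothetical

print(f"Cluster A: {cluster_a:,.0f} req/s per watt")
print(f"Cluster B: {cluster_b:,.0f} req/s per watt")
print(f"Advantage: {cluster_a / cluster_b:.1f}x")
```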
Improve performance and gain room to grow by easily migrating to a modern Ope... — Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Boost PC performance: How more available memory can improve productivity — Principled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg... — Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... — Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms. For reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operation adapter provided useful infrastructure insights.
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man... — Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than the Supermicro tools did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Increase security, sustainability, and efficiency with robust Dell server man...Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than Supermicro servers did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ...Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a full NVMe backed configuration, but Vendor A doesn’t—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWSPrincipled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with capacity of up to 8 PBs—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
Get in and stay in the productivity zone with the HP Z2 G9 Tower WorkstationPrincipled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to Dell Precision 3660 and 5860 tower workstations in optimized performance modes
Conclusion
HP Z2 G9 Tower Workstation users can change the BIOS settings to dial in the performance mode that best suits their needs: High Performance Mode, Performance Mode, or Quiet Mode. In good
news for both creative and technical professionals, we found that an Intel Core i9-13900 processor-powered HP Z2 G9 Tower Workstation set to High Performance mode received higher CPU-based benchmark scores than both a similarly configured Dell Precision 3660 and a Dell Precision 5860 equipped with an Intel Xeon w5-2455x processor. Plus, the HP Z2 G9 Tower Workstation was quieter while running CPU-intensive Cinebench 2024 and SPECapc for Solidworks 2022 workloads than both Dell Precision tower workstations. This means HP Z2 G9 Tower Workstation users who prize performance over everything else can do so without sacrificing a quiet workspace.
Open up new possibilities with higher transactional database performance from...Principled Technologies
In our PostgreSQL tests, R7i instances boosted performance over R6i instances with previous-gen processors
If you use the open-source PostgreSQL database to run your critical business operations, you have many cloud options from which to choose. While many of these instances can do the job, some can deliver stronger performance, which can mean getting a greater return on your cloud investment.
We conducted hands-on testing with the HammerDB TPROC-C benchmark to see how the PostgreSQL performance of Amazon EC2 R7i instances, enabled by 4th Gen Intel Xeon Scalable processors, stacked up to that of R6i instances with previous-generation processors. We learned that small, medium-sized, and large R7i instances with the newer processors delivered better OLTP performance, with improvements as high as 13.8 percent. By choosing the R7i instances, your organization has the potential to support more users, deliver a better experience to those users, and even lower your cloud operating expenditures by requiring fewer instances to get the job done.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Essentials of Automations: Optimizing FME Workflows with Parameters
Veritas NetBackup 7.6 benchmark comparison: Data protection in a large-scale virtual environment (Part 3)
JULY 2015 (Revised)
A PRINCIPLED TECHNOLOGIES TEST REPORT
(Third of a three-part series)
Commissioned by Veritas Corp.
VERITAS NETBACKUP BENCHMARK COMPARISON: DATA PROTECTION IN A LARGE-SCALE VIRTUAL ENVIRONMENT (PART 3)
Virtualization[1] technology is changing the way data centers work. Technologies such as VMware® vSphere® shrink the physical footprint of computing hardware by increasing the number of virtual servers. Within the enterprise, large-scale deployments of thousands of virtual machines are now common. To protect the data on these virtual systems, enterprises employ a variety of backup methods, including hardware snapshots, hypervisor-level backup (vStorage APIs for Data Protection, or VADP, in the case of VMware technology), and traditional agent-in-guest methods. Enterprises that utilize both block Storage Area Network (SAN) systems and file-based Network-Attached Storage (NAS) may scale more effectively, but backup and recovery systems must fully leverage the strengths of each platform to provide efficient service with minimal impact to the production environment.
In our hands-on testing at Principled Technologies, we wanted to see how leading enterprise backup and recovery solutions handled large-scale virtual machine (VM) deployments based on vSphere. We tested a solution using industry-leading Veritas NetBackup software and the Veritas NetBackup Integrated Appliance, with NetApp FAS3200-series arrays to host the VMs, and a comparable solution from another leading competitor (Competitor “V”). We tested two types of scenarios: one that utilized SAN storage and one that utilized NAS storage. In both scenarios, we tested with increasing populations of VMs, as low as 100 and as high as 1,000, to see how each solution scaled as the environment grew.
[1] In July 2014, Symantec commissioned Principled Technologies to perform Part 1 and Part 2 of this series. In January 2015, Symantec selected Veritas Technologies Corporation as the name for its independent information management company and changed the name of the product from Symantec NetBackup to Veritas NetBackup.
We found the Veritas NetBackup converged platform consisting of the
NetBackup Integrated Appliance and NetBackup 7.6 software, featuring capabilities such
as Accelerator, Replication Director, and Instant Recovery—all for VMware vSphere—
provided a more scalable platform than the Competitor “V” platform. With 1,000 VMs,
NetBackup completed the SAN transport backup in a Fibre Channel SAN environment in
67.3 percent less time than the Competitor “V” solution did. In the NAS scenario with
1,000 VMs, Replication Director created recovery points via NetApp array-based
snapshots in 54.1 percent less time than the Competitor “V” solution did. Competitor
“V” failed to perform application consistent backups when the environment grew to
nearly 300 VMs. It also failed to prove recovery at scale when pushed to recover 24 VMs
concurrently.
In our tests, Veritas NetBackup with the NetBackup Integrated Appliance
provided superior scalability and performance needed to protect the largest virtual
server deployments, when compared to the Competitor “V” solution.
WHAT WE COMPARED
Backup via VMware vStorage APIs for data protection
Using the NetBackup Integrated Appliance as both media server and backup
storage, we tested how long it took to execute a backup with virtual application
protection. Using the breakdown illustrated in Figure 5, we performed full backups with
application protection on groups of VMs from 100 to 1,000, measuring the backup time
elapsed.
Backup via storage-array snapshots
We rebuilt our storage network and datastores as NFS and used NetBackup
Replication Director to create crash-consistent backups of NAS-based NFS storage at
100-, 200-, 500-, and 1,000-VM counts and an application-consistent backup at 1,000
VMs. We then performed the same tests on Competitor “V.” (Note: Because Competitor
“V” failed to perform application-consistent backups at 1,000 VMs, we had to scale
down the environment for further investigation. Please see the details on page 11.) We
captured metrics on backup speed and hardware performance to determine if there was
any performance degradation in the environment. We also recorded the administrative
time required for each test.
OUR ENVIRONMENT
We set up the test environment using 20 Dell™ PowerEdge™ M420 server
blades running VMware vSphere ESXi 5.5. Figure 1 shows how we configured our data
network. We used this configuration universally on SAN and NAS testing. Figure 2 shows
our storage network for vStorage APIs-based backup testing, and Figure 3 shows our
storage network for storage-array snapshot-based backup testing.
Figure 1: Detailed test bed layout: data network.
Figure 2: Detailed storage network: vStorage APIs-based backups.
Figure 3: Detailed storage network: Storage-array snapshot-based backups.
We created a test environment of 1,000 Microsoft® Windows Server®-based
VMs in several different configurations, depending on the test. We used Windows
Server 2012 for application VMs, and Windows Server 2008 R2 Core installation for the
standalone Web and idle file server VMs.
To balance the load across the ESXi hosts and storage, we created a matrix to
ensure that equal load was distributed across all four NetApp filers (four volumes for the
NAS testing, 40 LUNs/datastores for SAN testing) and the 20 ESXi hosts. This prevented
overutilization of individual system components while others were idle, optimizing the
performance of the multi-threaded backup procedures. For SAN testing, we used Veritas
NetBackup’s resource limits capability to eliminate the possibility of resource
contention.
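To illustrate the kind of balanced layout our matrix enforced, here is a minimal sketch in Python; all VM, host, and datastore names are hypothetical, and our actual matrix was built by hand.

# Illustrative round-robin placement: spread 1,000 VMs evenly across
# 20 ESXi hosts and 40 SAN datastores (10 LUNs on each of 4 NetApp filers).
from collections import Counter

HOSTS = [f"esxi-{i:02d}" for i in range(1, 21)]
DATASTORES = [f"filer{f}-lun{l:02d}" for f in range(1, 5) for l in range(1, 11)]

placement = {}
for n in range(1000):
    vm = f"vm-{n + 1:04d}"
    placement[vm] = (HOSTS[n % len(HOSTS)], DATASTORES[n % len(DATASTORES)])

# Every host carries 50 VMs and every datastore 25, so no filer or host
# sits idle while another is saturated during multi-threaded backups.
host_counts = Counter(host for host, _ in placement.values())
assert set(host_counts.values()) == {50}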
When we completed our NetBackup testing, we removed the NetBackup
appliance, added Competitor “V” on similarly configured hardware, and retested. For
Competitor “V,” we performed iterative testing to determine the most effective number
of streams to use in our environment, arriving at eight simultaneous streams.
For this first scenario, on SAN transport, we created 200 Windows Server 2012
application VMs running Microsoft SQL Server®, Microsoft Exchange, or Microsoft
SharePoint® (10 tiles of 20 VMs each), and up to 800 idle Windows Server 2012 VMs.
Figure 4 represents the grouping of VMs included in each backup job.
Figure 4: Backup via vStorage APIs based transport VM grouping.
Figure 5 provides the details for the sub-categories of VMs we used in this phase
of testing.
Server VM type               Disk size (GB)   100   200   400   1,000
Active Directory® server                 55     5    10    10      10
Exchange Server                          50    25    50    50      50
SharePoint Web server                    55    15    30    30      30
SharePoint SQL server                   160     5    10    10      10
Web application SQL server               50    50   100   100     100
Idle Web server                          22     -     -   200     800

Figure 5: Production VMs on SAN storage, by tested VM count. Color-coding corresponds with Figure 4.
For the NAS test phase, we created crash-consistent backups of NAS-based NFS
storage at 100-, 200-, 500-, and 1,000-VM counts, as well as a 1,000-VM application-
consistent backup. Figure 6 represents the groupings we used in each test category.
Figure 6: Backup via NAS transport VM grouping.
Figure 7 lists the details for the sub-categories of VMs we used in this phase of
testing.
Server VM type               Disk size (GB)   100   200   500   1,000
Active Directory server                  55     1     1     3       5
Exchange Server                          50     5     5    15      25
SharePoint Web server                    55     -     3     6      15
SharePoint SQL server                   160     -     1     2       5
Web application SQL server               50     4    10    24      50
Standalone Web server                    22    15    30    75     150
OS                                       22    75   150   375     750
WHAT WE FOUND
Scenario 1 – SAN testing vs. Competitor “V”
Backup with virtual application protection via SAN transport
Using a comparable Intel® Xeon® processor-based server platform with identical memory and I/O configurations to the NetBackup Integrated Appliance as the backup target, and using Competitor “V’s” enterprise backup software and best practices,[2] we timed how long it took to complete an application-consistent backup of a group of VMs using SAN transport.
For this scenario, we created policies or groups containing the client VMs we wished to target, and from the GUI, instructed the orchestration server of each product to perform backups of the entire group. The NetBackup solution backed up 1,000 VMs in 67.3 percent less time than the Competitor “V” solution (5 hours and 52 minutes less). In other words, the NetBackup solution completed the backup of 1,000 VMs in less than one-third the time that Competitor “V” did. Figure 8 shows the total time to complete the SAN transport backup for both solutions at every level of VM count we tested.
[2] This configuration fell within the recommendations of Competitor “V.”
Figure 8: The total time each system took to complete the vStorage APIs-based SAN transport backup, in hours:minutes:seconds. Lower numbers are better.
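As a sanity check, the two figures reported for the 1,000-VM run (a 67.3 percent reduction and a 5-hour-52-minute difference) are enough to recover the implied absolute backup windows. A quick sketch of the arithmetic:

# Recover the implied 1,000-VM SAN backup windows from the two figures
# stated above: a 67.3 percent reduction and a 5-hour-52-minute difference.
reduction = 0.673
delta_min = 5 * 60 + 52                      # 352 minutes

competitor_min = delta_min / reduction       # ~523 minutes, about 8h 43m
netbackup_min = competitor_min - delta_min   # ~171 minutes, about 2h 51m

print(f"Competitor 'V': ~{competitor_min / 60:.1f} hours")       # ~8.7
print(f"NetBackup:      ~{netbackup_min / 60:.1f} hours")        # ~2.9
print(f"Ratio: {netbackup_min / competitor_min:.2f}")            # ~0.33, under one-third

The implied Competitor “V” window of roughly 8.7 hours is consistent with the backup-window observation below.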
We closely examined our infrastructure to ensure there were no bottlenecks affecting either solution’s performance, and we found no evidence of bottlenecks in our test bed. When running backups, we found Competitor “V” was operating at its maximum performance: analysis of data captured during backup runs suggested that VM storage I/O was the limiting resource. As Figure 10 shows, when all eight streams were concurrently active, the virtual machine storage showed consistently high disk utilization, higher than the impact from NetBackup-based VADP backups with the same number of streams. See Appendix C for more details on the storage latency we saw during the run.
Competitor “V” also ran longer than a typical 6- to 8-hour backup window; in our testing, it overran an 8-hour window. When a backup runs past its window, production applications may not be able to operate at peak efficiency due to the impact of resource contention, as Figures 9 and 10 show.
Figure 9: Disk utilization for the four individual NetApp filers for Veritas NetBackup.
Figure 10: Disk utilization for the four individual NetApp filers for Competitor “V.”
Recovery at scale
When multiple servers need to be recovered at the same time, as in an off-site disaster recovery operation, the ability to recover servers quickly can mean a faster return to service and a smaller impact on your bottom line. In our labs at Principled Technologies, we compared the time it took to recover a single application VM, and then 8, 16, and 24 application VMs concurrently, with Veritas NetBackup 7.6 and the Competitor “V” solution.
As Figure 11 shows, the two backup solutions delivered comparable performance at every concurrent recovery level except the last: with 24 application VMs, Competitor “V” started to show instability. We found that Competitor “V” consistently failed due to a hard-coded timeout value when concurrent recoveries were performed at the higher end of the scale.
Figure 11: The total time each system took to complete concurrent restores, in hours:minutes:seconds. Lower numbers are better.
Scenario 2 – Storage-array snapshot-based backup testing vs. Competitor “V”
Backup testing via NAS array-based snapshot
Our second scenario tested the ability to integrate with NetApp array-based
snapshots to create recovery points in a high-VM-count environment. First, we ran
Veritas NetBackup Replication Director and the comparable software from Competitor
“V” on 100 application VMs and then on 200, 500, and 1,000 VMs. At the 1,000-VM
level, crash-consistent recovery points with the NetBackup solution took up to 54.1
percent less time than the Competitor “V” solution. Because Competitor “V” failed to
perform application-consistent backups at 1,000 VMs, we had to scale down the
environment for further investigation.
In the crash-consistent snapshot use case, as we increased our VM count from
100 to 1,000, the total integration times with NetBackup Replication Director increased
slightly, from 5 minutes and 4 seconds with 100 VMs to 8 minutes and 29 seconds with
1,000 VMs. The total recovery-point integration times with comparable software from
Competitor “V” increased at a much greater rate, from 2 minutes and 39 seconds for
100 VMs to 18 minutes and 29 seconds for 1,000 VMs. Figure 12 shows the total time to
complete array-based snapshots for both solutions at every level of VM count we
tested.
Figure 12: The total time each system took to complete a storage array-based snapshot backup, in minutes:seconds. Lower numbers are better. Note: For all testing, we did not enable snapshot indexing or make copies of the array-based snapshots.
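To make the difference in growth rate concrete, the sketch below restates the crash-consistent times just quoted and computes how each solution scaled as the environment grew tenfold:

# Crash-consistent recovery-point times from the paragraph above, in seconds.
netbackup = {100: 5 * 60 + 4, 200: 5 * 60 + 17, 500: 6 * 60 + 30, 1000: 8 * 60 + 29}
competitor = {100: 2 * 60 + 39, 200: 4 * 60 + 22, 500: 9 * 60 + 42, 1000: 18 * 60 + 29}

# Growth factor as the environment scales 10x, from 100 VMs to 1,000 VMs:
print(netbackup[1000] / netbackup[100])      # ~1.67x for NetBackup
print(competitor[1000] / competitor[100])    # ~6.97x for Competitor "V"

# At 1,000 VMs, NetBackup's 509 seconds versus Competitor "V"'s 1,109 seconds
# is the 54.1 percent reduction reported above: 1 - 509/1109 = 0.541.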
We measured integration with application-consistent recovery points at 1,000 VMs and crash-consistent recovery points for both systems at four VM counts: 100, 200, 500, and 1,000. There was no noteworthy I/O activity on the storage or the backup targets to measure or report because our testing measured hardware-based snapshots without indexing. Additionally, we encountered greater instability when we tried to get Competitor “V” to perform application-consistent snapshots. After initially attempting to back up 1,000 VMs and encountering a hard-coded timeout error, we scaled the environment back to determine where the failure occurred. We determined that application-consistent snapshots from Competitor “V” started failing when the environment grew to a point somewhere between 200 and 300 VMs. Figure 13 shows the application-consistent and crash-consistent times for both solutions.
                                                        100 VMs   200 VMs   500 VMs   1,000 VMs
Application-consistent recovery points
Veritas NetBackup Integrated Appliance with NetBackup
Replication Director                                          -         -         -    00:38:22
Competitor “V” with snapshot integration technology           -  03:49:06         -           -
Crash-consistent recovery points
Veritas NetBackup Integrated Appliance with NetBackup
Replication Director                                    0:05:04   0:05:17   0:06:30    00:08:29
Competitor “V” with snapshot integration technology     0:02:39   0:04:22   0:09:42     0:18:29

Figure 13: The times to complete application- and crash-consistent recovery points for both solutions, in hours:minutes:seconds. Lower numbers are better. (Competitor “V” completed application-consistent recovery points only at its largest working VM count; see above.)
CONCLUSION
In an enterprise environment, a data center VM footprint can grow quickly; large-scale deployments of thousands of virtual machines are becoming increasingly common. Risk of failure grows proportionally with the number of systems deployed, and critical failures are unavoidable. Your ability to offer data protection from a backup solution is critical to business continuity. Elongated, inefficient protection windows can create resource contention with production environments, making it critical to execute system backup in a finite window of time.
The Veritas NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 67.3 percent less time in SAN testing and used NetApp array-based snapshots to create recovery points in 54.1 percent less time than Competitor “V.” NetBackup was able to perform application-consistent backups at 1,000 VMs, while Competitor “V” started to fail as the environment approached 300 VMs. Also, Competitor “V” was not able to complete a concurrent restore of 24 VMs, while Veritas NetBackup was. The ability to complete backups and recoveries at scale is the most critical factor when determining the right solution for you. These time savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
APPENDIX A – SYSTEM CONFIGURATION INFORMATION
Figure 14 lists the information for the server from the NetBackup solution.
System Dell PowerEdge M420 blade server (vSphere host)
Power supplies (in the Dell PowerEdge M1000e Blade Enclosure)
Total number 6
Vendor and model number Dell A236P-00
Wattage of each (W) 2,360
Cooling fans (in the Dell PowerEdge M1000e Blade Enclosure)
Total number 9
Vendor and model number Dell YK776 Rev. X50
Dimensions (h x w) of each 3.1” x 3.5”
Volts 12
Amps 7
General
Number of processor packages 2
Number of cores per processor 8
Number of hardware threads per core 2
System power management policy Performance
CPU
Vendor Intel
Name Xeon
Model number E5-2420
Stepping 2S
Socket type FCLGA1356
Core frequency (GHz) 1.9
Bus frequency 7.2 GT/s
L1 cache 32 KB + 32 KB (per core)
L2 cache 256 KB (per core)
L3 cache 15 MB
Platform
Vendor and model number Dell PowerEdge M420
Motherboard model number 0MN3VC
BIOS name and version 1.2.4
BIOS settings Default, Performance profile
Memory module(s)
Total RAM in system (GB) 96
Vendor and model number Samsung® M393B2G70BH0-YH9
Type PC3L-10600R
Speed (MHz) 1,333
Speed running in the system (MHz) 1,333
Timing/Latency (tCL-tRCD-tRP-tRASmin) 9-9-9-36
Size (GB) 16
System Dell PowerEdge M420 blade server (vSphere host)
Number of RAM module(s) 6
Chip organization Double-sided
Rank Dual
Operating system
Name VMware vSphere 5.5.0
Build number 1209974
File system VMFS
Kernel VMkernel 5.5.0
Language English
Graphics
Vendor and model number Matrox® G200eR
Graphics memory (MB) 16
RAID controller
Vendor and model number Dell PERC H310 Embedded
Firmware version 20.10.1-0084
Driver version 5.1.112.64 (6/12/2011)
Cache size (MB) 0
Hard drive
Vendor and model number Dell SG9XCS1
Number of disks in system 2
Size (GB) 50
Buffer size (MB) N/A
RPM N/A
Type SSD
Ethernet adapters
Vendor and model number 2 x Broadcom® BCM57810 NetXtreme® II 10 GigE
Type LOM
USB ports
Number 2 External
Type 2.0
Figure 14: Detailed information for the server we tested from the NetBackup solution.
Figure 15 lists the information for the NetApp storage from the NetBackup solution.
System NetApp FAS3240
Platform
Vendor and model number 4 x NetApp FAS3240
OS name and version NetApp Release 8.1.3 (7-Mode)
Hard drives
Number of drives 24
Size (GB) 560
RPM 15K
Type SAS
Network adapters
Vendor and model number 2 x 10Gbps
Type Integrated
Fiber adapters
Vendor and model number 2 x 8Gbps
Type PCI-E
Figure 15: System configuration information for the NetApp storage array.
Figure 16 details the configuration of the NetBackup integrated appliance and the Competitor “V” media server.
System
NetBackup 5230
integrated appliance
Competitor “V” media server
General
Number of processor packages 2 2
Number of cores per processor 6 6
Number of hardware threads per core 2 2
System power management policy Default Default
CPU
Vendor Intel Intel
Name Xeon E5-2620 Xeon E5-2620
Model number E5-2620 E5-2620
Socket type FCLGA2011 FCLGA2011
Core frequency (GHz) 2 GHz 2 GHz
Bus frequency 7.2 GT/s 7.2 GT/s
L1 cache 32 KB + 32 KB per core 32 KB + 32 KB per core
L2 cache 1.5 MB (256 KB per core) 1.5 MB (256 KB per core)
L3 cache 15 MB 15 MB
Platform
Vendor and model number Veritas NetBackup 5230 Integrated Appliance N/A
Memory module(s)
Total RAM in system (GB) 64 64
Vendor and model number Ventura Tech® D3-60MM104SV-999 Ventura Tech D3-60MM104SV-999
Type PC3-10600 PC3-10600
System
NetBackup 5230
integrated appliance
Competitor “V” media server
Speed (MHz) 1,333 1,333
Speed running in the system (MHz) 1,333 1,333
Timing/Latency (tCL-tRCD-tRP-tRASmin) 9-9-9-27 9-9-9-27
Size (GB) 8 8
Number of RAM module(s) 8 8
Chip organization Double-sided Double-sided
Rank Dual rank Dual rank
Operating system
Name NetBackup Appliance 2.6.0.2 Windows Server 2012
Build number 2.6.32.59-0.7-default-fsl N/A
RAID controller
Vendor and model number Intel RMS25CB080 Intel RMS25CB080
Firmware version 23.9.0-0025 23.9.0-0025
Cache size (MB) 1024 1024
Hard drives
Vendor and model number Seagate Constellation ES ST1000NM0001 Seagate Constellation ES ST1000NM0001
Number of drives 10 10
Size (GB) 1,000 1,000
RPM 7.2K 7.2K
Type SAS SAS
Storage shelf
Vendor and model number HGST HUS723030ALS640 HGST HUS723030ALS640
Number of drives 16 16
Size (GB) 3,000 3,000
RPM 7.2K 7.2K
Type SAS SAS
Ethernet adapters
Vendor and model number Intel X520 10Gbps dual-port Ethernet adapter Intel X520 10Gbps dual-port Ethernet adapter
Type PCI-E PCI-E
Figure 16: Detailed information on the media server from each solution.
APPENDIX B – HOW WE TESTED
We set up hardware and software for Competitor “V” according to administrative best practices.
Creating a storage lifecycle policy with NetBackup 7.6
1. Open a connection to the NetBackup machine.
2. If the Veritas NetBackup Activity Monitor is not open, open it.
3. Log into nbu-master-a with administration credentials.
4. Go to StorageStorage Lifecycle Policies.
5. Right-click in the right pane, and select New Storage Lifecycle Policy.
6. Enter a name for your SLP.
7. Click Add.
8. In the New Operation window, change the operation to Snapshot, and select primary-snap as your destination storage.
9. Click OK.
Creating a policy with NetBackup 7.6
1. Open a connection to the NetBackup machine.
2. If the Veritas NetBackup Activity Monitor is not open, open it.
3. Log into nbu-master-a with administration credentials.
4. Go to Policies.
5. Right-click the All Policies area, and select New Policy.
6. Under Add a New Policy, enter your policy name, and click OK.
7. Change Policy type to VMware.
8. Click the Policy storage drop-down menu, and select the policy you created earlier.
9. Check Use Replication Director, and click Options.
10. In the Replication Director options, change Maximum Snapshots to 1,000, and make sure that Application Consistent Snapshot is Enabled.
11. Click the Schedules tab.
12. In the Schedules tab, select New.
13. In the Attributes window, enter a name for your scheduled backup, click Calendar, and click the Calendar Schedule tab.
14. In the Calendar Schedule tab, select a date as far away as you deem reasonable, and click OK.
15. Click the Clients tab.
16. Click Select automatically through query. If a warning window appears, click Yes.
17. Choose the VMs you wish to back up through queries. (For example, if you want to back up all VMs on a drive, choose Datastore in the Field category, and enter the drive you want to pull all VMs from, in quotes, in the Values field.)
Running a test with NetBackup 7.6
1. Open a connection to the NetBackup machine.
2. If the Veritas NetBackup Activity Monitor is not open, open it.
3. Log into nbu-master-a with administration credentials.
4. Go to Policies.
5. Right-click the policy you wish to run, and select Manual Backup.
6. Click OK.
Note: In the case of the NAS backups, we had two separate policies, each targeting a different half of the VMs. Make sure to run both the even and the odd backup jobs.
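For repeated test runs, the manual-backup step can also be driven from the command line with NetBackup’s bpbackup utility instead of the Administration Console. A minimal sketch, assuming the default binary path on the appliance; the even/odd policy names are hypothetical:

import subprocess

# Default location of the NetBackup client commands on the appliance.
BPBACKUP = "/usr/openv/netbackup/bin/bpbackup"

for policy in ("nas-backup-even", "nas-backup-odd"):  # hypothetical policy names
    # -i starts an immediate manual backup of the policy; -p names the policy.
    subprocess.run([BPBACKUP, "-i", "-p", policy], check=True)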
Backing up VM hosts in NetBackup 7.6
1. Select Policies.
2. Under All Policies, right-click and select New Policy.
3. Provide a policy name and click OK.
4. On the Attributes tab, use the pull-down menu for Policy type and select VMware.
5. For Destination, use the pull-down menu and select your target storage. We selected media-msdp.
6. Check the box for Disable client-side deduplication.
7. Check the box for Use Accelerator.
8. On the Schedules tab, create a backup schedule based on the desired parameters.
9. On the Clients tab, choose Select automatically through query.
10. Select the master server as the NetBackup host to perform automatic virtual machine selection.
11. Build a query to select the correct VMs required for the backup job.
12. Click Test Query to ensure the correct VMs are properly selected.
13. Start the backup.
Launching collectors and compiling data for NetBackup 7.6
The following two tasks (Launch the collectors & Compile the data) should be executed from the domain\administrator login on INFRA-SQL.
Launching the collectors
Note: If this is a first run collection, skip to step 2.
1. Double-click the collector job (located in C:\Scripts) associated with the number of VMs you want to collect.
2. In the PuTTY session launched for the media server collection, enter the following sequence (a scripted alternative appears after this procedure):
Support
Maintenance
(P@ssw0rd)
iostat -d 30
3. RDP into the Backup-Test server.
4. On the NetBackup Console, expand nbu-master-a → NetBackup Management → Policies.
5. Right-click the Policy you want to start, and select Manual Backup.
6. To start the job, click OK.
7. Open the Activity Monitor on the NetBackup Administration Console.
8. The Backup job will execute and spawn four different kinds of jobs for each target VM:
Application State Check
VM Snapshot
Backup
Image Cleanup
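The collector step above can be scripted as well. A minimal sketch, assuming PuTTY’s command-line client plink is available and that the media server accepts a direct remote command (the host name is hypothetical, and the appliance’s nested Support/Maintenance shell may require the interactive sequence we actually used):

import subprocess

# plink is PuTTY's command-line client; flags: -ssh protocol, -l login name,
# -pw password. "nbu-media-a" is a hypothetical host name for the media server.
log = open("iostat-media.log", "w")
collector = subprocess.Popen(
    ["plink", "-ssh", "-l", "admin", "-pw", "P@ssw0rd", "nbu-media-a",
     "iostat -d 30"],          # one disk-statistics sample every 30 seconds
    stdout=log,
)
# ... run the backup job, then stop the collector:
# collector.terminate(); log.close()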
Compiling the data
In the following steps, ### represents the number of VMs you’re testing, and # represents the test number.
1. At job completion, double-click the StopCollection.bat file (located in C:\Scripts).
2. Capture screenshots of the Main Backup Job (both tabs) and sub jobs for a SQL server, an Exchange Server, and a SharePoint server.
a. Save each screenshot in: E:\Veritas Test Results\01 Backup Test\### VM Results Repository\Test #
b. If this is a first run, return to step 1 above.
3. On the menu at the top of the NetBackup Console, select File → Export.
4. Select All Rows, and export to <Test#.xls>. Click Save.
5. Manually select all the rows in the activity monitor and delete them.
6. Open WinSCP.
7. Select My Workspace on the left panel and click Login. This will open a connection and automatically log into each of the ESX servers undergoing data collection.
a. In the left panel, browse for the correct job folder: ### VM Results Repository\Test #\esxtop
b. In the right panel, select the esxout file (which may be of considerable size) and drag it into the esxtop directory.
c. Once the file transfer is complete, delete the esxtop output from the server (right panel).
d. Repeat steps a-c for each of the esx servers.
8. Close WinSCP.
9. On the INFRA-SQL server, open E:\Putty Output.
10. In a separate window, open: E:\Veritas Test Results\01 Backup Test\### VM Results Repository\Test #\sysstats.
11. Move all the files from E:\Putty Output to the Test folder you selected in the previous step.
12. Close all Explorer windows.
13. Return to step 1 above.
General concurrent restore procedure
1. Delete restore target VM(s) from disk in vCenter.
2. Launch the data collector script.
3. Execute a restore job using one of the following methods:
a. For NetBackup:
i. Open a PuTTY session to the NBU master server (172.16.100.100).
1. Log in as admin/P@ssw0rd
2. Type support and press Enter.
3. Type maintenance and press Enter.
4. Enter the maintenance password P@ssw0rd
5. Type elevate and press Enter.
ii. Copy the commands to be executed from a text file and paste them into the command line interface on the NetBackup master server.
Example recovery command:
nbrestorevm -vmw -C client_DNS_name -O -vmtm san -vmpo -vmbz -vmkeephv
4. Determine the elapsed time by taking the difference between the time the first job begins and the end time of the last job to complete.
5. Export the NBU job log to disk and copy it to the results folder.
6. Stop the collection script.
7. Transfer the relevant data collector output into the test folder.
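Step 4’s elapsed-time calculation can be automated against the exported job log. A minimal sketch, assuming the export has been re-saved as CSV; the file name, column names, and timestamp format are all assumptions about the export layout:

import csv
from datetime import datetime

FMT = "%m/%d/%Y %I:%M:%S %p"   # assumed timestamp format in the export

with open("Test1.csv", newline="") as f:   # job log re-saved as CSV
    rows = list(csv.DictReader(f))

starts = [datetime.strptime(r["Start Time"], FMT) for r in rows]
ends = [datetime.strptime(r["End Time"], FMT) for r in rows]

# Wall-clock window: from the first job to begin to the last job to finish.
print("Elapsed:", max(ends) - min(starts))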
APPENDIX C – STORAGE LATENCY
Figure 17 shows the storage latency for the Competitor “V” solution.
Figure 17: Storage latency for the Competitor “V” solution using the media agent.
ABOUT PRINCIPLED TECHNOLOGIES
Principled Technologies, Inc.
1007 Slater Road, Suite 300
Durham, NC, 27703
www.principledtechnologies.com
We provide industry-leading technology assessment and fact-based
marketing services. We bring to every assignment extensive experience
with and expertise in all aspects of technology testing and analysis, from
researching new technologies, to developing new methodologies, to
testing with existing and new tools.
When the assessment is complete, we know how to present the results to
a broad range of target audiences. We provide our clients with the
materials they need, from market-focused data to use in their own
collateral to custom sales aids, such as test reports, performance
assessments, and white papers. Every document reflects the results of
our trusted independent analysis.
We provide customized services that focus on our clients’ individual
requirements. Whether the technology involves hardware, software, Web
sites, or services, we offer the experience, expertise, and tools to help our
clients assess how it will fare against its competition, its performance, its
market readiness, and its quality and reliability.
Our founders, Mark L. Van Name and Bill Catchings, have worked
together in technology assessment for over 20 years. As journalists, they
published over a thousand articles on a wide array of technology subjects.
They created and led the Ziff-Davis Benchmark Operation, which
developed such industry-standard benchmarks as Ziff Davis Media’s
Winstone and WebBench. They founded and led eTesting Labs, and after
the acquisition of that company by Lionbridge Technologies were the
head and CTO of VeriTest.
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
Disclaimer of Warranties; Limitation of Liability:
PRINCIPLED TECHNOLOGIES, INC. HAS MADE REASONABLE EFFORTS TO ENSURE THE ACCURACY AND VALIDITY OF ITS TESTING, HOWEVER,
PRINCIPLED TECHNOLOGIES, INC. SPECIFICALLY DISCLAIMS ANY WARRANTY, EXPRESSED OR IMPLIED, RELATING TO THE TEST RESULTS AND
ANALYSIS, THEIR ACCURACY, COMPLETENESS OR QUALITY, INCLUDING ANY IMPLIED WARRANTY OF FITNESS FOR ANY PARTICULAR PURPOSE.
ALL PERSONS OR ENTITIES RELYING ON THE RESULTS OF ANY TESTING DO SO AT THEIR OWN RISK, AND AGREE THAT PRINCIPLED
TECHNOLOGIES, INC., ITS EMPLOYEES AND ITS SUBCONTRACTORS SHALL HAVE NO LIABILITY WHATSOEVER FROM ANY CLAIM OF LOSS OR
DAMAGE ON ACCOUNT OF ANY ALLEGED ERROR OR DEFECT IN ANY TESTING PROCEDURE OR RESULT.
IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC. BE LIABLE FOR INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN
CONNECTION WITH ITS TESTING, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES,
INC.’S LIABILITY, INCLUDING FOR DIRECT DAMAGES, EXCEED THE AMOUNTS PAID IN CONNECTION WITH PRINCIPLED TECHNOLOGIES, INC.’S
TESTING. CUSTOMER’S SOLE AND EXCLUSIVE REMEDIES ARE AS SET FORTH HEREIN.