Aspirus implemented EMC's Avamar backup solution to address the long backup times and inconsistent backups it experienced with its previous ExaGrid and NetBackup system. Testing showed Avamar provided significantly better deduplication rates of over 110:1 and reduced backup times from over 10 hours to under 6 hours. The improved performance and reliability, along with reduced storage needs, lowered costs and allowed IT staff to focus on more strategic work.
This document summarizes Avamar backup software and services. It discusses how Avamar uses global, source-based data deduplication to reduce the amount of data that needs to be transferred during backups by up to 500 times. This allows for fast, daily full backups over existing infrastructure. Avamar can also reduce required backup storage space by up to 50 times through deduplication. The document highlights how Avamar is well-suited for protecting virtualized environments and remote offices. It provides an overview of Avamar's deployment options and management capabilities.
The document discusses various storage options on Amazon Web Services (AWS) including Simple Storage Service (S3), Elastic Block Store (EBS), and Glacier. It then provides details on how to configure NetBackup to leverage these AWS storage services for backup and recovery. Specific scenarios are presented on backing up on-premises and cloud-based workloads to S3, EBS, and Glacier using different NetBackup and AWS configurations. Reporting and monitoring capabilities are also demonstrated.
Get higher transaction throughput and better price/performance with an Amazon... (Principled Technologies)
In addition, the EBS gp3-backed EC2 r5b.16xlarge instance delivered lower average transaction latency than two Microsoft Azure E64ds_v4 VM configurations, offering more consistent transactional database performance
Boosting performance with the Dell Acceleration Appliance for Databases (Principled Technologies)
If your business is expanding and you need to support more users accessing your databases, it’s time to act. Upgrading your database infrastructure with a flash storage-based solution is a smart way to improve performance without adding more servers or taking up much rack space, which comes at a premium. The Dell Acceleration Appliance for Databases addresses this by providing strong performance when combined with your existing infrastructure or on its own.
We found that adding a highly available DAAD solution to our database application provided up to 3.01 times the Oracle Database 12c performance, which can make a big difference to your bottom line. Additionally, the DAAD delivered 3.14 times the database performance when replacing traditional storage completely, which could enable your infrastructure to keep up with your growing business’ needs.
Consolidate SAS 9.4 workloads with Intel Xeon processor E7 v3 and Intel SSD t... (Principled Technologies)
A key to modernizing your data center is to consolidate your legacy workloads through virtualization, which can help reduce complexity for your business. Fewer servers require fewer physical resources, such as power, cabling, and switches, and reduce the burden on IT for ongoing management tasks such as updates. In addition, integrating newer hardware technology into your data center can provide new features that strengthen your infrastructure, such as RAS features on the processor and disk performance improvements. Finally, using SAS 9.4 ensures that you have the latest features and toolsets that SAS can offer.
Compared to a legacy server, we found that a modern four-socket server powered by Intel Xeon processors E7-8890 v3 with Intel SSD DC P3700 Series provided 12 times the amount of SAS work, nearly 14 times the relative performance, and a shorter average time to complete the SAS workload. Running 12 virtual SAS instances also left capacity on the server for additional work. Consolidating your SAS workloads from legacy servers onto servers powered by Intel Xeon processors E7 v3 and SAS 9.4 can provide your business with the latest hardware and software features, reduce complexity in your data center, and potentially reduce costs for your business.
A single-socket Dell EMC PowerEdge R7515 solution delivered better value on a... (Principled Technologies)
If your company is running important business applications in VMware vSAN clusters of servers that are several years old, chances are good that you’re considering upgrading to newer hardware. Our testing demonstrated that our clusters of single-socket Dell EMC PowerEdge R7515 servers and clusters of dual-socket HPE ProLiant DL380 Gen10 servers could both improve upon the database performance of a legacy cluster with five-year-old servers by more than 50 percent, with the Dell EMC cluster achieving 93.4 percent of the performance of the HPE cluster.
Avamar is backup software from EMC that uses global, source-based data deduplication to reduce the size of backup data. It delivers fast daily full backups using existing infrastructure by reducing network bandwidth usage for backup by up to 500 times and reducing total backup storage needs by up to 50 times compared to traditional backup methods. Avamar supports various operating systems, applications, and virtual environments. It provides flexible deployment options including an integrated hardware/software appliance and a virtual edition for VMware.
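The source-side deduplication described above is easy to picture: the client fingerprints its data locally, asks the backup target which fingerprints it already holds, and transmits only chunks the target has never seen. The Python sketch below illustrates that idea under loose assumptions; the class and function names are invented for illustration, the chunking is naive fixed-size, and real Avamar uses variable-length segments and a far more elaborate protocol.

```python
import hashlib

class DedupTarget:
    """Toy backup target that stores chunks keyed by content hash."""
    def __init__(self):
        self.store = {}

    def missing(self, hashes):
        # Report which chunk hashes the target has never seen.
        return {h for h in hashes if h not in self.store}

    def put(self, chunks_by_hash):
        self.store.update(chunks_by_hash)

def backup(data, target, chunk_size=4):
    """Source-side dedup: hash locally, ship only unknown chunks.
    Returns (chunks sent over the 'network', total chunks in the data)."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    hashes = [hashlib.sha256(c).hexdigest() for c in chunks]
    needed = target.missing(hashes)
    # Deduplicate within this backup too: each unique chunk is sent once.
    to_send = {h: c for h, c in zip(hashes, chunks) if h in needed}
    target.put(to_send)
    return len(to_send), len(chunks)

target = DedupTarget()
sent, total = backup(b"AAAABBBBAAAACCCC", target)   # first full: 3 of 4 chunks move
sent_next, _ = backup(b"AAAABBBBAAAACCCC", target)  # unchanged next day: nothing moves
```

In a real deployment the hash index is global across all clients, which is how identical OS and application files on hundreds of machines come to be stored, and transmitted, only once.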
Prepare images for machine learning faster with servers powered by AMD EPYC 7... (Principled Technologies)
A server cluster with 3rd Gen AMD EPYC processors achieved higher throughput and took less time to prepare images for classification than a server cluster with 3rd Gen Intel Xeon Platinum 8380 processors
Transforming Backup and Recovery in VMware environments with EMC Avamar and D... (CTI Group)
This document discusses the transition from tape-based backup systems to backup appliances and deduplication backup software. It notes that backup appliances are disrupting the market, with tape being marginalized and storage and software functionality converging. Purpose-built backup appliances and deduplication backup software are experiencing much faster growth than tape automation. Deduplication technology is accelerating this transition by making backup storage more efficient and reducing bandwidth needs.
This technical paper provides the best practices for implementing the IBM Storwize V7000 Unified system NDMP backup solution using EMC NetWorker. To know more about the IBM Storwize V7000, visit http://ibm.co/TaLb6Q.
AWS EC2 M6i instances with 3rd Gen Intel Xeon Scalable processors accelerated... (Principled Technologies)
At multiple instance sizes, M6i instances classified more frames per second than M5n instances with previous-gen processors or M6a instances with 3rd Gen AMD EPYC processors
Presentation deduplication backup software and system (xKinAnx)
The document provides information on EMC's Avamar deduplication backup software and system. It discusses how Avamar reduces backup time and storage requirements through client-side deduplication. Avamar provides daily full backups, one-step recovery, and supports both physical and virtual environments. It integrates with EMC Data Domain systems and is optimized for backing up virtual machines, remote offices, desktops/laptops, and enterprise applications.
Symantec NetBackup 7.6 benchmark comparison: Data protection in a large-scale... (Principled Technologies)
The footprint of a VM can grow quickly in an enterprise environment, and large-scale VM deployments in the thousands are common. As the number of deployed systems grows, so does the risk of failure. Critical failures can become unavoidable, and data protection from a backup solution promotes business continuity. Elongated protection windows requiring multiple jobs of different types can create resource contention with production environments and consume valuable IT admin time, so keeping system backups within a finite window matters a great deal.
In our hands-on SAN backup testing, the Symantec NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 66.8 percent less time than Competitor “E” did. In addition, the Symantec NetBackup Integrated Appliance with NetBackup 7.6 created backup images that offered granular recovery without additional steps. These time and effort savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
EMC Data Domain advanced features and functions (solarisyougood)
This document provides an overview of advanced features and functions of Data Domain systems. It covers topics such as virtual tape libraries (VTL), snapshots, replication, DD Boost integration, capacity and throughput planning, and system monitoring tools. The document consists of multiple lessons that describe these topics in detail and includes configuration examples.
This document provides details about Avamar backup configurations and procedures for production and campus environments. It includes information on cluster details, utilization and capacities, backup policies, groups, schedules, and retention policies. It also describes how to perform on-demand backups and restores in Avamar, and covers the Avamar Enterprise Manager and replication.
3 key wins: Dell EMC PowerEdge MX with OpenManage Enterprise over Cisco UCS a... (Principled Technologies)
In head-to-head tests, the modular Dell EMC™ PowerEdge™ MX7000 with OpenManage™ Enterprise reduced admin time and effort on repetitive tasks when compared to Cisco UCS® 5108 with Cisco UCS Manager and HPE Synergy with OneView.
Les solutions EMC de sauvegarde des données avec déduplication dans les envir... (ljaquet)
The document discusses EMC's backup and recovery solutions, with a focus on deduplication-based products. It provides an overview of EMC's portfolio including Avamar, Data Domain, and NetWorker. It then discusses key concepts like deduplication fundamentals and how the technology has evolved backup solutions from tape-based to disk-based. Specific product features and benefits are highlighted, such as Avamar's guest-level VMware backup and Data Domain's inline deduplication approach.
Component upgrades from Intel and Dell can increase VM density and boost perf... (Principled Technologies)
The document summarizes an experiment conducted by Principled Technologies that tested the performance improvements from upgrading server components. They found that upgrading from a Dell PowerEdge R720 to a Dell PowerEdge R730 server, along with upgrading the processor, operating system, storage drives and network cards, increased the number of supported VMs by 67% and database performance by 60%. Upgrading all components maximized performance benefits.
What is NetBackup appliance? Is it just NetBackup pre-installed on hardware?
The answer is both yes and no.
Yes, the NetBackup appliance is simply backup in a box if you are looking for a solution for your data protection and disaster recovery readiness. That is the business problem you are solving with this turnkey appliance, which installs in minutes and reduces your operational costs.
No, the NetBackup appliance is more than backup in a box if you are comparing it with rolling your own hardware for NetBackup, or with third-party deduplication appliances. Here is why I say this…
The NetBackup appliance comes with redundant RAID 6 storage for your backups.
Symantec worked with Intel to design the hardware so that NetBackup runs optimally, with predictable and consistent performance. This eliminates the guesswork in designing the solution.
Many vendors talk about the various processes running on their devices to perform integrity checks; some solutions even need blackout windows for those operations. NetBackup appliances include Storage Foundation at no additional cost: the storage is managed by Veritas Volume Manager (VxVM) and presented to the operating system through Veritas File System. Why is this important? Storage Foundation is the industry-leading storage management infrastructure that powers the most mission-critical applications in the enterprise space, built for high performance and resiliency. The NetBackup appliance provides 24/7 protection with data integrity on storage backed by this industry-leading technology.
The Linux-based operating system, optimized for NetBackup and hardened by Symantec, eliminates the cost of deploying and maintaining a general-purpose operating system and associated IT applications.
NetBackup appliances include a built-in WAN optimization driver. Replicate to appliances at remote sites or to the cloud up to 10 times faster across high-latency links.
Your backups need to be protected. Symantec Critical System Protection provides non-signature-based host intrusion prevention. It protects against zero-day attacks using granular OS hardening policies along with application, user, and device controls, all pre-defined in the NetBackup appliance so that you don’t need to configure them yourself.
Best of all, reduce your operational expenditure and eliminate complexity! One patch updates everything in this stack! It is the most holistic data protection solution with the fewest knobs to operate.
Spend less time, effort, and money by choosing a Dell EMC server with pre-ins... (Principled Technologies)
Deploying a Dell EMC PowerEdge R740 with pre-installed Microsoft Windows Server 2016 Standard took less time and fewer steps than deploying the same server without it
By automating high-touch, routine tasks, Dell EMC OME integrations and plugins empower IT admins to deliver effective and efficient systems management from a single console.
VMworld 2014: Data Protection for vSphere 101 (VMworld)
VDP and vSphere Replication provide different data protection techniques for virtual machines. VDP uses agent-less, disk-based backups for virtual machines with capabilities like application-awareness, granular recovery, and self-service file recovery. It has an RPO of greater than 24 hours and RTO of hours. vSphere Replication provides near-synchronous replication between sites with RPO under 24 hours and RTO of minutes for disaster recovery and testing. The document discusses use cases, features, and best practices for using VDP and vSphere Replication together for backup and replication in vSphere environments.
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1... (Principled Technologies)
Upgrading the hardware running your SQL Server to a space-efficient, modular, modern Dell EMC environment can help your company achieve a great deal of database work in a small amount of space. With Dell Express Flash technology, adding a caching solution such as Samsung AutoCache can make the environment even more efficient.
In the PT labs, we ran a mixed database workload on six Dell EMC PowerEdge FC630 servers, powered by Intel Xeon E5-2667 processors, in three PowerEdge FX2 enclosures. The solution included the QLogic QLE2692 16Gb FC adapter with StorFusion Technology, Dell EMC Storage SC9000 all-flash storage, and Dell EMC PowerEdge Express Flash NVMe Performance PCIe SSDs.
With no caching solution, the 36 SQL Server 2016 VMs on the six servers achieved a total of 431,839 orders per minute while an Oracle workload ran on 12 VMs. When we added a caching solution to accelerate the SQL database volumes, the performance across the 36 SQL Server 2016 VMs doubled to 871,580. These numbers show the power of server-side caching to alleviate pressure on the storage array, allowing you to get even more out of the Dell EMC modern environment.
A company’s success depends on critical application performance and availability. Upgrades and patches can improve application efficiency and user experience, but making the necessary changes requires resource intensive environments to test updates before deploying them. What’s more, these applications need to continue accessing data even in the event of an on-premises crisis.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. Storage latency for the VMAX 250F peaked at a millisecond in our testing while IOPS stayed within an acceptable range. The solution also kept data highly available with no downtime or performance drop when we initiated a lost host connection for the primary storage. Consider the Dell EMC VMAX 250F array for your datacenter to support the critical database applications that drive your company.
The document provides an overview of Oracle Database Backup Service (ODBS), which enables customers to securely store database backups in Oracle's cloud storage. It describes how the Oracle Database Cloud Backup Module (ODCBM) installs on the database server and uses familiar RMAN commands to transparently backup databases to ODBS and restore from ODBS. The document also outlines the steps to set up ODBS, including purchasing storage, installing ODCBM, configuring RMAN and encryption settings, performing backups, and restoring from backups.
VMworld 2013: vSphere Data Protection 5.5 Advanced VMware Backup and Recovery... (VMworld)
VMworld Europe 2013
Mauricio Barra, VMware
Jeff Hunter, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The document compares the performance of migrating virtual machines between chassis using VMware vMotion on three modular server solutions: Dell EMC PowerEdge MX, HPE Synergy, and Cisco UCS. Testing showed the Dell EMC solution moved VMs between chassis up to 42.3% faster than the other solutions, with throughput up to 1.7x higher and latency up to 73.0% lower. This allows maintenance to be completed more quickly and with less impact on workloads.
The document summarizes the new Data Domain DD160 appliance. It is an affordable entry-level deduplication storage system for small enterprises and remote offices, starting at $10,000 for 1.6TB of usable capacity. Key features include throughput up to 1.1TB/hr, scalable capacity up to 3.98TB, and support for all Data Domain software. The document positions the DD160 for backups of less than 4TB and highlights its cost-effectiveness and integration capabilities for small organizations.
1. Backup and recovery architectures are evolving from conventional tape-centric models to more transformational disk-centric models using deduplication backup software and storage.
2. EMC offers various backup and recovery solutions including the Avamar deduplication backup software, Data Domain deduplication storage systems, and NetWorker backup software that can be used on-premise or off-premise.
3. These solutions provide benefits like reducing backup times and data transferred over networks through global deduplication and integration with virtual environments.
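To see why deduplication drives the tape-to-disk transition, a back-of-envelope model helps: a traditional daily full ships the entire data set every day, while a deduplicating client ships it once and then only the new, unseen bytes. The figures in the Python sketch below (a 10 TB data set with a 0.2 percent daily change rate) are illustrative assumptions, not measurements; headline vendor figures like "up to 500 times" correspond to lower change rates and deduplication shared across many clients.

```python
def data_moved_tb(full_tb, days, daily_change_rate, dedup):
    """Rough model of backup traffic over a period of daily full backups."""
    if not dedup:
        return full_tb * days  # ship a complete copy every day
    # Ship everything once, then only changed, previously unseen data each day.
    return full_tb + full_tb * daily_change_rate * (days - 1)

traditional = data_moved_tb(10.0, 30, 0.002, dedup=False)  # 300.0 TB moved
deduped = data_moved_tb(10.0, 30, 0.002, dedup=True)       # about 10.58 TB moved
reduction = traditional / deduped                          # roughly 28x here
```

Even with these conservative assumptions, the network traffic drops by more than an order of magnitude, which is what makes daily fulls over existing infrastructure practical.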
Prepare images for machine learning faster with servers powered by AMD EPYC 7...Principled Technologies
A server cluster with 3rd Gen AMD EPYC processors achieved higher throughput and took less time to prepare images for classification than a server cluster with 3rd Gen Intel Xeon Platinum 8380 processors
Transforming Backup and Recovery in VMware environments with EMC Avamar and D...CTI Group
This document discusses the transition from tape-based backup systems to backup appliances and deduplication backup software. It notes that backup appliances are disrupting the market, with tape being marginalized and storage and software functionality converging. Purpose-built backup appliances and deduplication backup software are experiencing much faster growth than tape automation. Deduplication technology is accelerating this transition by making backup storage more efficient and reducing bandwidth needs.
This technical paper provides the best practices for implementing the IBM Storwize V7000 Unified system NDMP backup solution using EMC NetWorker. To know more about the IBM Storwize V7000, visit http://ibm.co/TaLb6Q.
AWS EC2 M6i instances with 3rd Gen Intel Xeon Scalable processors accelerated...Principled Technologies
At multiple instance sizes, M6i instances classified more frames per second than M5n instances with previous-gen processors or M6a instances with 3rd Gen AMD EPYC processors
Presentation deduplication backup software and systemxKinAnx
The document provides information on EMC's Avamar deduplication backup software and system. It discusses how Avamar reduces backup time and storage requirements through client-side deduplication. Avamar provides daily full backups, one-step recovery, and supports both physical and virtual environments. It integrates with EMC Data Domain systems and is optimized for backing up virtual machines, remote offices, desktops/laptops, and enterprise applications.
Symantec NetBackup 7.6 benchmark comparison: Data protection in a large-scale...Principled Technologies
The footprint of a VM can grow quickly in an enterprise environment and large-scale VM deployments in the thousands are common. As this number of deployed systems grows, so does the risk of failure. Critical failures can become unavoidable and offering data protection from a backup solution promotes business continuity. Elongated protection windows requiring multiple jobs of different types can create resource contention with production environments and may require valuable IT admin time, so a finite window for system backups can have plenty of importance.
In our hands-on SAN backup testing, the Symantec NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 66.8 percent less time than Competitor “E” did. In addition, the Symantec NetBackup Integrated Appliance with NetBackup 7.6 created backup images that offered granular recovery without additional steps. These time and effort savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
EMC Data domain advanced features and functionssolarisyougood
This document provides an overview of advanced features and functions of Data Domain systems. It covers topics such as virtual tape libraries (VTL), snapshots, replication, DD Boost integration, capacity and throughput planning, and system monitoring tools. The document consists of multiple lessons that describe these topics in detail and includes configuration examples.
This document provides details about Avamar backup configurations and procedures for production and campus environments. It includes information on cluster details, utilization and capacities, backup policies, groups, schedules, and retention policies. It also describes how to perform on-demand backups and restores in Avamar, and covers the Avamar Enterprise Manager and replication.
3 key wins: Dell EMC PowerEdge MX with OpenManage Enterprise over Cisco UCS a...Principled Technologies
In head-to-head tests, the modular Dell EMC™ PowerEdge™ MX7000 with
OpenManage™ Enterprise reduced admin time and effort on repetitive tasks when compared to Cisco UCS® 5108 with Cisco UCS Manager and HPE Synergy with OneView.
Les solutions EMC de sauvegarde des données avec déduplication dans les envir...ljaquet
The document discusses EMC's backup and recovery solutions, with a focus on deduplication-based products. It provides an overview of EMC's portfolio including Avamar, Data Domain, and NetWorker. It then discusses key concepts like deduplication fundamentals and how the technology has evolved backup solutions from tape-based to disk-based. Specific product features and benefits are highlighted, such as Avamar's guest-level VMware backup and Data Domain's inline deduplication approach.
Component upgrades from Intel and Dell can increase VM density and boost perf...Principled Technologies
The document summarizes an experiment conducted by Principled Technologies that tested the performance improvements from upgrading server components. They found that upgrading from a Dell PowerEdge R720 to a Dell PowerEdge R730 server, along with upgrading the processor, operating system, storage drives and network cards, increased the number of supported VMs by 67% and database performance by 60%. Upgrading all components maximized performance benefits.
What is NetBackup appliance? Is it just NetBackup pre-installed on hardware?
The answer is both yes and no.
Yes, NetBackup appliance is simply backup in a box if you are looking for a solution for your data protection and disaster recovery readiness. That is the business problem you are solving with this turnkey appliance that installs in minutes and reduce your operational costs.
No, NetBackup appliance is more than a backup in box if you are comparing it with rolling your own hardware for NetBackup or if you are comparing it with third party deduplication appliances. Here is why I say this…
NetBackup appliance comes with redundant storage in RAID6 for storing your backups
Symantec worked with Intel to design the hardware for running NetBackup optimally for predictable and consistent performance. Eliminates the guesswork while designing the solution.
Many vendors will talk about various processes running on their devices to perform integrity checks, some solutions even need blackout windows to do those operations. NetBackup appliances include Storage Foundation at no additional cost. The storage is managed by Veritas Volume Manager (VxVM) and presented to operating system through Veritas File System. Why is this important? SF is industry-leading storage management infrastructure that powers the most mission-critical applications in the enterprise space. It is built for high-performance and resiliency. NetBackup appliance provides 24/7 protection with data integrity on storage provided by the industry leading technology.
The Linux based operating system, optimized for NetBackup, harden by Symantec eliminates the cost of deploying and maintaining general purpose operating system and associated IT applications.
NetBackup appliances include built-on WAN Optimization driver. Replicate to appliances on remote sites or to the cloud up to 10 times faster on across high latency links.
Your backups need to be protected. Symantec Critical System Protection provides non-signature based Host Intrusion Prevention protection. It protects against zero-day attacks using granular OS hardening policies along with application, user and device controls, all pre-defined for you in NetBackup appliance so that you don’t need to worry about configuring it.
Best of all, reduce your operational expenditure and eliminate complexity! One patch updates everything in this stack! The most holistic data protection solution with the least number of knobs to operate.
Spend less time, effort, and money by choosing a Dell EMC server with pre-ins...Principled Technologies
Deploying a Dell EMC PowerEdge R740 with pre-installed Microsoft Windows Server 2016 Standard took less time and fewer steps than deploying the same server without it
By automating high-touch, routine tasks, Dell EMC OME integrations and plugins empower IT admins to deliver effective and efficient systems management from a single console.
VMworld 2014: Data Protection for vSphere 101 (VMworld)
VDP and vSphere Replication provide different data protection techniques for virtual machines. VDP uses agent-less, disk-based backups for virtual machines with capabilities like application-awareness, granular recovery, and self-service file recovery. It has an RPO of greater than 24 hours and RTO of hours. vSphere Replication provides near-synchronous replication between sites with RPO under 24 hours and RTO of minutes for disaster recovery and testing. The document discusses use cases, features, and best practices for using VDP and vSphere Replication together for backup and replication in vSphere environments.
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1... (Principled Technologies)
Upgrading the hardware running your SQL Server to a space-efficient, modular Dell EMC environment can help your company achieve a great deal of database work in a small amount of space. With Dell Express Flash technology, adding a caching solution such as Samsung AutoCache can make the environment even more efficient.
In the PT labs, we ran a mixed database workload on six Dell EMC PowerEdge FC630 servers, powered by Intel Xeon E5-2667 processors, in three PowerEdge FX2 enclosures. The solution included the QLogic QLE2692 16Gb FC adapter with StorFusion Technology, Dell EMC Storage SC9000 all-flash storage, and Dell EMC PowerEdge Express Flash NVMe Performance PCIe SSDs.
With no caching solution, the 36 SQL Server 2016 VMs on the six servers achieved a total of 431,839 orders per minute while an Oracle workload ran on 12 VMs. When we added a caching solution to accelerate the SQL database volumes, performance across the 36 SQL Server 2016 VMs roughly doubled to 871,580. These numbers show the power of server-side caching to alleviate pressure on the storage array, allowing you to get even more out of the modern Dell EMC environment.
A company’s success depends on critical application performance and availability. Upgrades and patches can improve application efficiency and user experience, but making the necessary changes requires resource-intensive environments in which to test updates before deploying them. What’s more, these applications need continued access to data even in the event of an on-premises crisis.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. Storage latency for the VMAX 250F peaked at a millisecond in our testing while IOPS stayed within an acceptable range. The solution also kept data highly available with no downtime or performance drop when we initiated a lost host connection for the primary storage. Consider the Dell EMC VMAX 250F array for your datacenter to support the critical database applications that drive your company.
The document provides an overview of Oracle Database Backup Service (ODBS), which enables customers to securely store database backups in Oracle's cloud storage. It describes how the Oracle Database Cloud Backup Module (ODCBM) installs on the database server and uses familiar RMAN commands to transparently backup databases to ODBS and restore from ODBS. The document also outlines the steps to set up ODBS, including purchasing storage, installing ODCBM, configuring RMAN and encryption settings, performing backups, and restoring from backups.
VMworld 2013: vSphere Data Protection 5.5 Advanced VMware Backup and Recovery... (VMworld)
VMworld Europe 2013
Mauricio Barra, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Jeff Hunter, VMware
The document compares the performance of migrating virtual machines between chassis using VMware vMotion on three modular server solutions: Dell EMC PowerEdge MX, HPE Synergy, and Cisco UCS. Testing showed the Dell EMC solution moved VMs between chassis up to 42.3% faster than the other solutions, with throughput up to 1.7x higher and latency up to 73.0% lower. This allows maintenance to be completed more quickly and with less impact on workloads.
The document summarizes the new Data Domain DD160 appliance. It is an affordable entry-level deduplication storage system for small enterprises and remote offices, starting at $10,000 for 1.6TB of usable capacity. Key features include throughput up to 1.1TB/hr, scalable capacity up to 3.98TB, and support for all Data Domain software. The document positions the DD160 for backups of less than 4TB and highlights its cost-effectiveness and integration capabilities for small organizations.
1. Backup and recovery architectures are evolving from conventional tape-centric models to more transformational disk-centric models using deduplication backup software and storage.
2. EMC offers various backup and recovery solutions including the Avamar deduplication backup software, Data Domain deduplication storage systems, and NetWorker backup software that can be used on-premise or off-premise.
3. These solutions provide benefits like reducing backup times and data transferred over networks through global deduplication and integration with virtual environments.
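To make the global-deduplication idea above concrete, here is a minimal Python sketch of source-side chunk deduplication: each backup is split into chunks, chunks are identified by hash, and only chunks the repository has never seen are transmitted and stored. This is an illustrative fixed-size-chunk model, not Avamar's actual implementation (which uses variable-length chunking), and all names here are hypothetical.

```python
import hashlib

CHUNK_SIZE = 4096  # real products use variable-length chunking

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks; store only unseen chunks, return the chunk-hash recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # only new chunks consume space and bandwidth
            store[digest] = chunk
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original stream from its recipe of chunk hashes."""
    return b"".join(store[d] for d in recipe)

store = {}
backup1 = b"A" * 8192 + b"B" * 4096          # first "full" backup: 2 unique chunks
dedup_store(backup1, store)
backup2 = backup1 + b"C" * 4096              # next daily "full": only 1 new chunk travels
recipe2 = dedup_store(backup2, store)
assert restore(recipe2, store) == backup2
print(len(store))  # 3 unique chunks stored across two full backups
```

Every backup is logically a full backup (the recipe lists every chunk), but the repository holds each unique chunk exactly once, which is why daily fulls can be cheaper than traditional incrementals.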
This document provides a comparison of EMC Avamar and Unitrends backup solutions. It outlines key differences in their product and licensing models, support offerings, monitoring and management capabilities, and recovery and backup features. Overall, Unitrends offers a more integrated appliance-based solution while Avamar provides more options for non-integrated software.
This document discusses laptop backup solutions and introduces Druva inSync. It notes that 80% of corporate data resides on laptops and desktops, which are not always connected and have changing IP addresses. Existing solutions like Time Machine have limitations, as backups may be stored locally and lost along with the laptop. An ideal laptop backup solution would not require removable devices or human intervention, would use incremental backups without full backups, and would have cross-site deduplication. Druva inSync is presented as a simple, fully automated solution that uses deduplication and WAN optimization to provide efficient backups with fast restores while minimizing storage usage and bandwidth. It can scale to back up 2,000 laptops from a single server appliance.
Presentation: backup and recovery best practices for very large databases (v... (xKinAnx)
This document provides best practices for backup and recovery of very large databases (VLDBs). It discusses VLDB trends requiring databases to scale to terabytes and beyond. The key is protecting growing data while maintaining cost efficiency. The presentation covers assessing recovery requirements, architecting backup environments, leveraging Oracle tools, planning data layout, developing backup procedures, and recovery strategies. It also provides a Starbucks case study example.
This document provides an introduction to recovery and backups for beginners. It discusses having both a backup strategy and a recovery strategy. It also covers key concepts like RTO, RPO, recovery models, and how to match the appropriate recovery model to your backup strategy based on your business needs and data. The document emphasizes that having backups without a recovery plan is inadequate and concludes with an invitation for questions.
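As a small illustration of the RPO concept mentioned above: the worst-case data loss is bounded by the longest gap between successful backups, so comparing recent backup timestamps against a target RPO is a simple sanity check. The function and variable names below are illustrative, not taken from any product.

```python
from datetime import datetime, timedelta

def worst_case_data_loss(backup_times: list) -> timedelta:
    """Worst-case achieved RPO: the longest gap between successive successful backups."""
    times = sorted(backup_times)
    return max(b - a for a, b in zip(times, times[1:]))

# Backups at hours 0, 24, 49, and 72: one gap stretched to 25 hours
backups = [datetime(2024, 1, 1) + timedelta(hours=h) for h in (0, 24, 49, 72)]
target_rpo = timedelta(hours=24)
achieved = worst_case_data_loss(backups)
print(achieved > target_rpo)  # True: the 25-hour gap violates a 24-hour RPO
```

Checks like this make the point of the slide explicit: a backup schedule only meets a business RPO if every gap between successful backups stays under the target, not just the average.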
This document discusses data backup, recovery, and disaster planning. It defines backup as creating duplicate copies of important data and explains different backup types (full, incremental, differential). Backup media include tapes, disks, and optical storage. Creating a backup schedule, testing restores, and storing backups securely and offsite are recommended. Disaster recovery involves restoring systems after damage and includes strategies like automated recovery, backing up open files, and maintaining hot, warm or cold backup sites.
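The difference between the three backup types named above comes down to the reference point used for "changed": a full copies everything, a differential copies everything changed since the last full, and an incremental copies everything changed since the last backup of any type. A minimal sketch (illustrative names, using file modification times as the change signal):

```python
def select_files(files: dict, last_full: float, last_backup: float, mode: str) -> set:
    """files maps path -> mtime (epoch seconds). Return the paths this backup copies."""
    if mode == "full":
        return set(files)
    if mode == "differential":   # changed since the last FULL backup
        return {p for p, m in files.items() if m > last_full}
    if mode == "incremental":    # changed since the last backup of ANY type
        return {p for p, m in files.items() if m > last_backup}
    raise ValueError(f"unknown mode: {mode}")

files = {"a.doc": 100.0, "b.db": 250.0, "c.log": 400.0}
# Last full backup ran at t=200, last incremental at t=300
assert select_files(files, 200, 300, "incremental") == {"c.log"}
assert select_files(files, 200, 300, "differential") == {"b.db", "c.log"}
```

The trade-off the sketch exposes: incrementals copy less per run but restores need the full plus every subsequent incremental, while a restore from a differential needs only the full plus the latest differential.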
This document discusses backup and recovery strategies for Oracle Exadata systems. It outlines the fundamental principles of backups including having multiple copies of data stored on different media with one copy offsite. It then describes the various backup options for Exadata, including using additional Exadata storage cells for the fastest backups, using a ZFS storage appliance for flexibility, or backing up to tape for economical long-term storage with removable offline copies. Key metrics like backup and restore speeds are provided for each option.
Lesson 8 - Understanding Backup and Recovery Methods (Gene Carboni)
This document discusses various backup and recovery methods in Windows. It covers creating file and system backups, restoring files from backups, creating system images, using System Restore to roll back to earlier system states, and accessing advanced recovery options like the recovery boot menu. The goal of backups and recovery options is to protect users from data loss and enable restoring systems and files if needed.
This document provides an agenda and overview for a training session on Oracle Database backup and recovery. The agenda covers the purpose of backups and recovery, Oracle data protection solutions including Recovery Manager (RMAN) and flashback technologies, and the Data Recovery Advisor tool. It also discusses various types of data loss to protect against, backup strategies like incremental backups, and validating and recovering backups.
This document discusses various topics related to backup and recovery of computer systems, including common threats to systems, types of viruses, importance of regular backups, different backup strategies and methods, and ensuring continuity of service. It provides details on full, incremental, and differential backups and recommends backing up data regularly to external storage drives or online backup services. The document stresses having clear backup procedures, assigning responsibility, and being prepared to recover from data loss through training and alternative plans.
Whitepaper: ESG Whitepaper: Backup and Recovery of Large Scale VMware Enviro... (EMC)
This white paper discusses the challenges of backing up and recovering large scale VMware environments. As organizations increasingly adopt virtualization, protecting large numbers of virtual machines becomes more difficult due to redundancy of data and strain on system resources. Traditional backup methods are not optimized for virtual infrastructures. The paper recommends planning backup and recovery strategies when expanding virtualization to avoid potential problems. It also introduces EMC Avamar as a solution that can ease data protection for large virtual deployments through efficient management and backup of virtual machines.
EMC IT's Journey to Cloud: Business Production Backup & Recovery Systems (EMC)
Phase 2: Business Production Backup & Recovery Systems. Discover how EMC uses its next-generation deduplication, backup, and archiving.
EMC for Network Attached Storage (NAS) Backup and Recovery Using NDMP (EMC)
This white paper discusses EMC's backup and recovery solutions for NAS systems using NDMP. It describes how EMC's Avamar and NetWorker solutions can provide optimized data protection for NAS using data deduplication and integration with Data Domain storage. The paper recommends using backups, snapshots, and offsite replication as best practices to meet recovery objectives while improving efficiency.
The document summarizes testing of Symantec NetBackup 7.6 and a competitor's solution for backing up virtual environments of increasing size, from 100 to 1,000 VMs. It found that NetBackup with the NetBackup Integrated Appliance provided backups that were 66.8% faster than the competitor's solution for backing up 1,000 VMs using SAN transport. NetBackup also offered capabilities like Replication Director and Accelerator that the competitor did not support. The testing environment used VMware vSphere and NetApp storage arrays to host the VMs.
Back up and restore data faster with a Dell PowerProtect Data Manager Appliance (Principled Technologies)
Compared to a competitor, the faster Dell Technologies solution also consumed significantly less power during backup and restore processes
Conclusion
Safeguarding your data isn’t just a business necessity; it’s the turbocharger propelling you towards unwavering business continuity. When you choose a speedy backup and recovery solution, you could slash downtime and narrow backup windows while enabling data protection teams to schedule more frequent backups. This approach can help you bounce back quickly from a disruptive event, getting closer to the pre-corruption point in time with more precision.
The Dell PowerProtect Data Manager Appliance with Transparent Snapshots delivered a faster initial backup of 500 VMs than the Vendor X solution, backing up the VMs within an overnight backup window—something the Vendor X solution could not do. The PowerProtect Data Manager Appliance also delivered faster incremental backups and a faster VM restore than the solution from Vendor X, which used a traditional transport mode (NBD). The Dell solution consumed less power, too, during both backup and restore scenarios. Our results show that the Dell PowerProtect Data Manager Appliance with Transparent Snapshots backs up and restores data while also helping your bottom line and carbon footprint by consuming less power, making it a good solution for your data protection needs.
Avamar is backup software that uses global, source-based data deduplication to reduce the size of backup data. It delivers fast, daily full backups using less bandwidth and storage than traditional backup methods. Avamar can be deployed as software on standard servers, as integrated hardware/software appliances, or as a virtual appliance for VMware environments. It provides efficient backup solutions for virtual machines, remote/branch offices, file servers, and desktops/laptops.
Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas... (Principled Technologies)
Oracle single instance database VMs need plenty of storage capacity and performance to handle increased workload demands placed on them by users. Whether your organization uses DBaaS or traditional Oracle 12c instances, consider the reliable performance and scaling flexibility that the EMC XtremIO storage array can offer. We found IOPS levels stayed consistent as we scaled up to eight Oracle single instance VMs and scaled by an average of 14,700 IOPS for each VM (totaling 118,067). In addition, we found that the inline deduplication, compression, and thin provisioning capabilities on the XtremIO array resulted in an overall efficiency ratio of 51 to 1 and a data reduction ratio of 14.6 to 1. With this level of consistent performance, users can expect great performance to meet high demand for IOPS in a DBaaS environment.
Symantec NetBackup 7.6 benchmark comparison: Data protection in a large-scale... (Principled Technologies)
In an enterprise environment, a data center VM footprint can grow quickly; large-scale deployments of thousands of virtual machines are becoming increasingly common. Risk of failure grows in proportion to the number of systems deployed, and critical failures are unavoidable. Your ability to deliver data protection from a backup solution is critical to business continuity. Elongated, inefficient protection windows can create resource contention with production environments; it is therefore critical to execute system backups within a finite window of time.
The Symantec NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 80.3 percent less time in SAN testing and used NetApp array-based snapshots to create recovery points in 93.8 percent less time than Competitor “C.” In addition, the Symantec NetBackup Integrated Appliance with NetBackup 7.6 created backup images that offered granular recovery without additional steps and in a backup window 69.0 percent shorter than the backup window needed for Competitor “C.” These time savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
NetBackup 7.6 with the NetBackup Integrated Appliance provided more scalable data protection than a competitor's solution for large virtual environments. With 1,000 VMs, NetBackup completed a SAN transport backup in 80.3% less time. For NAS backups of 1,000 VMs using storage snapshots, NetBackup was 93.8% faster. Testing showed NetBackup had superior scalability for protecting the largest virtual server deployments compared to the competitor.
Backup and Recovery with Cloud-Native Deduplication and Use Cases from the Fi... (Amazon Web Services)
by Hugh Emberson, CTO, StorReduce
Designing and deploying cloud-enabled backup & recovery solutions often creates opportunities to reduce storage requirements and increase efficiency. Effective cloud-native deduplication as part of your backup & recovery strategy can optimize migration, decrease the need for purpose-built backup appliances (such as Data Domain) and large tape archives, and enable cost reductions of up to 95%. In this session, StorReduce will provide best practices around data deduplication for designing and deploying solutions for backup, archive, and general unstructured file data. They will also demonstrate how a cloud-native interface with scale-out deduplication enables generic cloud services, such as search inside all backups moved to the cloud. They will guide the audience through two customer use cases from the financial services and healthcare industries.
The Pensions Trust - VM Backup Experiences (glbsolutions)
VMware Backup Experiences Darren Bull Business Support Manager, The Pensions Trust
The Pensions Trust previously used tape backups and manual server recovery for disaster recovery, a process that took 48 hours. They virtualized servers with VMware, which simplified backups and DR. They tried EMC's MirrorView replication, but it fell behind and failed. They implemented Data Domain deduplicated storage for backups and replication, which achieved 40x storage savings and offsite replication within 24 hours. For backups they moved from Backup Exec to Veeam, which reduced backup times from 24 hours to minutes and allowed DR testing recovery in under 6 hours. In conclusion, newer backup software and deduplicated storage provided reliable, efficient backups and disaster recovery meeting their 24-hour objectives.
Accelerating Spark Genome Sequencing in Cloud—A Data Driven Approach, Case St... (Spark Summit)
Spark data processing is shifting from on-premises to cloud service to take advantage of its horizontal resource scalability, better data accessibility and easy manageability. However, fully utilizing the computational power, fast storage and networking offered by cloud service can be challenging without deep understanding of workload characterizations and proper software optimization expertise. In this presentation, we will use a Spark based programing framework – Genome Analysis Toolkit version 4 (GATK4, under development), as an example to present a process of configuring and optimizing a proficient Spark cluster on Google Cloud to speed up genome data processing. We will first introduce an in-house developed data profiling framework named PAT, and discuss how to use PAT to quickly establish the best combination of VM configurations and Spark configurations to fully utilize cloud hardware resources and Spark computational parallelism. In addition, we use PAT and other data profiling tools to identify and fix software hotspots in application. We will show a case study in which we identify a thread scalability issue of Java Instanceof operator. The fix in Scala language hugely improves performance of GATK4 and other Spark based workloads.
8 considerations for evaluating disk based backup solutions (Servium)
This document discusses considerations for evaluating disk-based backup solutions and compares different approaches. It provides an example showing how data deduplication can significantly reduce disk storage needs for backups. The key points of evaluation are identified as backup performance, restore performance, deduplication approach, scalability, support for heterogeneous environments and backup application features, offsite data protection, and total cost of ownership. ExaGrid is presented as a disk-based backup solution that addresses these considerations through its use of post-process deduplication, which allows for the fastest backup performance and quick restores from the most recent full backup stored complete on disk.
- NoSQL databases like Cassandra perform better with low-latency SSD storage but running all-SSD systems is expensive.
- PerfAccel provides intelligent caching that uses instance store SSDs optimally to improve performance of NoSQL databases on cloud deployments at lower cost than all-SSD.
- It analyzes I/O behavior to place hot data in the faster SSD cache, avoiding data loss issues, and offloads reads to improve backend storage performance.
NetApp Syncsort Integrated Backup Solution Sheet (Michael Hudak)
NetApp has core technology for block-level data protection. What does Syncsort add?
A: Syncsort leverages NetApp core technology while adding the following:
• Heterogeneous application support for Exchange, Oracle, and SQL
• Deeper integration with VMware for advanced, automated recovery scenarios
• Catalog search and restore across disk-based backup
• A catalog that spans both disk and tape
• Recovery from SnapMirror DR destinations
• Automated bare metal recovery
White Paper: Understanding EMC Avamar with EMC Data Protection Advisor — Appl... (EMC)
EMC Data Protection Advisor provides enhanced monitoring and reporting for EMC Avamar environments. Avamar uses deduplication at the client level and across clients to reduce backup storage requirements by up to 50x. It treats each backup as full but only transmits changed data blocks. DPA collects data from Avamar's database to generate reports on backup jobs, used storage capacity, and restore jobs to help manage the Avamar environment.
Slow performance and unavailable critical applications can impede a company’s progress. You can apply patches and updates to improve application quality and user experience, but these changes need to be tested in resource-intensive environments before deployment. Keeping these applications connected to their data is vital, too, as on-premises events can put availability at risk.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. As we added VMs designed for test/dev environments, the production workload maintained an acceptable level of IOPS and achieved an average storage latency of less than a millisecond. The solution also kept data highly available with no downtime and no performance drop when we initiated a lost host connection for the primary storage. To run critical database applications of your company, consider the Dell EMC VMAX 250F for your datacenter.
The document summarizes testing of a Dell EMC VMAX 250F all-flash storage array and Dell EMC PowerEdge servers to support both production and test/development Oracle Database 12c workloads. Key findings include:
1) The solution maintained low latency of less than 1 millisecond even when adding 7 test/dev database snapshots to the production workload.
2) The production workload saw less than 2% degradation in IOPS despite increasing overall storage IOPS by adding test/dev workloads.
3) Using SRDF/Metro replication, the solution provided uninterrupted access to data with no downtime or performance drop when one array was unavailable, ensuring high availability.
Eliminate tape backups by replicating disk-based backups from a primary ExaGrid system to a secondary ExaGrid system offsite. This provides faster disaster recovery than tape due to ExaGrid's post-processing deduplication and ability to restore directly from the most recent backup. It also adds an extra layer of protection by allowing restoration of any lost primary site data from the offsite system. Setup is simple, involving initializing the offsite appliance and configuring replication from the primary system.
Datasheet: Enterprise Cloud Backup & Recovery with Symantec NetBackup (Symantec)
Symantec NetBackup delivers reliable backup and recovery across applications, platforms, and physical and virtual environments. A single console unites the management and reporting of both on-premises and on-cloud information to provide additional operating efficiencies and simplified administration. The NetBackup platform has deep VMware® and Microsoft Hyper-V integration, built-in deduplication to protect the private cloud, seamless integration with industry-leading public cloud storage providers, and self-service and multi-tenancy for backup as a service (BaaS).
The NetBackup cloud storage module enables you to back up and restore data from cloud storage providers. It is integrated with Symantec's OpenStorage (OST) module, which provides features that can enhance the operational experience of backing up to and recovering from the cloud.
Achieve up to 80% better throughput and increase storage efficiency with the ... (Principled Technologies)
Compared to a competing array, Dell’s high-end PowerMax 8500 storage array offered better I/O performance for simulated OLTP and data workloads, saved more space through more efficient data reduction, and performed snapshots with no performance impact
The document discusses data backup and recovery strategies. It defines data recovery as retrieving deleted files, recovering forgotten passwords, or restoring data from damaged hard drives. It discusses challenges with backups such as network bandwidth, backup windows, and lack of resources. It also covers backup storage technologies and strategies to improve backups, such as incremental and block-level backups. The document recommends automating recovery, testing recovery plans, and using tools like BMC's Back-up and Recovery Solution to manage the backup process and improve recovery outcomes.
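The block-level backup technique named above can be sketched as comparing a file's current blocks against the previous version and shipping only the blocks that differ. This is a simplified model with illustrative names; real tools typically track changed blocks via snapshots or changed-block tracking rather than re-reading both versions.

```python
BLOCK = 512  # real systems use larger blocks, e.g. 64 KB

def changed_blocks(old: bytes, new: bytes) -> dict:
    """Map block index -> new block data for every block that differs or was appended."""
    n = max(len(old), len(new))
    delta = {}
    for i in range(0, n, BLOCK):
        if old[i:i + BLOCK] != new[i:i + BLOCK]:
            delta[i // BLOCK] = new[i:i + BLOCK]
    return delta

old = b"x" * 2048                                        # 4 blocks
new = old[:512] + b"y" * 512 + old[1024:] + b"z" * 512   # 1 changed block, 1 appended block
delta = changed_blocks(old, new)
print(sorted(delta))  # [1, 4]: only 2 of 5 blocks travel over the wire
```

This is why block-level incrementals help with the bandwidth and backup-window challenges the document lists: for a large database file where only a few pages changed, the transfer is proportional to the change, not to the file size.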
Similar to Aspirus Enterprise Backup Assessment And Implementation Of Avamar (20)
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
"Choosing proper type of scaling", Olena Syrota (Fwdays)
Imagine an IoT processing system that is already quite mature and production-ready, whose client coverage is growing, and for which scaling and performance are life-and-death questions. The system has Redis, MongoDB, and stream processing based on ksqlDB. In this talk, we will first analyze scaling approaches and then select the proper ones for our system.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency (ScyllaDB)
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... (Alex Pruden)
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
What is an RPA CoE? Session 1 – CoE Vision (DianaGray10)
In the first session, we will review the organization's vision and how it impacts the CoE structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
AppSec PNW: Android and iOS Application Security with MobSF (Ajin Abraham)
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario-based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Essentials of Automations: Exploring Attributes & Automation Parameters (Safe Software)
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Aspirus Enterprise Backup Assessment And Implementation Of Avamar
Aspirus Enterprise Backup Assessment and Implementation of Avamar
Written by: Thomas Whalen, Server and Storage Infrastructure Team Leader, Aspirus Information Technology Department

Executive Summary

Since the initial implementation of Epic within the Aspirus Health System, maintaining a consistent backup process had been a recurring challenge. The largest aspect of this challenge was finding a combination of backup technology and storage solutions that could handle the continual growth of data as Aspirus expanded its Epic environment, both in terms of clinical records and application modules. In late 2009, the Aspirus Information Technology department participated in a proof of concept around EMC's Avamar host-based de-duplication backup grid and Networker backup management software, to see what pushing the Epic production data to this backup architecture would yield. In the past, we had leveraged a product from Exagrid to perform target-based de-duplication, but found that Exagrid didn't yield the performance and de-duplication rates we considered acceptable as the environment continued to grow. In front of Exagrid we also ran Symantec's NetBackup backup management software, which was proving inconsistent at performing routine backups and was plagued with system issues, forcing the IT staff to devote constant attention to it just to assure that routine backups could take place.

Once we began the proof of concept with Avamar, we determined very quickly that the observed de-duplication rates were superior to those of Exagrid's target-based de-duplication appliance. We also concluded that EMC's Avamar technology, with its RAIN (Redundant Array of Independent Nodes) architecture, would scale to meet the long-term needs of Aspirus' ever-increasing data growth.
Just as important, while implementing the system we never observed backups simply failing to run. At the end of the proof of concept and its eventual implementation, we now realize an overall de-duplication rate for our Epic environment (based on routine nightly backups) of 110:1, storing an average of 900GB of total data with an average nightly change rate of 1.5–2.5%, or roughly 8GB of daily changes. Because of these aggressive de-duplication capabilities, this equates to a significantly lower cost of ownership than securing the same amount of data written to tape or even to another disk-based de-duplication system. It also allows us to free up the staff time once dedicated to hand-holding our previous backup system and reallocate it to more meaningful IT work. Lastly, the days of random missed backups appear to be behind us, which assures us that our clinical and financial data will be consistently protected throughout its life-cycle.

Epic Backup Architecture

Aspirus uses a number of technologies to position its Epic clinical data for backup. In the beginning, we would simply pull the backups from a snapshot mounted to the Epic shadow server and then spin that data off to magnetic tape. We found that this posed a number of problems, both in Epic performance and in the performance of writing the backup. On the Epic performance side, using EMC's SnapView tools on our CLARiiON SAN, we found that, given how snapshots are designed to work, initiating the backup and mounting the snapshot to the shadow server caused residual, degraded performance in the production environment. As our datasets grew larger, this performance issue became increasingly visible to users, and the mission of the IT technical group is to maintain the highest degree of performance 24x7.
As our environment grew, we also noticed our backup window getting longer and longer while we wrote 500–600GB of data off to our DLT tape array. As the data grew, we saw the writing on the wall: DLT was not going to be the long-term solution if we wanted to keep a daily backup process intact. At this point, we decided to use EMC SnapView clones to replicate the data from the production storage LUNs to cloned LUNs. While this is more expensive because of the duplicate storage the clone requires, mitigating the performance issues we had seen with the snapshot process was, in our opinion, a good trade-off. The clone could also be used for other purposes, such as environment refreshes. The initial clone was created on EMC 500GB SATA drives, which were slower overall but had more disk capacity. At the same time, we moved away from our DLT tape array to a target-based disk appliance from Exagrid. This transition was a good move: it brought faster backups and restores and also introduced target-based de-duplication. As backups were written to Exagrid, we started to see de-duplication rates of around 15:1.

While transitioning from snapshots to clones, we also decided to move the backup processes off the production Shadow Server. The Shadow Server was pulling double duty: it not only provided the DR shadow as part of Epic's overall best practices, but also served as the extraction Cache database for Epic Clarity reporting, a very intensive process. To reduce the Shadow Server workload, we built a dedicated IBM AIX cloning server to present the Epic production clone to. This ensured that no other Epic-specific processes or services were impacted while we performed routine backups.
The clone would also allow routine non-production environment refreshes for future builds, testing, validation, etc.

Visual Representation of Previous Backup System

In this design, we were getting acceptable backups, but with the SATA disks and the growth of the Epic production database we started to see limits to the Exagrid storage system in both speed and de-duplication, along with more and more problems managing the Epic backups through NetBackup.

Avamar and Networker Assessment: Contrasting Avamar vs. Exagrid

The Avamar technology is a collection of servers, or nodes, in a RAIN (Redundant Array of Independent Nodes) that together comprise a "grid" of storage resources. The grid can grow as your storage needs grow, natively supports backups across the network, and can manage NDMP backups for NAS-based storage solutions. Avamar can also replicate backups across separate grids to provide DR for your critical data. While the Avamar and Exagrid storage architectures share functional similarities, the biggest difference is in how they handle the backup data itself. Avamar is a host-based de-duplication system: a client sits on the server holding the data to be backed up, interrogates the data destined for the Avamar grid, and sends only the changed data down the wire. This results in less network traffic for your backup data. Avamar uses a patented "Commonality Factoring" process which learns patterns of data behavior and uses them to distinguish changed data from unchanged data, thereby determining its de-duplication rates. Exagrid, on the other hand, is a target-based de-duplication system in which all data is sent to the grid into a high-speed disk repository.
In this repository, Exagrid compares the data to perform byte-level de-duplication and moves the changed data to a lower-speed, higher-capacity disk area for long-term retention and compression. This takes place only after all the data has been passed to the Exagrid, at which point the software client is told the backup is complete. One can argue the benefits of both technologies, and in fact both are generally very good. But with Epic as our target application, we determined that sending close to 1TB down the wire nightly was a big part of our backup pains. Avamar mitigates that by using the host-based client to determine what has changed and sending only the changed data down the wire, so Avamar stores and manages only the data that changed between backup cycles.

Using the client to manage the changed data, however, uncovered an issue with our cloning process. Our initial testing using the SATA-based clone of Epic production showed an unacceptable level of IOPS being pushed to the host to interrogate data. Our first backups against SATA ran in excess of 10 hours. After investigating the host's performance during the backup, it was easy to see that the clone's IOPS were limiting the Avamar client's ability to interrogate and move the changed data down the wire to the grid. Based on this, we created a new Fibre Channel-based clone running on 300GB, 15K RPM drives. In this configuration, the impact was very positive: our backup went from 10 hours to 6 hours on the first run, faster than any backup we've cut since going live in 2004. After a number of days of nightly backup testing, we began to see Avamar's de-duplication process at work.

Avamar Backup Performance Results

The Avamar grid showed very good performance in accepting data from the host, even over a single 1 gigabit network connection.
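The source-side approach described above can be illustrated with a minimal Python sketch. This is not Avamar's actual Commonality Factoring algorithm, just the general idea: hash fixed-size chunks on the client and ship only the chunks the grid has never seen, so unchanged data never crosses the wire.

```python
import hashlib

def chunks(data: bytes, size: int = 4096):
    """Split a byte stream into fixed-size chunks."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

def incremental_send(data: bytes, seen: set) -> list:
    """Return only the chunks whose hashes the 'grid' has not seen yet.

    Mimics source-based de-duplication: the client hashes locally and
    ships just the new chunks.
    """
    new = []
    for chunk in chunks(data):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen.add(digest)
            new.append(chunk)
    return new

# First "backup": everything is new (two distinct 4KB chunk contents).
seen = set()
day1 = b"A" * 8192 + b"B" * 4096
assert len(incremental_send(day1, seen)) == 2

# Second "backup": only the changed tail is sent down the wire.
day2 = b"A" * 8192 + b"C" * 4096
assert len(incremental_send(day2, seen)) == 1
```

A real implementation would use variable-size, content-defined chunking rather than fixed 4KB blocks, but the traffic-reduction effect is the same.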
Over the course of 11 backup tests, backup timings were recorded along with the amount of changed data, and de-duplication ratios were computed from the change rate. Figure 1 illustrates those results.

Figure 1: De-Duplication Change Rate

Figure 1 shows 11 backups run against Aspirus Epic production data using the Avamar de-duplication grid. Over the backup cycles, the daily change rate decreases as the host-based client "learns" the pattern of day-to-day changes; that knowledge is used to capture only the differences and send them to the grid. The chart's left axis shows the de-duplication percentage: as the daily backups were performed, the amount of data the client determined was unchanged increased each night. The first backup showed zero de-duplication because it was the initial backup and Avamar saw all data as new. Backups 2 through 11 then showed a steady increase in de-duplicated data; by backup 11, the rate of de-duplication was over 90%. Given a 900GB Epic database, this means each backup consisted of roughly 7 to 10GB of total changes sent to the grid. The benefits are a dramatic decrease in network traffic over the continuum of backups, along with a significant decrease in the storage needed to keep a longer retention of Epic backups available. Based on the total data over the changed data, this shows a de-duplication rate of approximately 110:1. The value of this is measured in a number of ways. The largest consideration is the space required to store the same data to tape: using DLT, even with compression, you would need 2–3 tapes per night to keep that data safe. With Avamar, the storage required is 900GB plus the daily changes.
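The ratio arithmetic above is straightforward. A small sketch using the paper's own figures (a 900GB database with roughly 8GB of nightly change actually sent to the grid):

```python
total_gb = 900    # Epic production database size, per the paper
changed_gb = 8    # approximate nightly change actually sent to the grid

# De-duplication ratio: total protected data over data actually transferred.
dedup_ratio = total_gb / changed_gb
print(f"dedup ratio = {dedup_ratio:.0f}:1")
```

This works out to roughly 112:1, consistent with the approximately 110:1 the paper reports.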
So for a week's worth of backups, that equates to storage needs of about 950GB, versus approximately 21 tapes holding about 4.9TB, factoring in a moderate compression ratio on the DLT tape drive.

Figure 2: Backup Time – Snapshot 1

Figure 2 tracks the backup time in hours for the 11 backups we monitored. Note the dramatic drop in backup time between backups 5 and 6: this is the impact of using the Fibre Channel clone instead of the SATA clone, a change that reduced the backup time by 50%. What this chart does not show is the impact of Commonality Factoring as the Avamar client learns the pattern of data change between backup cycles. As of this writing, another capture of backup times shows a much more interesting chart that illustrates that impact.

Figure 3: Backup Time with Commonality Factor

Figure 3 illustrates how, over a longer period of time, Commonality Factoring reduces the backup window. As it learns the pattern of changed and unchanged data day by day, it uses algorithms to determine how best to scan the data on the host. This efficiency reduces the work the client needs to do to review the data, and thus the overall backup time. From roughly backup 8 through 20, you see a slow decline in backup time as Commonality Factoring plays a larger role in how much time the client spends scanning the file systems. The fact that Aspirus can now back up its entire Epic production Cache database instance in roughly 4 hours speaks volumes about the power of the Commonality Factoring process versus the other backup and de-duplication technologies we've used in the past. This is simply the finest backup process we've encountered to date.
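The week-of-backups comparison works out as follows. The per-tape effective capacity here is an assumption chosen to match the roughly 4.9TB across 21 tapes the paper cites; the Avamar figure follows directly from one full copy plus the daily deltas.

```python
total_gb = 900          # full Epic database size, per the paper
daily_change_gb = 8     # nightly change sent to the grid, per the paper
days = 7

# Avamar: one de-duplicated full copy plus six nights of deltas.
avamar_gb = total_gb + daily_change_gb * (days - 1)

# DLT: 3 tapes per night; per-tape effective capacity is an assumption
# chosen so 21 tapes come to ~4.9TB, matching the paper's figure.
tapes_per_night = 3
tape_capacity_gb = 233
tape_gb = tapes_per_night * days * tape_capacity_gb

print(f"Avamar: {avamar_gb} GB  vs  tape: {tape_gb} GB on {tapes_per_night * days} tapes")
```

The de-duplicated footprint stays under 1TB for the week, while the tape approach consumes roughly five times that.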
Avamar Restore Performance Results

Using Networker as the front end to our backup and restore process posed a challenge for Epic production data restores, because Networker is designed around a more Windows-oriented restore process. Said differently, unlike backups, where Networker fires off a multi-threaded backup process (multiple avtar or Networker save processes), for restores it creates only a single restore process per file system, one at a time, until all file systems are restored. Because the traditional Epic Cache database instance is comprised of multiple production file systems, restoring those file systems one at a time would take a significant amount of time even for the smallest of Cache instances. In our testing, we found that launching multiple restore processes, one against each file system, allowed Networker to leverage the horsepower of the Avamar grid and network infrastructure to pull back every Epic file system at the same time, simulating a multi-threaded restore. During testing of 4 restore points with the EMC Networker/Avamar technology, we recorded the aggregate restore times noted in Figure 4.

Figure 4: Restores Single Instance vs. Multi-Instance

Figure 4 shows that a single-instance restore takes significantly longer, upwards of days to finish. A multi-instance restore makes pulling back your Epic production data far more palatable and yields a restore time you can base an SLA around. Note also that the multi-instance restore performs almost as well as the backup, contrary to the conventional 2:1 backup-to-restore baselines used in the IT industry today.
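The multi-instance approach, one restore process per file system launched in parallel, can be sketched as follows. The `echo` command is a placeholder: a real run would invoke the Networker recover CLI with site-specific options, and the file-system names simply mirror the /epic/prdxx layout described later.

```python
import concurrent.futures
import subprocess

# Epic production file systems, mirroring the /epic/prd01..08 layout.
FILESYSTEMS = [f"/epic/prd{n:02d}" for n in range(1, 9)]

def restore(fs: str) -> int:
    """Kick off one restore process for a single file system.

    Placeholder command: echo stands in for the actual Networker
    recover invocation, which is site-specific.
    """
    return subprocess.run(["echo", "restoring", fs]).returncode

# Launch one restore per file system concurrently, simulating the
# multi-instance restore that cut recovery from days to hours.
with concurrent.futures.ThreadPoolExecutor(max_workers=len(FILESYSTEMS)) as ex:
    results = list(ex.map(restore, FILESYSTEMS))

assert all(rc == 0 for rc in results)
```

In practice the same effect can be had from a shell script that backgrounds one recover command per file system and waits on them all; the key point is that the grid can service the streams concurrently.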
But as we learned about the recovery process, an optimization concern emerged that plays a significant role in restoration.

Figure 5: Epic System File System Provisioning

As shown in figure 5, when we really dissected the restore process for our Epic production system, we found that one file system was significantly larger than any other in the production instance. Individually, all of our restores finished in about a 6-hour recovery time frame, all file systems but /epic/prd01. The /epic/prd01 area of Cache individually took ~12 hours to finish, pushing our recovery window to 12 hours in total. Considering that /epic/prd01 is 2½ times the size of any other /epic/prdxx file system, the restore time made sense, albeit not optimally. What we learned is that we need to do a better job of balancing data between file systems and keeping them all relatively equal in size, so that in a restore situation we can minimize time to recovery across all file systems. By balancing /epic/prd01 with the remaining file systems, even if they all grow an additional 10–20%, we should be able to reduce our recovery window from 12–13 hours to approximately 6–6½ hours, given the restore timings we've already collected for the /epic/prd02 – 08 file systems. Aspirus will actively engage Epic to better balance the /epic/prd01 file system with the other Epic Cache file systems and then revisit the recovery window, but we feel our recovery projections will be acceptable given the testing already completed. Also, as stated earlier, every instance we restored passed Epic integrity tests without issue.
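With one parallel restore stream per file system, the recovery window is set by the largest file system, not by the total. A sketch with hypothetical sizes (the 2.5x imbalance and the per-stream throughput are illustrative assumptions, not the paper's measured values):

```python
# Hypothetical sizes in GB: /epic/prd01 is ~2.5x the others,
# mirroring the imbalance the paper describes.
sizes_gb = {"/epic/prd01": 250}
sizes_gb.update({f"/epic/prd{n:02d}": 100 for n in range(2, 9)})

throughput_gb_per_hr = 20   # assumed per-stream restore throughput

# Parallel restore: the window is the slowest (largest) stream.
window_hr = max(sizes_gb.values()) / throughput_gb_per_hr

# If the same total data were spread evenly, every stream would
# finish together and the window would shrink accordingly.
balanced_hr = sum(sizes_gb.values()) / len(sizes_gb) / throughput_gb_per_hr

print(f"unbalanced window: {window_hr:.1f} h, balanced: {balanced_hr:.1f} h")
```

Under these assumptions the unbalanced layout yields a ~12.5-hour window against a ~6-hour balanced one, matching the roughly 2:1 improvement the paper projects from rebalancing /epic/prd01.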
Analysis Summary

Based on the findings we captured in both the backup and recovery processes, the EMC Networker and Avamar grid technology offers, from an Epic perspective, a significant improvement in the overall management of backup data. From an SLA perspective, Aspirus was able to move its Epic backup window from 12–14 hours to 4–4½ hours, with a recovery RTO of 7–8 hours, down from the 24–48 hours of spinning back from tape. Care must be taken in assessing your Epic file systems to ensure they are balanced, and in positioning a host to which the Epic data can be presented; these steps are critical to the success of the implementation.

From a cost of ownership perspective, the aggressiveness of Avamar's de-duplication technology and its Commonality Factoring have let us reduce the long-term size of our backups, for the retention windows we feel are necessary, by almost 90% over the Exagrid. This equates to less money spent continually adding capacity for all the other backups in the enterprise, and it extends our initial storage provisioning far longer than we originally anticipated. Also, because of the host-based client, less data traverses the network, which helps maintain overall network performance and makes WAN-based Networker/Avamar backups a reality rather than a wish-list item. The Avamar grid is sold based on your de-duplicated data needs, not on the total amount of backup space like other backup storage technologies. Because of the Commonality Factoring process, the Avamar RAIN grid will generally cost less per GB and require less total storage space, due to the higher degrees of de-duplication achieved over other de-duplication systems. Another major cost factor is client costs: with other backup technologies you must license every client or host you wish to back up.
With Avamar, the clients are free for a wide array of hosts (Windows, IBM AIX, HP-UX, Linux), and agents are included for Microsoft Exchange and SharePoint, Oracle, DB2, and others, which are usually high-priced accessory licenses on top of the host license itself. It is often in this area that backup system implementations become very expensive very quickly. In closing, Aspirus has spent a lot of time working with the EMC Avamar/Networker backup technology and feels it was absolutely the right move for all the reasons above, but there is one final point I have yet to cover. The best part, beyond all the technical strengths of this backup environment, is that we feel our backups are safe and recoverable, and that we manage backups rather than backups managing us. Because we have a technically sound and functionally stable backup environment, we can now devote that time to more meaningful work.

The Aspirus Backup Architecture Today