EMC Data Domain® Boost Integration Guide - Arvind Varade
The document provides an integration guide for using EMC NetWorker Version 9.0.x with EMC Data Domain Boost (DD Boost) technology. It covers planning, practices, and configuration information for using DD Boost devices within a NetWorker backup and storage management environment. Key points include:
- DD Boost allows deduplication of backup data on Data Domain storage systems for reduced storage requirements.
- The guide provides roadmaps and procedures for configuring DD Boost devices, policies for backups and cloning, software requirements, restoring data, monitoring and reporting, and upgrading existing DD Boost configurations.
- Details are given on network and hardware requirements, performance considerations, licensing, and best practices for backup retention, data types
EMC presented an overview of SQL Server 2012 and how it can help organizations unlock insights from data, improve performance of mission critical applications, and create business solutions across on-premises and cloud environments. EMC positions itself as the leader in mission critical infrastructure and discusses how its storage solutions like VNX, VMAX, and FAST cache can boost the performance of SQL Server workloads by 3-4x while improving reliability, availability, backup speeds and reducing storage needs. The presentation provides best practices for optimizing SQL Server deployments and highlights EMC's management and data protection tools for SQL Server.
VMware vSphere 5.5 with features like Flash Read Cache (vFRC) can improve performance of virtualized Oracle 12c databases without impacting reliability functions like VMotion. Testing showed vFRC decreased time to complete an OLAP workload by 14% and allowed seamless migration of vFRC-enabled VMs during VMotion. The combination of VMware, Cisco, and EMC technologies provided reliable virtualization and storage with increased Oracle 12c performance using vFRC.
Updated study material is available for the 1Z0-027 exam (Oracle Exadata Database Machine Administration, Software Release) at https://www.troytec.com/1Z0-027-exams.html
EMC Data Domain Advanced Features and Functions - solarisyougood
This document provides an overview of advanced features and functions of Data Domain systems. It covers topics such as virtual tape libraries (VTL), snapshots, replication, DD Boost integration, capacity and throughput planning, and system monitoring tools. The document consists of multiple lessons that describe these topics in detail and includes configuration examples.
Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas... - Principled Technologies
Oracle single-instance database VMs need plenty of storage capacity and performance to handle the increased workload demands users place on them. Whether your organization uses DBaaS or traditional Oracle 12c instances, consider the reliable performance and scaling flexibility that the EMC XtremIO storage array can offer. We found that IOPS levels stayed consistent as we scaled up to eight Oracle single-instance VMs, growing by an average of 14,700 IOPS per VM (totaling 118,067). In addition, we found that the inline deduplication, compression, and thin provisioning capabilities of the XtremIO array resulted in an overall efficiency ratio of 51 to 1 and a data reduction ratio of 14.6 to 1. With this level of consistent performance, users can expect to meet high demand for IOPS in a DBaaS environment.
This document discusses Oracle Cloud Infrastructure compute options including bare metal instances, virtual machine instances, and dedicated hosts. It provides details on instance types, images, volumes, instance configurations and pools, autoscaling, metadata, and lifecycle. Key points covered include the differences between bare metal, VM, and dedicated host instances, bringing your own images, customizing boot volumes, using instance configurations and pools for management and autoscaling, and accessing instance metadata.
The document discusses best practices for running Oracle databases on VMware virtual machines. It recommends: 1) carefully sizing workloads based on physical constraints; 2) optimizing ESXi host settings like disabling unnecessary processes, using large memory pages, and matching vCPUs to sessions; 3) optimizing the guest operating system; 4) using dedicated storage like SSDs and aligning datastores; and 5) separating infrastructure and VM network traffic using features like NIC teaming.
Optimizing Oracle databases with SSD - April 2014 - Guy Harrison
Presentation on using Solid State Disk (SSD) with Oracle databases, including the 11GR2 db flash cache and using flash in Exadata. Last given at Collaborate 2014 #clv14.
This document provides an overview of VERITAS Storage Foundation, which includes VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). VxVM provides storage virtualization and management, while VxFS provides a high-performance file system. Together they deliver increased manageability, availability, and performance for storage resources. Storage Foundation simplifies storage management and reduces costs through features like online administration and hardware failure protection.
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage - VMworld
This document discusses architecting Oracle databases on VMware vSphere 5 with NetApp storage. It begins with objectives such as understanding how to provision NetApp storage for an Oracle database to take advantage of VMware and NetApp technologies. It then covers topics like using Oracle with vSphere 5, recommendations for vSphere 5, virtualizing Oracle with NetApp, reference architectures, and where to learn more. The presenters are experts on Oracle and virtualization technologies looking to provide best practices on implementing Oracle databases with VMware and NetApp.
The document discusses various Oracle Cloud Infrastructure storage services including local NVMe storage, block volumes, file storage, object storage, and archive storage. It provides details on the type, durability, capacity, unit size, and use cases of each storage service. Local NVMe storage provides temporary SSD-based storage attached directly to compute instances, while block volumes provide durable block-level storage that can be attached to instances independently. File storage provides shared NFS-compatible file systems, object storage offers highly durable object storage, and archive storage is for long-term archival and backups.
This document contains a practice exam for the Oracle Exadata Database Machine 2014 Implementation Essentials certification (exam 1Z0-485). It includes 21 multiple choice questions about configuring and implementing Exadata, with explanations provided for each answer. Key topics covered include Exadata networking, storage configuration, cell offloading, I/O resource management, backups, health checks, and integrating Exadata with Enterprise Manager.
This document discusses EMC RecoverPoint for Virtual Machines, a software-only solution that provides continuous data protection for VMs with VM-level granularity. It protects VMs running on VMware ESXi, supports various storage types, and integrates with VMware vCenter. RecoverPoint for VMs allows admins to optimize RPO and RTO to meet SLAs, streamline recovery workflows, and lower TCO. It provides automated VM discovery, protection, and orchestrated disaster recovery failover/failback to any point in time.
Oracle Automatic Storage Management has proven to be one of the most widely adopted new features in Oracle Database 10g, and it has been dramatically improved in the later 11g releases. This presentation will explain what challenges ASM solves, how it solves them, what barriers there are to ASM adoption, and how 11g Release 2 addresses those barriers.
Boosting virtualization performance with Intel SSD DC Series P3600 NVMe SSDs ... - Principled Technologies
When it comes time to make your next server purchase, or if you're looking for an easy way to boost the performance of existing infrastructure, consider upgrading your server's internal storage. As our hands-on tests with a Dell EMC PowerEdge R630 environment running VMware Virtual SAN proved, Intel SSD DC P3600 Series NVMe SSDs could increase virtualized mixed-workload performance by as much as 59.9 percent compared to SATA SSDs, while allowing you to run many additional VMs. When you improve performance for your virtualized workloads, your employees and customers benefit. By increasing performance with Intel NVMe SSDs on your Dell EMC PowerEdge R630 servers, you can potentially slash wait times and do more work on your servers without expanding your infrastructure with additional storage arrays, which can translate to happier users and a more efficient infrastructure.
Accelerating Oracle on Red Hat Enterprise Linux with ioMemory - Sumeet Bansal
Oracle on RHEL is a great combination. The pot gets even sweeter when Fusion-io's ioMemory is added to the mix. Team Red Hat has done some excellent benchmarking to show that a single commodity server with RHEL and ioDrives can deliver mind-blowing throughput and IOPS. If you have a read-heavy Oracle workload on RHEL and can't use Oracle Smart Flash Cache, just use directCache from Fusion-io and get it done.
I am presenting this at the Red Hat mini-theatre at the Oracle Open World 2012.
EMC Data Domain Technical Deep Dive Workshop - solarisyougood
The document provides an overview of EMC Data Domain products and services. It discusses Data Domain systems which provide scalable and high performance protection storage for backup and archive data. The systems integrate with leading backup and archiving applications. The document also summarizes Data Domain software options such as Boost, Encryption, Replicator and Extended Retention which provide additional functionality.
Presentation: deduplication backup software and system - xKinAnx
The document provides information on EMC's Avamar deduplication backup software and system. It discusses how Avamar reduces backup time and storage requirements through client-side deduplication. Avamar provides daily full backups, one-step recovery, and supports both physical and virtual environments. It integrates with EMC Data Domain systems and is optimized for backing up virtual machines, remote offices, desktops/laptops, and enterprise applications.
EqualLogic is changing how people experience storage by delivering dynamic virtual storage solutions that address changing business needs in real time at a reasonable cost. The presentation covers how EqualLogic avoids disruptions and underutilization compared to traditional storage, provides high performance for applications, and includes comprehensive data management features like snapshots, cloning and replication at no additional cost. It also demonstrates how EqualLogic integrates with virtualization platforms and simplifies management.
Deduplication Solutions Are Not All Created Equal: Why Data Domain? - EMC
Data Domain systems provide significant advantages over other deduplication solutions through their unique technologies and leadership. Their Data Invulnerability Architecture ensures the integrity of backup data through end-to-end verification, fault avoidance, detection and healing, and rapid file system recoverability. Stream Informed Segment Layout delivers industry-leading performance that scales with CPU improvements. Data Domain Boost distributes deduplication processing for up to 50% faster backups and 99% less network usage. These technologies simplify backup operations, improve reliability and recoverability of data, and help customers meet backup windows.
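The distributed-deduplication idea behind DD Boost can be sketched in a few lines: the client fingerprints each data segment locally, asks the storage system which fingerprints it has never seen, and ships only those segments. This is a toy illustration of that protocol, not EMC's actual DD Boost implementation; the fixed-size segments, class names, and SHA-256 fingerprints here are all assumptions for the sketch (real systems use variable-size chunking and their own fingerprint scheme).

```python
import hashlib

SEGMENT_SIZE = 4096  # toy fixed-size segments; real systems use variable-size chunking

def segment(data: bytes):
    """Split a byte stream into fixed-size segments."""
    return [data[i:i + SEGMENT_SIZE] for i in range(0, len(data), SEGMENT_SIZE)]

class DedupServer:
    """Stand-in for the storage system: stores segments keyed by fingerprint."""
    def __init__(self):
        self.store = {}

    def filter_new(self, fingerprints):
        """Return only the fingerprints this server has not seen yet."""
        return [fp for fp in fingerprints if fp not in self.store]

    def write(self, fp, seg):
        self.store[fp] = seg

def boost_style_backup(client_data: bytes, server: DedupServer):
    """Client fingerprints locally and ships only unseen segments.

    Returns (segments_sent, segments_total): the gap between the two is
    the network traffic that client-side deduplication avoided.
    """
    segs = segment(client_data)
    fps = [hashlib.sha256(s).hexdigest() for s in segs]
    needed = set(server.filter_new(fps))
    sent = 0
    for fp, seg in zip(fps, segs):
        if fp in needed:
            server.write(fp, seg)
            sent += 1
            needed.discard(fp)  # a repeated segment is sent only once
    return sent, len(segs)

server = DedupServer()
first = b"A" * 8192 + b"B" * 4096          # three segments, only two unique
sent, total = boost_style_backup(first, server)
# a repeat backup of identical data sends nothing over the "network"
sent2, _ = boost_style_backup(first, server)
```

Run once, the client sends 2 of 3 segments; run again on identical data, it sends 0, which is the effect behind the "99% less network usage" claim for highly redundant backup streams.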
Architectural designs driving SQL Server performance and high availability - Sumeet Bansal
DBAs are often asked to design database infrastructure for new applications or upgrade existing systems. Therefore, they must have a keen understanding of the application’s requirements, operational environment, and infrastructure so they can recommend the best approach.
In this session, you’ll learn about the latest advancements in storage technology, the various ways flash can be deployed (flash caching, server-side PCIe flash, hybrid, and all-flash), and the pros and cons for each. We’ll also discuss how a well-designed infrastructure can drive efficiencies and help your organization save on licensing and maintenance costs.
This white paper discusses optimizing backup and recovery for VMware Infrastructure using EMC Avamar. It provides an overview of VMware Infrastructure and its components. It then discusses three solutions for backing up virtual machines using Avamar: backing up via the VMware Consolidated Backup proxy server, installing Avamar agents inside each virtual machine, or installing an agent on the ESX server service console. Avamar reduces backup sizes and times through global data deduplication.
Presentation: Data Domain advanced features and functions - xKinAnx
This document provides an overview of Data Domain advanced features and functions for Velocity Partner Accreditation. It covers topics such as virtual tape library (VTL) planning, snapshots, replication, recovery, DD Boost integration, capacity and throughput planning, and system monitoring tools. The document contains lessons and explanations on these topics to help partners learn about and describe Data Domain's data protection solutions.
Benchmark EMC VNX7500, EMC FAST Suite, EMC SnapSure and Oracle RAC on VMware - solarisyougood
This document describes a scalable virtualized Oracle RAC 11g database deployment using EMC VNX7500 storage with EMC FAST Suite. Testing showed that using FAST Cache improved transactions per minute by 133% and response time by over 90%, while FAST Suite improved TPM by 136% and response time by over 95%. The solution also enabled rapid provisioning of Oracle databases through SnapSure checkpoints and Oracle dNFS clonedb. It provided high availability with automatic failover during network or storage hardware failures.
Presentation: Symmetrix VMAX family with Enginuity 5876 - solarisyougood
This document summarizes EMC's VMAX family of high-end storage arrays. It introduces the new VMAX 40K model which offers 3x performance and 2x scale of previous versions. It describes key software capabilities of the VMAX like FAST VP for automated tiering, federated tiered storage for managing external arrays, and enhancements to replication features. The document aims to showcase how the VMAX delivers powerful scalability and storage services for virtualized and private cloud environments.
EMC Symmetrix VMAX: An Introduction to Enterprise Storage: Brian Boyd, Varrow... - Brian Boyd
This session gives an overview of the EMC Symmetrix VMAX enterprise storage array. We discuss when it is appropriate to start looking at enterprise storage in your datacenter, the benefits and technology differences between VMAX and other storage arrays, and give specific examples of how VMAX has helped our customers in their environments.
Transforming your Business with Scale-Out Flash: How MongoDB & Flash Accelera... - MongoDB
Transforming your Business with Scale-Out Flash: How MongoDB & Flash Accelerate Application Performance [1:40 pm - 2:00 pm]. MongoDB lets you build next-generation applications that require new levels of performance and latency. Flash has become a critical component in meeting these needs, and this session will focus on how best to leverage Flash in a MongoDB deployment, covering key best practices and approaches. Even armed with these best practices, as your environment scales, the ongoing management of Flash within a traditional DAS architecture may still introduce some fundamental challenges. We will therefore introduce EMC's XtremIO platform, which fully automates and offloads this overhead, allowing MongoDB administrators and architects to focus on driving new capabilities into their applications as they scale. Key features like data reduction, agile copy services, and free encryption extend the value of Flash well beyond what can be done with traditional DAS architectures.
The document discusses several high availability and disaster recovery options for SQL Server including failover clustering, database mirroring, log shipping, and replication. It provides examples of how different companies have implemented these technologies depending on their requirements. Key factors that influence architecture choices are downtime tolerance, deployment of technologies, and operational procedures. The document also covers SQL Server upgrade processes and how to move databases to a new datacenter while maintaining high availability.
ProSphere is a storage management solution from EMC that provides:
- End-to-end visibility of storage performance and capacity across sites
- Monitoring and alerting on capacity utilization and storage infrastructure
- Reports and dashboards on capacity, configuration, and performance to improve planning and reduce costs
The document discusses various configurations for EMC VNX storage arrays. It describes a configuration for consolidating Oracle workloads using a VNX5400 array with SSD caching and storage pools for VMs, Oracle redo logs, backups and data. It also outlines a configuration for virtualization with a VNX5400 providing over 50TiB of capacity for 500 VMs. Charts show how these configurations can be expanded through additional storage array extensions as capacity and performance needs grow.
Unleash Oracle 12c performance with Cisco UCS - solarisyougood
This document discusses performance testing of Oracle 12c on Cisco UCS blade servers. An 8-node Oracle RAC cluster was tested achieving 750K IOPS and 25GB/sec bandwidth. OLTP workloads achieved 330K IOPS and DSS workloads achieved 17GB/sec bandwidth running together. Pluggable databases were also compared to traditional containers, showing higher throughput with pluggable databases. Various hardware failures were tested to demonstrate high availability of the Oracle RAC cluster on Cisco UCS.
My MySQL and NoSQL presentation from the NoSQL Search event in Copenhagen: http://nosqlroadshow.com/nosql-cph-2013/speaker/Ted+Wennmark
MySQL offers solutions to implement NoSQL concepts like auto-sharding, key-value access or asynchronous operations. This adds all known solutions from the SQL world to the NoSQL space.
The combined approach of SQL and NoSQL gives developers the choice to select whatever features from both worlds they need.
In this talk we take a deeper look at key-value access to MySQL and MySQL Cluster, auto-sharding and scalability of MySQL Cluster, mapping of schemaless key value access to a relational data model and the performance of NoSQL access to MySQL.
EMC World 2016 - code.15 Better Together: Scale-Out Databases on Scale-Out St...{code}
The introduction of scale-out persistent applications, such as databases, have changed the requirements on infrastructure. A common design pattern is to focus on local direct attached storage to satisfy storage needs. There is opportunity to transform and build a complimentary strategy for your scale-out applications with storage. Learn how to run these applications in new ways and see the possibilities that emerge.
The document discusses Oracle Database Appliance (ODA) high availability and disaster recovery solutions. It compares Oracle Real Application Clusters (RAC), RAC One Node, and Standard Edition High Availability (SEHA). RAC provides automatic restart and failover capabilities for load balancing across nodes. RAC One Node and SEHA provide restart and failover, but no load balancing. SEHA is suitable for Standard Edition databases if up to 16 sessions are adequate and a few minutes of reconnection time is acceptable without data loss during failover.
Oaktable World 2014 Kevin Closson: SLOB – For More Than I/O!Kyle Hailey
The document discusses using SLOB (Synthetic Load On Box) to test various Oracle database configurations and platforms. SLOB is described as a simple and predictable workload generator that allows testing the performance of databases under different conditions with minimal variability. The document outlines several potential uses of SLOB, including testing Oracle in-memory database options, multitenant architectures, and measuring the impact of database contention. It provides examples of using SLOB to analyze CPU and storage I/O performance.
I dati al giorno d’oggi sono un elemento di estrema importanza e d’intrinseco valore per ogni entità. Per questo quando parliamo di Oracle Database facciamo riferimento al capitale della nostra azienda, sia essa pubblica che privata. Per poter sfruttare al massimo le potenzialità del database Oracle è però necessario avere a disposizione un’infrastruttura in grado di facilitarne l’accesso, di semplificarne la gestione, di proporzionare il livello di performance necessario al fine di garantire la scalabilità utile a mantenere queste condizioni nel tempo. Il costante cambiamento della società spinge le imprese ad aggiornarsi e, con il passare del tempo, questo processo comporta una crescita dei dati immagazzinati nei nostri Database con conseguente aumento della criticità degli stessi. Oracle Database Appliance è il sistema ingegnerizzato creato da Oracle per gestire in modo efficiente i propri Database, minimizzando lo sforzo necessario per il loro mantenimento e permettendo così di focalizzare i propri sforzi in attività direttamente relazionate con il core business. Durante la webinar analizzeremo use case pratici che dimostreranno come al giorno d’oggi sia possibile approfittare dei vantaggi offerti dall’Oracle Database Appliance per rispondere alle differenti necessità che la gestione di una complessa e performante infrastruttura IT possa richiedere.
This document discusses Huawei's vision for making IT simple and business agile through data center innovation. It outlines Huawei's strategy in five areas: 1) Redesigning modern data center architecture for openness, automation, and efficiency. 2) Accelerating storage solutions with all-flash storage arrays. 3) Developing open platforms for critical business applications. 4) Creating a unified ICT cloud operating system. 5) Delivering converged infrastructure through modular systems. The goal is to help customers simplify IT operations, lower costs, and rapidly deploy new services through software-defined infrastructure.
Welcome, everyone, to today's webcast. My name is David Ring, and I work as part of the strategic solutions engineering group on Microsoft midrange applications. I am joined by Michael Morris, who also worked on this solution.
Our presentation details the solution:
EMC MULTISITE DISASTER RECOVERY FOR MICROSOFT SQL SERVER 2012 enabled by
EMC VNX5700
EMC FAST Cache
SQL Server 2012 AlwaysOn Availability Groups
During this presentation we will cover the following topics:
EMC Proven Solutions
SQL Server 2012 overview
Solution overview
Architecture design of the solution
Test results
Summary
Followed by a
Q&A Session
Proven solutions are driven by real-world requirements, customer demand, and feedback. EMC designs and tests proven solutions built on emerging technologies, demonstrating the best way to combine those technologies into usable and cost-effective architectures.
By applying strict feasibility guidelines and reviews, EMC can define use cases that answer the challenges customers are facing. Our job is to champion the customer and test the solutions you would like to see.
As part of our solution we create a solutions pack, which consists of:
White papers
Articles posted on ECN
AND
Demos, which are published to our EMC Proven Solutions YouTube channel
This slide details all of SQL Server 2012's new features.
The feature we are showcasing in this presentation is SQL Server 2012 AlwaysOn Availability Groups. Microsoft has made critical enhancements to high availability with the introduction of the new AlwaysOn features, particularly AlwaysOn Availability Groups, which provide the next evolution of SQL Server transactional replication.
SQL Server offers administrators several options to configure high availability for both servers and databases. These high availability configurations have until now included:
Database mirroring
AND
Log shipping
SQL Server 2012 introduces two high availability configurations as part of SQL Server AlwaysOn, which provides availability at either the application database or instance level:
AlwaysOn Failover Clustering—for instance-level protection
AND
AlwaysOn Availability Groups—for database-level protection
As stated: SQL Server High Availability and Disaster Recovery can be implemented at SQL Server database level or SQL Server instance level.
A database-level High Availability and Disaster Recovery feature provides more flexibility in managing which databases should, or should not, be moved to the secondary server. AlwaysOn Availability Group is an example of a database-level solution.
SQL Server 2012 AlwaysOn Failover Cluster is an example of a SQL Server instance level solution.
Before SQL Server 2012, the number of HA features in SQL Server could be confusing to customers. You may have wondered which solution was better for your application and what the pros and cons of each HA solution were.
With AlwaysOn, Microsoft has evolved its HA features, simplifying the choice for customers. For database-level protection, Microsoft recommends the use of Availability Groups over traditional log shipping and database mirroring.
With an AlwaysOn Failover Cluster, a single SQL Server instance is installed across multiple Windows Server Failover Cluster (WSFC) nodes. WSFC functionality provides high availability at the instance level by presenting a failover cluster instance to the network as a single computer, accessible through the cluster's virtual name. This configuration is an enhancement of the SQL Server FCI functionality available in previous versions of SQL Server.
It is very much like today's FCI, but more resilient across varying networks.
Our current testing involves using AlwaysOn FCI with RecoverPoint and Cluster Enabler.
Significant improvements have been delivered in multisite failover clustering technology, making it a viable HADR option for many use cases, specifically multi-subnet failover clustering implementations.
Two major enhancements support multi-subnet clustering:
1. Cluster setup support—setup can intelligently detect a multi-subnet environment and automatically set the IP address resource dependency to OR, as shown on the slide.
2. SQL Server engine support—to bring the SQL Server resource online, the SQL Server engine startup logic skips binding to any IP address that is not in an online state.
Moving on to AlwaysOn Availability Groups. SQL Server 2012 Availability Groups are similar in concept to an Exchange DAG-type implementation.
AlwaysOn Availability Groups support a failover environment for a specific set of user databases, known as availability databases; these databases fail over together.
Like AlwaysOn Failover Clustering, AlwaysOn Availability Groups require the SQL Server instances to be configured on nodes of the same cluster, but the instances remain separate and are presented to the network as individual computers.
Availability groups support a set of primary databases and one to four sets of corresponding secondary databases. An availability group fails over at the level of an availability replica, and, optionally, secondary databases can be made available for read-only access and some backup operations.
Availability groups consist of a set of two or more failover partners, referred to as availability replicas. Each availability replica is hosted on a separate instance of SQL Server.
Each availability replica hosts a copy of the availability databases in the availability group, and each is assigned an initial role of either primary or secondary.
The purpose of this solution was to showcase the ability of the EMC VNX storage array to easily support heavy SQL Server OLTP workloads, and
to characterize a geographically dispersed SQL Server 2012 environment protected by AlwaysOn technology, highlighting multi-subnet support at both synchronous and asynchronous distances.
EMC VNX5700 storage array offers a simple, efficient, and powerful platform for enterprise-class SQL Server 2012 infrastructures.
The testing of this solution validated the ability of the VNX5700 storage array to support SQL Server 2012 instances running OLTP-like workloads that generated over 50,000 IOPS.
This slide shows the overall physical architecture of the environment.
We had two physical SQL Server instances:
One production SQL Server instance, which is the primary replica
And one read-only SQL Server instance, which is the secondary replica
We had four mission-critical, active OLTP databases, totaling 1.8 TB of data, that were replicated to the secondary site using SQL Server 2012 AlwaysOn Availability Groups.
The solution was provisioned on the VNX5700 with 641 GB of FAST Cache, of which 60 percent was hot, and featured the AlwaysOn Availability Group replica on the secondary site.
EMC FAST Cache technology automatically placed the most frequently accessed data on high-performing Flash drives.
The solution was based on a multi-subnet environment and tested at synchronous and asynchronous distances of 80 km, 800 km, and 4,000 km.
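Since the availability modes were compared at 80 km, 800 km, and 4,000 km, it helps to keep the underlying physics in mind. The sketch below estimates the best-case round-trip propagation delay over fiber at each tested distance; the two-thirds-of-c fiber factor and the helper function are illustrative assumptions, not measurements from this solution.

```python
# Illustrative best-case round-trip fiber latency at the tested
# replication distances (80 km, 800 km, 4,000 km). Assumes light
# travels through fiber at roughly 2/3 of its vacuum speed; real
# links add switching and protocol overhead on top of this.

SPEED_OF_LIGHT_KM_S = 299_792  # km/s in a vacuum
FIBER_FACTOR = 2 / 3           # typical refractive-index slowdown

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

for km in (80, 800, 4000):
    print(f"{km:>5} km: ~{round_trip_ms(km):.2f} ms round trip")
```

At 80 km the best-case round trip is under a millisecond, which is why synchronous commit is practical there; at 4,000 km it approaches 40 ms before any protocol overhead, which is why asynchronous commit suits the longest distance.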
Tests involved:
Comparing AlwaysOn Availability Groups in the following availability modes:
Synchronous-commit mode with Automatic failover
Synchronous-commit mode with Manual failover
Asynchronous-commit mode with Forced failover
This slide shows the SQL Server layout.
As you can see, the four databases totaled 180,000 users.
We had:
1 x 50 GB, 5,000-user database
1 x 250 GB, 25,000-user database
1 x 500 GB, 50,000-user database and
1 x 1 TB, 100,000-user database
The OLTP-like workload generated 50,000 IOPS, with a read/write ratio of approximately 9:1.
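To make the stated ratio concrete, a small helper (illustrative arithmetic, not part of the solution itself) splits the aggregate figure into its read and write components:

```python
# The slide states ~50,000 host IOPS at roughly a 9:1 read/write
# ratio. Those figures come from the tested workload; this helper
# simply makes the split explicit.

def split_iops(total_iops: int, read_parts: int, write_parts: int) -> tuple[int, int]:
    """Split an aggregate IOPS figure by a read:write ratio."""
    parts = read_parts + write_parts
    reads = total_iops * read_parts // parts
    return reads, total_iops - reads

reads, writes = split_iops(50_000, 9, 1)
print(f"~{reads} read IOPS and ~{writes} write IOPS")  # ~45000 read IOPS and ~5000 write IOPS
```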
In this slide you can see our Production array storage configuration.
In this solution transaction logs and Tempdb files are segregated to dedicated spindles, hosting traditional RAID 1/0 RAID groups.
The best practice for SQL Server log files is to use RAID 1/0, so a RAID group of 8 x 2.5-inch 10k SAS drives was best suited for the log file location.
It is also best practice to isolate the log from data at the physical disk level.
Performance may also benefit if Tempdb is placed on a RAID 1/0 configuration. Because Tempdb has a high write rate, RAID 1/0 is the best configuration to use.
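The preference for RAID 1/0 on write-heavy files follows from the classic write-penalty heuristic: each host write costs two back-end disk operations on RAID 1/0 (mirrored write) but four on RAID 5 (read data, read parity, write data, write parity). The sketch below applies that heuristic to a 5,000 writes-per-second stream, which is simply the write side of the stated 50,000 IOPS at 9:1; it is a sizing rule of thumb, not a measurement from this solution.

```python
# Classic back-end write-penalty heuristic: how many disk operations
# a host write stream generates at each RAID level. RAID 1/0 costs 2
# per write; RAID 5 costs 4 (read-modify-write of data and parity).
# The 5,000 writes/s input is the write side of the slide's workload.

WRITE_PENALTY = {"RAID 1/0": 2, "RAID 5": 4}

def backend_write_ops(host_writes_per_s: int, raid_level: str) -> int:
    """Back-end disk writes generated by a host write stream."""
    return host_writes_per_s * WRITE_PENALTY[raid_level]

for level in ("RAID 1/0", "RAID 5"):
    print(f"{level}: {backend_write_ops(5_000, level)} back-end ops/s")
```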
The virtually provisioned pool was created with RAID 5 protection. This was created as a homogeneous pool with 40 SAS drives. With 40 drives for a RAID 5 pool, Virtual Provisioning™ creates eight five-drive (4+1) RAID groups.
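The pool arithmetic on this slide can be checked directly: 40 drives in five-drive (4+1) groups yields eight groups, each with four data drives and one parity drive. The per-drive capacity below is a hypothetical figure used only to illustrate the 20 percent parity overhead; it is not from the solution's bill of materials.

```python
# Verifying the slide's pool math: a 40-drive RAID 5 (4+1) pool is
# carved into eight five-drive groups. The 600 GB drive capacity is
# hypothetical, for illustration only.

def raid5_pool(drives: int, group_size: int = 5, drive_gb: float = 600):
    """Layout of a homogeneous RAID 5 pool built from (n-1)+1 groups."""
    groups = drives // group_size
    data_drives = groups * (group_size - 1)
    return {
        "groups": groups,
        "usable_gb": data_drives * drive_gb,
        "parity_overhead": 1 / group_size,  # one drive per group
    }

layout = raid5_pool(40)
print(f"{layout['groups']} groups, {layout['usable_gb']:.0f} GB usable, "
      f"{layout['parity_overhead']:.0%} parity overhead")
```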
Consideration was also given to the impact of FAST Cache in significantly reducing the volume of mechanical spinning disks required by VNX storage arrays to service the target workload.
The DR storage configuration was the same as production, minus FAST Cache.
This slide shows the storage design for SQL Server at the production site. The DR design was a copy of this layout.
Best practices for SQL Server were followed in laying out our storage. Reasons for having multiple files per database are:
Very active databases perform better with multiple data files.
Spreading the data files across disks in the pool also helps to avoid contention.
As can be seen from the database file design, data files should be of equal size for each OLTP database. This is because SQL Server uses a proportional fill algorithm that favors allocations in files with more free space.
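The proportional-fill behavior can be illustrated with a minimal sketch. This is a simplification, assuming allocations always go to the file with the most free space, rather than SQL Server's actual weighted round-robin, and the file sizes are hypothetical:

```python
# Simplified sketch of proportional fill: allocations favor the file
# with the most free space, so unequal files fill unevenly.
def allocate_extents(free_pages, n_extents):
    """Distribute n_extents across files, one at a time, always to the
    file with the most free pages (a simplification of SQL Server's
    weighted round-robin)."""
    free = list(free_pages)
    counts = [0] * len(free)
    for _ in range(n_extents):
        i = max(range(len(free)), key=lambda j: free[j])
        counts[i] += 1
        free[i] -= 1
    return counts

# Equal-size files fill evenly...
print(allocate_extents([1000, 1000], 10))   # -> [5, 5]
# ...but a file with much more free space absorbs all the allocations,
# concentrating I/O on one file and defeating the multi-file layout.
print(allocate_extents([1000, 100], 10))    # -> [10, 0]
```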
We will now go through our results from the solution.
Throughput was measured using the Microsoft Performance Monitor counter: Average Disk Transfers/sec (IOPS).
The primary replica is represented by the yellow line, and the secondary replica by the blue line.
During baseline testing with 40 SAS disks in the storage pool, transactional I/O throughput on the primary replica produced approximately 11,500 IOPS and the secondary replica produced 1,400 IOPS.
After 30 minutes with FAST Cache enabled on the storage pool, we saw an immediate effect on performance. I/O throughput increased to over 19,000 IOPS on the primary replica and to 2,300 on the secondary.
After just two hours of FAST Cache running, we saw throughput increase to over 50,000 IOPS on the primary replica, while at the same time maintaining remarkably low database latency of no more than 3 ms for reads and 2 ms for writes.
During this period of FAST Cache steady state, we changed the mode from synchronous to asynchronous and increased the distance from 80 km to 4,000 km. Latency was maintained at under 3 ms, and IOPS increased slightly as we removed the overhead of maintaining a synchronous state.
As an example of the read/writes being replicated between the primary and secondary replicas, perfmon counters were analyzed for a point in time during the FAST Cache steady state for synchronous-commit mode at 80 km for the 1 TB OLTP_1 database, as shown in this slide.
As you can see the Primary replica is on the left and the secondary on the right.
It can be seen that only 3.7 percent of the primary replica read activity occurs on the secondary replica, compared to 89.51 percent of the write activity. During this period, transactions per second (TPS) for both primary and secondary replicas was 119. This highlights that, with no read access on the secondary replica, the major activity on the secondary is the writes being replicated.
As shown in this slide, there is negligible impact on SQL Server CPU utilization when synchronous-commit mode is used up to 80 km. A small rise of 4 percent in CPU utilization occurred when using asynchronous-commit mode up to 4,000 km. In all synchronization states, CPU utilization on the secondary replica was minimal, as no additional activity occurs on the secondary replica databases.
Here is a graphical representation of our Perfmon data for transactions per second (TPS) for both primary and secondary replicas.
These results were taken during the same test as the IOPS slide.
The slide shows the transactional performance boost received from the introduction of EMC FAST Cache to our environment.
The ability to service transactions per second increased from 4,900 to over 25,000 TPS on the production databases.
Using EMC VNX Unisphere Analyzer, our performance analysis tool, we could see how the storage pool was initially I/O bound, having reached the limit of its ability to service I/O requests.
Initial disk utilization on the storage pool hosting the primary replica at the production site was too high at 90 percent, which is represented by the yellow line.
We saw minimal impact on the secondary during the initial baseline.
Improvements were seen after EMC FAST Cache was enabled. FAST Cache was able to reduce pressure on the SAS pool because frequently accessed data from the pool was placed in cache. After a two-hour warm up period, disk utilization on production reduced from 90 percent to just 57 percent.
As FAST Cache boosts storage performance for SQL Server 2012, allowing the primary replica to service increased I/O levels, pressure on the storage pool hosting the data files for the secondary replica also increases, because the writes on the primary are replicated to the secondary. This highlights the importance of correctly sizing the secondary replica's storage.
The VNX5700 storage processor utilization was measured by analyzing the Unisphere NAR files.
In this graph, SP utilization is represented by red and green for production, and blue and yellow for DR.
The results for the production storage array show how SPA and SPB storage processor utilization increases as the array works to automatically boost performance through EMC FAST Cache technology.
This is because the SPs are analyzing and promoting the frequently accessed data.
SP utilization for the DR storage array increases slightly as disk utilization rises due to the increase in write data being replicated from primary to secondary databases.
Creation of the availability groups can be done through scripting or in SQL Server Management Studio by either completing details for a New Availability Group or following the New Availability Group Wizard. The wizard also has an option to generate the script during the steps.
To test creation times we generated a script file to create the availability groups.
This slide shows a simplified process flow for creation of an availability group. During testing it was found that, when adding multiple databases, it was best to back up, restore, and add secondary replicas to the availability groups one database at a time before looping back to add additional databases.
An important consideration during Availability group creation is provisioning a shared storage space.
A shared storage space is required when performing a full initial data synchronization as part of availability group creation. To minimize the duration of the seeding and reseeding process, users should consider the storage used for these database backups and restores.
The backup process has a high bandwidth requirement from storage, as it is a sequential write/read workload. RAID 1/0 would best suit the seeding/reseeding of databases for AlwaysOn Availability Group creation.
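The one-database-at-a-time loop described above can be sketched as a script generator. The availability group, database names, and backup share below are hypothetical, and the T-SQL statements are illustrative of what the wizard's generated script contains (in practice a log backup and restore typically follow each full backup):

```python
# Sketch of the backup / restore / join seeding loop for an availability
# group, one database at a time. All names are hypothetical examples.
def seeding_script(ag_name, databases, share):
    steps = []
    for db in databases:
        # 1. Back up the database on the primary to the shared location.
        steps.append(f"BACKUP DATABASE [{db}] TO DISK = N'{share}\\{db}.bak';")
        # 2. Restore it on the secondary WITH NORECOVERY so it can join.
        steps.append(
            f"RESTORE DATABASE [{db}] FROM DISK = N'{share}\\{db}.bak' "
            "WITH NORECOVERY;"
        )
        # 3. Join the restored copy to the availability group.
        steps.append(f"ALTER DATABASE [{db}] SET HADR AVAILABILITY GROUP = [{ag_name}];")
    return "\n".join(steps)

print(seeding_script("AG_OLTP", ["OLTP_1", "OLTP_2"], r"\\backupshare\seed"))
```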
The following table outlines database creation times at:
80 km synchronous-commit
800 km asynchronous-commit mode
4,000 km asynchronous-commit mode
Note: These timings are for databases that were already populated and running a full workload of approximately 50,000 IOPS.
The timings demonstrate that, as expected, as distance increases the time taken for creation of the availability groups increases.
SQL Server 2012 AlwaysOn Availability Groups provide the flexibility to protect specific databases, individually or collectively, in either synchronous or asynchronous availability modes. These configurations allow SQL Server 2012 to replicate data to a secondary replica over distance.
The solution clearly shows the ability of EMC FAST Cache to significantly boost performance of the VNX series storage array. Testing showed how enabling FAST Cache on a heavily utilized storage pool not only alleviated pressure, but allowed the same storage pool to service over four times the I/O, from approximately 11,500 IOPS to over 50,000 IOPS, while returning remarkably low SQL Server data file average latency of 3 ms or less for reads and writes. The ability of FAST Cache to react automatically to changes in OLTP workload I/O patterns is an invaluable tool for administrators.
Results clearly show the benefits of using EMC Technology with SQL Server 2012.
Thank you all for attending. I will hand over to Michael for Q&A Session.