This document summarizes an agenda for a TechNet Live meeting about using SQL Server AlwaysOn availability groups to offload reporting workloads from production servers. The agenda includes an introduction to AlwaysOn, setting up readable secondary replicas, configuring connection access, performing backups on secondary replicas, and discussing the impact of workloads on high availability, primary servers, and query plans.
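The agenda itself contains no code, but the central configuration step is compact. Below is a minimal T-SQL sketch, assuming a hypothetical availability group AG1 with a secondary replica SQLNODE2; it is an illustration, not the session's own demo script.

```sql
-- Allow read-intent connections on a secondary replica so reporting
-- can be offloaded from the primary (all names are hypothetical).
ALTER AVAILABILITY GROUP AG1
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- Reporting clients then opt in via their connection string, e.g.:
-- Server=AG1-Listener;Database=Sales;ApplicationIntent=ReadOnly
```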
The document summarizes different high availability solutions in SQL Server 2012, including AlwaysOn Failover Cluster Instances, AlwaysOn Availability Groups, and Database Mirroring. It defines key terms like primary/secondary databases and replicas. It also describes the benefits of each solution, such as automatic failover, read-only access on secondaries, and support for different storage options.
1. The document provides an overview of OVH LAB's Enterprise Cloud Databases offering, including its architecture, features, pricing, and roadmap.
2. The architecture is designed for high availability, with automatic failover that can occur within 30 seconds. It uses dedicated hardware in multiple availability zones for isolation and includes daily backups.
3. Key features include PostgreSQL and planned MariaDB databases, 24/7 monitoring, automatic minor version updates, IP whitelisting, encryption, and observability tools for logs and metrics. Pricing aims to be competitive with AWS RDS and Google SQL.
Built-in replication in PostgreSQL 9.0 allows a master database to stream transaction log changes asynchronously to one or more standby databases. This provides high availability and allows read-only queries on standbys. Replication is at the entire database level and supports all SQL supported in PostgreSQL. However, it does not provide query distribution or per-table granularity.
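As a rough sketch of how this is wired up, these are the core PostgreSQL 9.0 settings involved; the addresses, user, and trust authentication are placeholder assumptions, not recommendations.

```
# postgresql.conf on the master
wal_level = hot_standby      # emit enough WAL for a readable standby
max_wal_senders = 3          # allow streaming connections from standbys

# pg_hba.conf on the master (standby address is hypothetical)
# host  replication  postgres  10.0.0.2/32  trust

# postgresql.conf on the standby
hot_standby = on             # serve read-only queries during recovery

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5432 user=postgres'
```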
With compelling new features in SUSE Linux Enterprise Server—and the stellar workgroup services found in Novell Open Enterprise Server 2—the benefits of moving your Novell GroupWise environment to Linux are more readily apparent than ever. This session will cover how you can use available Novell tools to move a GroupWise system to the Linux platform. In addition to tools and utilities, we'll share practical tips, tricks and demonstrations.
This session was presented at the Reboot IT conference in Bangalore, India. It introduces the Hyper-V Replica feature and related technologies such as Azure Hyper-V Recovery Manager.
Chef conf-2015-chef-patterns-at-bloomberg-scale (Biju Nair)
This document discusses various patterns used at Bloomberg for managing infrastructure at scale using Chef. It describes how dedicated bootstrap servers are used to regularly build clusters in an isolated manner. The use of lightweight VMs for bootstrapping is explained. Techniques for building the bootstrap server, cleaning up configurations and converting it to an admin client are outlined. The document also covers topics like dynamic resource creation, injecting logic into community cookbooks, handling service restarts and implementing pluggable alerts.
Always On - Performance and Security of Our Data - High Availability SQL... (SQLExpert.pl)
SQL Server 2012 offers an entirely new take on High Availability. The most eagerly awaited new feature is AlwaysOn.
During the session we want to show practical uses of AlwaysOn, answering the following questions:
• What does AlwaysOn offer that was not available before?
• To what extent can AlwaysOn replace mirroring and log shipping?
• How do you build High Availability to meet your organization's needs?
• How do you make effective use of multiple data replicas?
• What is an AlwaysOn Failover Cluster?
The session will present a solution design that meets High Availability expectations for both performance and data security, and that also aligns with Disaster Recovery needs.
The document discusses best practices for virtualizing Microsoft Exchange Server 2013. It recommends virtualizing Exchange for standardization, optimizations in deployment, management, and monitoring. However, virtualization may increase complexity and impact performance. The Microsoft Exchange team supports virtualization when sized correctly to avoid oversubscription issues. Key best practices include avoiding single points of failure, properly sizing virtual machines for CPU and memory, and using features like storage area networks and database availability groups for high availability rather than host-based clustering or hypervisor snapshots.
This document is an agenda for a presentation on SQL Server 2012 high availability and disaster recovery options. The presentation will cover new features in SQL Server 2012 like AlwaysOn availability groups which allow for high availability and disaster recovery without requiring shared storage. It will also discuss SQL Server failover clustering and how it has been enhanced in 2012 to support multi-subnet configurations and flexible failover policies. The presentation objectives are to explain SQL Server high availability and disaster recovery, how clustering and availability groups work, and what's new in high availability and disaster recovery in SQL Server 2012.
How do you administer and use your VDI solution as effectively as possible? What can Microsoft offer for VDI operations, and how is it used in practice? We look at how System Center, among other tools, can be used in a VDI solution.
SQL In The City - Understanding and Controlling Transaction Logs by Nigel Peter Sammy.
- Relational DBMS Basics
- Introduction to Transaction Logs
- The Architecture
- Recovery Models (sketched in T-SQL after this list)
- Managing the Transaction Logs
- Red Gate Tools
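For the Recovery Models and log-management topics, a minimal T-SQL sketch; the database name and backup paths are hypothetical. In the FULL recovery model the log is only truncated by log backups, which is why unmanaged logs grow.

```sql
ALTER DATABASE SalesDB SET RECOVERY FULL;

BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB_full.bak';
BACKUP LOG      SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';

-- See why a log cannot currently be truncated:
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'SalesDB';
```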
Deploying Maximum HA Architecture With PostgreSQL (Denish Patel)
This document proposes a "Maximum HA architecture" for PostgreSQL that aims to provide 99.99% application uptime and reduce mean time to recovery (MTTR) for both planned and unplanned outages. It discusses using techniques like streaming replication, failover, hot backups, log shipping, PITR, and pg_reorg to achieve high availability and minimize downtime from system failures, data failures, planned maintenance, and data growth.
The document summarizes several industry standard benchmarks for measuring database and application server performance including SPECjAppServer2004, EAStress2004, TPC-E, and TPC-H. It discusses PostgreSQL's performance on these benchmarks and key configuration parameters used. There is room for improvement in PostgreSQL's performance on TPC-E, while SPECjAppServer2004 and EAStress2004 show good performance. TPC-H performance requires further optimization of indexes and query plans.
Learn about RHEL 6 performance for better scalability, and how to reduce the amount of manual tuning needed. For more information, visit http://ibm.co/PNo9Cb.
The document provides information on choosing the appropriate logical partition (LPAR) type in PowerVM. It discusses the considerations for dedicated, dedicated donating, and shared LPAR types. Shared LPARs rely on uncapped capacity which adds hypervisor activity and can impact processor affinity, while dedicated LPARs waste idle cycles. The document recommends evaluating usage patterns and choosing LPAR types to minimize overhead and optimize performance.
Learn strategies to maintain your database's high availability even during peak use periods. MariaDB's Field CTO Max Mether offers best practices for high availability, disaster recovery and more.
The document proposes optimizations for crash dump in virtualized environments including performing core dump and system recovery concurrently to reduce downtime, selectively dumping only non-empty memory pages of a crashed virtual machine to improve dump speed, and controlling disk I/O rates between concurrent dump and recovery processes to enhance quality of service.
The document summarizes updates to the Xen Hypervisor project, including:
- Plans for future stable releases in late 2011 and early 2012.
- Improvements to HVM device model and BIOS support in Qemu.
- Dom0 Linux kernel support now in Linux 3.0.
- Work to optimize performance for PV guests through lightweight HVM containers.
Database virtualization technologies allow for cloning database instances while sharing data. This avoids consuming large amounts of storage for full copies. Technologies like CloneDB, Oracle ZFS Storage Appliance, Delphix, and Data Director create clone instances that only store changed data, sharing read-only data from snapshots. They provide benefits like faster provisioning of clones, reduced storage usage, and easier testing and development.
- Asynchronous cascading master to multiple replicas
- Asynchronous multi-master
Can be used for:
- Improved performance for geographically dispersed users
- High availability
- Load distribution (OLTP vs. reporting)
The document summarizes configurations for high availability SQL Server environments including OLTP, OLAP, standalone servers, and clusters. It discusses:
1) Specifying OLTP and OLAP servers differently based on their needs - OLTP is CPU/disk intensive for short queries while OLAP is memory intensive for long queries.
2) Configuring standalone servers optimally by separating services, locking pages in memory, and setting service-specific options like CPU affinity and minimum/maximum memory (see the sketch after this list).
3) Using clusters for high availability but not as a "SQL solution" - clusters require careful configuration and SSRS is not cluster aware so load balancing is needed.
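For point 2, a minimal T-SQL sketch of the min/max memory options mentioned; the MB values are purely illustrative, and Lock Pages in Memory itself is granted through Windows local security policy rather than T-SQL.

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Leave headroom for the OS and co-hosted services (SSAS, SSRS, ...).
EXEC sp_configure 'min server memory (MB)', 4096;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```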
In this session we examined Xen PV performance on the latest platforms across cases covering CPU/memory-intensive, disk-intensive, and network-intensive workloads. We compared Xen PV guests vs. HVM/PVOPS to see whether PV guests still hold an advantage over HVM on a system with state-of-the-art VT features. KVM was also compared as a reference. We also compared PV driver performance against bare metal and pass-through/SR-IOV. The identified issues were discussed, and we presented our proposal for fixing them.
- SQL code is loaded into RAM for parsing during a hard parse, while a soft parse does not require reloading into RAM.
- Excessive hard parsing can occur when the shared pool size is too small or queries contain non-reusable SQL statements without bind variables.
- Using bind variables rather than concatenating values into the SQL statement allows for soft parsing rather than hard parsing, improving performance by reducing parsing time and memory usage (sketched below).
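The bullets use Oracle's shared-pool vocabulary, but the effect is easy to show in any engine. A minimal T-SQL sketch, with a hypothetical Orders table:

```sql
-- Concatenation: each distinct literal yields new statement text and a
-- fresh compile (the analogue of a hard parse). Avoid:
-- EXEC ('SELECT * FROM Orders WHERE CustomerId = ' + @idText);

-- Parameterized form: one statement text, so the cached plan is reused
-- (the analogue of a soft parse).
DECLARE @id INT = 42;
EXEC sp_executesql
     N'SELECT * FROM Orders WHERE CustomerId = @CustomerId',
     N'@CustomerId INT',
     @CustomerId = @id;
```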
This document discusses the Alfresco SDK 2.0, which features major improvements for rapid application development using Maven and hot code reloading. It highlights how the SDK has been migrated to GitHub and now supports reloading of Java classes, tests, webscripts and resources with no webapp context reloads. A demo is shown of rapidly developing an AMP project in Eclipse using these new capabilities.
HBase now includes a built-in snapshot feature that allows users to take point-in-time backups of tables with minimal impact on the running cluster. Snapshots can be taken in an offline, globally consistent, or timestamp consistent manner. The snapshots can then be exported to another cluster, used to clone a new table, or restore an existing table to a prior state captured by the snapshot. The snapshot functionality provides a simple, distributed, and high performance solution for backup and recovery of large HBase datasets.
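A minimal HBase shell sketch of the operations described; table and snapshot names are hypothetical.

```
hbase> snapshot 'SalesTable', 'SalesTable_snap1'            # point-in-time snapshot
hbase> clone_snapshot 'SalesTable_snap1', 'SalesTableDev'   # new table from snapshot
hbase> disable 'SalesTable'
hbase> restore_snapshot 'SalesTable_snap1'                  # roll the table back
hbase> enable 'SalesTable'
```

Exporting to another cluster is done from the OS shell, e.g. `hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot SalesTable_snap1 -copy-to hdfs://backup-cluster:8020/hbase` (the target URI is hypothetical).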
Every now and then something comes along that has the potential to change the face of business as we know it. Currently big data is being touted as that thing, and CIOs everywhere need to get a grip on what it is and how it can benefit their company.
This document presents a list of 20 clients with their basic information, such as name, age, identity document, address, telephone number, marital status, and employer, along with additional details such as years of tenure, vehicles, and properties. The information is organized in tables with a column for each type of data.
The document lists 21 footwear products with their respective codes, references, descriptions, cost and sale prices, and suppliers. The products range from shoes and sneakers to boots for babies, children, and adults.
The document discusses the economic concept of demand, including the determinants of demand and how demand curves shift in response to changes in price and other factors. It specifically addresses how demand for a good is affected by changes in the prices of substitutes and complements, as well as by changes in consumer tastes, income, population, and expectations about future prices. Graphs and examples are provided to illustrate these concepts.
Taming Latency: Case Studies in MapReduce Data Analytics (EMC)
This session discusses how to achieve low latency in MapReduce data analysis, with various industrial and academic case studies. These illustrate various improvements on MapReduce for squeezing out latency from whole data processing stack, covering batch-mode MapReduce system, as well as stream processing systems. This session also introduces our BoltMR project efforts on this topic and discloses some interesting benchmark results.
After this session you will be able to:
Objective 1: Understand why low latency matters for many MapReduce-based big data analytics scenarios.
Objective 2: Learn the root causes of MapReduce latency, the obstacles to lowering it, and the various (im)mature solutions.
Objective 3: Understand the extent of MapReduce low latency needed for your own applications and which optimization techniques are potentially applicable.
The document presents an employee report covering 21 people. It provides each employee's personal data, such as name, ID number, telephone, date of birth, and position within the company.
The document discusses how converged TV and on-demand viewing habits will affect viewership. It provides data on consumer expectations for more convenient, personalized content access across devices. The data shows a growing preference for on-demand and pay-for-content options when legal alternatives are available. However, many markets still lack sufficient legal digital content; the report argues this must be addressed through policies that increase the availability of lawful digital services and accommodate reasonable consumer expectations, such as time- and place-shifting, to promote innovation while displacing illegal access.
4. referencing not plagiarising presentation (1) (Khendle Christie)
This document discusses referencing sources and avoiding plagiarism. It defines referencing as citing sources used in academic writing and explains that referencing allows readers to find the sources and supports arguments. Plagiarism is copying others' work without proper citation or referencing. The document provides examples of when sources need to be referenced, such as for quotations, statistics, and ideas. It also discusses paraphrasing versus summarizing and emphasizes the importance of consistently referencing sources to avoid plagiarism.
This white paper from Goode Intelligence explores how existing provisioning solutions are failing to support the business in an era where new IT service models are rapidly being deployed. New IT service models that support mobile and cloud computing have created problems for organizations that are already struggling with outdated identity and access governance tools. The paper explores a vision for Provisioning 2.0 where the goal is to weave provisioning into the very fabric of business process. Provisioning 2.0 is business driven, is easy to deploy and maintain and is built for today’s agile IT.
This document discusses the benefits and costs of working while in high school. It states that there are 2 benefits to working during high school, but does not specify what they are. Similarly, it states there are 2 costs to working in high school, but does not provide details on the costs. The document presents this information in point form.
This document contains questions about economic concepts such as inflation, recessions, and monetary policy. It also provides data on the declines in GDP, unemployment, and world trade during the Great Depression and Great Recession. Additionally, it poses questions about what monetary policy the Federal Reserve should have pursued in different periods between the 1950s and 2010s based on the economic conditions of each era. The document instructs students to write a letter to the Federal Reserve Chairman outlining whether they support an expansionary or contractionary policy given 2013 unemployment and inflation rates. Finally, it prompts drafting a letter to a Congressman regarding two fiscal policy changes they agree with and their potential effects.
The document discusses how governments can help societies fully realize the benefits of networked technologies. It argues that while technology provides great promise for well-being, living standards, and social progress, its benefits are not automatic and require resilient public policies to maximize gains. Policymakers need to craft reform agendas that promote widespread adoption of ICT, address regulatory challenges, and get the fundamentals of innovation and diffusion right to achieve an ICT-led transformation of the economy and society.
Software Defined Data Center: The Intersection of Networking and Storage (EMC)
There has been quite a bit of marketing rhetoric around Software Defined Data Center (SDDC) since VMware’s acquisition of Nicira. In this session we explore the components of a SDDC. Our specific focus is on the composition of a SDDC’s resource model: Compute, Networking, and Storage. The emphasis is on the disaggregated I/O for Network and Storage resources.
After this session you will be able to:
Objective 1: Describe the disaggregated I/O resource model employed to facilitate the use of virtualized Ethernet and Block devices in a Software Defined Data Center.
Objective 2: Explain how end-user driven provisioning of virtual Ethernet and Block devices serves to decouple resource use from infrastructure management.
Objective 3: Describe some of the opportunities and challenges associated with employing disaggregated I/O.
The document summarizes the discovery and features of the underground city of Derinkuyu in Turkey. In 1963, a local resident discovered a hidden room behind a wall in his house, which led to the discovery of the extensive underground city. Archaeologists have since uncovered 20 levels that could house up to 10,000 people and provided refuge from frequent invasions. Key features included stone doors to block passages, ventilation wells, stables, dining rooms, a church, and connections to other underground cities. The underground city provided an advanced shelter system for inhabitants over different periods.
This document contains contact information for 35 people, including name, ID number, telephone number, and city. The information is organized in a table with an identification code for each record.
This white paper discusses the various cyber threats targeting healthcare organizations and the challenges security professionals face in securing access to protected health information.
This document summarizes AlwaysOn availability groups in SQL Server 2016. It discusses how AlwaysOn works, the components of an availability group like primary and secondary replicas, and prerequisites for setting up AlwaysOn. It also provides an overview of a demo that will configure high availability with AlwaysOn and how backups can be performed on secondary replicas.
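The demo is not reproduced here, but a minimal T-SQL sketch shows the shape of the configuration; every name is hypothetical, and it assumes the WSFC cluster, mirroring endpoints, and the AlwaysOn feature are already in place.

```sql
-- SalesDB must use the FULL recovery model and have a full backup.
CREATE AVAILABILITY GROUP AG1
FOR DATABASE SalesDB
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE1.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE2.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);

-- Then, on the secondary instance:
-- ALTER AVAILABILITY GROUP AG1 JOIN;
```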
SQL 2012 AlwaysOn Availability Groups for SharePoint 2013 - SharePoint Connec... (Michael Noel)
Using SQL Server 2012 AlwaysOn Availability Groups allows for high availability and disaster recovery of SharePoint 2013 farms. It provides zero data loss failover between nodes and readable secondary replicas. The document outlines the requirements and provides a step-by-step guide to implementing AlwaysOn Availability Groups for a SharePoint farm, including creating an availability group, adding databases, and creating an availability group listener.
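Of the steps listed, the listener is the piece SharePoint actually sees: it gives the farm one connection point that follows failover. A minimal T-SQL sketch with hypothetical names and addresses:

```sql
ALTER AVAILABILITY GROUP AG1
ADD LISTENER N'AG1-Listener' (
    WITH IP ((N'10.0.0.50', N'255.255.255.0')),
    PORT = 1433);
-- SharePoint databases then connect to Server=AG1-Listener instead of
-- a physical SQL Server name.
```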
This document discusses high availability and disaster recovery options in SQL Server 2012. It begins with an introduction and agenda. It then covers what's new in SQL Server 2012 including the new AlwaysOn Availability Groups feature. It discusses SQL Server failover clustering architecture and how it works. It also covers other high availability options like mirroring and log shipping. Finally, it demonstrates how to set up an availability group for high availability and disaster recovery.
Perforce Administration: Optimization, Scalability, Availability and Reliability (Perforce)
In this session, Michael Mirman of MathWorks describes the infrastructure and maintenance procedures that the company uses to provide disaster recovery mechanisms, minimize downtime and improve load balance.
SQL 2014 AlwaysOn Availability Groups for SharePoint Farms - SPS Sydney 2014 (Michael Noel)
This document discusses SQL 2014 AlwaysOn Availability Groups for implementing high availability and disaster recovery for SharePoint farms. It provides an overview of AlwaysOn, requirements, design options, and a step-by-step guide for setting up an AlwaysOn Availability Group. Key points include that AlwaysOn allows multiple read-only copies of databases across servers, improves on previous mirroring technologies, and changes how the data tier should be designed for SharePoint.
The document discusses database backup and recovery concepts. It defines different types of database failures including statement failure, user process failure, network failure, user error, instance failure, and media failure. It explains how to configure the database for recoverability through techniques such as scheduling backups, multiplexing control files and redo log groups, retaining archived redo logs, and setting the database to ARCHIVELOG mode. The document also covers topics like checkpoints, redo logs, flashback technology, instance recovery phases, and tuning instance recovery.
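For the ARCHIVELOG mode step, a minimal sketch using SQL*Plus commands, run as SYSDBA:

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;   -- retain redo logs for media recovery
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;            -- verify the new mode
```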
SQL 2012 AlwaysOn Availability Groups (AOAGs) for SharePoint Farms - Norcall ... (Michael Noel)
This document discusses SQL Server 2012 AlwaysOn Availability Groups, which provide a high availability and disaster recovery solution. It covers the history and predecessors to AlwaysOn, compares it to other SQL high availability options, and discusses design options and requirements. Potential data loss and recovery times are listed for different solutions. The document then reviews the steps to set up an AlwaysOn Availability Group, including creating a Windows Server Failover Cluster, preparing nodes, enabling AlwaysOn, creating the availability group, and setting up a listener. It concludes with information on version requirements and a session summary.
SPSMEL 2012 - SQL 2012 AlwaysOn Availability Groups for SharePoint 2010 / 2013 (Michael Noel)
This document discusses SQL Server 2012 AlwaysOn Availability Groups which can be used to provide high availability and disaster recovery for SharePoint 2010/2013 farms. It covers what AlwaysOn is, the requirements to implement it, different design options, and how it improves upon previous SQL mirroring technologies. A sample multi-replica design is presented with synchronous and asynchronous copies across primary, DR, and read-only farms.
Consistency Models in New Generation Databases (iammutex)
The document discusses transaction and consistency models in databases as the database world changes. It covers the CAP theorem and how it is impossible to guarantee availability, consistency and partition tolerance simultaneously in an asynchronous distributed system. It then discusses various consistency models including eventual, monotonic read, and read your own writes (RYOW). Examples are provided of eventually consistent systems. The document also discusses Amazon Dynamo's consistency model and how MongoDB supports different consistency levels and strategies for handling transactions and writes in distributed systems.
The document discusses transaction and consistency models in databases as the database world changes. It covers the CAP theorem and how it is impossible to guarantee availability, consistency and partition tolerance simultaneously in an asynchronous distributed system. It then discusses various consistency models including eventual, monotonic read, and read your own writes (RYOW). Examples are provided of eventually consistent systems. The document also discusses Amazon Dynamo's consistency model and how MongoDB supports different consistency levels and strategies for handling multiple writers and network partitions.
Thoughts on Transaction and Consistency Models (iammutex)
The document discusses database transaction models and consistency in light of CAP theorem. It explains that RDBMS use ACID transactions while newer databases like NoSQL choose availability over consistency. Eventual consistency guarantees last write wins if no new updates. It discusses strategies for handling multiple writers like last write wins with vector clocks. MongoDB supports atomic operations on single documents and provides options for read and write scaling through replication and sharding.
Sql server 2012 ha and dr sql saturday boston (Joseph D'Antoni)
This document summarizes a presentation about SQL Server 2012 high availability and disaster recovery options. It discusses key disaster recovery terms like RTO and RPO. It then reviews several high availability and disaster recovery solutions for SQL Server including log shipping, database mirroring, failover clustering, replication, and AlwaysOn availability groups. For each solution, it discusses requirements, pros, cons and how they work at a high level. The document concludes by noting some new features for clustering in SQL Server 2012.
SQL 2012 AlwaysOn Availability Groups for SharePoint 2010 - AUSPC2012 (Michael Noel)
Using SQL Server 2012 AlwaysOn Availability Groups for failover of SharePoint 2010 Databases, as presented at the Australian SharePoint Conference - March 2012 in Melbourne.
Sql Server 2012 HA and DR -- SQL Saturday Richmond (Joseph D'Antoni)
The document discusses various strategies for achieving high availability and disaster recovery in SQL Server 2012, including log shipping, database mirroring, failover cluster instances, replication, and AlwaysOn availability groups. It provides an overview of each technology and their pros and cons for maintaining continuous access to database systems and protecting against data loss from hardware or site failures. Resources are also listed for attendees to learn more about high availability and disaster recovery options in SQL Server.
This document summarizes a presentation about SQL Server 2012 high availability and disaster recovery options. It discusses key concepts like RTO, RPO and risk management. It then reviews various SQL Server high availability and disaster recovery technologies like log shipping, database mirroring, failover clustering, replication, and AlwaysOn availability groups. It also covers new features in SQL Server 2012 like availability groups and Windows Server 2012 cluster-aware updating. The presentation concludes with a discussion of contacting the presenter for additional resources.
This document summarizes a presentation about SQL Server 2012 high availability and disaster recovery options. It discusses key disaster recovery terms, how to approach risk management, and different SQL Server high availability and disaster recovery solutions like log shipping, replication, failover clustering, and AlwaysOn availability groups. It also covers new features in SQL Server 2012 and Windows Server 2012 that improve high availability and disaster recovery capabilities.
Architecture for building scalable and highly available Postgres Cluster (Ashnikbiz)
As PostgreSQL has made its way into business-critical applications, many customers who use Oracle RAC for high availability and load balancing have asked for similar functionality in PostgreSQL.
In this Hangout session we will discuss architectures and alternatives, based on real-life experience, for achieving high availability and load balancing when deploying PostgreSQL. We will also present some of the key tools and how to deploy them to make this architecture effective.
Sql server 2012 - always on deep dive - bob duffy (Anuradha)
The document provides an overview of a presentation by Bob Duffy on SQL Server 2012 Always On. It outlines Bob Duffy's background and experience and the agenda for the presentation, which includes topics like typical high availability and disaster recovery requirements, installing and migrating to Always On availability groups, planned and automated failover, active secondary replicas, and integration with failover clustering. It also includes a case study on requirements for a fictional company and describes typical high availability architectures.
RMAN uses backups to clone databases, which takes time and storage space. Delphix clones databases virtually by linking to a source and sharing blocks, allowing near-instant clones that use minimal storage. The document compares RMAN and Delphix approaches to cloning databases for development environments.
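A minimal RMAN sketch of the backup-based cloning being compared; the instance names are hypothetical, and FROM ACTIVE DATABASE requires Oracle 11g or later.

```
RMAN> CONNECT TARGET sys@PRODDB
RMAN> CONNECT AUXILIARY sys@DEVDB
RMAN> DUPLICATE TARGET DATABASE TO DEVDB FROM ACTIVE DATABASE NOFILENAMECHECK;
```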
Replication, Durability, and Disaster Recovery (Steven Francia)
This session introduces the basic components of high availability before going into a deep dive on MongoDB replication. We'll explore some of the advanced capabilities with MongoDB replication and best practices to ensure data durability and redundancy. We'll also look at various deployment scenarios and disaster recovery configurations.
Similar to Drop the Pressure on your Production Server
During this session we will look into Windows 10 for the Enterprise.
Let’s explore the new management capabilities and choices.
Let’s understand the Windows 10 deployment infrastructure and mechanisms.
Let’s discover new Windows 10 features and improvements.
You are eager to learn about Windows 10 and want to gather early-stage info about this exciting operating system?
Well you know what to do! See you there!
Compliance settings, formerly known as DCM, remains one of the often unexplored features in Configuration Manager. During this session we will walk through the new capabilities and improvements of this feature in ConfigMgr 2012, discuss implementation details, and demonstrate how you can start using it to fulfill actual business requirements.
Discover what’s new in Windows 8.1 regarding interface, settings, deployment, security, … How will Windows 8.1 fit in your enterprise? How do you upgrade? All answers are here!
The document discusses how to get started with monitoring after a successful installation of System Center Operations Manager (SCOM). It recommends doing an initial health check of the SCOM management server and database. It also covers installing SCOM agents, selecting appropriate management packs to monitor key components, and defining a phased approach for starting monitoring. The presentation provides tips on leveraging the community, backing up the SCOM environment, and finding quick wins to show management.
RMS, EFS, and BitLocker are Microsoft data protection technologies that can help prevent data leakage. RMS allows users to apply usage policies to files and encrypts files to control access. EFS transparently encrypts files stored locally on a computer. BitLocker encrypts fixed and removable drives to protect data at rest. The technologies provide different levels of protection and have varying capabilities for controlling access to data inside and outside an organization.
The document discusses Configuration Manager client deployment and health. It covers supported platforms for Windows, Linux, and Mac clients. Deployment methods include SUP, Group Policy, scripts, and manual installation. Client health is monitored from the server and client. Components include Client Check for prerequisites, dependencies and remediation, and Client Activity for tracking server interactions and status. Dashboards and reports provide visibility into client health, and alerts surface issues.
This document discusses the history and evolution of self-service business intelligence (BI) tools from the 1980s to the present. It traces how BI tools have shifted from being developed primarily by IT to being user-focused end tools. It highlights key Microsoft products at different stages, from Excel in the 1980s to the addition of new apps like GeoFlow and Data Explorer in 2013. The document also demos some new self-service BI capabilities and resources.
This document discusses Cluster-Aware Updating (CAU) in Windows Server 2012. It provides an overview of how CAU works to update nodes in a failover cluster. The CAU update coordinator manages the updating process, pausing nodes, draining virtual machines, updating nodes, and failing back virtual machines in a coordinated manner. The document also provides links to Microsoft articles about CAU and integrating it with Dell server update tools.
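A minimal PowerShell sketch of kicking off a CAU updating run; the cluster name is hypothetical.

```powershell
# Update every node in turn via the Windows Update plug-in, refusing to
# start unless all nodes are online and tolerating no failed nodes.
Invoke-CauRun -ClusterName 'CONTOSO-FC1' `
              -CauPluginName 'Microsoft.WindowsUpdatePlugin' `
              -MaxFailedNodes 0 -MaxRetriesPerNode 3 `
              -RequireAllNodesOnline -Force
```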
The document discusses Microsoft's antimalware management platform which provides a common antimalware platform across Microsoft clients with proactive protection against known and unknown threats while reducing complexity. It integrates features such as early-launch antimalware, measured boot, and secure boot through UEFI to prevent malware from bypassing antimalware inspection during the boot process. The platform also provides simplified administration through a single console experience for endpoint protection and management.
This LiveMeeting presentation introduces Application Performance Monitoring (APM) in System Center Operations Manager 2012. APM allows monitoring of .NET and WCF applications to identify performance issues. It requires SCOM 2012 or later with the IIS management pack installed. APM bridges the gap between development and operations teams by integrating with Team Foundation Server and collecting traces in an IntelliTrace format. It provides various tools for client-side monitoring, server-side monitoring, and analyzing application diagnostics and advisors to help answer common support questions about application slowdowns and errors.
This document discusses Microsoft Lync Server 2013's persistent chat feature. It provides an overview of persistent chat's history and integration within Microsoft products. It also describes Lync 2013's unified client, improved server infrastructure and manageability, rich platform capabilities, and tools to easily migrate from previous versions. Configuration and management of persistent chat policies, categories, rooms and add-ins are examined. The document concludes with a section on licensing requirements for persistent chat.
The document discusses desktop virtualization and remote desktop services. It explains that with these services, the desktop workload is centralized on a virtual machine in the datacenter while the presentation of the UI is managed remotely via protocols like RDP. It also discusses mobility options that allow Lync to work across devices like PCs, Macs, smartphones and tablets through different applications. Finally, it provides a table comparing Lync support and requirements for various Windows Phone models.
Office 365 ProPlus can be deployed using Click-to-Run installation, which uses an App-V foundation for a streaming installation. This allows deploying Office fast without sacrificing control. The Office Deployment Tool can be used to download Click-to-Run packages, customize configurations, and deploy the packages across an organization. Telemetry data is collected to help optimize the user experience and identify issues, and a Telemetry Dashboard provides tools to manage data collection and settings.
This document discusses identity and authentication options for Office 365. It covers Directory Synchronization (DirSync) which synchronizes on-premises Active Directory with Azure Active Directory. It also discusses Active Directory Federation Services (ADFS) which provides single sign-on for federated identities and different ADFS topologies including on-premises, hybrid and cloud. Additionally, it covers Windows Azure Active Directory and how it can be used to provide identity services for cloud applications. The key takeaways are to check Active Directory health before using DirSync, understand the different Office 365 authentication flows with ADFS, and that WAAD can extend identity functionality to websites.
This document discusses options for upgrading a SharePoint environment from 2010 to 2013. It outlines the upgrade process which involves learning about the options, validating the environment, preparing by cleaning up and managing customizations, implementing the upgrade by building servers and upgrading content and services, and testing the upgraded environment. The key aspects are performing the upgrade on a new farm by attaching content databases to avoid downtime, allowing site collections to upgrade individually to minimize disruption, and thoroughly testing the upgraded environment.
This document discusses System Center Configuration Manager 2012's application model. It provides an overview of the application model, including the vision behind it of lifecycle management and user-centric deployment. Key concepts covered include requirement rules, detection methods, the application evaluation flow, application supersedence, and application uninstalls. Challenges and potential workarounds are also mentioned.
This document discusses FlexPod for Microsoft Private Cloud, an integrated solution from NetApp and Cisco for implementing a Microsoft Private Cloud using their technologies. It is a pre-validated reference implementation that is fully integrated with Microsoft System Center 2012 and provides a scalable Hyper-V platform. It accelerates private cloud deployments with reduced risk. Key components include Cisco UCS blade servers and switches, NetApp FAS storage, and tight integration and management capabilities through Cisco UCS Manager and NetApp OnCommand with Microsoft System Center.
Windows RT devices can be used in corporate environments if managed properly. Windows RT provides limited management capabilities compared to full Windows devices, but supports application deployment and some policy enforcement through Intune and ConfigMgr. Key challenges include application delivery restrictions, limited VPN configuration options, and lack of remote control and software metering capabilities. Proper infrastructure like Intune, ConfigMgr and VPN servers is required to securely connect and manage Windows RT devices in an enterprise.
The document discusses the evolution from device-centric management to user-centric management. Device-centric management involved managing individual devices, but user-centric management focuses on managing all of a user's devices through a single interface. The document outlines how Microsoft System Center Configuration Manager 2012 and Microsoft Intune can be used to implement user-centric management, including managing applications, settings, and security across devices. A hybrid approach using both Configuration Manager and Intune is also presented.
The document discusses steps for deploying a successful virtual network, including designing the network, building and configuring hardware, and configuring the virtual machine manager. It covers providing isolation through techniques like VLANs and software defined networking. Topics include logical network addressing, host configuration options, and creating logical switches. Tenant configuration using network virtualization is described for isolation.
More from Microsoft TechNet - Belgium and Luxembourg
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdf (Techgropse Pvt.Ltd.)
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more than that in common.
Join the presentation to immerse yourself in a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia association, where she was involved in several LibreOffice-related events, migrations, and training courses. She previously worked on LibreOffice migrations and training for various public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when not pursuing her passion for computers and Geeko she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
2. WHO AM I
• Pieter Vanhove
• SQL Server Database Consultant at Kohera
• MCTS, MCITP Database Administrator 2008
• Love to work with SQL HA/DR solutions
• E-mail: pieter.vanhove@kohera.be
• Twitter: http://twitter.com/#!/Pieter_Vanhove
• Blog: http://blogs.sqlug.be/pieter/
• MEET: http://www.microsoft.com/belux/meet/#Pieter+Vanhove
3. AGENDA
• AlwaysOn – In Short
• Offloading reporting workload in SQL Server 2008 R2
• Setting Up Readable Secondary Replicas
• Configure Connection Access on an Availability Group
• Backup on Secondary Replica
• Impact of Workloads on a Secondary Replica
• On High Availability
• On Primary
• On Query Plans & Statistics
4. ALWAYSON – IN SHORT
• HA and DR solution that provides an alternative to database mirroring
• A container for a discrete set of user databases that fail over together
• Multiple possible failover targets
5. ALWAYSON – IN SHORT
• Each availability group defines a set of two or more failover partners known as availability replicas
• Each replica hosts a copy of the databases in the availability group
6. ALWAYSON – IN SHORT
• Every replica is assigned an initial role – the primary role or the secondary role – which is inherited by the availability databases of that replica
• The primary replica is assigned the primary role and hosts read-write databases
• A secondary replica is assigned the secondary role and hosts read-only databases
7. ALWAYSON – IN SHORT
[Diagram: synchronous-commit data flow between primary and secondary – (1) Commit; (2) Write to Local Log and Transmit to Replica; (3) Committed in Log; (4) Write to Remote Log; (5) Constantly Redoing on Replica; (6) Acknowledge from the replica; (7) Acknowledge to the client]
8. ALWAYSON – IN SHORT - BENEFITS
• Supports one primary replica and up to four secondary replicas
• Supports asynchronous-commit mode and synchronous-commit mode
• Read-only access to the secondary databases
• Performing backup operations on secondary databases
• Provides fast application failover
9. READ-ONLY ACCESS TO SECONDARY REPLICA
• Take advantage of your existing investment
• Offload your read-only workloads from your primary replica
• Optimizes resources on your primary replica
• Near real time
• Read-Only access is configured at the replica level.
• You can determine the read-only access behavior whenever the replica is a secondary replica
10. LIMITATIONS AND RESTRICTIONS
• Change tracking and change data capture are not supported on databases that belong to a readable secondary replica
• A DBCC SHRINKFILE operation might fail on the primary replica if the file contains ghost records that are still needed on the secondary replica
• DBCC CHECKDB is blocked: “The database could not be exclusively locked to perform the operation”
11. OFFLOADING WORKLOAD IN SQL SERVER 2008 R2
• Database Mirroring
• Snapshot on the mirror
• Name of the snapshot is different
• Snapshot is a static view, no real-time data
• Overhead
12. OFFLOADING WORKLOAD IN SQL SERVER 2008 R2
• Log Shipping
• Run a reporting workload on the log shipping target node
• Data Latency
• If the secondary database is open for reporting workload, the log backups cannot be restored
13. OFFLOADING WORKLOAD IN SQL SERVER 2008 R2
• Replication
• Transactional replication
• You can create reporting-workload-specific indexes
• Filter the dataset on the subscriber database
• All tables require a primary key
• Not suitable for high transaction throughput
14. SETTING UP READABLE SECONDARY REPLICAS
• Yes
• Clients can connect to the secondary replica explicitly to run the reporting workload
• Read-intent-only
• Only connections that have ApplicationIntent=ReadOnly are accepted
• Allows clients to automatically connect to a readable secondary
• Prevents read workloads from running on the primary (see the sketch below)
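Both behaviors are set per replica. A minimal sketch, assuming a hypothetical availability group AG1 with a replica hosted on instance SQLNODE2 (both names are placeholders; run on the primary):

-- "Yes": accept all connections while SQLNODE2 is in the secondary role
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));

-- "Read-intent-only": accept only connections with ApplicationIntent=ReadOnly
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));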
15. CONFIGURE CONNECTION ACCESS
• For each readable secondary replica that is to support read-only routing, you need to specify a read-only routing URL
• For each availability replica that you want to support read-only routing when it is the primary replica, you need to specify a read-only routing list (see the sketch below)
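A minimal sketch of both settings, assuming a hypothetical two-replica group AG1 on instances SQLNODE1 and SQLNODE2 (server names, URL, and port are placeholders):

-- Routing URL: takes effect when SQLNODE2 runs under the secondary role
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://sqlnode2.contoso.com:1433'));

-- Routing list: takes effect when SQLNODE1 runs under the primary role
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLNODE1'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2', N'SQLNODE1')));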
17. WHERE WOULD YOU PREFER TO PERFORM BACKUPS?
• Only on the primary replica: backups should always occur on the primary replica
• On secondary replicas: backups should occur on a secondary replica except when the primary replica is the only replica online
• Only on secondary replicas: backups should never be performed on the primary replica
• No preference: backup jobs should ignore the role of the availability replicas when choosing the replica to perform backups (see the T-SQL below)
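This preference is a single availability group setting (a sketch, again assuming a hypothetical group AG1):

ALTER AVAILABILITY GROUP [AG1]
SET (AUTOMATED_BACKUP_PREFERENCE = SECONDARY);
-- Other accepted values: PRIMARY, SECONDARY_ONLY, NONE (no preference)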
18. BACKUP PRIORITY
• 1..100: the relative priority of a given replica compared to the backup priorities of the other replicas in the availability group; 100 is the highest priority
• 0: the availability replica will never be chosen for performing backups (see the sketch below)
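The priority is set per replica (a sketch; AG1 and SQLNODE2 are placeholder names):

ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLNODE2'
WITH (BACKUP_PRIORITY = 50);  -- 0 = never chosen, 100 = highest priority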
19. BACKUP – IMPORTANT POINTS
• Backups on the primary replica still work
• Only copy-only full backups are allowed on secondary replicas
• Differential backups are not supported on secondary replicas
• BACKUP LOG supports only regular log backups; the COPY_ONLY option is not supported
• The secondary replica must be able to communicate with the primary replica and must be SYNCHRONIZED or SYNCHRONIZING
• Priority is not enforced by SQL Server; script your backup jobs accordingly
20. SCRIPTING OF BACKUP JOBS
Determine whether the current replica is the preferred backup replica with sys.fn_hadr_backup_is_preferred_replica:

DECLARE @DBNAME sysname = N'<database name>';
IF sys.fn_hadr_backup_is_preferred_replica(@DBNAME) <> 1
BEGIN
    SELECT 'This is not the preferred replica, exiting with success';
    RETURN;  -- exit the batch without taking a backup
END
BACKUP DATABASE @DBNAME TO DISK = <disk>
WITH COPY_ONLY;

Remark: if you use the Maintenance Plan Wizard, the job will automatically include the scripting logic that calls and checks the sys.fn_hadr_backup_is_preferred_replica function
21. IMPACT ON HIGH AVAILABILITY
• Recovery Point Objective
• In asynchronous-commit mode, there is no additional impact
• In synchronous-commit mode, there is no data loss
• Recovery Time Objective
• After a failover, the REDO thread needs to apply the remaining transaction log records
• The further the REDO thread has fallen behind, the longer it takes to bring the database online (see the query below)
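The size of that backlog can be watched on the secondary through the sys.dm_hadr_database_replica_states DMV (a sketch; redo_queue_size and redo_rate are reported in KB and KB per second):

SELECT DB_NAME(database_id) AS database_name,
       synchronization_state_desc,
       redo_queue_size,  -- log (KB) still waiting to be redone
       redo_rate,        -- redo throughput (KB/sec)
       redo_queue_size / NULLIF(redo_rate, 0) AS approx_seconds_to_catch_up
FROM sys.dm_hadr_database_replica_states
WHERE is_local = 1;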
22. A READ-ONLY WORKLOAD CAN IMPACT THE RTO
• If the reporting workload is I/O bound
• The REDO thread can be blocked by the reporting workload
• DML operations
• DDL operations
• Result = Data Latency
23. TROUBLESHOOTING REDO BLOCKING
• A lock_redo_blocked Extended Event is generated
• Query the DMV sys.dm_exec_requests on the secondary (see the sketch below)
• Use the AlwaysOn Dashboard
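On a readable secondary, the REDO thread appears in sys.dm_exec_requests under the command DB STARTUP, so a query along these lines (a sketch) shows which session is blocking it:

SELECT session_id, command, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE command = 'DB STARTUP'       -- the REDO thread
  AND blocking_session_id <> 0;    -- only rows where redo is actually blocked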
24. IMPACT ON PRIMARY WORKLOAD
• The 14-byte overhead occurs only when an existing row is updated or deleted or when a new row is added
• This is very similar to the impact of enabling RCSI/SI on the primary
• No row versions need to be generated on the primary replica.
• The extra 14 bytes can lead to more page splits
25. IMPACT OF A REPORTING WORKLOAD UNDER SNAPSHOT ISOLATION
• Snapshot Isolation
• When a row is modified, its previous version is saved in the version store backed by tempdb, and a 14-byte pointer is set from the modified row to the versioned row
• 4 scenarios
• SI and RCSI are not enabled on the primary - secondary not enabled for read
• SI and RCSI are not enabled on the primary - secondary is enabled for read
• SI and RCSI are enabled on the primary - secondary not enabled for read
• SI and RCSI are enabled on the primary - secondary is enabled for read
30. DO READ WORKLOADS RUNNING ON THE SECONDARY REPLICA IMPACT THE ACKNOWLEDGEMENT (ACK) FOR THE TRANSACTION COMMIT?
[Diagram: the same synchronous-commit data flow as in slide 7 – (1) Commit; (2) Write to Local Log and Transmit to Replica; (3) Committed in Log; (4) Write to Remote Log; (5) Constantly Redoing on Replica; (6) Acknowledge from the replica; (7) Acknowledge to the client]
31. QUERY PLANS & STATISTICS
• Statistics created on the primary are automatically available on the secondary
• How are missing statistics created or stale statistics updated on the secondary replica?
• Temporary statistics are created and stored in tempdb.
• Statistics can be lost if SQL Server is restarted
• Temporary statistics are removed when a primary replica fails over.
32. RESOURCES
• AlwaysOn Team Blog
• http://blogs.msdn.com/b/sqlalwayson/
• SQL Server 2012 Whitepapers
• http://msdn.microsoft.com/en-us/library/hh403491
• SQL Diablo Blog
• http://www.sqldiablo.com/alwayson/
Database mirroring: The mirror database in the database mirroring configuration is not readable, but you can create a database snapshot on the mirrored database, which is readable. The read-only workload can then run against the database snapshot. This approach has the following challenges: The name of the database snapshot is different from the name of the database in the database mirroring configuration. If your application has a hard-coded database name, you need to make modifications in order to connect to the snapshot. The database snapshot is a static view of the data at the time the snapshot was taken. If an application needs to access more recent data, you must create a new database snapshot with a new name, unless you drop the old database snapshot. In other words, near real-time access to data is difficult, if not impossible, to achieve if you are using a database snapshot. A database snapshot employs copy-on-write operations. These operations can add significant overhead if there are multiple database snapshots. Also, queries that run on a database snapshot incur more random I/O. Together, these issues can cause significant performance degradation.
Log shipping: With log shipping, you can run a reporting workload on the log shipping target node, but the data latency incurred by the reporting workload depends on the frequency of the transaction log restore. However, if the secondary database is open for reporting workload, the log backups cannot be restored. This can be a management challenge because you must choose between high availability and the latency incurred by a reporting workload. If you reduce the frequency of the log restores, both the data latency for the reporting workload and the recovery time objective (RTO) are negatively affected. If you increase the frequency of the transaction log restore, you must disconnect all users on the secondary database before the restore. That does not work for many scenarios, especially if you have long-running queries, because they may never run to completion due to RTO constraints.
Replication: Transactional replication can be used as a solution for offloading read and reporting workloads. A few key benefits of replication are that customers can create reporting-workload-specific indexes and filter the dataset on the subscriber database. The challenges here include the following: all tables require a primary key, which may require schema changes (for example, unique key), and replication is not suitable for high transaction throughput. There can be significant latency between publisher and subscriber, particularly under large batch jobs and high transaction volumes.
Yes option: Supported TDS clients can connect to the secondary replica explicitly to run the reporting workload. The client is responsible for ensuring that it is connecting to a readable secondary, because the roles of replicas can change in case of failover. The key benefit of this option is that older clients can run reporting workloads on the readable secondary.
Read-intent-only option: Only connections that have the property ApplicationIntent set to ReadOnly are accepted. The word intent indicates that you want to use the connection as read-only; it does not prevent read/write connections. It is still possible to connect using a read/write application if the ApplicationIntent option is set to ReadOnly, but the application fails on the first DML or DDL operation. This option allows clients to automatically connect to an available readable secondary, and you can use it to prevent read workloads from running on the primary replica. For more information about how to use this setting, see Connecting to Secondary Replicas later in this white paper.
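Once connected, an application can verify which kind of replica it reached (a minimal check):

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS updateability;
-- Returns READ_ONLY on a readable secondary, READ_WRITE on the primary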
For each readable secondary replica that is to support read-only routing, you need to specify a read-only routing URL. This URL takes effect only when the local replica is running under the secondary role. The read-only routing URL must be specified on a replica-by-replica basis, as needed. Each read-only routing URL is used for routing read-intent connection requests to a specific readable secondary replica. Typically, every readable secondary replica is assigned a read-only routing URL. For information about calculating the read-only routing URL for an availability replica, see Calculating read_only_routing_url for AlwaysOn.
For each availability replica that you want to support read-only routing when it is the primary replica, you need to specify a read-only routing list. A given read-only routing list takes effect only when the local replica is running under the primary role. This list must be specified on a replica-by-replica basis, as needed. Typically, each read-only routing list would contain every read-only routing URL, with the URL of the local replica at the end of the list.
Read-only routing uses the following algorithm to locate a readable secondary:
1. The client connects to an availability group listener endpoint. Note that this endpoint always points to the primary replica for the availability group.
2. The client specifies ApplicationIntent=ReadOnly in the connection string; this is transmitted to the server during login.
3. On the server side, the server checks that the incoming connection is using an availability group listener endpoint; otherwise, read-only routing is disabled.
4. The server checks the target database and determines whether it is in an availability group.
5. If the database is in an availability group, the server checks whether the read_only_routing_list is set on the primary replica. If the list is not set, routing is disabled; if the list is set, routing is enforced.
6. The server then enumerates the replicas in the read_only_routing_list and checks each replica in the list. The first replica it finds that is synchronizing and accepts readers (allow_connections = read_only or all) is the routing target.
7. The server next reads the read_only_routing_url from this replica and sends this response to the client.
8. The client reads the routing URL and redirects to the readable secondary instance.
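The resulting routing configuration can be inspected through the catalog views (a sketch):

SELECT pri.replica_server_name AS when_primary,
       rl.routing_priority,
       sec.replica_server_name AS route_to,
       sec.read_only_routing_url
FROM sys.availability_read_only_routing_lists AS rl
JOIN sys.availability_replicas AS pri ON rl.replica_id = pri.replica_id
JOIN sys.availability_replicas AS sec ON rl.read_only_replica_id = sec.replica_id
ORDER BY when_primary, rl.routing_priority;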
Only on the primary replica: Backups should always occur on the primary replica. This alternative is useful if you need backup features, such as creating differential backups, that are not supported when backup is run on a secondary replica.
On secondary replicas: Backups should occur on a secondary replica except when the primary replica is the only replica online. In that case, the backup should occur on the primary replica. This is the default behavior.
Only on secondary replicas: Backups should never be performed on the primary replica. If the primary replica is the only replica online, the backup should not occur.
No preference: Backup jobs should ignore the role of the availability replicas when choosing the replica to perform backups. Note that backup jobs might evaluate other factors, such as the backup priority of each availability replica in combination with its operational state and connected state.
When a secondary replica is enabled for running reporting workloads, any changes to the rows done as part of a DML operation start incurring a 14-byte overhead, as explained earlier. Note that no changes to the size of rows need to be made to existing rows. This 14-byte overhead occurs only when an existing row is updated or deleted or when a new row is added. This is very similar to the impact of enabling RCSI/SI on the primary, except that no row versions need to be generated on the primary replica. The extra 14 bytes can lead to more page splits as the size of the row or rows is increased. However, the reporting workload does not affect the transactional throughput on the primary replica. When a transaction is run on the primary replica, the transaction log records are written to the log buffer and at the same time are sent to the log pool to be sent to the secondary replica (in this example there is only one secondary replica, but the same logic holds for multiple replicas) as shown in the following picture.
In this case, SI and/or RCSI are not enabled on the primary replica and the secondary replica is not enabled for read workload. As shown in the following picture, there is no row versioning overhead on either the primary replica or the secondary replica.
In this case, SI and/or RCSI are not enabled on the primary replica, but the secondary replica is enabled for read workload. There are two interesting points to note here. First, the row version is only generated on the secondary replica; because RCSI or SI is not enabled on the primary replica, there is really no need to create row versions there. Second, the row versions need to be generated on the secondary replica, which means that the 14-byte overhead needs to be added to the new and modified rows on the primary, because the primary and secondary replicas must be physically identical. Existing rows that are not modified do not incur the 14-byte overhead. The following picture shows the 14-byte overhead on the primary replica and the generation of the row version on the secondary replica.
In this instance, SI and/or RCSI are enabled on the primary replica but the secondary replica is not enabled for read workload. This case is a bit simpler because the 14-byte versioning overhead is already added to the data rows on the primary replica independent of the status of secondary replica. As shown in the following picture, if the secondary replica is not enabled for read workload, there is still a 14-byte overhead on the rows on the secondary replica, but there is no row version generation on the secondary because the read workload has not been enabled.
In this case, SI and/or RCSI are enabled on the primary replica, and the secondary replica is enabled for read workload. This case is similar to the previous configuration except that row versions must also be generated on the secondary replica. The following picture shows the 14-byte overhead in the data/index row and the row version.
This leads to the question: “Do read workloads running on the secondary replica impact the acknowledgement (ACK) for the transaction commit?” The answer is that this is unlikely. In the secondary replica in the preceding picture, there are essentially two background threads: one receives the log record over the network and the other hardens that log record. SQL Server gives priority to background threads over user threads (including the ones that are running read workload). This means that at least from the CPU perspective, a read workload cannot delay the ACK. An I/O intensive read workload could slow down the transaction log write, but this would only happen if the data and the transaction log were to share the same physical disk. In most production deployments, transaction log disks are not shared with data disks, so it is a nonissue. However, a network bottleneck can add to the latency of the transaction, but in that case it is unrelated to read workload. In summary, in a well-configured and well-managed system, it is unlikely that the read workload on the secondary replica will add to the transactional latency.
The reporting workload running on the secondary replica will incur some data latency, typically a few seconds to minutes depending upon the primary workload and the network latency. The data latency exists even if you have configured the secondary replica to synchronous mode. While it is true that a synchronous replica helps guarantee no data loss in ideal conditions (that is, RPO = 0) by hardening the transaction log records of a committed transaction before sending an ACK to the primary, it does not guarantee that the REDO thread on the secondary replica has indeed applied the associated log records to database pages. So there is some data latency. You may wonder if this data latency is more likely when you have configured the secondary replica in asynchronous mode. This is a more difficult question to answer. If the network between the primary replica and the secondary replica is not able to keep up with the transaction log traffic (that is, if there is not enough bandwidth), the asynchronous replica can fall further behind, leading to higher data latency. In the case of a synchronous replica, insufficient network bandwidth does not cause higher data latency on the secondary, but it can slow down the transaction response time and throughput for the primary workload. If your reporting workload cannot tolerate any data latency, you must run it on the primary replica. The good news is that generally most reporting workloads can tolerate some data latency and therefore can safely be migrated to the secondary replica.
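A rough way to observe this latency is to compare commit progress across replicas from the primary (a sketch using sys.dm_hadr_database_replica_states; the exact semantics of last_commit_time on secondary rows should be verified against the documentation):

SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.last_commit_time,  -- last commit hardened (primary) or redone (secondary)
       drs.last_redone_time
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON drs.replica_id = ar.replica_id
ORDER BY database_name, ar.replica_server_name;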
The first thing to note is that any statistics created on the primary replica are automatically available on the secondary replica for usage. The challenge is in allowing missing statistics to be created or stale statistics to be updated on the secondary replica. The short answer is that this is not possible because it violates the rule that the primary and secondary database must be physically identical. However, statistics on an object can be created and re-created using the data in the table. Based on this fact, temporary statistics are created and stored in tempdb. This change guarantees that up-to-date statistics are available on the secondary replica just like they are on the primary replica for the query optimizer. The implication of creating temporary statistics is that these statistics can be lost if SQL Server is restarted, but this is not a true data-loss situation because, as noted earlier, these statistics can be re-created at a relatively low cost by querying the underlying objects. Similarly, the temporary statistics are removed when a primary replica fails over.
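On the readable secondary, these temporary statistics can be identified through the is_temporary column of sys.stats (a sketch; temporary statistics carry a readonly_database_statistic suffix in their names):

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS statistic_name,
       s.is_temporary
FROM sys.stats AS s
WHERE s.is_temporary = 1;  -- run on the secondary replica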