What are all the high availability (HA) and disaster recovery (DR) options for SQL Server in an Azure VM (IaaS)? Which of these options can be used in a hybrid combination (Azure VM and on-prem)? I will cover features such as AlwaysOn AG, failover clustering, Azure SQL Data Sync, log shipping, SQL Server data files in Azure, mirroring, Azure Site Recovery, and Azure Backup.
In this presentation, I talk about resiliency in Azure, Azure VM improvements and maintenance, and disaster recovery with ASR (Azure Site Recovery).
This presentation is for those of you who are interested in moving your on-prem SQL Server databases and servers to Azure virtual machines (VMs) in the cloud so you can take advantage of all the benefits of being in the cloud. This is commonly referred to as a “lift and shift” as part of an Infrastructure-as-a-Service (IaaS) solution. I will discuss the various Azure VM sizes and options, migration strategies, storage options, high availability (HA) and disaster recovery (DR) solutions, and best practices.
** Edureka Certification Training: https://www.edureka.co **
This Edureka "VMware Tutorial for Beginners" video will give you a thorough and insightful overview of virtualization and help you understand other related terms that revolve around VMware and virtualization. This video covers the following:
1. What is VMware?
2. What is Virtualization?
3. Types Of Virtualization
4. What Is Hypervisor?
5. Hypervisor Types
6. Demo: Creating a VM using VMware Workstation Player
Azure SQL Database (SQL DB) is a database-as-a-service (DBaaS) that provides nearly full T-SQL compatibility so you can gain tons of benefits for new databases or by moving your existing databases to the cloud. Those benefits include provisioning in minutes, built-in high availability and disaster recovery, predictable performance levels, instant scaling, and reduced overhead. And gone will be the days of getting a call at 3am because of a hardware failure. If you want to make your life easier, this is the presentation for you.
What to Expect From Oracle Database 19c - Maria Colgan
The Oracle Database has recently switched to an annual release model, and Oracle Database 19c is only the second release in this new model. So what can you expect from the latest version of the Oracle Database? This presentation explains how Oracle Database 19c is really 12.2.0.3, the terminal release of the 12.2 family, and covers the new features you can find in this release.
Lessons Learned: Implementing Azure Synapse Analytics in a Rapidly-Changing Startup - Cathrine Wilhelmsen
Lessons Learned: Implementing Azure Synapse Analytics in a Rapidly-Changing Startup (Presented at SQLBits on March 11th, 2022)
What happens when you mix one rapidly-changing startup, one data analyst, one data engineer, and one hypothesis that Azure Synapse Analytics could be the right tool of choice for gaining business insights?
We had no idea, but we gave it a go!
Our ambition was to think big, start small, and act fast – to deliver business value early and often.
Did we succeed?
Join us for an honest conversation about why we decided to implement Azure Synapse Analytics alongside Power BI, how we got started, which areas we completely messed up at first, what our current solution looks like, the lessons learned along the way, and the things we would have done differently if we could start all over again.
Want to see a high-level overview of the products in the Microsoft data platform portfolio in Azure? I’ll cover products in the categories of OLTP, OLAP, data warehouse, storage, data transport, data prep, data lake, IaaS, PaaS, SMP/MPP, NoSQL, Hadoop, open source, reporting, machine learning, and AI. It’s a lot to digest but I’ll categorize the products and discuss their use cases to help you narrow down the best products for the solution you want to build.
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
Deep Dive: a technical insider's view of NetBackup 8.1 and NetBackup Appliances - Veritas Technologies LLC
Together, NetBackup 8.0 and 8.1 are perhaps the two most significant consecutive releases in NetBackup history. Attend this session to learn how the newly released NetBackup 8.1 builds on version 8.0 to deliver the promise of modern data protection and advanced information management like never before. This session will feature a detailed technical overview of the new security architecture in NetBackup 8.1 that keeps data secure across any network, new dedupe to the cloud capabilities that deliver industry-leading performance, instant recovery for Oracle, added support for virtual and next-gen workloads, faster and easier deployments, and many other new features and capabilities.
Oracle Database Migration to Oracle Cloud Infrastructure - SinanPetrusToma
This slide deck highlights the benefits of Oracle Cloud, describes the different Oracle database cloud services and their characteristics, which one to choose and what to consider, and more than 20 methods and solutions Oracle offers to migrate Oracle databases across platforms.
MySQL InnoDB Cluster: Management and Troubleshooting with MySQL Shell - Miguel Araújo
MySQL InnoDB Cluster and MySQL Shell session presented at Oracle CodeOne 2019.
Abstract:
MySQL InnoDB Cluster provides a built-in high-availability solution for MySQL. Combining MySQL Group Replication with MySQL Router and MySQL Shell into an integrated full-stack solution, InnoDB Cluster provides easy setup and management of MySQL instances as a fault-tolerant database service. MySQL Shell is the “control panel” of InnoDB Cluster, enabling easy and straightforward configuration and management of InnoDB clusters by providing a scriptable and interactive API: the AdminAPI. Recent enhancements and features added to MySQL Shell make the management of InnoDB clusters even more powerful and smooth. Attend this session to get an overview of the latest developments and improved InnoDB Cluster administration tasks.
Notes:
The slideshow includes a video that cannot be seen in the SlideShare/PDF version. If you're interested in it, please check the following blog post: https://mysqlhighavailability.com/mysql-innodb-cluster-automatic-node-provisioning/
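To give a flavor of the AdminAPI described above, here is a minimal sketch of bootstrapping a cluster from MySQL Shell's Python mode; the host names and account are placeholders, and in practice each instance should first be prepared with dba.configure_instance():

```python
# Run inside MySQL Shell (mysqlsh --py), connected to the intended primary.
# The "dba" object is a global provided by MySQL Shell, not plain Python.
cluster = dba.create_cluster("testCluster")        # bootstrap Group Replication
cluster.add_instance("root@secondary-1:3306")      # add a second member
cluster.add_instance("root@secondary-2:3306")      # three members tolerate one failure
print(cluster.status())                            # JSON-style view of cluster health
```

MySQL Router can then be bootstrapped against this cluster so applications transparently follow the primary after a failover.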
Wars of MySQL Cluster (InnoDB Cluster vs. Galera) - Mydbops
MySQL clustering over the InnoDB engine has grown a lot over the last decade. Galera began working with InnoDB early, and Group Replication came to the environment later; the features of both are now rich and robust. This presentation offers a technical comparison of the two.
Azure Virtual Machines Deployment Scenarios - Brian Benz
Architecture and scenarios for deploying database and middleware applications on Azure Virtual Machines, including SQL Server, Oracle, Hadoop, and others.
Azure Site Recovery and System Center - Tudor Damian
Azure Site Recovery is a cloud-based service that automates virtual machine fail-over across sites. The service integrates with Virtual Machine Manager which manages on-premises Hyper-V servers. Hyper-V Replica technology replicates virtual machine configuration and data across sites. Based on customer feedback, support for SAN replication is important. This session covers the scenarios in scope, solution architecture, and SAN integration using SMI-S.
SQL Server High Availability Solutions (Pros & Cons) - Hamid J. Fard
The proper SQL Server high availability solution depends heavily on business objectives and IT operations objectives. Sometimes we may have a few solutions on the table to implement.
Deploy Java, PHP, Ruby, Node.js, Go, .NET, Python and Docker applications with no code changes using Git, SVN, archives, or integrated plugins like Maven, Ant, Eclipse, NetBeans, and IntelliJ IDEA.
CloudJiffy will automatically scale your application containers vertically and horizontally, ensuring you only pay for the resources you consume. No capacity planning or resource wastage. CloudJiffy uses granular 128MB cloudlets.
The CloudJiffy dashboard provides an intuitive application topology wizard, deployment manager, access to log and config files, team collaboration functionality, and integration with CI/CD tools.
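As a back-of-the-envelope illustration of the 128MB cloudlet granularity mentioned above (a sketch of the pay-per-use idea, not CloudJiffy's actual billing code):

```python
import math

CLOUDLET_MB = 128  # the billing granularity stated above

def cloudlets_needed(ram_mb: float) -> int:
    """Round actual RAM consumption up to whole 128 MB cloudlets."""
    return math.ceil(ram_mb / CLOUDLET_MB)

# A container peaking at 1.5 GB of RAM consumes 12 cloudlets,
# while an idle 100 MB container is billed for just 1.
print(cloudlets_needed(1536), cloudlets_needed(100))  # 12 1
```

The fine granularity is the point: billing tracks actual consumption in small steps instead of forcing you to pre-provision a whole VM size.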
UK WVD User Group January - Jim Moyle - BC/DR with WVD - Neil McLoughlin
Presentation from the UK WVD User Group meeting where Jim Moyle presented on how to handle business continuity and disaster recovery when using Windows Virtual Desktop.
Similar to HA/DR options with SQL Server in Azure and hybrid
Microsoft Fabric is the next version of Azure Data Factory, Azure Data Explorer, Azure Synapse Analytics, and Power BI. It brings all of these capabilities together into a single unified analytics platform that goes from the data lake to the business user in a SaaS-like environment. Therefore, the vision of Fabric is to be a one-stop shop for all the analytical needs of every enterprise and one platform for everyone from a citizen developer to a data engineer. Fabric will cover the complete spectrum of services, including data movement, data lake, data engineering, data integration and data science, observational analytics, and business intelligence. With Fabric, there is no need to stitch together different services from multiple vendors. Instead, customers enjoy an end-to-end, highly integrated, single offering that is easy to understand, onboard, create, and operate.
This is a hugely important new product from Microsoft and I will simplify your understanding of it via a presentation and demo.
Agenda:
What is Microsoft Fabric?
Workspaces and capacities
OneLake
Lakehouse
Data Warehouse
ADF
Power BI / DirectLake
Resources
Data Lakehouse, Data Mesh, and Data Fabric (r2) - James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean, and how do they compare to a modern data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. They all may sound great in theory, but I'll dig into the concerns you need to be aware of before taking the plunge. I’ll also include use cases so you can see what approach will work best for your big data needs. And I'll discuss Microsoft's version of the data mesh.
Data Lakehouse, Data Mesh, and Data Fabric (r1) - James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
Data Warehousing Trends, Best Practices, and Future Outlook - James Serra
Over the last decade, the 3Vs of data - volume, velocity & variety - have grown massively. The Big Data revolution has completely changed the way companies collect, analyze, and store data. Advancements in cloud-based data warehousing technologies have empowered companies to fully leverage big data without heavy investments both in terms of time and resources. But that doesn’t mean building and managing a cloud data warehouse isn’t accompanied by any challenges. From deciding on a service provider to the design architecture, deploying a data warehouse tailored to your business needs is a strenuous undertaking. Looking to deploy a data warehouse to scale your company’s data infrastructure, or still on the fence? In this presentation you will gain insights into the current data warehousing trends, best practices, and future outlook. Learn how to build your data warehouse with the help of real-life use cases and discussion of commonly faced challenges. In this session you will learn:
- Choosing the best solution - Data Lake vs. Data Warehouse vs. Data Mart
- Choosing the best Data Warehouse design methodologies: Data Vault vs. Kimball vs. Inmon
- Step by step approach to building an effective data warehouse architecture
- Common reasons for the failure of data warehouse implementations and how to avoid them
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
The data lake has become extremely popular, but there is still confusion on how it should be used. In this presentation I will cover common big data architectures that use the data lake, the characteristics and benefits of a data lake, and how it works in conjunction with a relational data warehouse. Then I’ll go into details on using Azure Data Lake Store Gen2 as your data lake, and various typical use cases of the data lake. As a bonus I’ll talk about how to organize a data lake and discuss the various products that can be used in a modern data warehouse.
Power BI Overview, Deployment and Governance - James Serra
Deploying Power BI in a large enterprise is a complex task, and one that requires a lot of thought and planning. The purpose of this presentation is to help you make your Power BI deployment a success. After a quick Power BI overview, I’ll discuss deployment strategies, common usage scenarios, how to store and refresh data, prototyping options, how to share externally, and then finish with how to administer and secure Power BI. I’ll outline considerations and best practices for achieving an optimal, well-performing, enterprise level Power BI deployment.
Power BI has become a product with a ton of exciting features. This presentation will give an overview of some of them, including Power BI Desktop, Power BI service, what’s new, integration with other services, Power BI premium, and administration.
The breadth and depth of Azure products that fall under the AI and ML umbrella can be difficult to follow. In this presentation I’ll first define exactly what AI, ML, and deep learning are, and then go over the various Microsoft AI and ML products and their use cases.
Embarking on building a modern data warehouse in the cloud can be an overwhelming experience due to the sheer number of products that can be used, especially when the use cases for many products overlap others. In this talk I will cover the use cases of many of the Microsoft products that you can use when building a modern data warehouse, broken down into four areas: ingest, store, prep, and model & serve. It’s a complicated story that I will try to simplify, giving blunt opinions of when to use what products and the pros/cons of each.
AI for an intelligent cloud and intelligent edge: Discover, deploy, and manag... - James Serra
Discover, manage, deploy, monitor – rinse and repeat. In this session we show how Azure Machine Learning can be used to create the right AI model for your challenge and then easily customize it using your development tools, while relying on Azure ML to optimize the models to run in hardware-accelerated environments for the cloud and the edge using FPGAs and neural network accelerators. We then show you how to deploy the model to highly scalable web services and nimble edge applications that Azure can manage and monitor for you. Finally, we illustrate how you can leverage the model telemetry to retrain and improve your model.
Power BI for Big Data and the New Look of Big Data Solutions - James Serra
New features in Power BI give it enterprise tools, but that does not mean it automatically creates an enterprise solution. In this talk we will cover these new features (composite models, aggregations tables, dataflow) as well as Azure Data Lake Store Gen2, and describe the use cases and products of an individual, departmental, and enterprise big data solution. We will also talk about why a data warehouse and cubes still should be part of an enterprise solution, and how a data lake should be organized.
In three years I went from a complete unknown to a popular blogger, speaker at PASS Summit, a SQL Server MVP, and then joined Microsoft. Along the way I saw my yearly income triple. Is it because I know some secret? Is it because I am a genius? No! It is just about laying out your career path, setting goals, and doing the work.
I'll cover tips I learned over my career on everything from interviewing to building your personal brand. I'll discuss perm positions, consulting, contracting, working for Microsoft or partners, hot fields, in-demand skills, social media, networking, presenting, blogging, salary negotiating, dealing with recruiters, certifications, speaking at major conferences, resume tips, and keys to a high-paying career.
Your first step to enhancing your career will be to attend this session! Let me be your career coach!
Is the traditional data warehouse dead? - James Serra
With new technologies such as Hive LLAP or Spark SQL, do I still need a data warehouse, or can I just put everything in a data lake and report off of that? No! In this presentation I’ll discuss why you still need a relational data warehouse and how to use a data lake and an RDBMS data warehouse to get the best of both worlds. I will go into detail on the characteristics of a data lake and its benefits, and why you still need data governance tasks in a data lake. I’ll also discuss using Hadoop as the data lake, data virtualization, and the need for OLAP in a big data solution. And I’ll put it all together by showing common big data architectures.
Differentiate Big Data vs Data Warehouse use cases for a cloud solution - James Serra
It can be quite challenging keeping up with the frequent updates to the Microsoft products and understanding all their use cases and how all the products fit together. In this session we will differentiate the use cases for each of the Microsoft services, explaining and demonstrating what is good and what isn't, in order for you to position, design and deliver the proper adoption use cases for each with your customers. We will cover a wide range of products such as Databricks, SQL Data Warehouse, HDInsight, Azure Data Lake Analytics, Azure Data Lake Store, Blob storage, and AAS as well as high-level concepts such as when to use a data lake. We will also review the most common reference architectures (“patterns”) witnessed in customer adoption.
Databricks is a Software-as-a-Service-like experience (or Spark-as-a-service) that is a tool for curating and processing massive amounts of data, developing, training, and deploying models on that data, and managing the whole workflow process throughout the project. It is for those who are comfortable with Apache Spark, as it is 100% based on Spark and is extensible with support for Scala, Java, R, and Python alongside Spark SQL, GraphX, Streaming, and the Machine Learning Library (MLlib). It has built-in integration with many data sources, has a workflow scheduler, allows for real-time workspace collaboration, and has performance improvements over traditional Apache Spark.
Azure SQL Database Managed Instance is a new flavor of Azure SQL Database that is a game changer. It offers near-complete SQL Server compatibility and network isolation to easily lift and shift databases to Azure (you can literally back up an on-premises database and restore it into an Azure SQL Database Managed Instance). Think of it as an enhancement to Azure SQL Database that is built on the same PaaS infrastructure and maintains all its features (i.e. active geo-replication, high availability, automatic backups, database advisor, threat detection, intelligent insights, vulnerability assessment, etc.) but adds support for databases up to 35TB, VNET, SQL Agent, cross-database querying, replication, etc. So, you can migrate your databases from on-prem to Azure with very little migration effort, which is a big improvement over the current Singleton or Elastic Pool flavors, which can require substantial changes.
Microsoft Data Platform - What's Included - James Serra
The pace of Microsoft product innovation is so fast that even though I spend half my days learning, I struggle to keep up. And as I work with customers, I find they are often in the dark about many of the products that we have, since they are focused on just keeping what they have running and putting out fires. So, let me cover what products you might have missed in the Microsoft data platform world. Be prepared to discover all the various Microsoft technologies and products for collecting data, transforming it, storing it, and visualizing it. My goal is to help you not only understand each product but understand how they all fit together and their proper use cases, allowing you to build the appropriate solution that can incorporate any data in the future no matter the size, frequency, or type. Along the way we will touch on technologies covering NoSQL, Hadoop, and open source.
Learning to present and becoming good at it - James Serra
Have you been thinking about presenting at a user group? Are you being asked to present at your work? Is learning to present one of the keys to advancing your career? Or do you just think it would be fun to present but you are too nervous to try it? Well, take the first step to becoming a presenter by attending this session, and I will guide you through the process of learning to present and becoming good at it. It’s easier than you think! I am an introvert and was deathly afraid to speak in public. Now I love to present, and it’s actually my main function in my job at Microsoft. I’ll share with you the journey that led me to speak at major conferences and the skills I learned along the way to become a good presenter and to get rid of the fear. You can do it!
Key Trends Shaping the Future of Infrastructure - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote explores the key trends across hardware, cloud, and open source: how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions), and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Search and Society: Reimagining Information Access for Radical Futures - Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
2. About Me
Microsoft, Big Data Evangelist
In IT for 30 years, worked on many BI and DW projects
Worked as desktop/web/database developer, DBA, BI and DW architect and developer, MDM architect, PDW/APS developer
Been perm employee, contractor, consultant, business owner
Presenter at PASS Business Analytics Conference, PASS Summit, Enterprise Data World conference
Certifications: MCSE: Data Platform, Business Intelligence; MS: Architecting Microsoft Azure Solutions, Design and Implement Big Data Analytics Solutions, Design and Implement Cloud Data Platform Solutions
Blog at JamesSerra.com
Former SQL Server MVP
Author of book “Reporting with Microsoft SQL Server 2012”
3. Agenda
VM storage
Always On AG
Always On FCI
Basic Availability Groups
Database Mirroring
Log Shipping
Backup to Azure
SQL Server data files in Azure
Azure Site Recovery
Azure VM Availability Set
Azure SQL Data Sync
4. Virtual Machine storage architecture
C: OS disk (127 GB), usually 115 GB free
D: Temporary disk (contents can be lost); SSD/HDD type and size depend on the VM chosen
E:, F:, etc.: Data disks; attach SSD/HDD up to 1 TB each. These are .vhd files
5. Azure Default Blob Storage
Azure Storage Page Blobs, 3 copies
Storage high durability built-in (similar to having RAID)
VHD disks, up to 1 TB per disk (64 TB total)
6. Geo-storage replication
3 copies locally, another 3 copies in a different region
Defends against regional disasters
Disable for SQL Server VM disks (consistent write order across multiple disks is not guaranteed); instead use the DR techniques in this deck
7. HA/DR deployment architectures
Always On Availability Groups
• Azure only: availability replicas running across multiple Azure regions in Azure VMs for disaster recovery. This cross-region solution protects against a complete site outage. Replicas running in the same Azure region provide HA.
• Hybrid: some availability replicas running in Azure VMs and other replicas running on-premises for cross-site disaster recovery.
Always On Failover Cluster Instances (HA only, not DR)
• FCI on a two-node WSFC running in Azure VMs, with storage supported by Storage Spaces Direct.
Database Mirroring
• Azure only: principal and mirror servers running in different datacenters for disaster recovery; or principal, mirror, and witness running within the same Azure datacenter, deployed using a DC or server certificates, for HA.
• Hybrid: one partner running in an Azure VM and the other running on-premises for cross-site disaster recovery using server certificates.
Log Shipping (for DR only / hybrid only)
• One server running in an Azure VM and the other running on-premises for cross-site disaster recovery. Log shipping depends on Windows file sharing, so a VPN connection between the Azure virtual network and the on-premises network is required. Requires an AD deployment at the DR site.
Backup to Azure
• On-prem or Azure production databases backed up directly to Azure blob storage for disaster recovery. SQL Server 2016 adds backup to Azure with file snapshots.
Azure Site Recovery (simpler BCDR story)
• Site Recovery makes it easy to handle replication, failover, and recovery for your on-premises workloads and applications (not data!). Flexible replication: you can replicate on-premises servers, Hyper-V virtual machines, and VMware virtual machines. Eliminates the need for a secondary datacenter.
SQL Server data files in Azure
• Native support for SQL Server data files stored as Azure blobs.
8. HA/DR Defined
• High Availability (HA) – Keeping your database up 100% of the time with no data loss
during common problems. Redundancy at system level, focus on failover, addresses
single predictable failure, focus is on technology
• Always On FCI
• Always On AG (in same Azure region)
• SQL Server data files in Azure
• Disaster Recovery (DR) – Protection if major disaster or unusual failure wipes out your
database. Use of alternate site, focus on re-establishing services, addresses multiple
failures, includes people and processes to execute recovery. Usually includes HA also
• Log Shipping
• Database Mirroring
• Always On AG (different Azure regions)
• Backup to Azure
9. RPO/RTO
RTO – Recovery Time Objective: how much time after a failure until we have to be up and running again?
RPO – Recovery Point Objective: how much data can we lose?
• HA – High Availability
• RTO: seconds to minutes
• RPO: Zero to seconds
• Automatic failover
• Well tested (maybe with each patch or release)
• DR – Disaster Recovery
• RTO: minutes to hours
• RPO: seconds to minutes
• Manual failover into prepared environment
• Tested from time to time
How long does it take to fail over:
• Backup-Restore: Hours
• Log Shipping: Minutes
• Always On FCI: Seconds to minutes
• Always On AG/Mirroring: Seconds
10. Always On Availability Groups
Availability replicas running across multiple datacenters in Azure VMs for disaster recovery. This cross-region solution protects against a complete site outage.
Within a region, all replicas should be within the same cloud service and the same VNet. Because each region will have a separate VNet, these solutions require VNet-to-VNet connectivity. For more information, see Configure a Site-to-Site VPN in the Azure classic portal.
NOTE: US East should show a FSW.
All availability replicas running in Azure VMs for high availability within the same region. You need to configure a domain controller VM, because Windows Server Failover Clustering (WSFC) requires an Active Directory domain. For more information, see Configure Always On Availability Groups in Azure (GUI).
A WSFC requires a witness to handle quorum (and Always On Availability Groups require a WSFC). With Windows Server 2016 replicas, you can use a Cloud Witness instead of a File Share Witness (FSW).
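As a rough sketch of what the same-region HA configuration looks like in T-SQL (the server names, domain, database name, and port are illustrative; the WSFC, mirroring endpoints, and listener load balancer must already be in place):

```sql
-- Run on the primary (SQL1): two synchronous replicas with automatic failover.
-- Assumes database mirroring endpoints already listen on port 5022.
CREATE AVAILABILITY GROUP [AG1]
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQL1' WITH (
        ENDPOINT_URL = N'TCP://SQL1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQL2' WITH (
        ENDPOINT_URL = N'TCP://SQL2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);

-- Then, run on the secondary (SQL2):
ALTER AVAILABILITY GROUP [AG1] JOIN;
```

The secondary database must also be restored WITH NORECOVERY and joined to the group (or seeded) before it starts synchronizing.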
11. Always On Availability Groups (Hybrid)
Some availability replicas running in Azure VMs and other replicas running on-premises for cross-site disaster recovery. The production site can be either on-premises or in an Azure datacenter.
Because all availability replicas must be in the same WSFC cluster, the WSFC cluster must span both networks (a multi-subnet WSFC cluster). This configuration requires a VPN connection between Azure and the on-premises network.
For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site.
It is possible to use the Add Replica Wizard in SSMS to add an Azure replica to an existing Always On Availability Group. For more information, see Tutorial: Extend your Always On Availability Group to Azure.
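Besides the wizard, the Azure replica can be added in T-SQL. A minimal sketch, assuming an existing group [AG1] and a hypothetical Azure VM named AZSQL1 that has joined the on-premises WSFC over the VPN:

```sql
-- Run on the on-premises primary: add an asynchronous DR replica in Azure.
-- Asynchronous commit avoids paying cross-site latency on every transaction;
-- failover to this replica is manual (and forced if the primary is lost).
ALTER AVAILABILITY GROUP [AG1]
ADD REPLICA ON N'AZSQL1' WITH (
    ENDPOINT_URL = N'TCP://AZSQL1.contoso.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL);
```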
12. Distributed Always On Availability Groups
Distributed Availability Groups differ from an availability group on a single Windows Server Failover Cluster in the following ways:
Pros
• Each WSFC maintains its own quorum mode and node voting configuration, so the health of the secondary WSFC does not affect the primary WSFC
• The data is sent once over the network to the secondary WSFC and then replicated within that cluster. In a single WSFC, the data is sent individually to each replica, so for a geographically dispersed secondary site, distributed availability groups are more efficient
• The operating system version on the primary and secondary clusters can differ. In a single WSFC, all servers must be on the same OS version. This makes it possible to use Distributed Availability Groups for rolling updates/upgrades of the operating system
Cons
• The primary and secondary availability groups must have the same configuration of databases
• Automatic failover to the secondary availability group is not supported
• The secondary availability group is read-only
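A distributed availability group is defined on top of two existing groups by their listeners. A sketch, with illustrative group and listener names:

```sql
-- Run on the primary replica of AG1 (the forwarding source).
-- Data flows asynchronously from AG1's primary to AG2's primary,
-- which then fans it out within its own cluster.
CREATE AVAILABILITY GROUP [DistAG]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
    N'AG1' WITH (
        LISTENER_URL = N'TCP://ag1-listener.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC),
    N'AG2' WITH (
        LISTENER_URL = N'TCP://ag2-listener.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC);
```

A matching `ALTER AVAILABILITY GROUP [DistAG] JOIN AVAILABILITY GROUP ON …` statement on AG2's primary replica completes the setup.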
13. Always On Availability Groups failover modes
• Primary role and secondary role of availability replicas are interchangeable
• A secondary replica will be the failover target
• Database-level issues (e.g., database deletion or a corrupted transaction log) do not cause an availability group to fail over
• During the failover, the failover target takes over the primary role, recovers its databases, and brings them online as the new primary databases. The former primary replica, when available, switches to the secondary role, and its databases become secondary databases.
Three forms of failover:
• Automatic failover: No data loss
• Planned manual failover: No data loss
• Forced manual failover: Also called forced failover. With possible data loss
*If you issue a forced failover command on a synchronized secondary replica, the secondary replica behaves the same as for a manual failover.
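The two manual forms map to two T-SQL commands, both issued on the failover target (group name is illustrative):

```sql
-- Planned manual failover: the target must be a synchronized
-- synchronous-commit secondary, so no data is lost.
ALTER AVAILABILITY GROUP [AG1] FAILOVER;

-- Forced failover: allowed on an unsynchronized secondary
-- (e.g. an asynchronous DR replica), with possible data loss.
ALTER AVAILABILITY GROUP [AG1] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```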
14. Always On Failover Cluster Instances (FCI)
You can use an FCI to host an availability replica for an availability group.
Windows Server 2016 Storage Spaces Direct (S2D) provides virtual shared storage on top of the disks attached to the VMs hosting the FCI replicas by replicating the disk contents.
We plan to support FCI natively on top of Premium Azure Files (physical SMB shared storage) this year.
17. Basic Availability Groups
Basic Availability Groups replace the deprecated Database Mirroring feature, providing a similar level of features, and are available in SQL Server 2016 Standard Edition (regular Availability Groups require Enterprise Edition).
Limitations:
• Limit of two replicas (primary and secondary)
• No read access on the secondary replica
• No backups on the secondary replica
• No support for replicas hosted on servers running a version of SQL Server prior to SQL Server 2016 Community Technology Preview 3 (CTP3)
• No support for adding or removing a replica to an existing basic availability group
• Support for one availability database
• Basic availability groups cannot be upgraded to advanced availability groups. The group must be dropped and re-added to a group that contains servers running only SQL Server 2016 Enterprise Edition
• Basic availability groups are only supported for Standard Edition servers
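A basic availability group is created like a regular one, with the BASIC option. A sketch with illustrative names, covering exactly one database and two replicas:

```sql
-- SQL Server 2016 Standard Edition: one database, two replicas,
-- no readable or backup-capable secondary.
CREATE AVAILABILITY GROUP [BasicAG]
WITH (BASIC)
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQL1' WITH (
        ENDPOINT_URL = N'TCP://SQL1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQL2' WITH (
        ENDPOINT_URL = N'TCP://SQL2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);
```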
18. Database Mirroring
Principal and mirror servers running in different datacenters for disaster recovery. You must deploy using server certificates because an Active Directory domain cannot span multiple datacenters.
Principal, mirror, and witness servers all running in the same Azure datacenter for high availability. You can deploy using a domain controller, or deploy the same database mirroring configuration without a domain controller by using server certificates instead.
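The certificate-based variant hinges on the mirroring endpoint authenticating with a certificate instead of Windows/AD. A compressed sketch (certificate creation and the cross-server exchange of public keys are assumed done; names and port are illustrative):

```sql
-- On each server: a mirroring endpoint authenticated by certificate.
CREATE ENDPOINT [Mirroring]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = CERTIFICATE MirroringCert,
        ROLE = ALL);

-- On the mirror first, then on the principal: point the partners at each other.
ALTER DATABASE [SalesDB] SET PARTNER = N'TCP://partner.contoso.com:5022';
```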
20. Database Mirroring (Hybrid)
One partner running in an Azure VM and the other running on-premises for cross-site disaster recovery using server certificates. The partners do not need to be in the same Active Directory domain, and no VPN connection is required.
Another database mirroring scenario involves one partner running in an Azure VM and the other running on-premises in the same Active Directory domain for cross-site disaster recovery. A VPN connection between the Azure virtual network and the on-premises network is required.
For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site.
21. Log Shipping (Hybrid)
For DR only / hybrid only.
One server running in an Azure VM and the other running on-premises for cross-site disaster recovery. Log shipping depends on Windows file sharing, so a VPN connection between the Azure virtual network and the on-premises network is required. Requires an AD deployment at the DR site.
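Under the hood, log shipping is just scheduled backup/copy/restore jobs. A manual illustration of the mechanics (in practice SQL Agent jobs automate this; the database, share path, and file name are illustrative):

```sql
-- On the on-premises primary: back up the transaction log to a
-- Windows file share reachable from the Azure DR server over the VPN.
BACKUP LOG [SalesDB]
TO DISK = N'\\fileshare\logship\SalesDB_001.trn';

-- On the DR server in Azure: apply the log backup, leaving the
-- database in RESTORING state so further log backups can be applied.
RESTORE LOG [SalesDB]
FROM DISK = N'\\fileshare\logship\SalesDB_001.trn'
WITH NORECOVERY;
```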
22. Backup to Azure
Block blobs:
• Reduced storage costs
• Significantly improved restore performance
• More granular control over Azure Storage
Azure Storage snapshot backup:
• Fastest method for creating backups and running restores
• Support for SQL Server database files on Azure Blob Storage
Managed backup (on-prem to Azure):
• Granular control of the backup schedule
• Local staging for faster recovery and greater network resiliency
• System database support
• Simple recovery mode support
On-prem or Azure production databases can be backed up directly to Azure blob storage for disaster recovery (for Azure databases, to blob storage in a different datacenter). SQL Server 2016 adds backup to Azure with file snapshots.
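In SQL Server 2016, backing up to block blobs uses a credential named after the container URL that holds a Shared Access Signature. A sketch with an illustrative storage account, container, and database; the SAS value is a placeholder:

```sql
-- Credential name must match the container URL; the secret is a SAS token
-- for the container (without the leading '?').
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/backups]
WITH IDENTITY = N'SHARED ACCESS SIGNATURE',
     SECRET = N'<SAS token>';

-- Back up straight to blob storage.
BACKUP DATABASE [SalesDB]
TO URL = N'https://myaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH COMPRESSION;
```

Earlier versions (2012 SP1 CU2 / 2014) instead use a credential holding the storage account name and access key, referenced via `WITH CREDENTIAL` on the BACKUP statement.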
23. Backup to Azure with file snapshots (SQL Server 2016)
BACKUP DATABASE database TO
URL = N'https://<account>.blob.core.windows.net/<container>/<backupfileblob1>'
WITH FILE_SNAPSHOT
(Diagram: the instance's database, with its MDF and LDF files and the BAK backup, stored in Azure Storage.)
Hybrid solutions
24. SQL Server data files in Azure (Hybrid)
Native support for SQL Server data files stored as Azure blobs:
- Easy and fast migration benefits
- Cost and limitless storage benefits
- High availability and disaster recovery benefits
- Security benefits
- Snapshot backup
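A sketch of what this looks like in practice: a SAS-based credential named after the container URL, then a database whose files live directly in blob storage (the storage account, container, and database names are illustrative):

```sql
-- Credential for the container holding the data files; the secret is a
-- SAS token for the container (without the leading '?').
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/data]
WITH IDENTITY = N'SHARED ACCESS SIGNATURE',
     SECRET = N'<SAS token>';

-- Create a database whose MDF and LDF are Azure page blobs.
CREATE DATABASE [SalesDB]
ON (NAME = SalesDB_data,
    FILENAME = N'https://myaccount.blob.core.windows.net/data/SalesDB.mdf')
LOG ON (NAME = SalesDB_log,
    FILENAME = N'https://myaccount.blob.core.windows.net/data/SalesDB.ldf');
```

Because the files themselves are in blob storage, a replacement VM (on-prem or in Azure) can attach the same database simply by recreating the credential and attaching from the same URLs.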
25. Azure Site Recovery (Hybrid)
SQL Server on-prem DR example: a standalone SQL Server instance residing on-premises and replicating to an Azure Storage account by using Azure Site Recovery. The replication targets are page blobs containing the .vhd files (C drive) of Azure IaaS virtual machines hosting SQL Server instances that are brought online during failover. SQL Server data files are not handled by ASR.
26. Azure VM Availability Set
Create redundant VMs that are spread across multiple racks in the Azure datacenters. This means redundant power supplies, switches, and servers.
99.95% SLA guaranteed (99.9% SLA for a single instance).
Each virtual machine in your Availability Set is assigned an Update Domain (UD) and a Fault Domain (FD).
In ARM it is not yet possible to add an existing VM to an availability set.
VMs in an Availability Set can be different sizes, but they need to be within a range of sizes supported by the hardware where the first VM lands. Generally we recommend keeping the VMs within the same family for a reliable deployment. This means only using VMs of the following sizes in the same set:
A0 – A7
A8 – A11
D1 – D14
DS1 – DS14
D1v2 – D14v2
G1 – G5
GS1 – GS5
27. Azure SQL Data Sync (preview)
SQL Azure Data Sync is a Microsoft Azure web service that provides data synchronization capabilities for SQL databases. It allows data to be synchronized between on-premises SQL Server databases and Azure SQL databases; in addition, it can also keep multiple Azure SQL databases in sync.
SQL Data Sync targets the reference data replication scenario. Its key capabilities are:
• Sync between SQL Server (2005 SP2 and later) and Azure SQL databases, or between Azure SQL databases
• One-way and bi-directional sync
• One-to-one and hub-and-spoke topologies
• Table filters and column filters
• Scheduled and on-demand sync
• Eventual consistency
Active Geo-Replication, in contrast, targets the geo-DR scenario for Azure SQL Database by replicating the database to another region. It only supports one-way replication (secondaries are read-only), replication is at database granularity with no database or column/row filter support, and it is only available for the Premium service tier.
28. Stretch database architecture
How it works:
• Creates a secure linked server definition in the on-premises SQL Server
• The linked server definition has the remote endpoint as its target
• Provisions remote resources and begins to migrate eligible data, if migration is enabled
• Queries against tables run against both the local database and the remote endpoint
(Diagram: the local database holds local data and eligible data; across the internet boundary, the remote endpoint in Azure holds the remote data.)
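The steps above boil down to a few T-SQL statements. A sketch with illustrative server, credential, and table names:

```sql
-- 1. Allow Stretch on the instance.
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- 2. Enable Stretch on the database, pointing at the remote endpoint
-- (this creates the secure linked server definition behind the scenes).
ALTER DATABASE [SalesDB]
SET REMOTE_DATA_ARCHIVE = ON (
    SERVER = N'stretch-server.database.windows.net',
    CREDENTIAL = [StretchCred]);

-- 3. Enable Stretch on a table and start migrating eligible rows.
ALTER TABLE [dbo].[OrderHistory]
SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```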
29. Resources
SQL Server in VM best practices: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-performance-best-practices/
https://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#virtual-machines-limits
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-size-specs/
https://azure.microsoft.com/en-us/pricing/details/virtual-machines/
Disaster Recovery and High Availability for Azure Applications: https://msdn.microsoft.com/en-us/library/azure/dn251004.aspx
30. Q & A ?
James Serra, Big Data Evangelist
Email me at: JamesSerra3@gmail.com
Follow me at: @JamesSerra
Link to me at: www.linkedin.com/in/JamesSerra
Visit my blog at: JamesSerra.com (where this slide deck is posted via the “Presentations” link on the top menu)
Editor's Notes
HA/DR options with SQL Server in Azure and hybrid
What are all the high availability (HA) and disaster recovery (DR) options for SQL Server in an Azure VM (IaaS)? Which of these options can be used in a hybrid combination (Azure VM and on-prem)? I will cover features such as Always On AG, Failover cluster, Azure SQL Data Sync, Log Shipping, SQL Server data files in Azure, Mirroring, Azure Site Recovery, and Azure Backup.
https://blogs.msdn.microsoft.com/mast/2013/12/06/understanding-the-temporary-drive-on-windows-azure-virtual-machines/
SSD/HDD storage included in A-series, D-series, and Dv2-series VMs is local temporary storage.
DS-series, G-series, GS-series SSD's have less local temporary storage due to storage used for caching purposes to ensure predictable levels of performance associated with premium storage.
DS-series and GS-series support premium storage disks, which means you can attach SSD's to the VM (the other series support only standard storage disks).
The pricing and billing meters for the DS sizes are the same as D-series and the GS sizes are the same as G-series.
When you create an Azure virtual machine, it has a disk for the operating system mapped to drive C (size is 127GB) that is on Blob storage and a local temporary disk mapped to drive D. You can choose standard disk type or premium (if DS-series or GS-series) for your local temporary disk - the size of which is based on the series you choose (i.e. A0 is 20GB). You can also attach new disks - specify standard or premium, for standard: specify size (1GB-1023GB), for premium: specify P10, P20, or P30. the disks are .vhd files that reside in an Azure storage account
Because Azure stores three copies of your SQL databases in Azure Blob Storage, and Azure has redundant hardware (99.9% SLA for VMs), most companies don’t need to do anything more for HA (DR is a different story and Always On Availability Groups in different regions is one option for that). If the customer still feels like they need something more for HA, then Always On FCI or Always On Availability Groups in the same region would be the way to go. I lay out all the options in my deck at https://www.slideshare.net/jamserra/hadr-options-with-sql-server-in-azure-and-hybrid .
http://www.jamesserra.com/archive/2015/11/redundancy-options-in-azure-blob-storage/
Geo-replication in Azure disks does not support the data file and log file of the same database to be stored on separate disks. GRS replicates changes on each disk independently and asynchronously. This mechanism guarantees the write order within a single disk on the geo-replicated copy, but not across geo-replicated copies of multiple disks. If you configure a database to store its data file and its log file on separate disks, the recovered disks after a disaster may contain a more up-to-date copy of the data file than the log file, which breaks the write-ahead log in SQL Server and the ACID properties of transactions. If you do not have the option to disable geo-replication on the storage account, you should keep all data and log files for a given database on the same disk. If you must use more than one disk due to the size of the database, you need to deploy one of the disaster recovery solutions listed above to ensure data redundancy. From https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-sql-high-availability-dr/
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-sql-dr/
It is up to you to ensure that your database system possesses the HADR capabilities that the service-level agreement (SLA) requires. The fact that Azure provides high availability mechanisms, such as service healing for cloud services and failure recovery detection for the Virtual Machines (https://azure.microsoft.com/en-us/blog/service-healing-auto-recovery-of-virtual-machines), does not itself guarantee you can meet the desired SLA. These mechanisms protect the high availability of the VMs but not the high availability of SQL Server running inside the VMs. It is possible for the SQL Server instance to fail while the VM is online and healthy. Moreover, even the high availability mechanisms provided by Azure allow for downtime of the VMs due to events such as recovery from software or hardware failures and operating system upgrades.
I do not consider transactional replication as a HA/DR solution. Yes, data modifications are being pushed to subscribers but we're talking at the publication/article level. This is going to be a subset of the data (could include all the data, but that won't be enforced. I.e. you create a new table in the publisher database, and that will not automatically be pushed to the subscribers). Plus, tables without a primary key cannot be replicated. Can do SQL Server on-prem replication to Azure SQL Database. The intent of replication was very different than HA/DR. "I want this table pushed to a data mart" or something like that. However, replication is the de facto solution for customers requiring subsets of data to be available.
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-sql-high-availability-dr/
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-alwayson-availability-groups-gui-arm/
https://blogs.msdn.microsoft.com/igorpag/2013/09/02/sql-server-2012-alwayson-availability-group-and-listener-in-azure-vms-notes-details-and-recommendations/
https://blogs.msdn.microsoft.com/igorpag/2014/07/03/deep-dive-sql-server-alwayson-availability-groups-and-cross-region-virtual-networks-in-azure/
https://blogs.technet.microsoft.com/dataplatforminsider/2014/06/19/sql-server-alwayson-availability-groups-supported-between-microsoft-azure-regions/
http://sqlha.com/2012/04/13/allans-alwayson-availability-groups-faq/
From Luis Carlos Vargas Herring:
https://blogs.msdn.microsoft.com/clustering/2014/11/13/introducing-cloud-witness/
Both SQL FCI and AGs depend on Windows Cluster to handle quorum.
Yes, in Windows Server 2016 they can leverage its Cloud Witness.
This removes the need for a separate Azure VM to host the witness.
We plan to enhance our AlwaysOn template to use this in H1CY17.
File Share Witness: https://www.derekseaman.com/tag/file-share-witness, https://technet.microsoft.com/en-us/library/cc731739.aspx
Availability Group listeners: https://msdn.microsoft.com/en-us/library/hh213417.aspx, https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-alwayson-int-listener
Does an always on availability group listener require an additional VM?
No, the listener is merely a networking concept (a way for client connections to be routed to the primary replica). It needs an Azure Load Balancer (https://azure.microsoft.com/en-us/services/load-balancer/, https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-ps-alwayson-int-listener)
An availability group has been configured with 2 replicas (primary P and secondary S1) for automatic failover and a Listener within the virtual network VNET1 in Region 1 (e.g. West US). This guarantees high availability of SQL Server in case of failures within the region. A secure tunnel has been configured between VNET1 and another virtual network VNET2 in Region 2 (e.g. Central US). The availability group has been expanded with a third replica (S2) configured for manual failover in this VNET to enable disaster recovery in case of failures impacting Region1. Finally, the Listener has been configured to route connections to the primary replica, irrespective of which region hosts it. This allows client applications connect to the primary replica, with the same connection string, after failing over between Azure regions.
https://blogs.technet.microsoft.com/dataplatforminsider/2014/06/19/sql-server-alwayson-availability-groups-supported-between-microsoft-azure-regions/
https://blogs.msdn.microsoft.com/igorpag/2014/12/22/sql-server-2014-high-availability-and-multi-datacenter-disaster-recovery-with-multiple-azure-ilbs/
https://azure.microsoft.com/en-us/blog/high-availability-for-a-file-share-using-wsfc-ilb-and-3rd-party-software-sios-datakeeper/
Replaces SQL Server failover cluster
Requires shared storage
The only cost advantage for FCI is if a customer must use SQL Standard and an older version of SQL Server.
AGs were only supported on SQL Enterprise before SQL16.
In SQL16 they’re supported also on Standard edition.
https://msdn.microsoft.com/en-us/library/mt614935.aspx
Basic availability groups use a subset of features compared to advanced availability groups on SQL Server 2016 Enterprise Edition. Basic availability groups include the following limitations:
Limit of two replicas (primary and secondary).
No read access on secondary replica.
No backups on secondary replica.
No support for replicas hosted on servers running a version of SQL Server prior to SQL Server 2016 Community Technology Preview 3 (CTP3).
No support for adding or removing a replica to an existing basic availability group.
Support for one availability database.
Basic availability groups cannot be upgraded to advanced availability groups. The group must be dropped and its databases added to a new group whose servers all run SQL Server 2016 Enterprise Edition.
Basic availability groups are only supported for Standard Edition servers.
https://www.concurrency.com/blog/july-2016/sql-server-2016-basic-availability-groups
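Given those limitations, a basic availability group is created with the BASIC option. This is a minimal T-SQL sketch with hypothetical server, domain, and database names:

```sql
-- Minimal basic availability group: two replicas, one database, no
-- readable secondary. Server, domain, and database names are illustrative.
CREATE AVAILABILITY GROUP [BAG1]
WITH (BASIC)
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQLVM1' WITH (
        ENDPOINT_URL = N'TCP://SQLVM1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLVM2' WITH (
        ENDPOINT_URL = N'TCP://SQLVM2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);
```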
https://blogs.technet.microsoft.com/uspartner_ts2team/2016/11/22/azure-single-instance-virtual-machine-sla/
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-manage-availability/
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-configure-availability/
In ARM:
You can’t change the VM’s Availability Set once the VM is created
You can’t add an Azure VM to an Availability Set once the VM is created
You can’t remove a VM from an Availability Set
Instead, use PowerShell: https://buildwindows.wordpress.com/2016/02/25/add-or-change-an-arm-virtual-machines-availability-set/
From Luis Carlos Vargas Herring:
“Availability Set” is an Azure concept (not a SQL technology). The Availability Set merely tells Azure to host VMs in different failure domains (racks) and upgrade domains. This ensures that a failure/upgrade only brings down one VM at a time.
This works well for stateless apps (like web servers), but it’s not enough for stateful apps that need replicas with an exact copy of the data (like SQL Server). SQL Availability Groups solve this by replicating data from a primary SQL instance to one or more secondary SQL instances and orchestrating a failover in case of a failure.
An Azure Availability Set is a deployment constraint: it tells Azure to allocate the VMs inside it into different fault and upgrade domains, so that a rack failure or a host maintenance operation never impacts more than one VM at a time. That is a core safeguard for replicated components (e.g. web servers or SQL Server replicas), but to keep the SQL Server replicas themselves in sync you still need AlwaysOn Availability Groups.
https://azure.microsoft.com/en-us/blog/azure-sql-data-sync-refresh/?v=17.23h
https://azure.microsoft.com/en-us/documentation/articles/sql-database-get-started-sql-data-sync/
https://www.mssqltips.com/sqlservertip/3062/understanding-sql-data-sync-for-sql-server/
https://msdn.microsoft.com/en-us/library/hh868047.aspx
Compare SQL Data Sync to Active Geo-Replication
When it comes to hybrid solutions, we are introducing the industry-first concept of stretching tables into Azure for operational databases.
We are also making it much easier to migrate SQL Server from on-prem to Azure VMs. Currently, when you move SQL Server, only the schema and data are migrated. With SQL Server v.Next you will be able to migrate system objects and SQL settings as well, so migration becomes literally a point-and-click experience. A wizard-driven experience will provide gallery image and VM size recommendations.
Notes
Now let’s talk about some of these innovative hybrid scenarios that can complement your on-premises SQL Server investments. Stretch Database is one of these unique hybrid scenarios that only Microsoft provides, and it can be extremely valuable in your data strategy.
We know your OLTP databases are growing rapidly, and you need to think about how to manage your data cost-effectively. More importantly, you need to think about what you’re doing with historical data and whether you want it to go offline onto tape. Once it goes to tape, it’s no longer queryable. What if you wanted the historical data at your fingertips, but didn’t want it to reside on premium storage with your hot data because that’s too costly?
We can solve that problem with the Stretch Database hybrid scenario. Without modifying your app, you can stretch your database to Azure. You don’t even have to do any of the heavy lifting to make this work: you just set the policy you want to apply to the historical data and run your queries as you normally would. SQL Server determines whether the table has been stretched and retrieves the data from Azure. Note also that, as an engineering design point for this solution, the technology does not impact the performance of writes on the table being stretched. You might ask: I am stretching historical customer data that contains sensitive information, so what about data security? The best part is that Stretch Database works with our new Always Encrypted technology, which protects the columns you choose at rest and in motion, even in the memory buffer pool, so sensitive customer information stays secure.
Stretch Database offers a great value proposition: it saves you money and provides easy access to historical customer data, so you can improve customer experiences without requiring application modifications.
This is just one example of how cloud-first innovation allows us to deliver new hybrid scenarios that our competitors simply can’t deliver or haven’t thought of.
Source: https://msdn.microsoft.com/en-us/library/dn935011(v=sql.130).aspx
Stretch Database lets you archive your historical data transparently and securely. In SQL Server 2016 Community Technology Preview 2 (CTP2), Stretch Database stores your historical data in the Microsoft Azure cloud. After you enable Stretch Database, it silently migrates your historical data to an Azure SQL Database.
You don't have to change existing queries and client apps. You continue to have seamless access to both local and remote data.
Your local queries and database operations against current data typically run faster.
You typically enjoy reduced cost and complexity.
Source: https://msdn.microsoft.com/en-us/library/mt169378(v=sql.130).aspx
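To make this concrete, enabling Stretch Database takes roughly the following T-SQL. This is a sketch: the database, Azure SQL Database server, and credential names are hypothetical, and the exact syntax may differ between CTP builds and later releases.

```sql
-- 1. Enable the feature at the instance level.
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- 2. Enable Stretch for the database, pointing it at an Azure SQL
--    Database server (server and credential names are hypothetical).
ALTER DATABASE [SalesDB]
SET REMOTE_DATA_ARCHIVE = ON (
    SERVER = N'mystretchserver.database.windows.net',
    CREDENTIAL = [StretchCred]
);
```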
Concepts and architecture for Stretch Database
Terms
Local database. The on-premises SQL Server 2016 Community Technology Preview 2 (CTP2) database.
Remote endpoint. The location in Microsoft Azure that contains the database’s remote data. In SQL Server 2016 Community Technology Preview 2 (CTP2), this is an Azure SQL Database server. This is subject to change in the future.
Local data. Data in a database with Stretch Database enabled that will not be moved to Azure based on the Stretch Database configuration of the tables in the database.
Eligible data. Data in a database with Stretch Database enabled that has not yet been moved, but will be moved to Azure based on the Stretch Database configuration of the tables in the database.
Remote data. Data in a database with Stretch Database enabled that has already been moved to Azure.
Architecture
Stretch Database leverages the resources in Microsoft Azure to offload archival data storage and query processing.
When you enable Stretch Database on a database, it creates a secure linked server definition in the on-premises SQL Server. This linked server definition has the remote endpoint as the target. When you enable Stretch Database on a table in the database, it provisions remote resources and begins to migrate eligible data, if migration is enabled.
Queries against tables with Stretch Database enabled automatically run against both the local database and the remote endpoint. Stretch Database leverages processing power in Azure to run queries against remote data by rewriting the query. You can see this rewriting as a "remote query" operator in the new query plan.
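Under the same assumptions as above, stretching an individual table and querying it might look like this (the table name is illustrative):

```sql
-- Enable Stretch on a table; eligible rows begin migrating to Azure.
ALTER TABLE [dbo].[OrderHistory]
SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));

-- The same query now spans local and remote data transparently; the
-- execution plan shows a "Remote Query" operator for the Azure portion.
SELECT COUNT(*) AS TotalOrders FROM [dbo].[OrderHistory];
```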