This document provides an overview of deploying and configuring Azure SQL resources. It discusses deploying SQL Server on virtual machines using various methods like the Azure portal, PowerShell scripts, and ARM templates. It also covers deploying SQL databases using the PaaS offerings like single databases, elastic pools, and hyperscale. Key aspects covered include high availability and disaster recovery options, storage configurations, licensing models, and migration strategies.
Module 1 provides an introduction to Azure database administration. It describes the roles of Azure database administrators and other data platform roles. It also describes the different SQL deployment options on Azure including SQL Server VMs, Azure SQL Database, and Azure SQL Managed Instance. Key features of each option like high availability, backups, and scaling are discussed.
This document describes a case study for the DP-300 exam on administering relational databases on Microsoft Azure. It provides background on the existing environment including the network, identity, and database configurations. It then outlines requirements, planned changes, technical requirements, security and compliance requirements, and business requirements for the scenario. The first question asks about migrating databases from an on-premises SQL Server to Azure SQL Database and minimizing downtime. Subsequent questions cover topics like compression, migration options, and backup configurations.
This document provides an overview of SQL Server clustering. It discusses the importance of high availability and introduces some key concepts in clustering like nodes, shared storage, heartbeats, failover and failback. It also covers the basic architecture of a SQL Server cluster, including the virtual server and different types of clusters. Some advantages and disadvantages of clustering are outlined. Finally, it discusses some terminology used in clustering and provides a checklist for preparing Windows clustering.
This document provides an overview of Module 5: Optimize query performance in Azure SQL. The module contains 3 lessons that cover analyzing query plans, evaluating potential improvements, and reviewing table and index design. Lesson 1 explores generating and comparing execution plans, understanding how plans are generated, and the benefits of the Query Store. Lesson 2 examines database normalization, data types, index types, and denormalization. Lesson 3 describes wait statistics, tuning indexes, and using query hints. The lessons aim to help administrators optimize query performance in Azure SQL.
Microsoft Azure Sentinel is a cloud-native SIEM service with built-in AI for analytics that removes the cost and complexity of achieving a centralized, near real-time view of the active threats in your environment. Koby Koren from the Azure Sentinel engineering team walks through the entire solution with an end-to-end demonstration, from setup through queries, investigations, and more.
Azure Sentinel is in preview today. Follow the link to try for yourself https://aka.ms/AzureSentinel
This one-hour presentation covers the tools and techniques for migrating SQL Server databases and data to Azure SQL DB or SQL Server on VM. Includes SSMA, DMA, DMS, and more.
Snowflake concepts and hands-on expertise to help you get started implementing data warehouses using Snowflake, along with the information and skills needed to master Snowflake essentials.
Microsoft SQL Server Database Administration.pptx, by samtakke1
This document provides instructions for installing Microsoft SQL Server 2019 and SQL Server Management Studio (SSMS) on a Windows system. It discusses downloading the appropriate SQL Server 2019 installation file, running the setup wizard to install SQL Server and configure authentication modes. It also covers downloading and installing SSMS, and restoring the sample AdventureWorks2019 database for querying.
An introduction to the Snowflake data warehouse and its architecture for big-data companies: centralized data management, Snowpipe and the COPY INTO command for data loading, and stream loading versus batch processing.
Practical examples of using extended events, by Dean Richards
Many presentations I have seen about SQL Server extended events focus on the mechanics of setting them up and querying the definitions back from the DMVs. While useful, those talks always left me missing the point of when and how to use extended events: what are they, and why should you be using them? This presentation shows three real-world examples of using extended events. Each example includes demos on configuring the event and using the data, and you will learn how to:
• Use extended events rather than queries against DMVs or tracing
• Collect query information
• Catch and examine deadlocks
• Collect actual plans
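As a concrete sketch of the deadlock example above: the following T-SQL defines an Extended Events session that writes deadlock reports to an event file. The session name and file name are illustrative assumptions; `sqlserver.xml_deadlock_report` and `package0.event_file` are standard Extended Events objects. The statement is held here in a Python string, as a deployment script might carry it.

```python
# Illustrative T-SQL for an Extended Events session that captures deadlocks.
# Session name and file name are assumptions; the event and target names are
# standard Extended Events objects in SQL Server.
DEADLOCK_SESSION_SQL = """
CREATE EVENT SESSION [capture_deadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file (SET filename = N'capture_deadlocks.xel')
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [capture_deadlocks] ON SERVER STATE = START;
"""

def statements(script: str) -> list[str]:
    """Split a simple T-SQL script into individual statements on ';'."""
    return [s.strip() for s in script.split(";") if s.strip()]

if __name__ == "__main__":
    # Print the first line of each statement as a quick sanity check.
    for stmt in statements(DEADLOCK_SESSION_SQL):
        print(stmt.splitlines()[0])
```

A script like this would hand each statement to the database driver; the captured `.xel` file can then be opened in SSMS or queried with `sys.fn_xe_file_target_read_file`.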
This document provides an overview of OpenStack. It begins with session goals of making the audience familiar with OpenStack, its community and architecture. It then covers the history, terminology, services, architecture, installation methods and risks. Key components discussed include Nova (compute), Neutron (networking), Cinder (block storage), Swift (object storage), Glance (image repository), Keystone (identity), Horizon (dashboard) and Heat (orchestration). The document provides details on each component and the OpenStack project timeline.
Learn how Amazon RDS makes it easy to deploy and operate a highly available and scalable SQL Server database in the cloud with cost-efficient and resizable capacity.
SQL Server High Availability Solutions (Pros & Cons), by Hamid J. Fard
The proper SQL Server high availability solution depends heavily on the business objectives and IT operations objectives. It sometimes happens that we have several candidate solutions on the table to implement.
This document provides a summary of a presentation on automation and orchestration for 5G and NFV. The presentation covers Cisco's vision for 5G automation using ETSI MANO frameworks. It demonstrates NFV Manager and Service Design Manager apps for automating NFV lifecycle management and network service design. The presentation also discusses use cases for 5G end-to-end orchestration and cross-domain automation architectures.
This document provides an agenda and introduction for a presentation on Apache Cassandra and DataStax Enterprise. The presentation covers an introduction to Cassandra and NoSQL, the CAP theorem, Apache Cassandra features and architecture including replication, consistency levels and failure handling. It also discusses the Cassandra Query Language, data modeling for time series data, and new features in DataStax Enterprise like Spark integration and secondary indexes on collections. The presentation concludes with recommendations for getting started with Cassandra in production environments.
Microsoft Azure Data Factory Hands-On Lab Overview Slides, by Mark Kromer
This document outlines modules for a lab on moving data to Azure using Azure Data Factory. The modules will deploy necessary Azure resources, lift and shift an existing SSIS package to Azure, rebuild ETL processes in ADF, enhance data with cloud services, transform and merge data with ADF and HDInsight, load data into a data warehouse with ADF, schedule ADF pipelines, monitor ADF, and verify loaded data. Technologies used include PowerShell, Azure SQL, Blob Storage, Data Factory, SQL DW, Logic Apps, HDInsight, and Office 365.
This document outlines an agenda for a 90-minute workshop on Snowflake. The agenda includes introductions, an overview of Snowflake and data warehousing, demonstrations of how users utilize Snowflake, hands-on exercises loading sample data and running queries, and discussions of Snowflake architecture and capabilities. Real-world customer examples are also presented, such as a pharmacy building new applications on Snowflake and an education company using it to unify their data sources and achieve a 16x performance improvement.
(BDT208) A Technical Introduction to Amazon Elastic MapReduce, by Amazon Web Services
"Amazon EMR provides a managed framework which makes it easy, cost effective, and secure to run data processing frameworks such as Apache Hadoop, Apache Spark, and Presto on AWS. In this session, you learn the key design principles behind running these frameworks on the cloud and the feature set that Amazon EMR offers. We discuss the benefits of decoupling compute and storage and strategies to take advantage of the scale and the parallelism that the cloud offers, while lowering costs. Additionally, you hear from AOL’s Senior Software Engineer on how they used these strategies to migrate their Hadoop workloads to the AWS cloud and lessons learned along the way.
In this session, you learn the benefits of decoupling storage and compute and allowing them to scale independently; how to run Hadoop, Spark, Presto and other supported Hadoop Applications on Amazon EMR; how to use Amazon S3 as a persistent data-store and process data directly from Amazon S3; dDeployment strategies and how to avoid common mistakes when deploying at scale; and how to use Spot instances to scale your transient infrastructure effectively."
The document discusses Azure Data Factory v2. It provides an agenda that includes topics like triggers, control flow, and executing SSIS packages in ADFv2. It then introduces the speaker, Stefan Kirner, who has over 15 years of experience with Microsoft BI tools. The rest of the document consists of slides on ADFv2 topics like the pipeline model, triggers, activities, integration runtimes, scaling SSIS packages, and notes from the field on using SSIS packages in ADFv2.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We’ll cover how each service might help support your application, how much each service costs, and how to get started.
Speaker:
Shaun Pearce, AWS Solutions Architect
This document discusses using the Cloud Adoption Framework (CAF) Terraform modules to create Azure landing zones. It begins with an introduction to Azure landing zones and their purpose. It then discusses everything-as-code and using Terraform to deploy environments. The remainder of the document focuses on the benefits of using the CAF Terraform modules, including consistency, maintainability, reusability, and delivering value. It provides an overview of the core principles and fundamental building blocks of the CAF modules. Finally, it demonstrates how to get started with the CAF Terraform landing zones.
This document discusses SQL Server 2012 AlwaysOn, a high availability and disaster recovery solution. It provides an overview of AlwaysOn availability groups, which allow for multiple synchronous or asynchronous copies of databases across instances. Key features include readable secondary replicas, automatic instance and database failover, and the ability to perform backups on secondary replicas. The document also demonstrates AlwaysOn configuration and functionality through a virtual machine-based lab environment.
This document provides resources for learning about the different phases and components of Azure Purview including documentation, training courses, how to create subscriptions and accounts, set up collections and scans, understand the data map and lineage, best practices, and connect data sources. It also lists some competitors to Azure Purview and provides pricing information for development/trial usage based on capacity units and hours for the data map, scanning, and resource set processing.
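The capacity-unit pricing model mentioned above can be sketched mechanically. This is a minimal estimate function assuming a bill shaped as data-map capacity units times hours plus scanning vCore-hours; the default rates are hypothetical placeholders, not Microsoft's published prices.

```python
# Sketch of a Purview-style cost estimate for development/trial usage.
# The billing shape (capacity units x hours for the data map, plus scanning
# vCore-hours) follows the model described above; all rates below are
# hypothetical placeholders and must be replaced with current list prices.
def estimate_monthly_cost(capacity_units: int,
                          hours: float,
                          scan_vcore_hours: float,
                          rate_per_cu_hour: float = 0.40,
                          rate_per_scan_vcore_hour: float = 0.63) -> float:
    data_map = capacity_units * hours * rate_per_cu_hour
    scanning = scan_vcore_hours * rate_per_scan_vcore_hour
    return round(data_map + scanning, 2)

if __name__ == "__main__":
    # One capacity unit always on (~730 h/month) plus 100 scan vCore-hours.
    print(estimate_monthly_cost(1, 730, 100))
```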
Microsoft Azure Offerings and New Services, by Mohamed Tawfik
Microsoft Azure offers a wide range of computing services including networking, compute, storage, databases, developer tools, and analytics services. It provides benefits such as pay-as-you-go pricing, quick setup, scalability, redundancy, and high availability. Microsoft has seen incredible growth in Azure due to its ability to convert its large enterprise customer base into Azure customers and build hybrid cloud solutions. The presentation highlights several new Azure services and features in networking, compute, storage, databases, and security.
This presentation is for those of you who are interested in moving your on-prem SQL Server databases and servers to Azure virtual machines (VMs) in the cloud so you can take advantage of all the benefits of being in the cloud. This is commonly referred to as a “lift and shift” as part of an infrastructure-as-a-service (IaaS) solution. I will discuss the various Azure VM sizes and options, migration strategies, storage options, high availability (HA) and disaster recovery (DR) solutions, and best practices.
Get Started with Microsoft Azure Virtual Machine, by Lai Yoong Seng
The document discusses infrastructure as a service (IaaS) on Microsoft Azure, including how to use Azure storage, networking, and compute resources to deploy virtual machines. It provides an overview of key Azure services and features like virtual networks, availability sets, and disk types. The document also previews several demonstrations that will show how to create Azure storage and virtual machines.
The document provides information about upcoming presentations at the Brisbane Azure User Group from January 2019 to November 2019. It also includes announcements about new Azure services and features such as Azure Functions Premium Plan, Azure Search storage optimized tiers, data discovery and classification for Azure SQL Data Warehouse, Azure Front Door service reaching general availability, and Azure Backup for SQL Server in Azure VMs also reaching general availability. Additionally, it advertises events such as the Global Azure Bootcamp and Integration Down Under conference.
This document discusses Microsoft SQL Server options in Azure. It begins by explaining the differences between Azure SQL and on-premises SQL Server, noting that Azure SQL is based on the latest SQL Server Enterprise version in a PaaS model and is not fully compatible with on-premises SQL Server. It then outlines the various options for SQL in Azure, including SQL Server on VMs, containers, and Azure SQL with DTU or vCore pricing/scaling models. The document provides details on features, pricing tiers, scaling, security, and other considerations for using SQL in Azure. It concludes that while migration may require adjustments, Azure SQL provides many advantages over on-premises SQL Server.
The document summarizes key topics from a lecture on database design for enterprise systems, including:
1) Logical and physical database design steps such as conceptual modeling and converting models to schemas.
2) Database security topics like authentication, authorization, and data encryption.
3) Characteristics of enterprise database environments including high availability, load balancing, clustering, replication, and integrating databases with continuous integration systems.
The new Microsoft Azure SQL Data Warehouse (SQL DW) is an elastic data warehouse-as-a-service and is a Massively Parallel Processing (MPP) solution for "big data" with true enterprise class features. The SQL DW service is built for data warehouse workloads from a few hundred gigabytes to petabytes of data with truly unique features like disaggregated compute and storage allowing for customers to be able to utilize the service to match their needs. In this presentation, we take an in-depth look at implementing a SQL DW, elastic scale (grow, shrink, and pause), and hybrid data clouds with Hadoop integration via Polybase allowing for a true SQL experience across structured and unstructured data.
The document discusses monitoring and optimizing performance in Azure SQL. It covers establishing performance baselines using tools like Azure Monitor and Performance Monitor. It discusses using Extended Events and Query Performance Insights to identify expensive queries. It also covers configuring resources like storage, TempDB, and using the SQL VM resource provider. It discusses maintenance tasks like index maintenance and statistics updates. It covers database configuration options and intelligent query processing features in Azure SQL.
Azure templates can be used to deploy and manage Azure resources in a declarative and repeatable way. They define the resources to deploy, including virtual machines, databases, and networking components, as well as the relationships between resources. Azure templates allow for idempotent deployments, simplified orchestration of rollbacks and upgrades, and cross-resource configuration and updates. They are stored as JSON or ARM template files in source control and can be deployed via the Azure CLI, PowerShell, or REST APIs. A wide range of community-created quickstart templates are available on GitHub for common workload deployments.
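A minimal sketch of the declarative template idea described above, assuming a single storage-account resource. Building the JSON from a Python dict keeps the example self-contained; in practice templates are authored as `.json` files and deployed with the Azure CLI or PowerShell. The schema URL and `apiVersion` shown are commonly published values but should be checked against current documentation.

```python
import json

# Minimal ARM-style template as a Python dict. Treat the schema URL and
# apiVersion as assumptions to verify against current Azure documentation.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountName": {"type": "string"},
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "[parameters('storageAccountName')]",
            "location": "[resourceGroup().location]",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

if __name__ == "__main__":
    # Serialize to the JSON form that would be saved in source control
    # and deployed, e.g. with `az deployment group create --template-file ...`.
    print(json.dumps(template, indent=2))
```

Because the template is declarative, redeploying it with the same parameters is idempotent: the deployment converges the resource group to the described state rather than re-running imperative steps.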
Microsoft Azure is Microsoft's cloud computing platform which enables rapid development of great solutions using its compute, storage, network and application services. The presentation focuses on how to get started with Azure and on fundamentals of some of the core features of Azure which every developer needs to know like Virtual Machines, SQL Database, App Services, Storage accounts and so on. The session will also include some quick demos, best practices, and tips for Azure Development. There will be something for everyone who is looking for advancing their technical skills with Microsoft Azure.
During the “Architecting for the Cloud” breakfast seminar where we discussed the requirements of modern cloud-based applications and how to overcome the confinement of traditional on-premises infrastructure.
We heard from data management practitioners and cloud strategists from Amazon Web Services and NuoDB about how organizations are meeting the challenges associated with building new or migrating existing applications to the cloud.
Finally, we discussed how the right cloud-based architecture can:
- Handle rapid user growth by adding new servers on demand
- Provide high performance even in the face of heavy application usage
- Offer around-the-clock resiliency and uptime
- Provide easy and fast access across multiple geographies
- Deliver cloud-enabled apps in public, private, or hybrid cloud environments
This document provides an introduction to Azure SQL Database. It describes Azure SQL Database as a fully managed relational database service. It notes that Azure SQL Database differs from SQL Server in some ways, such as not supporting certain T-SQL constructs and commands. The document also discusses server provisioning, database deployment, monitoring, and new service tiers for Azure SQL Database that offer different levels of scalability, performance, and business continuity features.
Join us for a deep dive into Windows Azure. We’ll start with a developer-focused overview of this brave new platform and the cloud computing services that can be used either together or independently to build amazing applications. As the day unfolds, we’ll explore data storage, SQL Azure™, and the basics of deployment with Windows Azure. Register today for these free, live sessions in your local area.
The document compares different backup strategies for backing up data to Microsoft Azure, including using Azure Backup as a service, running backup software in an Azure IaaS VM, and other options. It notes that Azure Backup has unlimited storage and backup compute costs are included, while running backup software in an IaaS VM has storage limits and monthly compute fees. The document also discusses pricing models and capabilities for long-term retention of backups in Azure storage.
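The trade-off above can be made concrete with a hedged cost comparison: a backup-as-a-service model bills per protected instance plus storage, while running backup software in an IaaS VM adds a flat monthly compute fee on top of storage. All rates below are hypothetical placeholders for illustration, not published prices.

```python
# Hedged comparison of two backup strategies; every rate is a hypothetical
# placeholder. Backup-as-a-service: per-instance fee plus storage.
def backup_service_cost(instances: int, storage_gb: float,
                        per_instance: float = 10.0,
                        per_gb: float = 0.02) -> float:
    return round(instances * per_instance + storage_gb * per_gb, 2)

# Backup software in an IaaS VM: flat monthly VM compute fee plus storage.
def iaas_vm_backup_cost(storage_gb: float,
                        vm_monthly: float = 140.0,
                        per_gb: float = 0.02) -> float:
    return round(vm_monthly + storage_gb * per_gb, 2)

if __name__ == "__main__":
    # Five protected instances, 1 TB of backup storage.
    print(backup_service_cost(5, 1000), iaas_vm_backup_cost(1000))
```

With these placeholder rates the service model wins at small scale, and the crossover point shifts as instance counts and storage volumes grow, which is exactly the comparison the document walks through.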
Designing Azure compute and storage infrastructure (Abhishek Sur): how to design compute and storage, a description of premium-tier machines, and a demonstration using Iometer to compare the cost and performance of two different tiers of machines.
This document discusses business continuity challenges related to increasing data growth and insufficient data protection solutions. It presents Microsoft solutions for addressing these challenges, including Azure Site Recovery for orchestrated replication and recovery across on-premises and Azure environments. The solutions aim to automate processes, eliminate tape management, increase protection breadth and depth, and provide testable disaster recovery.
This document contains definitions and explanations of various Azure database and networking terminology. It includes definitions for concepts like license mobility, constant time, provisioned compute tiers, serverless compute tiers, elastic query, ExpressRoute connections, peering, BGP sessions, Microsoft peering and private peering, Azure virtual networks, hybrid Azure Active Directory, Azure Sentinel, geo-replicated databases, nodes, snapshots, Always On availability groups, auto-failover groups, Azure Arc, Azure SQL Edge, row-level security, privilege escalation, Azure IoT Edge, the Parquet format, Microsoft Volume Licensing, license mobility, software assurance, scale automatically, TDS, Azure Hybrid Use Benefits, Azure Key Vault, the principle
This document contains 30 interview questions about Microsoft Azure. It begins with basic questions like what is Azure and the Azure portal. It then covers Azure roles, storage, networking, security, databases, virtual machines, and other services. The questions range from conceptual to technical and would be useful for someone preparing for an Azure interview.
This document provides an overview of high availability and disaster recovery strategies for Azure SQL solutions. It discusses recovery time and point objectives and available options like Always On availability groups and failover cluster instances. It also covers backup and restore, factors to consider in a HADR strategy, and differences between infrastructure as a service and platform as a service solutions. Specific options covered include availability sets, availability zones, log shipping, Azure Site Recovery, temporal tables, active geo-replication, and auto failover groups.
This document provides an overview of automating database tasks for Azure SQL. It covers automating deployments using ARM templates, Bicep, and tools like PowerShell and CLI. It also discusses creating scheduled SQL Server Agent jobs for maintenance activities and configuring alerts and notifications. Finally, it introduces automation options for Azure PaaS services like Azure Policy, Automation, elastic jobs, and Logic Apps.
This document provides an overview of implementing a secure environment for an Azure SQL database. It discusses authentication options like Azure Active Directory authentication and SQL authentication. It also covers encrypting data at rest using Transparent Data Encryption (TDE) and encrypting data in transit. Additionally, it describes configuring firewall rules and private endpoints for network security. The document demonstrates configuring an Active Directory admin, permission chaining, and Always Encrypted for encrypting column values. It also discusses using Azure Key Vault for securely storing encryption keys.
02_DP_300T00A_Plan_implement.pptx
1. Module 2: Plan and implement data platform resources
Introduction to deploying Azure resources, choosing an appropriate database offering, configuring Azure resources, and migration strategies for Azure
DP-300 Administering Microsoft Azure SQL Solutions
2. Module objectives
After this module you will be able to:
Deploy resources using manual methods
Recommend an appropriate database offering based on requirements
Configure Azure SQL resources
Evaluate and implement a strategy for moving a database to Azure SQL
4. Lesson 1 objectives
Explore the basics of SQL Server in an Infrastructure as a Service (IaaS) offering
Learn the available options for provisioning and deployment
Explore performance and security options available
Understand the high availability and disaster recovery options
5. Azure SQL platform offerings
SQL Server on Azure Virtual Machines – best for lift and shift and/or workloads requiring OS-level access
Azure SQL Managed Instance – best for modernizing existing apps
Azure SQL Database – best for supporting modern cloud apps
Azure SQL Edge – best for extending apps to the IoT edge
Azure SQL enabled by Azure Arc – run Azure SQL on premises and in multicloud environments
Benefits: elimination of hardware and software, operational efficiency and savings, improved business agility
6. Azure deployment options
You can deploy Azure resources using the following methods:
Using the Azure portal
Running PowerShell or Azure CLI scripts
Deploying Azure Resource Manager (ARM) templates
7. Deployment using the Azure portal
This method is useful for single VM deployments
The portal provides step-by-step instructions for deployment
This is a manual process
Not easily repeatable in a consistent manner
8. Deployment using PowerShell/Azure CLI
While not as flexible as ARM templates, PowerShell and the Azure CLI provide a repeatable methodology for deploying resources
PowerShell and the Azure CLI are imperative frameworks, which means they execute a specific order of operations. For example:
1. Create a resource group
2. Create a virtual network
3. Add a virtual network interface card (NIC) to the network
4. Create a VM in the resource group, on the virtual network, with that NIC
This approach can offer flexibility and ease of use for some situations compared to the relative complexity of ARM templates
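The four imperative steps can be sketched with the Az PowerShell module. This is a minimal illustration, not course material: the resource names, region, VM size, and marketplace image are all placeholder assumptions, and it presumes the Az module is installed and you are signed in with Connect-AzAccount.

```powershell
# Placeholder names and region; substitute your own values.
$rg  = 'rg-sqlvm-demo'
$loc = 'eastus'

# 1. Create a resource group
New-AzResourceGroup -Name $rg -Location $loc

# 2. Create a virtual network with one subnet
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '10.0.0.0/24'
$vnet   = New-AzVirtualNetwork -Name 'vnet-demo' -ResourceGroupName $rg `
            -Location $loc -AddressPrefix '10.0.0.0/16' -Subnet $subnet

# 3. Add a NIC attached to that subnet
$nic = New-AzNetworkInterface -Name 'nic-demo' -ResourceGroupName $rg `
         -Location $loc -SubnetId $vnet.Subnets[0].Id

# 4. Create the VM on that network with that NIC (Get-Credential prompts
#    interactively for the local administrator account)
$vmConfig = New-AzVMConfig -VMName 'vm-sql-demo' -VMSize 'Standard_D4s_v3' |
  Set-AzVMOperatingSystem -Windows -ComputerName 'vmsqldemo' -Credential (Get-Credential) |
  Set-AzVMSourceImage -PublisherName 'MicrosoftSQLServer' -Offer 'sql2019-ws2022' `
    -Skus 'standard' -Version 'latest' |
  Add-AzVMNetworkInterface -Id $nic.Id
New-AzVM -ResourceGroupName $rg -Location $loc -VM $vmConfig
```

Note how the script encodes the ordering itself: each step consumes the objects produced by the previous one.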
9. Azure Resource Manager templates
ARM templates provide a repeatable, declarative process for deploying Azure resources at scale
Allow for integration with popular continuous integration and continuous deployment (CI/CD) tools
Allow you to create nearly any Azure resource or option
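In contrast to an imperative script, a declarative ARM template only states the desired end state and lets Resource Manager work out the ordering. The skeleton below is a hedged illustration deploying a single Azure SQL logical server; the server name and apiVersion are example values, not from the course.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminLogin":    { "type": "string" },
    "adminPassword": { "type": "securestring" }
  },
  "resources": [
    {
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2021-11-01",
      "name": "sql-demo-server",
      "location": "[resourceGroup().location]",
      "properties": {
        "administratorLogin": "[parameters('adminLogin')]",
        "administratorLoginPassword": "[parameters('adminPassword')]"
      }
    }
  ]
}
```

Because the template is declarative, deploying it twice is idempotent: Resource Manager reconciles what exists against what is described.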
10. SQL Server IaaS Agent extension
Reduces administrative overhead
Allows you to view information about your SQL Server's configuration and storage utilization
Code that is executed on your VM post-deployment, for deploying configurations, anti-virus, or a Windows feature
11. SQL Server IaaS Agent Extension
SQL Server Automated Backup
SQL Server Automated Patching
Azure Key Vault Integration
Defender for Cloud portal integration
View disk utilization in the portal
Flexible licensing
Flexible version or edition
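An existing Azure VM running SQL Server can be registered with the IaaS Agent extension from PowerShell using the Az.SqlVirtualMachine module. A minimal sketch, assuming the VM already exists; the resource group, VM name, and license type are placeholders:

```powershell
# Register an existing Azure VM as a SQL virtual machine resource, which
# enables the IaaS Agent extension features listed above.
# -LicenseType may be 'PAYG', 'AHUB', or 'DR'.
New-AzSqlVM -ResourceGroupName 'rg-sqlvm-demo' -Name 'vm-sql-demo' `
  -Location 'eastus' -LicenseType 'PAYG'
```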
12. SQL Server licensing models in Azure
Without Software Assurance: use a SQL Server image from the marketplace and pay per minute. This is known as Pay as You Go licensing
With the Azure Hybrid Benefit, your options are:
Bring Your Own License (BYOL) – this is a check box in the Azure portal
Manually installing SQL Server on a Windows or Linux VM
Uploading a SQL Server VM image (VHD) to Azure and deploying that image as an Azure VM
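For a SQL VM that is already registered with the IaaS Agent extension, the license model can be switched without redeploying. A hedged sketch using the Az.SqlVirtualMachine module; the names are placeholders:

```powershell
# Switch an existing SQL VM resource from pay-as-you-go to
# Azure Hybrid Benefit (bring your own license).
Update-AzSqlVM -ResourceGroupName 'rg-sqlvm-demo' -Name 'vm-sql-demo' `
  -LicenseType 'AHUB'
```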
13. Azure Virtual Machine families
General purpose: balanced CPU-to-memory ratio
Memory optimized: high memory-to-CPU ratio
Storage optimized: high disk throughput provided by local storage
Compute optimized: high CPU-to-memory ratio
High performance compute: most powerful CPU

Sizes by type:
General purpose: B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2
Compute optimized: Fsv2
Memory optimized: Esv3, Ev3, Easv4, Eav4, Mv2, M, DSv2, Dv2
Storage optimized: Lsv2
GPU: NC, NCv2, NCv3, ND, NDv2 (preview), NV, NVv3, NVv4
High performance compute: HB, HBv2, HC, H
14. Azure Virtual Machine sizing
A Series – Entry-level for dev/test
Bs Series – Economical bursting
D Series – General purpose compute
DC Series – Protect data in use
E Series – Optimized for in-memory hyper-threaded applications
F Series – Compute optimized
G Series – Memory and storage optimized
H Series – High performance computing
Ls Series – Storage optimized
M Series – Memory optimized
Mv2 Series – Largest memory optimized
N Series – GPU enabled
15. SQL Server configuration
Configure specific SQL Server settings like security and networking, SQL authentication preferences, SQL instance settings, and other options.
16. Understand hybrid scenarios
Organizations commonly have a mixture of physical and virtualized deployments of SQL Server
Hybrid offers the benefits of both on-premises and cloud services – it extends on-premises solutions
The cloud component is usually used for storage or SQL Server VMs
17. Hybrid scenarios for SQL Server – Disaster Recovery
The most common scenario for a hybrid deployment of SQL Server.
Organizations ensure business continuity during catastrophic events.
Azure is used for DR failover (to one or more regions) while the regular day-to-day processing continues to use on-premises servers for local high availability.
18. Hybrid scenarios for SQL Server – SQL Server backups
A common hybrid scenario: backups may go directly into Azure Storage via URL or an Azure file share (SMB)
Protects against data loss when on-site backup storage fails – backups can be restored to virtual machines in Azure and tested as part of Disaster Recovery procedures
Azure Storage can also store on-premises SQL Server data files for user databases (user data files, not system databases)
In the case of local storage failure, the user files are safely stored in the cloud, preventing data loss
Built-in reliability guarantees mean files stored in the cloud are more resilient
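The backup-to-URL scenario above can be sketched in T-SQL. This is a minimal illustration only – the storage account, container, SAS token, and database names are all placeholders:

```sql
-- Credential name must match the container URL; the SAS token is pasted
-- without its leading '?'. All names here are hypothetical.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS-token-without-leading-question-mark>';

-- Back up directly to Azure Blob Storage over HTTPS.
BACKUP DATABASE [SalesDB]
TO URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/SalesDB.bak'
WITH COMPRESSION, CHECKSUM;
```

The same backup file can later be restored onto a SQL Server VM in Azure as part of DR testing.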
19. Hybrid scenarios for SQL Server – Azure Arc enabled SQL Servers
Extends and centralizes management of SQL Server instances hosted outside Azure, enables inventory of your SQL Server estate, and provides security threat protection
21. Azure Virtual Machine storage cont’d
SQL Server workloads on Azure should use Premium SSD or Ultra SSD
You should separate transaction logs and data files onto their own volumes, with
read-caching enabled on the data file volumes
You can stripe multiple disks using Storage Spaces or Logical Volume Management to get increased IOPS and larger storage volumes
22. Azure Virtual Machine storage cont’d
Azure offers encryption at rest for Azure Storage, and allows you to provide your own key or can manage the keys for you
Each VM has an operating system disk and a temporary disk (mounted as D: in Windows and /dev/sdb1 in Linux)
The temporary disk is erased if the VM is deallocated, so no permanent files should be stored on it
The temporary disk is a good candidate for TempDB storage, as TempDB is recreated every time SQL Server restarts
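Moving TempDB to the local SSD can be sketched as below; the D:\TempDb path is illustrative, and the SQL Server service account must be able to create that folder at startup:

```sql
-- tempdev/templog are the default logical file names for TempDB.
-- The files are recreated at the new location on the next restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDb\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDb\templog.ldf');
```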
23. Performance consideration – Table and index partitioning
Can improve query performance and scalability of large tables.
Used when a table becomes large enough that it starts compromising query performance.
Maintenance operations on a partitioned table can take less time – for example, by compressing data in a particular partition or rebuilding specific partitions of an index.
Four main components are required: filegroups, a partition function, a partition scheme, and the table itself
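The four components can be sketched as follows. The table, column, and boundary values are hypothetical, and all partitions are mapped to PRIMARY for brevity (production designs typically use dedicated filegroups):

```sql
-- 1. Partition function: maps OrderDate values to partitions at year boundaries.
CREATE PARTITION FUNCTION pfOrderDate (date)
AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01');

-- 2. Partition scheme: maps each partition to a filegroup.
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

-- 3./4. Filegroup(s) plus the table itself, created on the scheme.
CREATE TABLE dbo.Orders
(
    OrderId   int   NOT NULL,
    OrderDate date  NOT NULL,
    Amount    money NOT NULL
) ON psOrderDate (OrderDate);
```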
24. Performance consideration – Data compression
In this example, both tables have clustered and
nonclustered indexes
Production.TransactionHistory_Page table is page
compressed
The query against the page-compressed object performs 27% fewer logical reads than the query against the uncompressed table
Compression is implemented at the index, table, or partition level
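Page compression as described above can be applied with a rebuild. The table name comes from the slide; the index name is hypothetical:

```sql
-- Rebuild the table (clustered index/heap) with page compression.
ALTER TABLE Production.TransactionHistory_Page
REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Nonclustered indexes are compressed individually.
ALTER INDEX IX_TransactionHistory_ProductID
ON Production.TransactionHistory_Page
REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Estimate space savings before committing to compression.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'Production',
     @object_name      = 'TransactionHistory_Page',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';
```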
25. Performance consideration – Data compression cont’d
Row compression
Page compression (includes prefix compression and dictionary compression)
Columnstore compression
Columnstore archival compression
Inline compression using the COMPRESS() function
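Inline compression works per value rather than per page. A minimal round-trip sketch (the payload is a placeholder):

```sql
-- COMPRESS() applies GZIP to a value; DECOMPRESS() reverses it.
DECLARE @doc    nvarchar(max)  = N'...large JSON or text payload...';
DECLARE @packed varbinary(max) = COMPRESS(@doc);

-- Cast the decompressed bytes back to the original type on read.
SELECT CAST(DECOMPRESS(@packed) AS nvarchar(max)) AS original_text;
```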
26. Performance consideration – Additional options
For production workload:
Enable backup compression
Enable instant file initialization for data files
Limit autogrowth of the database
Disable autoshrink/autoclose for the databases
Move all databases to data disks, including system databases
Move SQL Server error log and trace file directories to data disks
Set max SQL Server memory limit
Enable lock pages in memory
Enable optimize for ad hoc workloads for OLTP-heavy environments
Enable Query Store
Schedule SQL Server Agent maintenance jobs
Monitor and manage the health and size of the transaction log files
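Several of the checklist items above are instance or database settings that can be scripted. A sketch with illustrative values only (the memory cap and database name are placeholders – size the cap to your VM):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server memory (MB) to leave headroom for the OS.
EXEC sp_configure 'max server memory (MB)', 25600;

-- Reduce plan-cache bloat for ad hoc OLTP workloads.
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Query Store is enabled per database.
ALTER DATABASE [SalesDB] SET QUERY_STORE = ON;
```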
27. Azure platform High Availability (HA) and Disaster Recovery (DR)
Availability sets – Protection from planned or unplanned Azure maintenance events and local hardware outages
Availability zones – Protection from datacenter failures
Paired Azure regions – Protection from regional failures while preserving data residency and compliance boundaries. If a multi-region disaster happens, one region in each pair will be prioritized for recovery
28. Availability sets
Provide increased availability for multi-VM deployments like Always On Availability Groups, by ensuring the VMs are deployed to different physical hosts.
Use fault domains and update domains to protect workloads against host update failure and single points of hardware failure.
(Diagram: a compute cluster in Region 1 with VM0–VM2 placed in fault domains FD0–FD2, each with its own managed disk on the storage cluster.)
29. Availability zones
Allow workloads to be deployed to different datacenters in the same Azure region, protecting your workloads against datacenter failure.
The latency between availability zones is low, and generally allows for synchronous data replication.
30. Paired Azure regions
All Azure regions have a “paired” region in which deployments and updates are applied.
These paired regions are in the same geography.
If a multi-region disaster happens, one region in each pair will be prioritized for recovery.
Regional pairs are set and cannot be changed.
31. Always On Availability Groups
Provides HADR protection.
Database transactions are committed to the primary replica, and then sent either synchronously or asynchronously to all secondary replicas.
If the secondary replicas are geographically separate, asynchronous availability mode is recommended; if the replicas are within the same Azure region, synchronous commit can be used.
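Switching a replica between the commit modes above is a one-statement change. A sketch with hypothetical availability group and replica names (asynchronous commit requires manual failover):

```sql
-- Demote a geographically remote secondary to asynchronous commit.
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON 'SQLVM-DR'
WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL);
```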
32. Azure Backup for SQL Server
Enterprise backup solution that provides long-term data retention, automated management, and additional data protection.
Offers a more complete backup feature set than simply performing your own backups.
Requires an agent to be installed on the virtual machine to manage automatic backups of your SQL Server databases.
Provides a central location to manage and monitor the backups and to help meet any specified RPO/RTO metrics.
33. Azure Site Recovery (ASR)
Low-cost solution that performs block-level replication of your Azure virtual machine.
Better suited for migrations with some allowed downtime.
Use with Availability Groups to provide a lower RPO.
Best utilized for stateless environments (e.g. web servers) versus transactional database virtual machines, due to its relatively high recovery point objective (up to an hour).
1. VM is registered with Azure Site Recovery
2. Data is continuously replicated to cache
3. Cache is replicated to the target storage account
4. During failover the virtual machine is added to the target environment
34. Lesson 1: Knowledge check
What type of storage offers the lowest latency in Azure?
Ultra Disk
Premium Disk
Standard SSD
Which option should you choose to spread workloads across data centers in a region?
Availability sets
Availability zones
Availability units
To reduce the cost of an Azure SQL Server VM you intend to run full time for 3 years,
which option should you choose?
Azure Reserved VM Instances
Availability Set
Pay as You Go Licensing
35. Instructor led labs: Provision a SQL Server on an Azure Virtual Machine
Explore the Azure Portal
Deploy a SQL Server on an Azure Virtual Machine
Connect to SQL Server on an Azure Virtual Machine
37. Lesson 2 objectives
Gain an understanding of SQL Server in a Platform
as a Service (PaaS) offering
Understand PaaS provisioning and deployment
options
Understand elastic pools
Examine Managed Instances
Configure a template for PaaS deployment
38. Azure SQL Database deployment models
Single Database – A fully managed and isolated database that is
managed and billed independently. A single database has its own
dedicated resources
Elastic Pool – A collection of databases that share a pool of
resources. These resources are managed collectively, and this
architecture can reduce your costs and management overhead
Hyperscale – A single database deployment that supports very
large volumes of data (up to 100 TB)
Serverless – Allows you to spend less for databases that do not
need to be running 24x7. Best suited for irregular workloads (up
to 4 TB)
39. Azure SQL Database service tier options
Database Transaction Unit (DTU) – the original model, which uses a formula based on memory, storage, and IO resources to assign a service tier
vCore – allows you to choose a number of virtual cores, which have a fixed relationship to the memory and storage provided to the database
40. Service tier options – DTU
Basic is best used for lightweight dev and test workloads
Standard is best used for a wide variety of production applications
Premium is best suited for workloads that need extremely low latency and have high
transactional volume
Tier – Maximum storage size – Maximum DTUs
Basic – 2 GB – 5
Standard – 1 TB – 3000
Premium – 4 TB – 4000
41. Platform as a service tiers – vCore
General Purpose:
Offers balanced compute and storage suited for most workloads
Similar feature set to Business Critical
Hyperscale:
Highly scalable storage
Read scale-out
Available for single SQL Databases
Business Critical:
Local SSD storage provides very low latency for latency-sensitive workloads
Allows for readable secondary databases and in-memory OLTP
44. Deploy single database via Azure Resource Manager templates
https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-sql-logical-server/azuredeploy.json
45. Demo: Explore single SQL Database and elastic pool
Explore single SQL Database and elastic pool
46. Azure SQL Database backups
Backups are performed automatically by the service
The backup schedule is as follows:
A weekly full backup
A differential backup every 12 hours
A transaction log backup every 5-10 minutes based on log utilization
vCore and Basic databases have a 7-day default retention, which can be extended to 35 days
Standard and Premium Databases have a 35-day retention period
Long Term Retention (LTR):
Keep backups up to 10 years on Azure blob storage
Read-access geo-redundant storage
Backups are stored in geo-redundant storage accounts
47. Restoring an Azure SQL Database
You cannot manually restore a database using the T-SQL command RESTORE DATABASE.
It is not possible to restore over an existing database; the existing database must be dropped or
renamed prior to initiating the restore.
Restore options:
Azure portal – Can restore a database to the same Azure SQL Database server or create a
new database on a new server in any Azure region.
Use PowerShell or Azure CLI to restore a database.
48. Hyperscale SQL Database
Designed for very large OLTP databases – as
large as 100 TB - billed only for the capacity you
use
Able to auto-scale storage and compute independently
Restores in minutes rather than hours and days
Scale up or down in real time to accommodate
workload changes
Other databases are limited by the resources
available in a single node, databases in the
Hyperscale service tier have no such limits
49. Hyperscale use cases
Customers with large on-premises SQL Server databases
who want to modernize their applications by
moving to the cloud
Using Azure SQL Database and want to
significantly expand the potential for
database growth
Workloads that need both high
performance and high scalability
50. Hyperscale performance considerations
Nearly instantaneous database backups (based on file snapshots stored in Azure Blob
storage) regardless of size with no IO effect on compute resources
Fast database restores – minutes rather than hours or days (not a size of data
operation)
Rapid scale out – provision one or more read-only replicas for offloading read workload
and for use as hot-standbys
Rapid scale up – scale up your compute resources to accommodate heavy workloads
when needed, and then scale the compute resources back down when not needed
51. Active geo-replication
Provides automatic asynchronous replication of committed transactions on the primary database to secondary databases in the Azure region(s) of your choice, using snapshot isolation.
Built on the same technology as Always On Availability Groups.
Provides a readable secondary copy of your database, no matter your service tier.
You can have up to 4 secondary copies.
Failover requires connection string updates.
52. Failover groups
Builds on the geo-replication feature.
Provides for automatic failover.
Allows a single connection string to connect to the group (no connection string changes needed).
Multiple databases can be in the same failover group.
Disaster recovery for SQL MI is slightly different, allowing only one replica, in the region paired with the primary.
53. Azure managed networking
SQL Managed Instance is deployed into a virtual cluster in a Virtual Network.
By default, it does not have a public endpoint, and connections must come from its virtual network or a peered network.
54. Azure SQL Managed Instance link
Connects your SQL Servers hosted
anywhere to SQL Managed Instance,
providing hybrid flexibility and
database mobility
Supports Azure SQL Managed
Instance provisioned on any service
tier
After a disastrous event, you can
continue running your read-only
workloads on SQL Managed
Instance in Azure.
55. Azure SQL Managed Instance link cont’d
Supported scenarios:
Use Azure services
without migrating to the
cloud
Offload read-only
workloads to Azure
Migrate to Azure
56. Azure SQL Managed Instance – Instance Pools
This feature supports multiple managed instances on the same virtual machine.
Supported by the General Purpose service tier only.
Benefits:
• Ability to host 2-vCore instances (only for instances in instance pools)
• Predictable and fast instance deployment time (up to 5 minutes)
• Minimal IP address allocation
(Diagram: a General Purpose 16-vCore pool in a VNet subnet – one virtual machine hosting instance1–instance7 of 2–4 vCores each, plus a gateway node.)
57. Azure SQL Managed Instance backups
Nearly the same as Azure SQL Database
Backups are performed automatically by the service
The backup schedule is as follows:
A weekly full backup
A differential backup every 12 hours
A transaction log backup every 5-10 minutes based on log utilization
Database backups have 7-day default retention, that can extend to 35 days
Backups are stored in geo-redundant storage accounts
Ability to perform manual COPY_ONLY backups to Azure blob storage
You can restore your own backups from Azure blob storage
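A manual COPY_ONLY backup from SQL Managed Instance to blob storage can be sketched as below; the storage account, container, SAS token, and database names are placeholders:

```sql
-- The credential name must match the container URL.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/mibackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS-token>';

-- COPY_ONLY is required: the service owns the regular backup chain.
BACKUP DATABASE [SalesDB]
TO URL = 'https://mystorageacct.blob.core.windows.net/mibackups/SalesDB.bak'
WITH COPY_ONLY;
```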
58. Azure SQL Edge
Optimized relational database engine geared for IoT and IoT Edge deployments. It is a
containerized Linux application that runs on ARM64- or x64-based processors
Recommended scenarios:
• To capture continuous data streams in real time
• Integrate the data in a comprehensive
organizational data solution
• Synchronize and connect to back-end systems
• Overcome connectivity limitations
• Overcome slow or intermittent broadband
connection
59. Lesson 2: Knowledge check
You need to migrate a set of databases that use distributed transactions from
on-premises SQL Server. Which of the following options should you choose?
Azure SQL Managed Instance
Azure SQL Database
Azure SQL Database Hyperscale
You are building a new cloud database that you expect to grow to 50 TB. Which is the
best option for your database?
Azure SQL Managed Instance
Azure SQL Database
Azure SQL Database Hyperscale
You are building a database for testing purposes that will be used less than 8 hours a day
and is expected to be 20 GB in size. What is your most cost-effective option?
Azure SQL Database Serverless
Azure SQL Database Hyperscale
Azure SQL Managed Instance
60. Instructor led labs: Provision an Azure SQL Database
Create a Virtual Network
Deploy an Azure SQL Database
Connect to an Azure SQL Database using Azure Data Studio
Query an Azure SQL Database using SQL Notebook
62. Lesson 3 objectives
How SQL Server Compatibility Level affects
database behavior
Microsoft’s support policy for SQL Server
The differences between private and public preview
Describe the various options when you are
performing a migration
63. Compatibility level
Database-level setting
Currently (SQL Server 2019/Azure services) supports compatibility levels 100-150 (SQL Server 2008-2019)
Allows query optimizer behavior and most T-SQL syntax to maintain the behavior of older versions of the database engine
Affects behavior of the given database, not the entire server
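Checking and setting the compatibility level is a per-database operation; a sketch with a hypothetical database name:

```sql
-- Inspect the current level (100 = SQL Server 2008 ... 150 = SQL Server 2019).
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'SalesDB';

-- Pin the database to the SQL Server 2019 level.
ALTER DATABASE [SalesDB] SET COMPATIBILITY_LEVEL = 150;
```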
64. SQL Server support model
SQL Server releases are in primary
support for five years
This means performance, security, and
functional updates in Cumulative
Updates
SQL Server provides extended
support for the next five years
Security fixes will be addressed during
this period
65. Currently supported releases of SQL Server
SQL Server 2019
SQL Server 2017
SQL Server 2016
SQL Server 2014*
SQL Server 2012*
*Extended Support
66. Compatibility level-based certification for applications
Applications should stop certifying for specific version or platform
(for example, SQL Server 2019 or Azure SQL Database)
Azure PaaS service versions are evergreen (always latest) so it makes the
most sense to certify to a compatibility level
Any application certification process should be aimed at a certification level
67. Type of Azure preview
Private preview – Your subscription needs to be added to an allowed list in
order to use the feature. May or may not have portal support
Public preview – Visible either in the portal, or at
https://azure.microsoft.com/updates/
68. Preview feature caveats
May be limited to specific regions
Preview features are often at discounted pricing
May not have full GUI support
Different support policies than GA features
There are features that may stay in preview for extended periods of time:
Azure Data Sync
Azure SQL DB Query Editor
69. Migrating to Azure
Most IaaS migrations are “Lift and Shift” - the on-premises architecture is
recreated in Azure and workloads and data are migrated. Consider “Lift and
Modernize”
The Azure Migrate tool automates migration allowing you to automatically
migrate all of your workloads and data to VMs running in Azure
The Azure Migrate tool can discover your environment and optionally
execute a migration of your databases
70. Assesses your environment for moving from on-premises to either IaaS
or PaaS
Detects issues that can affect SQL Server version upgrades or migration to
Azure SQL Database or Managed Instance
Assesses the T-SQL code in your application to identify breaking changes
Recommends new features that would benefit your database
Can also migrate your database either to a VM, Azure SQL Database or
Managed Instance
Data Migration Assistant (DMA)
72. Lesson 3: Knowledge check
Which version of SQL Server does compatibility level 100 equate to?
SQL Server 2008
SQL Server 2014
SQL Server 2019
What is a valid reason for migrating your database to Azure SQL Database?
Your applications need to run older versions of SQL Server, such as SQL Server 2016
You want to eliminate the complexity of configuring and managing high availability, backups, and
other database tasks
You want to improve query performance of your application
Which tool would you use to assess and migrate your databases from an on-premises
SQL Server to an Azure Virtual Machine?
Microsoft Assessment and Planning Toolkit
Database Experimentation Assistant
Data Migration Assistant
74. Lesson 3 objectives
Learn how to migrate to an Azure SQL Database
Understand service continuity to Azure SQL
Database
Explore options for migrating to Azure SQL
Database
75. Migration benefits of SQL Server to Azure SQL Database
Backup and recovery
High availability
Disaster recovery
Service scalability
Security
Licensing
76. Tools to support your migration planning from SQL Server to
Azure SQL Database
Azure Database Migration Service
Data Migration Assistant
Database Experimentation Assistant
77. Helps you upgrade to a modern data platform by detecting compatibility issues that can impact database
functionality in your new version of SQL Server or Azure SQL Database.
Supported sources:
- SQL Server 2005
- SQL Server 2008
- SQL Server 2008 R2
- SQL Server 2012
- SQL Server 2014
- SQL Server 2016
- SQL Server 2017 on Windows
Supported targets:
- SQL Server 2012
- SQL Server 2014
- SQL Server 2016
- SQL Server 2017 on Windows and Linux
- Azure SQL Database
- Azure SQL Managed Instance
What is Data Migration Assistant?
78. Data Migration Assistant Wizard
The installation wizard is simple and just requires you to accept the license.
Advanced configuration:
You can fine-tune certain behavior of Data Migration Assistant by setting configuration values in the dma.exe.config file located in %ProgramFiles%\Microsoft Data Migration Assistant.
Advanced configuration can include:
Parallel assessment:
<advisorGroup>
  <workflowSettings>
    <assessment parallelDatabases="8" />
  </workflowSettings>
</advisorGroup>
Parallel migration:
<advisorGroup>
  <workflowSettings>
    <migration parallelDatabases="8" />
  </workflowSettings>
</advisorGroup>
Connection time-out:
<appSettings>
  <add key="ConnectionTimeout" value="15" />
</appSettings>
Data Migration Assistant Configuration
81. Database Experimentation Assistant (DEA)
Enables A/B testing between the source system and the target system, to evaluate a targeted version of SQL Server for a specific workload.
DEA outputs metrics including:
- Queries that have compatibility errors
- Degraded queries and query plans
- Other comparison data
The three tasks of the Database Experimentation Assistant are:
Capture workloads – Capture trace file information from a production server to capture real-world workloads.
Replay workloads – Replay the captured trace files against a source and target server to generate information against each version.
Analyze workloads – View the results so that a workload performance comparison can be made between the source and target workloads.
What is Database Experimentation Assistant?
82. There are three main high-level tasks that you perform
Captures – You should define:
• Trace name
• Duration
• SQL Server instance name
• Database name
• Path to store the source trace file on the SQL Server machine
Replays – You should define:
• Replay name
• Controller machine name
• Path to the source trace file on the controller
• SQL Server instance name
• Path to store the target trace file on the SQL Server machine
Analyze – You should define:
• Report name
• Trace for Target 1 SQL Server
• Trace for Target 2 SQL Server
Working with Database Experimentation Assistant
87. Azure Migrate provides a dashboard for a suite of migration tools.
What is Azure Migrate?
88. What is Azure Migrate?
Azure Migrate is not to be confused with the Azure Database Migration Service.
It can be used for migrations to SQL Server on Azure Virtual Machines. The source systems it supports are limited to on-premises VMware virtual machines (VMs) managed by vCenter Server version 5.5 or above.
Assessment – You can use the Azure Migrate service to perform assessment of source systems.
Migrations – You can also use the Azure Migrate service to perform the actual migration from VMware virtual machines.
90. Migrate using the BACPAC
SqlPackage.exe /a:import /tcs:"Data Source=mynewserver20170403.database.windows.net;Initial Catalog=myMigratedDatabase;User Id=<your_server_admin_account_user_id>;Password=<your_server_admin_account_password>" /sf:AdventureWorks2008R2.bacpac /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P6
91. Transactional replication
Transactional replication can be used to push changes to data out to a remote SQL Server.
When to use it: use transactional replication when you need to minimize downtime and do not have an Always On on-premises deployment.
Three steps to migrate using transactional replication:
1. Configure the distribution database – performed on the source server hosting the database that will be migrated to a SQL Server hosted on an Azure Virtual Machine.
2. Create a publication – you can selectively choose the tables within a database that you wish to publish. It can be the whole database or specific tables and views.
3. Create a subscription – on the SQL Server on the Azure Virtual Machine, you create a subscription that will host the migrated data.
Migrate using transactional replication
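The three steps above map to replication stored procedures. This is a greatly simplified sketch – server, publication, and object names are placeholders, and a real setup also needs the distribution database, snapshot folder, and agent jobs configured:

```sql
-- 1. Configure the distributor on the source server.
EXEC sp_adddistributor @distributor = N'ONPREM-SQL01';

-- 2. Create a publication and add an article (table) to it.
EXEC sp_addpublication
     @publication = N'SalesPub',
     @repl_freq   = N'continuous';
EXEC sp_addarticle
     @publication   = N'SalesPub',
     @article       = N'Orders',
     @source_object = N'Orders';

-- 3. Subscribe the SQL Server on the Azure VM to the publication.
EXEC sp_addsubscription
     @publication    = N'SalesPub',
     @subscriber     = N'AZURE-SQLVM01',
     @destination_db = N'SalesDB';
```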
92. Lesson 3: Knowledge check
Before you create the Database Migration Service, which resource provider should you
register in your subscription?
Microsoft.DataMigration
Microsoft.DataMigrationAssistant
Microsoft.DataMigrationService
Which purchasing option allows you to use your SQL Server Enterprise Cores licenses to
pay for Azure SQL Database?
Both vCore and DTU purchasing models
vCore purchasing model
DTU purchasing model
When should transactional replication be considered during migration?
When you need to minimize downtime
When you need to minimize compatibility errors
When your source is Azure SQL Database
94. Lesson 3 objectives
Explore options for migrating to Azure SQL
Managed Instance
Synchronize data to Azure SQL Managed Instance
Considerations of migrating to Azure SQL
Managed Instance
96. Tools to support migration planning from SQL Server to Azure SQL
Managed Instance
Azure Database Migration Service
Data Migration Assistant
Database Experimentation Assistant
97. Evaluating SQL Managed
Instance compatibility
Features supported by Azure SQL Managed Instance include:
Cross-database queries
Cross-database transactions
Linked servers
CLR
Global temp tables
Instance-level views
Service Broker
Use a Data Migration Assistant assessment to establish any compatibility issues
98. Migrate using backup and restore from URL
BACKUP – Back up the database to a URL that points to an Azure storage account.
CREATE A CREDENTIAL – Create a credential that accesses the storage account from SQL Managed Instance using a Shared Access Signature (SAS).
LIST – Run the RESTORE FILELISTONLY FROM URL command to identify which backup to restore.
RESTORE – Use the RESTORE DATABASE command to restore the database onto the managed instance.
CHECK – You can run SQL commands to check the status of the restore operation until it completes.
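The restore side of these steps, run on the managed instance, can be sketched as below (storage account, container, SAS token, and database names are placeholders):

```sql
-- Credential that lets the managed instance read the backup container.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/migration]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS-token>';

-- Inspect the backup before restoring.
RESTORE FILELISTONLY
FROM URL = 'https://mystorageacct.blob.core.windows.net/migration/SalesDB.bak';

-- Restore onto the managed instance.
RESTORE DATABASE [SalesDB]
FROM URL = 'https://mystorageacct.blob.core.windows.net/migration/SalesDB.bak';

-- Check progress of the restore operation.
SELECT session_id, command, percent_complete
FROM sys.dm_exec_requests
WHERE command = 'RESTORE DATABASE';
```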
99. Transparent data encryption
A SQL Server technology that ensures databases are encrypted at rest; the data can only be read when the certificate used to encrypt it is leveraged to decrypt the database and database backups.
Transparent data encryption and Azure Key Vault:
Allows you to encrypt the Database Encryption Key (DEK) with a customer-managed asymmetric key called the TDE Protector. This is also known as Bring Your Own Key (BYOK) support.
Steps to manually migrate the certificate:
1. Locate the certificate – use a Transact-SQL statement to get the database name and the certificate for each database.
2. Back up the certificate – use the BACKUP CERTIFICATE statement to create a backup to a file location of your choice.
3. Copy and upload the certificate – copy the certificate to a Personal Information Exchange (.pfx) file and then upload it to the managed instance.
Managing encrypted databases
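Steps 1 and 2 above can be sketched on the source server; the certificate name, file paths, and password are placeholders:

```sql
-- 1. Locate which certificate encrypts each TDE-enabled database.
SELECT DB_NAME(dek.database_id) AS database_name,
       c.name                   AS certificate_name
FROM sys.dm_database_encryption_keys AS dek
JOIN master.sys.certificates AS c
  ON dek.encryptor_thumbprint = c.thumbprint;

-- 2. Back up the certificate with its private key (run in master).
USE master;
BACKUP CERTIFICATE TDE_Cert
TO FILE = 'C:\Temp\TDE_Cert.cer'
WITH PRIVATE KEY (
    FILE = 'C:\Temp\TDE_Cert.pvk',
    ENCRYPTION BY PASSWORD = '<StrongPassword>'
);
```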
100. Connectivity options with on-premises servers
Point-to-Site – Create a secure connection to your virtual network from an individual client computer.
Site-to-Site – Used to connect an entire on-premises site to the Azure network.
ExpressRoute – Create private connections between Azure datacenters and on-premises infrastructure.
102. Continue leveraging your SSIS packages with Azure Data Factory or create a brand-new Azure Data Factory pipeline
to execute your activities.
SQL Server Integration Services
ETL tool of choice for migrating data from point A to
point B and taking advantage of the powerful control
flow and data flow capabilities allowing nearly limitless
amounts of data manipulation, data transformation
options, ability to execute in parallel, and other
features SSIS provides.
Azure Data Factory
Azure Data Factory is essentially a fully-managed
data integration as a service in the cloud that can be
leveraged for your ELT workloads.
Synchronizing data using SSIS or Azure Data Factory
103. Transactional replication
Can be used to push changes to data out to a remote SQL Server.
When to use it: when you need to minimize downtime and do not have an Always On on-premises deployment.
Three steps to migrating using transactional replication:
1. Configure the distribution database – performed on the source server hosting the database that will be migrated to a SQL Server hosted on an Azure Virtual Machine.
2. Create a publication – you can selectively choose the tables within a database that you wish to publish. It can be the whole database or specific tables and views.
3. Create a subscription – on the SQL Server on the Azure Virtual Machine, you create a subscription that will host the migrated data.
Migrating SQL Server workloads using transactional replication
104. Lesson 3: Knowledge check
Which tool is recommended for large database migrations?
Data Migration Assistant
Database Migration Service
Database Experimentation Assistant
Which tool uses distributed replay feature to replay a captured trace against an upgraded
test environment?
Database Experimentation Assistant
Database Migration Service
Data Migration Assistant
Which tool compares workloads between source and target SQL Server?
Microsoft Assessment and Planning Toolkit
Database Experimentation Assistant
Data Migration Assistant
105. Module summary
Deploying IaaS solutions with
Azure SQL:
Understand ARM template structure
Deployment options for ARM
templates
Using quick start Azure templates
Deploying PaaS solutions with
Azure SQL:
Understand the benefits of Platform as
a Service offerings
Understand the differences between
Azure SQL offerings
Learn about Azure SQL Database
Hyperscale
Evaluating strategies for
migrating to Azure SQL:
Describe the various options when you
are performing a migration
Microsoft’s support policy for SQL
Server
How SQL Server Compatibility Level
affects database behavior
Migrating SQL workloads to Azure SQL
Database:
Explore options for migrating to Azure SQL Database
Learn how to migrate to an Azure SQL Database
Understand service continuity to Azure SQL Database
Migrating SQL workloads to Azure SQL
Managed Instance:
Explore options for migrating to Azure SQL Managed
Instance
Considerations of migrating SQL Managed Instance
Synchronize data to SQL Managed Instance
106. References
What is Managed Instance?
https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance
What is the Azure SQL Database service?
https://docs.microsoft.com/azure/sql-database/sql-database-technical-overview
Data Migration Assistant
https://docs.microsoft.com/sql/dma/dma-overview?view=sql-server-ver15